2024-12-24 08:33:04
Soundtrack: Spinnerette - The Walking Dead (Alt: Postmodern Jukebox - Radioactive)
Thanks so much to everybody that has supported me in the last year. This newsletter started as a way for me to process the complex feelings I have about the technology industry, and it remains, in a way, a side project, as I still run my PR firm during the day. I also apologize for sending you another email.
Anyway, the newsletter has now grown into something altogether larger and more important to me, both personally and professionally. It has allowed me to meet and get to know some incredible people, as well as deepen friendships with others I’ve known for years, all while helping me understand what’s happening in an industry I find both fascinating and frustrating.
As I hit publish on this email, Where’s Your Ed At has over 49,250 subscribers, and regular read rates of 55-59%, numbers that have stayed consistent for years. Where’s Your Ed At - as a place on the web - had over 1.8 million unique visitors. In the last four years I’ve published over 800,000 words, and it is very cool to see it go anywhere. I had no plan and still do not have one.
It has been quite a year.
As a direct result of writing the newsletter, I was recruited by Cool Zone Media (best known for Behind The Bastards and It Could Happen Here), and worked with them to create Better Offline, which has been this year’s fastest-growing tech podcast, and was called one of the best podcasts of 2024 by Esquire, New York Magazine and The Information. It has turned into a bizarre mixture of talk radio and spoken word oratory, something truly unique, and I am extremely proud of it. Robert Evans and Sophie Lichterman have been incredible bosses, as have my producers Matt Osowski, Ian Johnson, Danl Goodman and Eva Warrender.
In the latter half of the year, I sold a book to Penguin Random House - the upcoming Why Everything Stopped Working, which should be out sometime in 2026, and I intend it to be the best thing I’ve ever written. I will be honest, I am still shocked this happened. Nevertheless, I will write the shit out of it. My editor (Megan Wenerstrom) and agent (William Callahan) are, much like Robert and Sophie, fully behind me and what I believe in.
Also, if you missed my speech at Web Summit, do watch it.
It is still ridiculous to me that any of this happened. A year ago, I had 22,000 subscribers, had just signed the contract for Better Offline, and felt, if I’m honest, kind of lost, a feeling I’ve had on-and-off for about two years. I liked writing, but I didn’t love writing the newsletter. I’d also, for whatever reason, yet to really feel confident writing about big tech, because I figured there was something I was missing.
So I tried to work it out. I have never been a financial or investigative journalist (I was a games journalist over 16 years ago), so a lot of this was learning on the go. This is why I wrote an investigation about the NFT-turned-AI-doodad company Rabbit out of nowhere, which led to an interview with CoffeeZilla, which was both cool to be on and a sign that I could do “this,” even if “this” was a touch vague. Before this year I had never written any kind of financial analysis - I am very proud of both How Does OpenAI Survive? and OpenAI Is A Bad Business, along with more opinion-analysis pieces like The Subprime AI Crisis and The Other Bubble.
But, yeah, no matter how fancy I get with it, everything you’ve read this year has been me trying to work out what the fuck was going on - why these companies keep making money while their products get worse - which is why you’ve seen me dedicate tens of thousands of words to explaining what happened to Google Search and Facebook, Shareholder Supremacy, and why I’m so anxious about the AI bubble popping - because there’s nothing left afterwards. Every single AI piece I’ve written this year - my biggest being Pop Culture - has been me trying to work out what the hell is going on. This is why my latest AI piece sounds like the crazed scientist at the beginning of a disaster movie - I am alarmed! There will be measurable damage to the stock market, but more importantly tens of thousands of people will lose their jobs and a depression will begin in the tech industry.
Every time I am worried I’ve gone too hard, or been a little too emotional, I get surprised by the outpouring of support, from people who feel the same frustration and outrage. I felt like I’d gone a little too hard on Lost In The Future (and its sister episode of Better Offline, The Rot Society), but the reaction was people saying they felt the same way, something I’m happy to say happened again with Never Forgive Them, which is my favourite thing I’ve ever written, and the hardest I’ve gone.
Anyway, long story short, the format of this newsletter has matured into something weird and cool. I am so glad people enjoy it. It is slightly insane to regularly write 3000 to 5000 words in which I combine financial, cultural and economic analysis with calling people scumbags, but I enjoy doing it. I hope you continue reading. I work very hard on it.
It wouldn’t be my newsletter without me writing “as an aside” and then doing a quote:
While I am not doing a belabored list of thank-yous, there is one name that cannot be left out: Matt Hughes, my editor, who has edited over a hundred of my pieces and scripts, as well as being one of my closest friends, somebody who has riffed with me for hours and made so many ideas sharper. Whenever I have flinched away from an idea, fearing I’d gone too far, Matt would hold me up and not just reassure me, but walk me through the logical backing of the argument - a true friend makes their friend stronger. Thank you Matt. You make this newsletter both possible and much, much better. I am eternally grateful. Let’s make these fucking people take responsibility for once in their lives.
I love what I do, and I am so lucky to be able to do it. I really am grateful for you reading my work, and though this is me trying to work stuff out as I go, I am also doing my best to deeply research and provide real, meaningful analysis. It is a deeply personal journey for me, one that has allowed me to develop more as a person and be able to speak with more clarity, purpose and vigor, and as wanky as that sounds, all I can tell you is that I’m exactly the same as this in person, except I tell my friends I love them way more.
I sound a little ridiculous writing about a blog and a podcast as if I’m talking about playing the violin (which I cannot do), but I am proud of my work and find it deeply meaningful, as well as something that has enriched my life. I will continue to use it to, at the very least, provide you with some clarity about the world, and promise to do so with sincerity.
I am genuinely so thankful for every minute you give anything I create.
At CES, I’ll be trying something new - I’ll be running a week-long live-to-tape radio show, with two 90-minute episodes a day taking a temperature check of the tech industry. I’ll be joined by David Roth of Defector and tech critic Edward Ongweso Jr., along with a host of different tech reporters, at least one priest, Robert Evans of Behind The Bastards and Gare Davis of It Could Happen Here.
It’s yet another ambitious and weird idea that I intend to, at the very least, make a lot of fun. If you’re a reporter reading this and want to join, email [email protected] and let’s make it happen - we’re recording Tuesday through Saturday. We’ll have food and drink.
Similarly, next year I’ll be spending a lot more of my life in New York City, where I’ll be starting up Radio Better Offline, a regular tech talk show within the Better Offline feed, recorded at iHeartRadio’s NYC studios. Each week I’ll have two or three tech people in the studio, and I want to create a kind of lively, meaningful and exciting talk radio setting for the tech industry.
The tech industry - especially within tech journalism - has such an incredible variety of people, both normal and otherwise, and I want to bring their voices to your ears. I feel like tech reporters regularly feel isolated and crushed by this industry, and I want Better Offline - and Radio Better Offline in particular - to help fight back. Reporters are also regularly robbed of the opportunity to build their own brands while at their publications - come on Better Offline, and I'll put you on a great-sounding podcast with a huge audience where people will hear your voice, be directed to your work and social media, and remember you, not just the place you work. I will do my damndest to bring on as many of you as I can.
Anyway. So many people have helped me in so many ways this year, and I'm eternally grateful. Members of the tech, business and political media, software engineers, data analysts, academics, scientists, all endlessly generous and helpful. I will do my best to pay forward and back the generosity of time, love and support that I've received.
Outside of next week's 3-part year-end series of Better Offline, I am taking a break until mid-January. It has been a long year for all of us. Please take care of yourselves. Thanks for reading and listening to my stuff.
If you somehow haven’t subscribed to my podcast, please subscribe to my podcast, then download every episode. I need download numbers. I need you to help me. I need you to download every single one then force your family and friends to do it too.
Despite the size of this newsletter, email me at [email protected]. I do my best to respond to every reply, DM and email. I am super online.
I also realize I’ve never written out all my social handles.
Ones I actually use:
Bluesky: https://bsky.app/profile/edzitron.com (my least normal social media)
Instagram: http://instagram.com/edzitron (my most normal social media)
Ones that I don’t really touch much:
Twitter: http://www.twitter.com/edzitron
Threads: http://www.threads.net/edzitron
If you see an edzitron it’s probably me. I think I’m ezitron on TikTok?
2024-12-17 01:06:48
In the last year, I’ve spent about 200,000 words on a kind of personal journey where I’ve tried again and again to work out why everything digital feels so broken, and why it seems to keep getting worse, despite what tech’s “brightest” minds might promise. More often than not, I’ve found that the answer is fairly simple: the tech industry’s incentives no longer align with the user.
The people running the majority of internet services have used a combination of monopolies and a cartel-like commitment to growth-at-all-costs thinking to make war with the user, turning the customer into something between a lab rat and an unpaid intern, with the goal of juicing as much value from the interaction as possible. To be clear, tech has always had an avaricious streak, and it would be naive to suggest otherwise, but this moment feels different. I’m stunned by the extremes to which tech companies will go to extract value from customers, but also by the insidious way they’ve gradually degraded their products.
To be clear, I don’t believe that this gradual enshittification is part of some grand, Machiavellian long game by the tech companies, but rather the product of multiple consecutive decisions made in response to short-term financial needs. Even if it were, the result would be the same — people wouldn’t notice how bad things have gotten until it’s too late, or they might just assume that tech has always sucked, or that they’re just personally incapable of using the tools that are increasingly fundamental to living in a modern world.
You are the victim of a con — one so pernicious that you’ve likely tuned it out despite the fact it touches almost every part of your life. It hurts everybody you know in different ways, and it hurts people more the further down the socioeconomic ladder they sit. It pokes and prods and twists millions of little parts of your life, and it’s everywhere, so you have to ignore it, because complaining about it feels futile, like complaining about the weather.
It isn’t. You’re battered by the Rot Economy, and a tech industry that has become so obsessed with growth that you, the paying customer, are a nuisance to be mitigated far more than a participant in an exchange of value. A death cult has taken over the markets, using software as a mechanism to extract value at scale in the pursuit of growth at the cost of user happiness.
These people want everything from you — to control every moment you spend working with them so that you may provide them with more ways to make money, even if doing so doesn’t involve you getting anything else in return. Meta, Amazon, Apple, Microsoft and a majority of tech platforms are at war with the user, and, in the absence of any kind of consistent standards or effective regulations, the entire tech ecosystem has followed suit. A kind of Coalition of the Willing of the worst players in hyper-growth tech capitalism.
Things are being made linearly worse in the pursuit of growth in every aspect of our digital lives, and it’s because everything must grow, at all costs, at all times, unrelentingly, even if it makes the technology we use every day consistently harmful.
This year has, on some level, radicalized me, and today I’m going to explain why. It’s going to be a long one, because I need you to fully grasp the seriousness and widespread nature of the problem.
You have, more than likely, said to yourself sometime in the last ten years that you “didn’t get tech,” or that you are “getting too old,” or that tech has “gotten away from you” because you found a service, or an app, or a device annoying. You, or someone you love, have convinced yourself that your inability to use something is a sign that you’re deficient, that you’ve failed to “keep up with the times,” as if the things we use every day should be in a constant state of flux.
Sidenote: I’m sure there are exceptions. Some people really just don’t try to learn how to use a computer or smartphone, and naturally reject technology, or steadfastly refuse to pick it up because “it’s not for them.” These people exist, they’re real, we all know them, and I don’t think anybody reading this falls into this camp. Basic technological literacy is a requirement to live in society — and there is some responsibility on the user. But even if we assume that this is the case, and even if there are a lot of people that simply don’t try…should companies really take advantage of them?
The tools we use in our daily lives outside of our devices have mostly stayed the same. While buttons on our cars might have moved around — and I’m not even getting into Tesla’s designs right now — we generally have a brake, an accelerator, a wheel, and a turn signal. Boarding an airplane has worked mostly the same way since I started flying, other than moving from physical tickets to digital ones. We’re not expected to work out “the new way to use a toilet” every few months because somebody decided we were finishing too quickly.
Yet our apps and the platforms we use every day operate by a totally different moral and intellectual compass. While the idea of an update is fairly noble (and not always negative) — that something you’ve bought can be maintained and improved over time is a good thing — many tech platforms see it as a means to further extract and exploit, to push users into doing things that either keep them on the app longer or take more-profitable actions.
We as a society need to reckon with how this twists us up, makes us more paranoid, more judgmental, more aggressive, more reactionary, because when everything is subtly annoying, we all simmer and suffer in manifold ways. There is no digital world and physical world — they are, and have been, the same for quite some time, and reporting on tech as if this isn’t the case fails the user. It may seem a little dramatic, but take a second and really think about how many little digital irritations you deal with in a day. It’s time to wake up to the fact that our digital lives are rotten.
I’m not talking about one single product or company, but most digital experiences. The interference is everywhere, and we’ve all learned to accept conditions that are, when written out plainly, kind of insane.
Back in 2023, Spotify redesigned its app to, and I quote The Verge, be “part TikTok, part Instagram, and part YouTube,” which in practice meant replacing a relatively clean and straightforward user interface with one made up of full-screen cards (like TikTok) and autoplaying video podcasts (like TikTok), which CEO Daniel Ek claimed would, to quote Sky News, make the platform “come alive” with different content — this on a platform built and sold as a place to listen to music.
The tech media waved off the redesign without really considering the significance of the fact that at the drop of a hat, hundreds of millions of people’s experience of listening to music would change based on the whims of a multi-billionaire, with the express purpose being to force these people to engage with completely different content as a means of increasing engagement metrics and revenue. By all means try and pretend this is “just an app,” but people’s relationships with music and entertainment are deeply important to their moods and motivations, and adding layers of frustration in an app they interact with for hours a day is consistently grating.
And no matter how you feel, this design was never for the customer. Nobody using Spotify was saying “ah man, I wish I could watch videos on this,” but that doesn’t matter because engagement and revenue must increase. It’s clear that Spotify, a company best-known for exploiting the artists on its platform, treats its customers (both paying and otherwise) with a similar level of contempt.
It’s far from alone. Earlier in the year, smart speaker company Sonos released a redesign of its app that removed accessibility features and the ability to edit song queues or play music from your phone in an attempt to “modernize” the interface, with WIRED suggesting that the changes could potentially open the door to adding a subscription of some sort to help Sonos’ ailing growth. Meta’s continual redesigns of Facebook and Instagram — the latest of which happened in October to “focus on Gen Z” — are probably the most egregious example of the constant chaos of our digital lives.
Sidenote: Some of Meta’s random redesigns are subtle and not announced with any particular fanfare. Try this: Using the iPhone app, go to a friend’s profile, tap “photos,” and then “videos.” Naturally, you’d expect these to be organized in chronological order. If your friend is a prolific uploader, that won’t be the case. You’ll find them organized in a scattershot, algorithmically-driven arrangement that doesn’t make any sense.
What does that mean in practice? Say you’re looking for videos from an important life event — like a birthday or a wedding. You can’t just scroll down until you reach them. You’ve got to parse your way through every single one. Which takes longer, but is presumably great for Facebook’s engagement numbers.
Also, there are two separate tabs that show videos (one on the profile page, another under the photo tab). You’d assume both would show the exact same things, and you’d be wrong. They’ll often show an entirely different selection of videos, with no obvious criteria as to why. And don’t get me started on Facebook’s retrospective conversion of certain older videos — some of which might be a few seconds long, others lasting several minutes — into reels, which also strips the ability to skip certain parts without installing a third-party browser plugin.
As every single platform we use is desperate to juice growth from every user, everything we interact with is hyper-monetized through plugins, advertising, microtransactions and other things that constantly gnaw at the user experience. We load websites expecting them to be broken, especially on mobile, because every single website has to have 15+ different ad trackers and video ads that cover large chunks of the screen, all while demanding our email address or permission to send us notifications.
Every experience demands our email address, and giving out our email address adds another email to inboxes already stuffed with two types of spam — the actual “get the biggest laser” spam that hits the junk folder automatically, and the marketing emails we receive from clothing brands we wanted a discount from or newspapers we pay for that still feel it’s necessary to bother us 3 to 5 times a day. I’ve basically given up trying to fight back — how about you?
Every app we use is intentionally built to “growth hack” — a term that means “moving things around in such a way that a user does things that we want them to do” so they spend more money or time on the platform — which is why dating apps gate your best matches behind $1.99 microtransactions, or why Uber puts “suggestions” and massive banners throughout its apps to try and convince you to use one of its other apps (or accidentally tap them, which gives Uber a chance to get you to try them), or why Outlook puts advertisements in your email inbox that are near-indistinguishable from new emails (they’re at the top of your inbox too), or why Meta’s video carousels intentionally only play the first few seconds of a clip as a means of making you click.
Our digital lives are actively abusive and hostile, riddled with subtle and overt cons. Our apps are ever-changing, adapting not to our needs or conditions, but to the demands of investors and internal stakeholders that have reduced who we are and what we do to an ever-growing selection of manipulatable metrics.
It isn’t that you don’t “get” tech, it’s that the tech you use every day is no longer built for you, and as a result feels a very specific kind of insane.
Every app has a different design, almost every design is optimized based on your activity on said app, with each app trying to make you do different things in uniquely annoying ways. Meta has hundreds of people on its growth team perpetuating a culture that manipulates and tortures users to make company metrics improve, like limiting the amount of information in a notification to make a user browse deeper into the site, and deliberately promoting low-quality clickbait that promises “one amazing trick” because people click those links, even if they suck.
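To show how mundane this is in practice, here’s a minimal sketch (in Python, with every variant name and probability invented — none of this is drawn from any real system) of how a notification “growth” experiment like that gets decided. Note what the code optimizes for, and what it never measures:

```python
# A hypothetical sketch: the variant names, probabilities and "deep session"
# metric are all invented for illustration, not taken from any real company.
import random

VARIANTS = {
    # "full_info" tells you what happened inside the notification itself;
    # "teaser" withholds it, so you have to open the app and dig around,
    # which registers as a deeper, "better" session.
    "full_info": 0.20,  # assumed probability of a deep session
    "teaser": 0.35,     # assumed higher, because digging counts as engagement
}

def run_experiment(n_users: int) -> dict:
    """Randomly assign users to a variant and tally deep sessions."""
    tallies = {v: {"users": 0, "deep_sessions": 0} for v in VARIANTS}
    for _ in range(n_users):
        variant = random.choice(list(VARIANTS))
        tallies[variant]["users"] += 1
        if random.random() < VARIANTS[variant]:
            tallies[variant]["deep_sessions"] += 1
    return tallies

def pick_winner(tallies: dict) -> str:
    # The entire shipping decision: maximize the metric. There is no term
    # anywhere in this function for whether users found the change annoying.
    return max(tallies, key=lambda v: tallies[v]["deep_sessions"] / max(tallies[v]["users"], 1))

print(pick_winner(run_experiment(100_000)))  # "teaser" wins, so "teaser" ships
```

The “teaser” variant wins and ships, not because anyone set out to annoy you, but because annoyance isn’t a column in the results table.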
It’s everywhere.
After a coup by head of ads Prabhakar Raghavan in 2019, Google intentionally made search results worse as a means of increasing the number of times people would search for something on the site. Ever wonder why your workplace uses SharePoint and other horrible Microsoft apps? That’s because Microsoft’s massive software monopoly meant that it was cheaper for your boss to buy all of it in one place, and thus Microsoft’s incentive is to make each app just good enough to convince your boss to sign up for the whole bundle, rather than good enough to make your life easier or better.
Why does every website feel different, and why do some crash randomly or make your phone burn your hand? It’s because every publisher has pumped their sites full of as much ad tracking software as possible as a means of monetizing every single user in as many ways as possible, helping ads follow you across the entire internet. And why does everybody need your email? Because your inbox is one of the few places that advertisers haven’t found a consistent way to penetrate.
It’s digital tinnitus. It’s the pop-up from a shopping app that you downloaded to make one purchase, or the deceptive notification from Instagram that you have “new views” that doesn’t actually lead anywhere. It is the autoplaying video advertisement on your film review website. It is the repeated request for you to log back into a newspaper website that you logged into yesterday because everyone must pay and nothing must get through. It is the hundredth Black Friday sale you got from a company that you swear you unsubscribed from eight times, and perhaps even did, but there’s no real way to keep track. It’s the third time this year you’ve had to make a new password because another data breach happened and the company didn’t bother to encrypt it.
I’m not writing this to complain, but because I believe — as I hinted at a few weeks ago — that we are in the midst of the largest-scale ecological disaster of our time, because almost every single interaction with technology, which is required to live in modern society, has become actively adversarial to the user. These issues hit everything we do, all the time, a constant onslaught of interference, and I believe it’s so much bigger than just social media and algorithms — though they’re a big part of it, of course.
In plain terms, everybody is being fucked with constantly in tiny little ways by most apps and services, and I believe that billions of people being fucked with at once in all of these ways has profound psychological and social consequences that we’re not meaningfully discussing.
The average person’s experience with technology is one so aggressive and violative that I believe it leaves billions of people with a consistent low-grade trauma. We seem, as a society, capable of understanding that social media can hurt us, unsettle us, or make us feel crazed and angry, but I think it’s time to accept that the rest of the tech ecosystem undermines our wellbeing in an equally-insidious way. And most people don’t know it’s happening, because everybody has accepted deeply shitty conditions for the last ten years.
Now, some of you may scoff at this a little — after all, you’re smart, you know about disinformation, you know about the tricks of these companies, and thus most people do, right?
Wrong! Most people don’t think about the things they’re doing at all and are just trying to get by in a society that increasingly demands we make more money to buy the same things, with our lives both interfered with and judged by social networks with aggressive algorithms that feed us more things based on what we’ll engage with, which might mean said things piss us off or actively radicalize us. They’re nagged by constant notifications — an average of 46 a day — some useful, some advertisements, like Apple telling us there’s a nailbiter college football game regardless of whether we’ve ever interacted with anything football-related, or a Slack message saying you haven’t joined a group you were invited to yet, or Etsy letting you know that you can buy things for an upcoming holiday. It’s relentless, and the more time you invest in using a device, the more of these notifications you get, making you less likely to turn them off. After all, how well are you doing keeping your inbox clean? Oh, what’s that? You get 25 emails a day, many of them from a company owned by Williams-Sonoma?
Your work software veers between “shit” and “just okay,” and never really seems to get better, nor does any part seem to smoothly connect to another. Your organization juggles anywhere from five to fifteen different pieces of software — Slack or Microsoft Teams and/or Zoom for communication, Asana or Monday or Basecamp for project management, or Jira, or Trello, or any number of other different ways that your organization or team wants to plan things. When you connect with another organization, you find they’re using a different product, or perhaps they’re using the same one — say, Slack — and that one requires you to join their organization, which may or may not work. I’m not even talking about the innumerable tech infrastructure products that more-technical workers have to deal with, or how much worse this gets if you’ve got a slower device. Every organization does things differently, and some don’t put a lot of thought into how they do so.
Yet beyond the endless digital nags there’s the need to be constantly aware of scams and outright misinformation, both on social networks that don’t really care to stop it and on the chum box advertisements below major news publications — you know, the little weird stories at the bottom promising miracle cures.
It’s easy to assume that it’s natural that you’d know there are entities out there trying to scam you or trick you, and I’d argue most people don’t. To most, a video from Rumble.com may as well be the same thing as a video from CNN.com, and most people would believe that every advertisement on every website is somehow verified for its accuracy, versus “sold at scale all the time to whoever will pay the money.”
And when I say that, I’m really talking about CNN.com, a website that had 594 million visitors in October 2024. At the bottom is the “Paid Partner Content” section, including things from publications like “FinanceBuzz” that tell you about the “9 Dumbest Things Smart People Waste Money On.” FinanceBuzz immediately asks you to turn your notifications on — you know, so it can ping you when it has new articles — and each bullet point leads to one of its affiliate marketing arms trying to sell you car insurance and credit cards. You’re offered the chance to share your email address to receive “vetted side hustles and proven ways to earn extra cash sent to your inbox,” which I assume includes things like advertorial content telling you that yes, you could make money playing online bingo (such as “Bingo Cash”) against other people.
Papaya Games, developer of Bingo Cash, was sued in March by rival gaming company Skillz for using bots in allegedly skill-based games that are supposed to be between humans, and the Michigan Gaming Control Board issued a cease-and-desist order against the company for violating multiple gaming laws, including the Lawful Internet Gaming Act. To quote the lawsuit, “Papaya’s games are not skill-based and users are often not playing against live, actual opponents but against Papaya’s own bots that direct and rig the game so that Papaya itself wins its users’ money while leading them to believe that they lost to a live human opponent.”
This is a website and its associated content that has prime placement on the front page of a major news outlet. As a normal person, it’s reasonable to believe that CNN would not willfully allow advertisements for websites that are, in and of themselves, further advertisements masquerading as trustworthy third party entities. It’s reasonable that you would believe that FinanceBuzz was a reputable website, and that its intentions were to share great deals and secret tricks with you. If you think you’re not this stupid, you are privileged and need to have more solidarity with your fellow human beings.
Why wouldn’t you think that the content on one of the most notable media outlets in the entire world is trustworthy? Why wouldn’t you trust that CNN, a respected media outlet, had vetted its advertisers and made sure their content wasn’t actively tricking its users? I think it’s fair to say that CNN has likely led to thousands of people being duped by questionable affiliate marketing companies, and likely profited from doing so.
Why wouldn’t people feel insane? Why wouldn’t the internet, where we’re mostly forced to live, drive most people crazy? How are we not discussing the fact that so much of the internet is riddled with poison? How are we not treating the current state of the tech industry like an industrial chemical accident? Is it because there are too many people at fault? Is it because fixing it would require us to truly interrogate the fabric of a capitalist death cult?
Nothing I am writing is polemic or pessimistic or describing anything other than the shit that’s happening in front of my eyes and your eyes and the eyes of billions of people. Dismissing these things as “just how it is” allows powerful people with no real plan and no real goals other than growth to thrive, and sneering at people “dumb enough” to get tricked by an internet and tech industry built specifically to trick them suggests you have no idea how you are being scammed, because you’re smug and arrogant.
I need you to stop trying to explain away how fucking offensive using the internet and technology has become. I need you to stop making excuses for the powerful and consider the sheer scale of the societal ratfucking happening on almost every single device in the world, and consider the ramifications of the difficulty that a human being using the internet has trying to live an honest, dignified and reasonable life.
To exist in modern society requires you to use these devices, or otherwise sacrifice large parts of how you’d interact with other people. You need a laptop or a smartphone for work, for school, for anything really. You need messaging apps otherwise you don’t exist. As a result, there is a societal monopoly of sorts — or perhaps it’s more of a cartel, in the sense that, for the most part, every tech company has accepted these extremely aggressive, anti-user positions, all in pursuit of growth.
The stakes are so much higher than anyone — especially the tech media — is willing to discuss. The extent of the damage, the pain, the frustration, the terror is so constant that we are all on some level numb to its effects, because discussing it requires accepting that the vast majority of people live poisoned digital lives.
We all live in the ruins created by the Rot Economy, where the only thing that matters is growth. Growth of revenue, growth of the business, growth of metrics related to the business, growth of engagement, of clicks, of time on app, of purchases of micro-transactions, of impressions of ads, of things done that make executives feel happy.
I’ll give you a more direct example.
On November 21, I purchased the bestselling laptop from Amazon — a $238 Acer Aspire 1 with a four-year-old Celeron N4500 Processor, 4GB of DDR4 RAM, and 128GB of slow eMMC storage (which is, and I’m simplifying here, though not by much, basically an SD card soldered to the computer’s motherboard). Affordable and under-powered, I’d consider this a fairly representative sample of how millions of people interact with the internet.
I believe it’s also a powerful illustration of the damage caused by the Rot Economy, and the abusive, exploitative way in which the tech industry treats people at scale.
It took 1 minute and 50 seconds from hitting the power button for the laptop to get to the setup screen. It took another minute and a half to connect and begin downloading updates, which took several more minutes. After that, I was faced with a licensing agreement in which I agreed to binding arbitration to use Windows, then a 24-second pause, and then a screen of different “ways I could unlock my Microsoft experience,” with animations that shuddered and jerked violently.
Aside: These cheap laptops use a version of Windows called “Windows Home in S Mode,” which is a pared-down version of Windows where you can only use apps installed from the Microsoft Store. Microsoft claims that it’s a “streamlined version” of Windows, but the reality is that it’s a cheap version built so Microsoft can compete with Google’s Chromebook laptops.
Now, why do I know that? Because you’ll never guess who’s a big fan of Windows S? That’s right, Prabhakar Raghavan, The Man Who Killed Google Search, who said that Microsoft’s Windows S “validated” Google’s approach to cheap laptops back when he was Vice President of Google’s G Suite (and three years before he became Head of Search).
To be clear, Windows Home in S Mode is one of the worst operating systems of all time. It is ugly, slow, and actively painful to use, and (unless you deactivate S Mode) locks you into Microsoft’s ecosystem. This man went on to ruin Google Search by the way. How does this man keep turning up? Is it because I say his name so much?
Throughout, the laptop’s cheap trackpad would miss every few clicks. At this point, I was forced to create a Microsoft account and to hand over my cellphone number — or another email address — to receive a code, or I wouldn’t be able to use the laptop. Each menu screen takes 3-5 seconds to load, and I’m asked to “customize my experience” with things like “personalized ads, tips and recommendations,” with every option turned on by default, then to sign up for another account, this time with Acer. At one point I am simply shown an ad for Microsoft’s OneDrive cloud storage product with a QR code to download it on my phone, and then I’m told that Windows has to download a few updates, which I assume are different to the last time it did that.
Aside: With a normal version of Windows, it’s possible — although not easy — to set up and use the computer without a Microsoft account. On S Mode, however, you’re restricted to downloading apps through the Microsoft Store (which, as you’ve guessed, requires a Microsoft account). In essence, it’s virtually impossible to use this machine without handing over your personal data to Microsoft.
It has taken, at this point, around 20 minutes to get to this screen. It takes another 33 minutes for the updates to finish, and then another minute and 57 seconds to log in, at which point it pops up with a screen telling me to “set up my browser and discover the best of Windows,” including “finding the apps I love from the Microsoft Store” and the option to “create an AI-generated theme for your browser.” The laptop constantly struggles as I scroll through pages, the screen juddering, apps taking several seconds to load.
When I opened the start bar — ostensibly a place where you have apps you’d use — I saw some things that felt familiar, like Outlook, an email client that is not actually installed and requires you to download it, and an option for travel website Booking.com, along with a link to LinkedIn. One app, ClipChamp, was installed but immediately needed to be updated, which did not work when I hit “update,” forcing me to go to find the updates page, which showed me at least 40 different apps called things like “SweetLabs Inc.” I have no idea what any of this stuff is.
I type “sweetlabs” into the search bar, and it jankily interrupts into a menu that takes up a third of the screen, with half of that dedicated to “Mark Twain’s birthday,” two Mark Twain-related links, a “quiz of the day,” and four different games available for download.
The computer pauses slightly every time I type a letter. Every animation shudders. Even moving windows around feels painful. It is clunky, slow, it feels cheap, and the operating system — previously something I’d considered to be “the thing that operates the computer system” — is actively rotten, strewn with ads, sponsored content, suggested apps, and intrusive design choices that make the system slower and actively upset the user.
Another note: Windows in S Mode requires you to use Edge as your default browser and Bing as your default search engine. While you can download alternatives — like Firefox and Brave, though not Google Chrome, which was removed from the Microsoft Store in 2017 for unspecified terms of service violations — it’s clear that Microsoft wants you to spend as much time in its ecosystem as possible, where it can monetize you.
The reason I’m explaining this in such agonizing detail is that this experience is more indicative of the average person’s experience of using a computer than anybody realizes. Though it’s tough to gauge how many of these things sold to make it a bestseller on Amazon, laptops at this price point, with this specific version of Windows (Windows 11 Home in “S Mode,” as discussed above), happen to dominate Amazon’s bestsellers along with Apple’s significantly more expensive MacBook Air and Pro series. It is reasonable to believe that a large share of the laptops sold in America match this price point and spec — there are two similar ones on Best Buy’s bestsellers list, and as of writing this sentence, multiple laptops of this spec are on the front of Target’s laptop page.
And if I haven’t made it completely clear, this means that millions of people are likely using a laptop that’s burdensomely slow, and full of targeted advertisements and content baked into the operating system in a way that’s either impossible or difficult to remove. For millions of people — and it really could be tens of millions considering the ubiquity of these laptops in eCommerce stores alone — the experience of using the computer is both actively exploitative and incredibly slow. Even loading up MSN.com — the very first page you see when you open a web browser — immediately hits you with ads for eBay, QVC and QuickBooks, with icons that sometimes simply don’t load.
Every part of the operating system seems to be hounding you to use some sort of Microsoft product, or some sort of product that Microsoft or the laptop manufacturer has been paid to make you see. While one can hope that the people buying these laptops know what they’re getting into, the reality is that they’re being dumped into a kind of TJ Maxx version of computing, except TJ Maxx clothes don’t sometimes scream at you to download TJ Maxx Plus or stop functioning because you used them too fast.
Again, this is how most people are experiencing modern computing, and it isn’t because this is big business — it’s because laptop sales have been falling for over a decade, and manufacturers (and Microsoft) need as many ways to grow revenue as possible, even if the choices they make are actively harmful to consumers.
Aside: I swear to god, if your answer here is “get a MacBook Air, they’re only $600,” I beg you — I plead with you — to speak with people outside of your income bracket at a time when an entire election was decided in part because everything’s more expensive.
At that point, said person using this laptop can now log onto the internet, and begin using websites like Facebook, Instagram, and YouTube, all of which have algorithms we don’t really understand, but that have been regularly proven to be actively — and deliberately — manipulative and harmful.
Now, I know reading about “algorithms” and “manipulation” makes some people’s eyes glaze over, but I want you to take a simpler approach for a second. I hypothesize that most people do not really think about how they interact with stuff — they load up YouTube, they type something in, they watch it, and maybe they click whatever is recommended next. They may know there’s an algorithm of sorts, but they’re not really sitting there thinking “okay so they want me to see this,” or they may even be grateful that the algorithm gave them something they like, and reinforce the algorithm with their own biases, some of which they might have gotten from the algorithm.
To be clear, none of this is mind control or hypnosis or voodoo. These algorithms and their associated entities are not sitting there with some vast agenda to execute — the algorithms are built to keep you on the website, even if it upsets you, pisses you off, or misinforms you. Their incentive isn’t really to make you make any one choice, other than one that involves you staying on their platform or interacting with an advertisement for somebody else’s, and the heavy flow of political — and particularly conservative — content is a result of platforms knowing that’s what keeps people doing stuff on the platform. The algorithms are constantly adapting in real time to try and find something that you might spend time on, with little regard for whether that content is good, let alone good for you.
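To make that concrete, here’s a toy sketch (Python, with invented categories and numbers — a caricature, not anybody’s actual system) of the shape of that feedback loop: an epsilon-greedy recommender whose only reward signal is watch time. There’s no agenda anywhere in it, and that’s the point.

```python
# A toy epsilon-greedy recommender. All categories and numbers are invented;
# real systems are vastly more complex, but the shape of the loop is the point.
import random

CATEGORIES = ["cooking", "sports", "outrage", "true_crime"]

class EngagementLoop:
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon  # how often to gamble on something new
        self.plays = {c: 0 for c in CATEGORIES}
        self.watch_time = {c: 0.0 for c in CATEGORIES}

    def recommend(self) -> str:
        # Mostly exploit whatever has held attention longest on average;
        # occasionally explore, in case something else hooks the user harder.
        if random.random() < self.epsilon or not any(self.plays.values()):
            return random.choice(CATEGORIES)
        return max(CATEGORIES, key=self._avg_watch_time)

    def record(self, category: str, seconds_watched: float) -> None:
        # The entire feedback loop: whatever you watched longest, you get
        # more of. There is no input for "good," let alone "good for you."
        self.plays[category] += 1
        self.watch_time[category] += seconds_watched

    def _avg_watch_time(self, category: str) -> float:
        return self.watch_time[category] / max(self.plays[category], 1)
```

If “outrage” happens to hold someone’s attention a few seconds longer than “cooking,” the loop serves more outrage, and the user’s apparent preferences start to look like whatever the loop found stickiest.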
Putting aside any moral responsibility, the experiences on these apps are discordant. Facebook, as I’ve written about in detail, is a complete nightmare — thousands of people being actively conned in supposed “help groups,” millions of people being scammed every day (with one man killing himself as a result of organized crime’s presence on Facebook), and bizarre AI slop dominating feeds, with Mark Zuckerberg promising that there’s more to come. That’s without mentioning a product experience that continually interrupts you with sponsored and suggested content, as these platforms always do, all algorithmically curated to keep you scrolling, while also hiding content from the people you care about, because Facebook thinks it won’t keep you on the platform for as long.
The picture I am trying to paint is one of terror and abuse. The average person’s experience of using a computer starts with aggressive interference delivered in a shoddy, sludge-like frame, and as the wider internet opens up to said user, already battered by a horrible user experience, they’re immediately thrown into heavily-algorithmic feeds each built to con them, feeding whatever holds their attention and chucking ads in as best they can. As they browse the web, websites like NBCnews.com feature stories from companies like “WorldTrending.com” with advertisements for bizarre toys written in the style of a blog, so intentional in their deceit that the page in question has a huge disclaimer at the bottom saying it’s an ad.
As their clunky, shuddering laptop hitches between every scroll, they go to ESPN.com, and the laptop slows to a crawl. Everything slows to a crawl. “God damnit, why is everything so fucking slow? I’ll just stay on Facebook or Instagram or YouTube. At least that place doesn’t crash half the time or trick me.”
Using the computer in the modern age is so inherently hostile that it pushes us towards corporate authoritarians like Apple, Microsoft, Google and Meta — and now that every single website is so desperate for our email and to show us as many ads as possible, it’s either harmful or difficult for the average person to exist online.
The biggest trick that these platforms played wasn’t any one algorithm, but the convenience of a “clean” digital experience — or at least as clean as they feel it needs to be. In an internet so horribly poisoned by growth capitalism, these platforms offer a degree of peace and consistency, even if they’re engineered to manipulate you, even if the experience gets worse seemingly every year, because at least it isn’t as bad as the rest of the internet. We use Gmail because, well, at least it’s not Outlook. We use YouTube to view videos from other websites because other websites are far more prone to crash, have quality issues, or simply don’t work on mobile. We use Google Search, despite the fact that it barely works anymore, to find things because actually browsing the web fucking sucks.
When every single website needs to make as much money as possible because their private equity or hedge fund or massive corporate owners need to make more money every year without fail, the incentives of building the internet veer away from providing a service and toward putting you, the reader, in silent service of a corporation.
ESPN’s app is a fucking mess — autoplaying videos, discordantly-placed scores, menus that appear to have been designed by M.C. Escher — and nothing changes because Disney needs you to use the app and find what you need, versus provide information in anything approaching a sensible way. It needs your effort. The paid subscription model for dating apps is so aggressive that there’s a lawsuit filed against Match Group — which owns Tinder and Hinge, and thus a great deal of the market — for “gamifying the platforms to transform users into gamblers locked in a search for psychological rewards,” likely as a means of recouping revenue after user numbers have begun to fall. And if you’re curious why these companies aren’t just making their products less horrible to use, I’m afraid that would reduce revenue, which is what they do care about.
If you’re wondering who else is okay with that, it’s Apple. Both Bumble and Tinder are regularly featured on the “Must-Have Apps” section of the App Store, most of which require a monthly fee to work. Each of these apps is run by a company with a “growth” team, and that team exists, on some level, to manipulate you — to move icons around so that you’ll interact with the things they want you to, see ads, or buy things. This is why HBO Max rebranded to Max and created an entirely new app experience — because the growth people said “if we do this in this way the people using it will do what we want.”
Now, what’s important to accept here is that absolutely none of this is done with any real consideration of the wider effects on the customer, as long as the customer continues doing the things that the company needs them to. We, as people, have been trained to accept a kind of digital transience — an inherent knowledge that things will change at random, that the changes may suck, and that we will just have to accept them because that’s how the computer works, and these companies work hard to suppress competition as a means of making sure they can do what they want.
In other words, internet users are perpetually thrown into a tornado of different corporate incentives, and the less economically stable or technologically savvy you are, the more likely you are to be at the mercy of them. Every experience is different, wants something, wants you to do something, and the less people know about why the more likely they are to — with good intentions — follow the paths laid out in front of them with little regard for what might be happening, in the same way people happily watch the same TV shows or listen to the same radio stations.
Even if you’re technologically savvy, you’re still dealing with these problems — fresh installs of Windows on new laptops, avoiding certain websites because you’ve learned what the dodgy ones look like, not interacting with random people in your DMs because you know what a spam bot looks like, and so on. It’s not that you’re immune. It’s that you’re instinctually ducking and weaving around an internet and digital ecosystem that continually tries to interrupt you, batting away pop-ups and silencing notifications knowing that they want something from you — and I need you to realize that most people are not like you and are actively victimized by the tech ecosystem.
As I said a few weeks ago, I believe that most people are continually harmed by their daily lives, as most people’s daily lives are on the computer or their smartphones, and those lives have been stripped of dignity. When they look to the media for clarity or validation, the best they’ll get is a degree of “hmm, maybe algorithm bad?” rather than a wholehearted acceptance that the state of our digital lives is obscene.
Yet it’s not just the algorithms — it’s the entirety of the digital ecosystem, from websites to apps to the devices we use every day. The fact that so many people likely use a laptop that is equal parts unfit for the task and stuffed full of growth-hacked poison is utterly disgraceful, because it means that the only way to escape said poison is to simply have more money. Those who can’t afford $300 (at least) phones or $600 laptops are left to use offensively bad technology, and we have, at a societal scale, simply accepted that this is how things go.
Yet even on expensive devices you’re still the victim of algorithmic and growth-hacked manipulation, even if you’re aware of it. Knowing allows you to fight back, even if it’s just to stop yourself being overwhelmed by the mess, and means you can read things that tell you what new horror to avoid next — but you are still the target, you are still receiving hundreds of marketing emails a week, you are still receiving spam calls, you are still unable to use Facebook or Instagram without being bombarded by ads and algorithmically-charged content.
I’ve written a lot about how the growth-at-all-costs mindset of The Rot Economy is what directly leads big tech companies to make their products worse, but what I’ve never really quantified is the scale of its damage.
Everything I’ve discussed around the chaos and pain of the web is a result of corporations and private equity firms buying media properties and immediately trying to make them grow, each in wildly different ways, all clamouring to be the next New York Times or Variety or other legacy media brand, despite those brands already existing, and the ideas for competing with them usually being built on unsustainably large staffs and expensive consultants. Almost every single store you visit on the internet has a massive data layer in the background that feeds it information about what’s popular and where users are spending the most time on the site, and it will in turn change things about its design to subtly encourage you to buy more stuff, all so that more money comes out, no matter the cost. Even if this data isn’t personalized, it’s still powerful, and it turns so many experiences into subtle manipulations.
Every single weird thing that you’ve experienced with an app or service online is the dread hand of the Rot Economy — the gravitational pull of growth, the demands upon you, the user, to do something. And when everybody is trying to chase growth, nobody is thinking about stability, and because everybody is trying to grow, everybody sort of copies everybody else’s ideas, which is why we see microtransactions and invasive ads and annoying tricks that all kind of feel the same in everything, though they’re all subtly different and customized just for that one app. It’s exhausting.
For a while now, people have compared the Rot Economy to Cory Doctorow’s (excellent) enshittification theory, and I think it’s a great time to compare (and separate) the two. To quote Cory in The Financial Times, Enshittification is “[his] theory explaining how the internet was colonised by platforms, why all those platforms are degrading so quickly and thoroughly, why it matters and what we can do about it.” He describes the three stages of decline:
“First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.”
I agree with Cory on some levels, but I believe he gives far more credit to the platforms in question than they deserve, and sees far more intention or strategy than really exists. I fundamentally disagree about the business customers even being some elevated class in the equation — as we’ve seen with the Google Ads trial, Google didn’t really give a shit about its business customers to begin with, has always sought a monopoly, and made things worse for whoever it needed to as a means of increasing growth.
Perhaps that’s semantics. However, Cory’s theory lacks a real perpetrator beyond corporations that naturally say “alright we’re gonna do Enshittification now, watch this.” Where The Rot Economy separates is that growth is, in and of itself, the force that drives companies to enshittify. While enshittification neatly fits across companies like Spotify and Meta (and their ad-focused business models), it doesn’t really make sense when it comes to things where there isn’t a clear split between business customers and consumers, like Microsoft or Salesforce — because enshittification is ultimately one part of the larger Rot Economy, where everything must grow forever.
And I believe the phenomenon that captures both is a direct result of the work of men like Jack Welch and Milton Friedman. The Rot Economy is selfish and potently neoliberal — corporations are bowed down to like gods, and the powerful only seek more, at all times, at all costs, even if said cost is “the company might eventually die because we’ve burned out any value it actually has” or “people are harmed every time they pick up their phone.” The Rot Economy is neoliberalism’s true innovation: a kind of economic cancer with few reasons to exist beyond “more” and few justifications beyond “if we don’t let it keep growing then everybody’s pensions blow up.”
To be clear, Cory is for the most part right. Enshittification successfully encapsulates how the modern web was destroyed in a way that nobody else’s framework really has. I think it applies to a wide range of tech companies and effects.
I, however, believe the wider problem is bigger, and the costs are far greater. It isn’t that “everything is enshittified.” It’s that everybody’s pursuit of growth has changed the incentive behind how we generate value in the world, and software enables a specific kind of growth-lust by creating virtual nation states with their own digital despots. While laws may stop Meta from tearing up people’s houses surrounding its offices on 1 Hacker Way, it can happily reroute traffic and engagement on Facebook and Instagram to make things an iota more profitable.
The Rot Economy isn’t simply growth-at-all-costs thinking — it’s a kind of secular religion, something to believe in, that everything and anything can be more, should be more, must be more, that we are defined only by our pursuit of more growth, and that something that isn’t growing isn’t alive, and is in turn inferior.
No, perhaps not a religion. Religions are, for the most part, concerned with the hereafter, and contain an ethical dimension that says your present actions will affect your future — or your eternity. The Rot Economy is, by every metric, defined by its short-termism. I’m not just talking about undermining the long-term success of a business to juice immediate revenue numbers. I’m thinking in broad ecosystem terms.
The onslaught of AI-generated content — facilitated, in no small part, by Google and Microsoft — has polluted our information ecosystems. AI-generated images and machine-generated text are everywhere, and they’re impossible to avoid, as there is no reliable way to determine the provenance of a piece of content — with one exception, namely the considered scrutiny of a human. This has irreparably damaged the internet in ways I believe few fully understand. This stuff — websites that state falsehoods because an AI hallucinated, or fake pictures of mushrooms and dogs that now dominate Google Images — is not going away. Like microplastics or PFAS chemicals, it’s with us forever, constantly chipping away at our understanding of reality.
These companies unleashed generative AI on the world — or, in the case of Microsoft, facilitated its ascendancy — without any consideration of what that would mean for the Internet as an ecosystem. Their concerns were purely short-term. Fiscal. The result? Over-leverage in an industry with no real path to profitability, burning billions of dollars and degrading the environment, both digital and otherwise, along with it.
I’m not saying that this is how everybody thinks, but I am convinced that everybody is burdened by The Rot Economy, and that digital ecosystems allow the poison of growth to find new and more destructive ways to reduce a human being to a series of numbers that can be made to grow or contract in the pursuit of capital.
Almost every corner of our lives has been turned into some sort of number, and increasing that number is important to us — bank account balances, sure, but also engagement numbers, followers, number of emails sent and received, open rates on newsletters, how many times something we’ve seen has been viewed, all numbers set by other people that we live our lives by while barely understanding what they mean. Human beings thrive on ways to define themselves, but metrics often rob us of our individuality. Products that boil us down to metrics are likely to fail to account for the true depth of anything they're capturing.
Sidenote: Here’s a good example: in an internal document I reviewed from 2017, a Facebook engineer revealed that engagement on the platform had started to dive, but because the company had focused so much energy on time spent on the app as a metric, nobody had noticed (and yes, that’s a quote). Years of changes — the consequences of which were felt by billions of people — were made not based on using the product or talking to users, but a series of numbers that nobody had bothered to check mattered.
The change in incentives toward driving more growth actively pushes out those with long-term thinking. It encourages hiring people who see growth as the driver of a company's success, and in turn directs investment, research and development toward mechanisms for growth, which may sometimes produce things that help you, though that isn't necessarily why they exist. Organisational culture and hiring stop prioritising people that fix customer problems, because that is neither the priority nor, sadly, how one makes a business continue to grow.
We are all pushed toward growth — personal growth, professional growth, growth in our network and our societal status — and the terms of this growth are often set by platforms and media outlets that are, in turn, pursuing growth. And as I've discussed, the terms of our growth are framed almost entirely through a digital ecosystem of warring intents and different ways of pursuing growth — some ethical, many not.
Societal and cultural pressure is nothing new, but the ways we experience it are now elaborate and chaotic. Our relationships — professional, personal, and romantic — are processed through the funhouse mirror of the platforms, changing in ways both subtle and overt based on the signals we receive from the people we care about, each one twisted and processed through the lens of product managers and growth hackers. Changes to these platforms — even subtle ones — actively change the lives of billions of people, and yet we talk about being online as if it were some hobbyist pursuit rather than something many people do more than they see real people in the real world.
I believe that we exist in a continual tension with the Rot Economy and the growth-at-all-costs mindset. I believe that the friction we feel on platforms and apps between what we want to do and what the app wants us to do is one of the most underdiscussed and significant cultural phenomena, where we, despite being customers, are continually berated and conned and swindled.
I believe billions of people are in active combat with their devices every day, swiping away notifications, dodging around intrusive apps, agreeing to privacy policies that they don’t understand, desperately trying to find where an option they used to use has been moved to because a product manager has decided that it needed to be somewhere else. I realize it’s tough to conceptualize because it’s so ubiquitous, but how much do you fight with your computer or smartphone every day? How many times does something break? How many times have you downloaded an app and found it didn’t really do the thing you wanted it to? How many times have you wanted to do something simple and found that it’s actually really annoying?
How much of your life is dodging digital debris, avoiding scams, ads, apps that demand permissions, and endless menu options that bury the simple things that you’re actually trying to do?
You are the victim of a con. You have spent years of your life explaining to yourself and others that “this is just how things are,” accepting conditions that are inherently exploitative and abusive. You are more than likely not deficient, stupid, or “behind the times,” and even if you are, there shouldn’t be multi-billion dollar enterprises that monetize your ignorance.
And it’s time to start holding those responsible accountable.
I’m fairly regularly asked why this all matters to me so much, so as I wrap up the year, I’m going to try and answer that question, and explain why it is I do what I do.
I spent a lot of time alone as a kid. I didn't have friends. I was insular, scared of the world, I felt ostracised and unnoticed, like I was out of place in humanity. The only place I found any kind of community — any kind of real identity — was being online. My life was (and is) defined by technology.
Had social networking not come along, I am not confident I’d have made many (if any) lasting friendships. For the first 25 or so years of my life, I struggled to make friends in the real world for a number of reasons, but made so many more online. I kept and nurtured friendships with people thousands of miles away, my physical shyness less of an issue when I could avoid the troublesome “hey I’m Ed” part that tripped me up so much.
Without the internet, I’d likely be a resentful hermit, disconnected from humanity, layers of scar tissue over whatever neurodivergence or unfortunate habits I'd gained from a childhood mostly spent alone.
Don't feel sorry for me. Technology has allowed me to thrive. I have a business, an upcoming book, this newsletter, and my podcast. I have so many wonderful, beautiful friends who I love that have come exclusively through technology of some sort, likely a social network or the result of a digital connection of some kind.
I am immensely grateful for everything I have, and grateful that technology allowed me to live a full and happy life. I imagine many of you feel the same way. Technology has found so many ways to make our lives better, perhaps more in some cases than others. I will never lie and say I don't love it.
However, the process of writing this newsletter and recording my podcast has made me intimately aware of the gratuitous, avaricious and intentional harm that the tech industry has caused to its customers, the horrifying and selfish decisions they’ve made, and the ruinous consequences that followed.
The things I have watched happen this year alone — which have been at times an enumeration of over a decade of rot — have turned my stomach, as has the outright cowardice of some people that claim to inform the public but choose instead to reinforce the structures of the powerful.
I am a user. I am a guy with a podcast and a newsletter, but behind the mic and the keyboard I am a person that uses the same services you do, and I see the shit done to us, and I feel poison in my veins. I am not holding back, and neither should you. What is being done to us isn't just unfair — it's larcenous, cruel, exploitative and morally wrong.
Some may try to dismiss what I'm saying as "just social media" or "just how apps work" and if that's what you truly think, you're either a beaten dog or a willing (or unwilling) operative for the people running the con.
I will never forgive these people for what they’ve done to the computer, and the more I learn about both their intentions and actions the more certain I am that they are unrepentant and that their greed will never be sated. I have watched them take the things that made me human — social networking, digital communities, apps, and the other connecting fabric of our digital lives — and turn them into devices of torture and profitable mechanisms of abuse, and I find it disgusting how many reporters seem to believe it's their responsibility to thank them and explain to their readers why it's good this is happening.
These are the people in charge. These are the people running the tech industry. These are the people who make decisions that affect billions of people every minute of every day, and their decision-making is so flagrantly selfish and abusive that I am regularly astonished by how little criticism they receive.
These men lace our digital lives with asbestos and get told they’re geniuses for doing so because money comes out.
I don’t know — or care — whether these men know who I am or read my work, because I only care that you do.
I don't give a shit if Sam Altman or Mark Zuckerberg knows my name. I don't care about any of their riches or their supposed achievements, I care that when given so many resources and opportunities to change the world they chose to make it worse. These men are tantamount to war criminals, except in 30 years Mark Zuckerberg may still be seen as a success — though I will spend the rest of my life telling you the damage he's caused.
I care about you. The user. The person reading this. The person that may have felt stupid, or deficient, or ignorant, all because the services you pay for or that monetize you have been intentionally rigged against you.
You aren't the failure. The services, the devices, and the executives are.
If you cannot see the significance of the problems I discuss every week, the sheer scale of the rot, the sheer damage caused by unregulated and unrepentant managerial parasites, you are living in a fantasy world and I both envy and worry for you. You're the frog in the pot, and trust me, the stove is on.
2025 will be a year of chaos, fear and a deficit of hope, but I will spend every breath I have telling you what I believe and telling you that I care, and you are not alone.
For years, I’ve watched the destruction of the services and the mechanisms that were responsible for allowing me to have a normal life, to thrive, to be able to speak with a voice that was truly mine. I’ve watched them burn or, worse, be turned into abominable growth vehicles for men disconnected from society and humanity. I owe my life to an internet I've watched turn into multiple abuse factories worth multiple trillions of dollars, while the people responsible get glad-handed and applauded.
I will scream at them until my dying fucking breath. I have had a blessed life, and I am lucky that I wasn't born even a year earlier or later, but the way I have grown up and seen things change has allowed me to fully comprehend how much damage is being done today, and how much worse is to come if we don't hold these people accountable. The least they deserve is a spoken or written record of their sins, and the least you deserve is to be reminded that you are the victim.
I don't think you realise how powerful it is to be armed with knowledge — the clarity of what's being done to you and why, and the names of the people responsible. This is an invisible war — and a series of invisible war crimes — perpetrated against billions of people in a trillion different ways every minute of every day, and it's everywhere, a constant in our lives, which makes enumerating and conceptualising it difficult.
But you can help.
You talking about the truth behind generative AI, or the harms of Facebook, or the gratuitous destruction of Google Search will change things, because these people are unprepared for a public that knows both what they’ve done and their sickening, loathsome, selfish and greedy intentions.
I realize this isn’t particularly satisfying to some, because you want big ideas, big changes that can be made. I don’t know what to tell you. I don’t know how to fix things. To quote Howard Beale in the movie Network, I don’t want you to write your Congressman because I don’t know what to tell you to write.
But what I can tell you is that you can live your life with a greater understanding of the incentives of those who control the internet and have made your digital lives worse as a means of making themselves rich. I can tell you to live with more empathy, understanding and clarity into the reasons that people around you might be angry at their circumstances, as even those unrelated to technology are made worse by exploitative, abusive and pernicious digital manipulation.
This is a moment of solidarity, as we are all harmed by the Rot Economy. We are all victims. It takes true opulence to escape it, and I'm guessing you don't have it. I certainly don't. But talking about it — refusing to go quietly, refusing to slurp down the slop willingly or pleasantly — is enough. The conversations are getting louder. The anger is getting too hard to ignore. These companies will be forced to change through public pressure and the knowledge of their deeds.
Holding these people to a higher standard at scale is what brings about change. Be the wrench in the machine. Be the person that explains to a friend why Facebook sucks now, and who chose to make it suck. Be the person to explain who Prabhakar Raghavan is and what his role was in making Google Search worse. Be the person who tells people that Sam Altman burns $5 billion a year on unsustainable software that destroys the environment and is built upon the large-scale larceny of creative works because he's desperate for power.
Every time you do this, you destabilise them. They have succeeded in a decades-long marketing campaign where they get called geniuses for making the things that are necessary to function in society worse. You can change that.
I don't even care if you cite me. Just tell them. Tell everybody. Spread the word. Say what they've done and say their names, say their names again and again and again so that it becomes a contagion. They have twisted and broken and hyper-monetised everything — how you make friends, fall in love, how you bank, how you listen to music, how you find information. Never let their names be spoken without disgust. Be the sandpaper in their veins and the graffiti on their legacies.
The forces I criticize see no beauty in human beings. They do not see us as remarkable things that generate ideas both stupid and incredible, they do not see talent or creativity as something that is innately human, but a commodity to be condensed and monetized and replicated so that they ultimately own whatever value we have, which is the kind of thing you’d only believe was possible (or want) if you were fully removed from the human race.
You deserve better than they’ve given you. You deserve better than I’ve given you, which is why I’m going to work even harder in 2025. Thank you, as ever, for your time.
2024-12-04 04:55:22
Before we get going — please enjoy my speech from Web Summit, Why Are All Tech Products Now Shit? I didn’t write the title.
What if what we're seeing today isn't a glimpse of the future, but the new terms of the present? What if artificial intelligence isn't actually capable of doing much more than what we're seeing today, and what if there's no clear timeline when it'll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media ready and willing to take career-embellishers at their word?
Me, in March 2024.
I have been warning you for the best part of a year that generative AI has no killer apps and no way of justifying its valuations (February), that generative AI had already peaked (March), and I have pleaded with people to consider an eventuality where the jump from GPT-4 to GPT-5 was not significant, in part due to a lack of training data (April).
I shared concerns in July that the transformer-based-architecture underpinning generative AI was a dead end, and that there were few ways we'd progress past the products we'd already seen, in part due to both the limits of training data and the limits of models that use said training data. In August, I summarized the Pale Horses of the AI Apocalypse — events, many that have since come to pass, that would signify that the end is indeed nigh — and again added that GPT-5 would likely "not change the game enough to matter, let alone [add] a new architecture to build future (and more capable) models on."
Throughout these pieces I have repeatedly made the point that — separate to any lack of a core value proposition, training data drought, or unsustainable economics — generative AI is a dead end due to the limitations of probabilistic models that hallucinate, authoritatively stating things that aren't true. The hallucination problem is no closer to being solved — and, at least with the current technology, may never go away — and it makes generative AI a non-starter for a great many business tasks, where you need a high level of reliability.
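To make the hallucination point concrete, here's a toy sketch of next-token sampling. This is my illustration, not any lab's actual code, and the tokens and numbers are invented. The structural point is what matters: the model always emits a fluent answer, because there's no built-in way for it to say "I don't know."

```python
import math
import random

def softmax(logits):
    # Exponentiate and normalize the raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(tokens, logits):
    # Sample one token according to the model's probabilities. This ALWAYS
    # returns a token, even when the distribution is nearly flat, i.e. when
    # the model has no real idea. Uncertainty comes out looking like confidence.
    probs = softmax(logits)
    return random.choices(tokens, weights=probs, k=1)[0]

# Hypothetical scores for the blank in "The capital of Australia is ___".
# A nearly flat distribution still yields a fluent, authoritative answer.
tokens = ["Canberra", "Sydney", "Melbourne"]
logits = [1.1, 1.0, 0.9]  # barely more confident in the right answer
print(sample_next_token(tokens, logits))  # regularly prints the wrong city
```

There's no error state here, nothing to return instead of an answer; the machine's only move is to produce the next plausible-looking word, which is why it states falsehoods in exactly the same tone it uses for facts.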
I have — since March — expressed great dismay about the credulousness of the media in their acceptance of the "inevitable" ways in which generative AI will change society, despite a lack of any truly meaningful product that might justify an environmentally-destructive industry led by a company that burns more than $5 billion a year and big tech firms spending $200 billion on data centers for products that people don't want.
The reason I'm repeating myself is that it's important to note how obvious the problems with generative AI have been, and for how long.
And you're going to need context for everything I'm about to throw at you.
Sidebar: To explain exactly what happened here, it's worth going over how these models work and are trained. I’ll keep it simple as it's a reminder.
A transformer-based generative AI model such as GPT — the technology behind ChatGPT — generates answers using "inference," meaning it produces outputs based on patterns learned during "training," a process that involves feeding it masses of training data (mostly text and images scraped from the internet). Both of these processes require high-end GPUs (graphics processing units), and lots of them.
The theory was (is?) that the more training data and compute you throw at these models, the better they get. I have hypothesized for a while that they'd hit diminishing returns — both from running out of training data and from the limitations of transformer-based models.
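To see why "just add more" runs out of road, here's a sketch using the power-law shape that published scaling-law papers describe. The constants below are made up purely for illustration, but the shape is the point: each additional 10x of compute buys a smaller improvement, and the curve flattens toward a floor it never crosses.

```python
# Toy power-law loss curve in the spirit of published scaling laws.
# The constants are invented for illustration, not fitted to any lab's
# actual measurements.

def loss(compute, floor=1.7, scale=50.0, alpha=0.05):
    # Loss falls as a power of compute and approaches `floor`, never zero.
    return floor + scale * compute ** -alpha

previous = None
for exponent in range(20, 27):  # each step is 10x the compute of the last
    current = loss(10 ** exponent)
    if previous is not None:
        print(f"10^{exponent} FLOPs: loss {current:.3f} "
              f"(gain from the last 10x: {previous - current:.3f})")
    previous = current
```

Each 10x of compute — which in the real world means billions of dollars of GPUs, electricity and data — buys a strictly smaller gain than the last one. That's what "diminishing returns" means in practice.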
And there, as they say, is the rub.
A few weeks ago, Bloomberg reported that OpenAI, Google, and Anthropic are struggling to build more advanced AI, and that OpenAI's "Orion" model — otherwise known as GPT-5 — "did not hit the company's desired performance," and that "Orion is so far not considered to be as big a step up" as the jump from GPT-3.5 to GPT-4, its current model. You'll be shocked to hear the reason is that "it’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems," something I said would happen in March, while also adding that the "AGI bubble is bursting a little bit," something I said more forcefully in July.
I also want to stop and stare daggers at one particular point:
These issues challenge the gospel that has taken hold in Silicon Valley in recent years, particularly since OpenAI released ChatGPT two years ago. Much of the tech industry has bet on so-called scaling laws that say more computing power, data and larger models will inevitably pave the way for greater leaps forward in the power of AI.
The only people taking this as "gospel" have been members of the media unwilling to ask the tough questions and AI founders that don't know what the fuck they're talking about (or that intend to mislead). Generative AI's products have effectively been trapped in amber for over a year. There have been no meaningful, industry-defining products, because, as economist Daron Acemoglu said back in May, "more powerful" models do not unlock new features, or really change the experience, nor what you can build with transformer-based models. Or, put another way, a slightly better white elephant is still a white elephant.
Despite the billions of dollars burned and thousands of glossy headlines, it's difficult to point to any truly important generative-AI-powered product. Even Apple Intelligence, the only thing that Apple really had to add to the latest iPhone, is utterly dull, and largely based on on-device models.
Yes, there are people that use ChatGPT — 200 million of them a week, allegedly, losing the company money with every prompt — but there is little to suggest that there's widespread adoption of actual generative AI software. The Information reported in September that between 0.1% and 1% of Microsoft's 440 million business customers were paying for its AI-powered Copilot, and in late October, Microsoft claimed that "AI is on pace to be a $10 billion-a-year business," which sounds good until you consider a few things.
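You can run the arithmetic on that Copilot figure yourself. Here's a back-of-envelope sketch using The Information's reported adoption range and the widely reported $30-per-seat monthly list price for Microsoft 365 Copilot; this is my arithmetic, not Microsoft's disclosed revenue:

```python
# Back-of-envelope on Copilot adoption. The 0.1%-1% range and the 440 million
# figure come from The Information's report cited above; the $30-per-seat
# monthly price is Microsoft's widely reported list price. Everything else
# is plain arithmetic.

seats_total = 440_000_000
price_per_seat_per_month = 30

for adoption in (0.001, 0.01):  # 0.1% and 1%
    paying_seats = int(seats_total * adoption)
    annual_revenue = paying_seats * price_per_seat_per_month * 12
    print(f"{adoption:.1%} adoption: {paying_seats:,} seats, "
          f"~${annual_revenue / 1e9:.2f}B/year")
```

Even the generous end of that range works out to roughly $1.6 billion a year, which suggests the bulk of that "$10 billion" must be coming from somewhere other than businesses actually paying for Copilot.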
I must be clear that every single one of these investments and products has been hyped with the whisper that they would get exponentially better over time, and that eventually the $200 billion in capital expenditures would spit out remarkable productivity improvements and fascinating new products that consumers and enterprises would buy in droves. Instead, big tech has found itself peddling increasingly expensive iterations of near-identical Large Language Models — a direct result of them all having to use the same training data, which is now running out.
The other assumption — those so-called scaling laws — has been that by building bigger data centers with more GPUs (the expensive, power-hungry graphics processing units used to both run and train these models) and throwing as much training data at them as possible, the models would simply start sprouting new capabilities, despite there being little proof that they'd do so. Microsoft, Meta, Amazon, and Google have all burned billions on the assumption that doing so would create something — be it a human-level "artificial general intelligence" or, I dunno, a product that would justify the costs — and it's become painfully obvious that it isn't going to work.
As we speak, outlets are already desperate to try and prove that this isn't a problem. The Information, in a similar story to Bloomberg's, attempted to put lipstick on the pig of generative AI, framing the lack of meaningful progress with GPT-5 as fine, because OpenAI can combine its GPT-5 model with its o1 "reasoning" model, which will then do something of some sort, such as "write a lot more very difficult code," according to OpenAI CEO and career liar Sam Altman, who intimated in May that GPT-5 may function like a "virtual brain."
Chief Valley Cheerleader Casey Newton wrote on Platformer last week that diminishing returns in training models "may not matter as much as you would guess," with his evidence being that Anthropic, which he claims "has not been prone to hyperbole," does not think that scaling laws are ending. To be clear, in a 14,000-word op-ed that Newton wrote two pieces about, Anthropic CEO Dario Amodei said that "AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness," the kind of hyperbole that should have you tarred and feathered in public.
So, let me summarize:
The entire tech industry has become oriented around a dead-end technology that requires burning billions of dollars to provide inessential products that cost them more money to serve than anybody would ever pay. Their big strategy was to throw even more money at the problem until one of these transformer-based models created a new, more useful product — despite the fact that every iteration of GPT and other models has been, well, iterative. There has never been any proof (other than benchmarks that are increasingly easy to game) that GPT or other models would become conscious, nor that these models would do more than they do today, or three months ago, or even a year ago.
Yet things can, believe it or not, get worse.
The AI boom helped the S&P 500 hit record high levels in 2024, largely thanks to chip giant NVIDIA, a company that makes both the GPUs necessary to train and run generative AI models and the software architecture behind them. Part of NVIDIA's remarkable growth has been its ability to capitalize on the CUDA architecture — the software layer that lets you do complex computing with GPUs, rather than simply use them to render video games in increasingly higher resolution — and, of course, continually create new GPUs to sell for tens of thousands of dollars to tech companies that want to burn billions of dollars on generative AI, leading the company's stock to pop more than 179% over the last year.
Back in May, NVIDIA CEO and professional carnival barker Jensen Huang said that the company was now "on a one-year rhythm" in AI GPU production, with its latest "Blackwell" GPUs (specifically the B100, B200 and GB200 models used for generative AI) supposedly due at the end of 2024, though they're now delayed until at least March 2025.

Before we go any further, it's worth noting that when I say "GPU," I don't mean the one you'd find in a gaming PC, but a much larger chip put in a specialized server with multiple other GPUs, all integrated with specialized casing, cooling, and networking infrastructure. In simple terms, the things necessary to make sure all these chips work together efficiently, and to stop them from overheating, because they get extremely hot and run at full speed, all the time.
The initial delay of the new Blackwell chips was caused by a (now-fixed) design flaw in production, but as I've suggested above, the problem isn't just creating the chips — it's making sure they actually work, at scale, for the jobs they're bought for.
But what if that, too, wasn't possible?
A few days ago, The Information reported that NVIDIA is grappling with the oldest problem in computing — how to cool the fucking things. According to the report, NVIDIA has been asking suppliers to change the design of its 3,000-pound, 72-GPU server racks "several times" to overcome overheating problems, which The Information calls "the most complicated design NVIDIA had ever come up with." According to the report, a few months after revealing the racks, engineers found that they... didn't work properly, even with NVIDIA's smaller 36-chip racks, and they have been scrambling to fix it ever since.
While one can dazzle investors with buzzwords and charts, physics is a far harsher mistress, and if NVIDIA is struggling mere months before the first installations are due to begin, it's unclear how it practically launches this generation of chips, let alone continues its yearly cadence. The Information reports that these changes have been made late in the production process, which is scaring customers that desperately need these chips so that their models can continue to do something they'll work out later. To quote The Information:
Two executives at large cloud providers that have ordered the new chips said they are concerned that such last-minute difficulties might push back the timeline for when they can get their GPU clusters up and running next year.
The fact that NVIDIA is having such significant difficulties with thermal performance is very, very bad. These chips are incredibly expensive — as much as $70,000 apiece — and will be running, as I've mentioned, at full speed, generating an incredible amount of heat that must be dissipated, while sat next to anywhere from 35 to 71 other chips, all densely packed so that you can cram more servers into a data center. New, more powerful chips require entirely new methods to rack-mount, operate and cool them, and all of these parts must operate in sync, as overheating GPUs will die. While these units are big, some of their internal components are microscopic, and unless properly cooled, their circuits will start to crumble when roasted by a guy typing "Garfield with Gun" into ChatGPT.
Remember, Blackwell is supposed to represent a major leap forward in performance. If NVIDIA doesn’t solve its cooling problem — and solve it well — its customers will undoubtedly encounter thermal throttling, where the chip reduces speed in order to avoid causing permanent damage. It could eliminate any performance gains obtained from the new architecture and new manufacturing process, despite costing much, much more than its predecessor.
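If thermal throttling is new to you, here's a toy control loop that shows the failure mode. Every number in it is invented, and none of this reflects Blackwell's actual specifications, but it shows what an undersized cooling design does to the performance you paid for:

```python
# Toy model of thermal throttling. All temperatures, clocks and cooling
# figures are invented for illustration.

MAX_SAFE_TEMP = 90.0     # degrees C before the chip protects itself
BASE_CLOCK = 2.1         # GHz at full speed
THROTTLED_CLOCK = 1.4    # GHz once the limit is hit

def step(temp_c, clock, cooling_capacity):
    # Heat generated scales with clock speed; the cooling system removes a
    # fixed amount per tick. If cooling can't keep up, the chip slows itself
    # down rather than cook its own circuits.
    temp_c += 3.0 * clock - cooling_capacity
    if temp_c >= MAX_SAFE_TEMP:
        return temp_c, THROTTLED_CLOCK
    return temp_c, BASE_CLOCK

temp, clock = 60.0, BASE_CLOCK
for tick in range(14):
    temp, clock = step(temp, clock, cooling_capacity=3.5)
    print(f"tick {tick}: {temp:.1f}C, running at {clock} GHz")
```

With cooling undersized even slightly, the chip creeps up to its limit, drops to the throttled clock, and in this sketch never recovers. A $70,000 GPU running at two-thirds speed is the quiet way a "major leap forward" disappears.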
NVIDIA's problem isn't just bringing these thermal performance issues under control, but keeping them under control and educating its customers on how to do the same. NVIDIA has, according to The Information, repeatedly tried to influence its customers' server integrations to follow its designs because it believes doing so will "lead to better performance," but in this case, one has to wonder whether NVIDIA's Blackwell chips can be reliably cooled at all.
While NVIDIA might be able to fix this problem in isolation within its own racks, it remains to be seen how this works at scale as customers ship and integrate hundreds of thousands of Blackwell GPUs starting in the first half of 2025.
Things also get a little worse when you realize how these chips are being installed — in giant "supercomputer" data centers where tens of thousands of GPUs (or as many as a hundred thousand, in the case of Elon Musk's "Colossus" data center) run in concert to power generative AI models. The Wall Street Journal reported a few weeks ago that building these vast data centers creates entirely new engineering challenges, with one expert saying that big tech companies could be using as much as half of their capital expenditures on replacing parts that have broken down, in large part because these clusters are running their GPUs at full speed, at all times.
Remember, the capital expenditures on generative AI and the associated infrastructure have gone over $200 billion in the last year. If half of that’s dedicated to replacing broken gear, what happens when there’s no path to profitability?
In any case, NVIDIA doesn’t care. It’s already made billions of dollars selling Blackwell GPUs — they're sold out for a year, after all — and will continue to do so for now, but any manufacturing or cooling issues will likely be costly.
And even then, at some point somebody has to ask the question: why do we need all these GPUs if we've reached peak AI? Despite the remarkable "power" of these chips, NVIDIA's entire enterprise GPU business model centers around the idea that throwing more power at these problems will finally create some solutions.
What if that isn't the case?
The tech industry is over-leveraged, having doubled, tripled, quadrupled down on generative AI — a technology that doesn't do much more than it did a few months ago and won't do much more than it can do now. Every single big tech company has piled tens of billions of dollars into building out massive data centers with the intent of "capturing AI demand," yet never seemed to ask whether they were actually building things that people wanted, or would pay for, or that would somehow make the company money.
While some have claimed that "agents are the next frontier," the reality is that agents may be the last generative AI product — multiple Large Language Models and integrations bouncing off of each other in an attempt to simulate what a human might do, at a cost that won't be sustainable for the majority of businesses. While Anthropic's demo of its model allegedly controlling a few browser windows with a prompt might have seemed impressive to credulous people like Casey Newton, these were controlled demos that Anthropic itself admitted were "slow" and "made lots of mistakes." Hey, almost like it's hallucinating! I sure hope they fix that totally unfixable problem.
Even if it does, Anthropic has now successfully replaced... an entry-level data worker position, at an indeterminate and likely unprofitable price. And in many organizations, those jobs had already been outsourced, or automated, or staffed with cheaper contractors.
The obscenity of this mass delusion is nauseating — a monolith to bad decision-making and the herd mentality of tech's most powerful people, as well as an outright attempt to manipulate the media into believing something was possible that wasn't. And the media bought it, hook, line, and sinker.
Hundreds of billions of dollars have been wasted building giant data centers to crunch numbers for software that has no real product-market fit, all while trying to hammer it into various shapes to make it pretend that it's alive, conscious, or even a useful product.
There is no path, from what I can see, to turn generative AI and its associated products into anything resembling sustainable businesses, and the only path that big tech appeared to have was to throw as much money, power, and data at the problem as possible, an avenue that appears to be another dead end.
And worse still, nothing has really come out of this movement. I've used a handful of AI products that I've found useful — an AI-powered journal, for example — but these are not the products one associates with "revolutions"; they're useful tools that would have been a welcome surprise if they didn't require burning billions of dollars, blowing past emissions targets and stealing the creative works of millions of people to train them.
I truly don't know what happens next, but I'll walk you through what I'm thinking.
If we're truly at the diminishing returns stage of transformer-based models, it will be extremely difficult to justify buying further iterations of NVIDIA GPUs past Blackwell. The entire generative AI movement lives and dies by the idea that more compute power and more training data makes these things better, and if that's no longer the case, there's little reason to keep buying bigger and better. After all, what's the point?
Even now, what exactly happens when Microsoft or Google has racks-worth of Blackwell GPUs? The models aren't going to get better.
This also makes the lives of OpenAI and Anthropic that much more difficult. Sam Altman has grown rich and powerful lying about how GPT will somehow lead to AGI, but at this point, what exactly is OpenAI meant to do? The only way it’s ever been able to develop new models is by throwing masses of compute and training data at the problem, and its only other choice is to start stapling its reasoning model onto its main Large Language Model, at which point something happens, something so good that literally nobody working for OpenAI or in the media appears to be able to tell you what it is.
Putting that aside, OpenAI is also a terrible business that has to burn $5 billion to make $3.4 billion, with no proof that it’s capable of bringing down costs. The constant refrain I hear from VCs and AI fantasists is that "chips will bring down the cost of inference," yet I don't see any proof of that happening, nor do I think it'll happen quickly enough for these companies to turn things around.
And you can feel the desperation, too. OpenAI is reportedly looking at ads as a means to narrow the gap between its revenues and losses. As I pointed out in Burst Damage, introducing an advertising revenue stream would require significant upfront investment, both in terms of technology and talent. OpenAI would need a way to target ads, and a team to sell advertising — or, instead, use a third-party ad network that would take a significant bite out of its revenue.
It’s unclear how much OpenAI could charge advertisers, or what percentage of its reported 200 million weekly users have an ad-blocker installed. Or, for that matter, whether ads would provide a perverse incentive for OpenAI to enshittify an already unreliable product.
Facebook and Google — as I’ve previously noted — have made their products manifestly worse in order to increase the amount of time people spend on their sites, and thus, the number of ads they see. In the case of Facebook, it buried your newsfeed under a deluge of AI-generated sludge and “recommended content.” Google, meanwhile, has progressively degraded the quality of its search results in order to increase the volume of queries it received as a means of making sure users saw more ads.
OpenAI could, just as easily, fall into the same temptation. Most people who use ChatGPT are trying to accomplish a specific task — like writing a term paper, or researching a topic, or whatever — and then they leave. And so, the number of ads they'd conceivably see in each session would be low compared to a social network or search engine. Would OpenAI try to get users to stick around longer — to write more prompts — by crippling the performance of its models?
Even if OpenAI listens to its better angels, the reality still stands: ads won’t dam the rising tide of red ink that promises to eventually drown the company.
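Here's a rough back-of-envelope showing why. The user count and the burn come from the figures cited above; the per-user ad revenue scenarios are entirely hypothetical, since nobody has disclosed real numbers:

```python
# Hypothetical ad-revenue scenarios for ChatGPT. 200 million weekly users and
# roughly $5 billion a year in losses are the figures cited earlier; the
# per-user revenue numbers are invented scenarios, not anyone's actuals.

weekly_users = 200_000_000
annual_burn = 5_000_000_000

for arpu in (2, 5, 10, 20):  # hypothetical dollars per user per year
    ad_revenue = weekly_users * arpu
    print(f"${arpu}/user/year: ${ad_revenue / 1e9:.1f}B, "
          f"covering {ad_revenue / annual_burn:.0%} of the burn")
```

Even the most generous scenario here leaves a fifth of the hole unfilled, and ChatGPT's get-in-get-out usage pattern makes the generous scenarios hard to believe in the first place.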
This is a truly dismal situation where the only options are to stop now, or continue burning money until the heat gets too much. It cost $100 million to train GPT-4o, and Anthropic CEO Dario Amodei estimated a few months ago that training future models will cost $1 billion to $10 billion, with one researcher claiming that training OpenAI's GPT-5 will cost around $1 billion.
And that’s before mentioning any, to quote a Rumsfeldism, “unknown unknowns.” Trump’s election, at the risk of sounding like a cliché, changes everything, and in ways we don’t yet fully understand. According to the Wall Street Journal, Musk has successfully ingratiated himself with Trump, thanks to his early and full-throated support of Trump’s campaign. He’s now reportedly living at Mar-a-Lago, sitting on calls with world leaders, and whispering in Trump’s ear as he builds his cabinet.
And, as The Journal claims, his enemies fear that he could use his position of influence to harm them or their businesses — chiefly Sam Altman, who is “persona non grata” in Musk’s world, largely due to the new for-profit direction of OpenAI. While it’s likely that these companies will fail due to inevitable organic realities (like running out of money, or not having a product that generates a profit), Musk’s enemies must now contend with an adversary with the full backing of the Federal government — one that neither forgives nor forgets.
And, crucially, one that’s not afraid to bend ethical or moral laws to further his own interests — or to inflict pain on those perceived as having slighted him.
Even if Musk doesn’t use his newfound political might to hurt Altman and OpenAI, he could still pursue the company as a private citizen. Last Friday, he filed an injunction requesting a halt to OpenAI’s transformation from an ostensible non-profit to a for-profit business. Even if he ultimately fails, should Musk manage to drag the process out, or delay it temporarily, it could strike a terminal blow for OpenAI.
That’s because in its most recent fundraise, OpenAI agreed that it would convert its recent $6.6bn equity investment into high-interest debt, should it fail to successfully convert into a for-profit business within a two-year period. This was a tight deadline to begin with, and it can’t afford any delays. The interest payments on that debt would massively increase its cash burn, and it would undoubtedly find it hard to obtain further outside investment.
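The rate on that converted debt hasn't been disclosed, so treat these figures as purely hypothetical, but even conservative assumptions show the scale of the problem:

```python
# Hypothetical carrying costs on the $6.6 billion conversion described above.
# The interest rates are invented; no actual rate has been disclosed.

principal = 6_600_000_000
for rate in (0.06, 0.09, 0.12):
    print(f"at {rate:.0%} interest: ${principal * rate / 1e9:.2f}B/year")
```

At any plausible rate, that's hundreds of millions of dollars a year added to the burn of a company that already loses billions.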
Outside of a miracle, we are about to enter an era of desperation in the generative AI space. We're two years in, and we have no killer apps — no industry-defining products — other than ChatGPT, a product that burns billions of dollars and that nobody can really describe. Neither Microsoft, nor Meta, nor Google, nor Amazon seems able to come up with a profitable use case, let alone one their users actually like; nor has anybody that raised billions of dollars in venture capital for anything with "AI" taped to the side — and investor interest in AI is cooling.
It's unclear how much longer this farce continues, if only because it isn't obvious what anybody gets by investing in future rounds of OpenAI, Anthropic, or any other generative AI company. At some point they must make money, and the entire dream has been built around the idea that all of these GPUs and all of this money would eventually spit out something revolutionary.
Yet what we have is clunky, ugly, messy, larcenous, environmentally-destructive and mediocre. Generative AI was a reckless pursuit, one that shows a total lack of creativity and sense in the minds of big tech and venture capital, one where there was never anything really impressive other than the amount of money it could burn and the amount of times Sam Altman could say something stupid and get quoted for it.
I'll be honest with you, I have no idea what happens here. The future was always one that demanded big tech spend more to make even bigger models that would at some point become useful, and that isn't happening. In pursuit of that future, big tech invested hundreds of billions of dollars into infrastructure to follow one specific goal, and put AI front and center at their businesses, claiming it was the future without ever considering what they'd do if it wasn't.
The revenue isn't coming. The products aren't coming. "Orion," OpenAI's next model, will underwhelm, as will its competitors' models, and at some point somebody is going to blink in one of the hyperscalers, and the AI era will be over. Almost every single generative AI company that you’ve heard of is deeply unprofitable, and there are few innovations coming to save them from the atrophy of the foundation models.
I feel sad and exhausted as I write this, drained as I look at the many times I’ve tried to warn people, frustrated at the many members of the media that failed to push back against the overpromises and outright lies of people like Sam Altman, and full of dread as I consider the economic ramifications of this industry collapsing. Once the AI bubble pops, there are no other hyper-growth markets left, which will in turn lead to a bloodbath in big tech stocks as they realize that they’re out of big ideas to convince the street that they’re going to grow forever.
There are some that will boast about “being right” here, and yes, there is some satisfaction in being so. Nevertheless, knowing that the result of this bubble bursting will be massive layoffs, a dearth of venture capital funding, and a much more fragile tech ecosystem makes that satisfaction hollow.
I’ll end with a quote from Bubble Trouble, a piece I wrote in April:
How do you solve all of these incredibly difficult problems? What does OpenAI or Anthropic do when they run out of data, and synthetic data doesn't fill the gap, or worse, massively degrades the quality of their output? What does Sam Altman do if GPT-5 — like GPT-4 — doesn't significantly improve its performance and he can't find enough compute to take the next step? What do OpenAI and Anthropic do when they realize they will likely never turn a profit? What does Microsoft, or Amazon, or Google do if demand never really takes off, and they're left with billions of dollars of underutilized data centers? What does Nvidia do if the demand for its chips drops off a cliff as a result?
I don't know why more people aren't screaming from the rooftops about how unsustainable the AI boom is, and the impossibility of some of the challenges it faces. There is no way to create enough data to train these models, and little that we've seen so far suggests that generative AI will make anybody but Nvidia money. We're reaching the point where physics — things like heat and electricity — are getting in the way of progressing much further, and it's hard to stomach investing more considering where we're at right now is, once you cut through the noise, fairly god damn mediocre. There is no iPhone moment coming, I'm afraid.
I was right then and I’m right now. Generative AI isn’t a revolution, it’s an evolution of a tech industry overtaken by growth-hungry management consultant types that neither know the problems that real people face nor how to fix them. It’s a sickening waste, a monument to the corrupting force of growth, and a sign that the people in power no longer work for you, the customer, but for the venture capitalists and the markets.
I also want to be clear that none of these companies ever had a plan. They believed that if they threw enough GPUs together they would turn generative AI — probabilistic models for generating stuff — into some sort of sentient computer. It’s much easier, and more comfortable, to look at the world as a series of conspiracies and grand strategies, and far scarier to see it for what it is — extremely rich and powerful people that are willing to bet insanely large amounts of money on what amounts to a few PDFs and their gut.
This is not big tech’s big plan to excuse building more data centers — it’s the death throes of twenty years of growth-at-all-costs thinking, because throwing a bunch of money at more servers and more engineers always seemed to create more growth. In practice, this means that the people in charge and the strategies they employ are borne not of an interest in improving the lives of their customers, but in increasing revenue growth, which means the products they create aren’t really about solving any problem other than “what will make somebody give me more money,” which doesn’t necessarily mean “provide them with a service.”
Generative AI is the perfect monster of the Rot Economy — a technology that lacks any real purpose sold as if it could do literally anything, one without a real business model or killer app, proliferated because big tech no longer innovates, but rather clones and monopolizes. Yes, this much money can be this stupid, and yes, they will burn billions in pursuit of a non-specific dream that involves charging you money and trapping you in their ecosystem.
I’m not trying to be a doomsayer, just like I wasn’t trying to be one in March. I believe all of this is going nowhere, and that at some point Google, Microsoft, or Meta is going to blink and pull back on their capital expenditures. And before then, you’re going to get a lot of desperate stories about how “AI gains can be found outside of training new models” to try and keep the party going, despite reality flicking the lights on and off and threatening to call the police.
I fear for the future for many reasons, but I always have hope, because I believe that there are still good people in the tech industry and that customers are seeing the light. Bluesky feels different — growing rapidly, competing with both Threads and Twitter, all while selling an honest product and an open protocol.
There are other ideas for the future that aren’t borne of the scuzzy mindset of billionaire shitheels like Sundar Pichai and Sam Altman, and they can — and will — grow out of the ruins created by these kleptocrats.
2024-11-13 21:31:11
Soundtrack: Post Pop Depression - Paraguay
I haven't wanted to write much in the last week.
Seemingly every single person on Earth with a blog has tried to drill down into what happened on November 5 — to find the people to blame, to somehow explain what could've been done differently, by whom, and why so many actions led to a result that will overwhelmingly harm women, minorities, immigrants, LGBTQ people, and lower-income workers. It's a terrifying time.
I feel woefully unequipped to respond to the moment. I don't have any real answers. I am not a political analyst, and I would feel disingenuous dissecting the Harris (or Trump) campaigns, because I feel like this has been the Dunning-Kruger Olympics for takes, where pundits compete to rationalize and intellectualize events in an attempt to ward off the very thing that has buried us in red: a shared powerlessness and desperation.
People don't trust authority, and yes, it is ironic that this often leads them toward authoritarian figures.
Legacy media — while oftentimes staffed by people that truly love their readers, care about their beats and write like their lives depend upon it — is weighed down by a hysterical attachment to the imaginary concept of objectivity and “the will of the markets.”
Case in point: Regular people have spent years watching the price of goods increase "due to inflation," despite the fact that the increase in pricing was mostly driven by — get this — corporations raising prices. Yet some parts of the legacy media spent an alarming amount of time chiding their readers for thinking otherwise, even going against their own reporting as a means of providing "balanced" coverage, insisting again and again that the economy is good, contorting to prove that prices aren't higher even as companies boasted about literally raising their prices. In fact, the media spent years debating with itself whether price gouging was happening, despite years of proof that it was.
People don’t trust authority, and they especially don’t trust the media — especially the legacy media. It probably didn’t help that the media implored readers and viewers to ignore what they saw at the supermarket or at the pump, and the growing hits to their wallets from the daily necessities of life, gaslighting them that everything was fine.
As an aside: I have used the term “legacy media” here repeatedly, but I don’t completely intend for it to come across as a pejorative. Despite my criticisms, there are people in the legacy media doing a good job, reporting the truth, doing the kinds of work that matters and illuminates readers. I read — and pay for — several legacy media outlets, and I think the world is a better place for them existing, despite their flaws.
The problem, as I’ll explain, is the editorial industrial complex, and how those writing about the powerful don’t seem to be able to (or want to) interrogate power. This could be an entire piece by itself, but I don’t think the answer to these failings is to simply discard legacy media entirely, but to implore it to do better and to strive for the values of truth-hunting and truth-telling that once defined the Fourth Estate — and can once again.
To simmer this down, the price of everything has kept increasing as wages stagnated. Simultaneously, businesses spent several years telling workers they were asking for too much and doing too little, telling people they were “quiet quitting” in 2022 (a grotesque term that means “doing the job you are paid to do”), and, a year later, insisting that years of remote work was actually bad because profits didn’t reach the unrealistic expectations set by the post-lockdown boom of 2021. While the majority of people don't work remotely, from talking to the people I know outside of tech or business, there is a genuine sense that the media has allied itself with the bosses, and I imagine it's because of the many articles that literally call workers lazy.
Yet, when it comes to the powerful, the criticisms feel so much more guarded. Despite the fact that Elon Musk has spent years telegraphing his intent to use his billions of dollars to wield power equivalent to that of a nation state, too much of the media — both legacy and otherwise — responded slowly, cautiously, failing to call him a liar, a con artist, an aggressor, a manipulator, and a racist. Sure, they reported stories that might make you think that, but the desperation to guard objectivity was (and is) such that there is never any intent to call Musk what he was (and is) — a racist billionaire using his outsized capital to bend society to his will.
The news — at least outside of the right wing media terrordome — is always separated from opinion, always guarded, always safe, for fear that they might piss off somebody and be declared "biased," something that happens anyway. While there are columnists that are given some space to have their own thoughts in the newspaper, the stories themselves are delivered with the kind of reserved "hmmm..." tone that often fails to express the consequences of the news and lacks the context necessary to actually deliver it.
This isn't to say these outlets are incapable of doing this right — The Washington Post has done an excellent job of analysis in tech, for example — but that they are custom-built to be bulldozed by authoritarianism, a force that exists to crush those desperately attached to norms and objectivity. Authoritarians know that their ideologically-charged words will be quoted verbatim with the occasional "this could mean..." context that's lost in a headline that repeats exactly what they wanted it to.
We rarely explain the structures of our democracy in ways that let people see how to interact with it, which leaves it instead in the hands of special interests who can bankroll their perspectives, even when they’re actively harmful.
...Little of the gravity of what we’re facing makes it into everyday news coverage in a way that would allow us to have real conversations as a country on how to chart a way forward. Instead, each day, we as an industry — to borrow from John Nichols and Robert McChesney’s book Tragedy and Farce — pummel people with facts, but not the context to make sense of them.
Musk is the most brutal example. Despite turning Twitter into a website pumped full of racism and hatred that helped make Donald Trump president, Musk was still able to get mostly-positive coverage from the majority of the mainstream media, despite the fact that he has spent the best part of a decade lying about what Tesla will do next. It doesn't matter that these outlets had accompanying coverage that suggested the markets weren't impressed by its robotaxi plans, or its Potemkin robots — Musk is still demonstrably able to use the media's desperation for objectivity against them, knowing that they would never dare combine thinking about stuff with reporting on stuff for fear that someone might say they have "bias" in their "coverage."
This is, by the way, not always the fault of the writers. There are entire foundations of editors that have more faith in the markets and the powerful than they do in the people who spend their days interrogating them, and above them entire editorial superstructures that exist to make sure that the "editorial vision" never colors too far outside the lines. I'm not even talking about Jeff Bezos, or Laurene Powell Jobs, or any number of billionaires who own any number of publications, but the editors editing business and tech reporters who don't know anything about business and tech, or the senior editors that are terrified of any byline that might dare get the outlet "under fire" from somebody who could call their boss.
There are, however, also those who simply defer to the powerful — that assume that "this much money can't be wrong," even if said money has been wrong repeatedly to the point that there's an entire website about it. They are the people that look at the current crop of powerful tech companies that have failed to deliver any truly meaningful innovation in years and coo like newborn babes. Look at the coverage of Sam Altman from the last year — you know, the guy who has spent years lying about what artificial intelligence can do — and tell me why every single thought he has must be uncritically cataloged, his every decision applauded, his every claim trumpeted as certain, his brittle company's obvious problems apologized for and readers reassured of his obvious victory.
Nowhere is this more obvious right now than in The Guardian's nonsensical decision to abandon Twitter, decrying how "X is a toxic media platform and that its owner, Elon Musk, has been able to use its influence to shape political discourse" mere weeks after printing, bereft of context, Elon Musk's ridiculous lies about his plans for cybertaxis. There is little moral quality to leaving X if your outlet continues to act as a stenographer for its leader, and this in fact suggests a lack of any real interest in change or progress, just the paper tiger of norms and values that will only end up depriving people of good journalism.
On the other side of the tracks, Sam Altman is a liar who's been fired from two companies, including OpenAI, and yet because he's a billionaire with a buzzy company, he's left unscathed. The powerful get a completely different set of rules to live by and exist in a totally different media environment — they're geniuses, entrepreneurs and firebrands, their challenges framed as "missteps" and their victories framed as certainties by the same outlets that told us that we were "quiet quitting" and that the economy is actually good and we are the problem. While it's correct to suggest that the right wing is horrendously ideologically biased, it's very hard to look at the rest of the media and claim they're not.
While it might feel a little tangential to bring technology into this, everybody is affected by the growth-at-all-costs Rot Economy, because everybody is using technology, all the time, and the technology in question is getting worse. This election cycle saw more than 25 billion text messages sent to potential voters, and seemingly every website was crammed full of random election advertising.
Our phones are beset with notifications trying to "growth-hack" us into doing things that companies want, our apps full of microtransactions, our websites slower and harder to use, with endless demands for our emails and our phone numbers and the need to log back in because they couldn't possibly lose a dollar to somebody who dared to consume their content for free. Our social networks are so algorithmically charged that they barely show us the things we want them to anymore, with executives dedicated to filling our feeds with AI-generated slop because despite being the customer, we are also the revenue mechanism. Our search engines do less as a means of making us use them more, our dating apps have become vehicles for private equity to add a toll to falling in love, our video games are constantly nagging us to give them more money, and despite it costing money and being attached to our account, we don't actually own any of the streaming media we purchase. We're drowning in spam — both in our emails and on our phones — and at this point in our lives we've probably agreed to 3 million pages' worth of privacy policies allowing companies to use our information as they see fit.
And these are issues that hit everything we do, all the time, constantly, unrelentingly. Technology is our lives now. We wake up, we use our phone, we check our texts (three spam calls, two spam texts), we look at our bank balance (two-factor authentication check), we read the news (a quarter of the page is blocked by an advertisement asking for our email that's deliberately built to hide the button to get rid of it, or a login screen because we got logged out somehow), we check social media (after being shown an ad every two clicks), and then we log onto Slack (and feel a pang of anxiety as 15 different notifications appear).
Modern existence has become engulfed in sludge, the institutions that exist to cut through it bouncing between the ignorance of their masters and a misplaced duty to objectivity, our mechanisms for exploring and enjoying the world interfered with by powerful forces that are too-often left unchecked. Opening our devices means willfully subjecting ourselves to attack after attack from applications, websites and devices that are built to make us do things rather than let us operate with the dignity and freedom that much of the internet was founded upon.
These millions of invisible acts of terror are too-often left undiscussed, because accepting the truth requires you to accept that most of the tech ecosystem is rotten, and that billions of dollars are made harassing and punishing billions of people every single day of their lives through the devices that we’re required to use to exist in the modern world. Most users suffer the consequences, and most media fails to account for them, and in turn people walk around knowing something is wrong but not knowing who to blame until somebody provides a convenient excuse.
Why wouldn't people crave change? Why wouldn't people be angry? Living in the current world can be absolutely fucking miserable, bereft of industry and filthy with manipulation, an undignified existence, a disrespectful existence that must be crushed if we want to escape the depressing world we've found ourselves in. Our media institutions are fully fucking capable of dealing with these problems, but it starts with actually evaluating them and aggressively interrogating them without fearing accusations of bias that will happen either way.
The truth is that the media is more afraid of bias than they are of misleading their readers. And while that seems like a slippery slope, and may very well be one, there must be room to inject the writer’s voice back into their work, and a willingness to call out bad actors as such, no matter how rich they are, no matter how big their products are, and no matter how willing they are to bark and scream that things are unfair as they accumulate more power.
If you're in the tech industry and reading this and saying that "the media is too critical" of tech, you are flat fucking wrong. Everything we're seeing happening right now is a direct result of a society that let technology and the ultra-rich run rampant, free of both the governmental guardrails that might have stopped them and the media ecosystem that might have held them accountable.
Our default position in interrogating the intentions and actions of the tech industry has become that they will "work it out" as they continually redefine "work it out" as "make their products worse but more profitable." Covering Meta, Twitter, Google, OpenAI and other huge tech companies as if the products they make are remarkable and perfect is disrespectful to readers and a disgusting abdication of responsibility, as their products are, even when they're functional, significantly worse, more annoying, more frustrating and more convoluted than ever, and that's before you get to the ones like Facebook and Instagram that are outright broken.
I don't give a shit if these people have "raised a lot of money," unless you use that as proof that something is fundamentally wrong with the tech industry. Meta making billions of dollars of profit is a sign of something wrong with society, not proof that it’s a "good company" or anything that should grant Mark Zuckerberg any kind of special treatment. OpenAI being "worth" $157 billion while burning $5 billion or more a year on a product that destroys our environment and has yet to find any real meaning isn't a sign that it should get more coverage or be taken more seriously. Whatever you may feel about ChatGPT, the coverage it receives is outsized compared to its actual utility and the things built on top of it, and that's a direct result of a media industry that seems incapable of holding the powerful accountable.
It's time to accept that most people's digital life fucking sucks, as does the way we consume our information, and that there are people directly responsible. Be as angry as you want at Jeff Bezos, whose wealth (and the inherent cruelty of Amazon’s labor practices, and the growing enshittification of Amazon itself) makes him an obvious target, but don’t forget Mark Zuckerberg, Elon Musk, Sundar Pichai, Tim Cook and every single other tech executive that has allowed our digital experiences to become rotted out husks dominated by algorithms. These companies are not bound by civic duty, or even a duty to their customers — they have made their monopolies, and they’ll do whatever keeps you trapped in them. If they want me to think otherwise, they should prove it, and the media should stop trying to prove it for them.
Similarly, governments have entirely failed to push through any legislation that might stymie the rot, both in terms of the dominance (and opaqueness) of algorithmic manipulation and the ways in which tech products exist with few real quality standards. We may have (at least for now) consumer standards for the majority of consumer goods, but software is left effectively untouched, which is why so much of our digital lives is such unfettered dogshit.
And if you're reading this and saying I'm being a hater or pessimist, shut the fuck up. I'm so fucking tired of being told to calm down about this as we stare down the barrel of four years of authoritarianism built on top of the decay of our lives (both physical and digital), with a media ecosystem that doesn't do a great job of explaining what's being done to people in an ideologically consistent way. I'm angry, and I don't know why you're not. Explain it to me. Email me. Explain yourself, explain why you do not see the state of our digital lives as one of outright decay and rot, one that robs users of dignity and industry, one that actively harms billions of people in pursuit of greed.
There is an extremely-common assumption in the tech media — based on what, I'm not sure — that these companies are all doing a good job, and that "good job" means having lots of users and making lots of money, and it drives editorial decision-making.
If three-quarters of the biggest car manufacturers were making record profits by making half of their cars with a brake that sometimes doesn't work, it'd be international news, leading to government inquiries and people being put in prison. This isn’t conjecture. After Volkswagen was caught deliberately programming its engines to only meet emissions standards during laboratory testing and certification, lawmakers around the globe responded with civil and criminal action. The executives and engineers responsible were indicted, with one receiving seven years in jail. Its former CEO is currently being tried in Germany, and has been indicted in the US.
And yet so much of the tech industry — consumer software like Google, Facebook, Twitter, and even ChatGPT, and business software from companies like Microsoft and Slack — outright sucks, yet gets covered as if that's just "how things are." Meta, by the admission of its own internal documents, makes products that are ruinous to the mental health of teenage girls. And it hasn’t made any substantial changes. Nor has it received any significant pushback for failing to do so. It exercises the same reckless disregard for public safety as the auto industry did in the 1960s, when Ralph Nader wrote “Unsafe At Any Speed.”
Nader’s book actually brought about change. It led to the Department of Transportation, the passage of seat belt laws in 49 states, and a bunch of other things that get overlooked (possibly because his presidential run helped hand George W. Bush eight years in the White House). But the tech industry is somehow inoculated against any kind of public pressure or shame, because it operates by a completely different rule book, with different criteria for success and a different set of expectations. By allowing the market to become disconnected from the value it creates, we enable companies like NVIDIA to reduce the quality of their services as they make more money, or Facebook to destroy our political discourse and facilitate a genocide in Myanmar, and then celebrate them because, well, they made more money. No, really, that’s quite literally what now-CTO Andrew Bosworth said in an internal memo from 2016, where he said that “all the work [Facebook does] in growth is justified,” even if that includes — and I am quoting him directly — “somebody dying in a terrorist attack coordinated [using Facebook’s] tools.”
The mere mention of violent crime is enough to create reams of articles questioning whether society is safe, yet our digital lives are a wasteland that many still discuss like a utopia. Seriously, putting aside the social networks, have you visited a website on a phone recently? Have you tried to use an app? Have you tried to buy something online starting with a Google Search? Within those experiences, has anything gone wrong? I know it has! You know it has! It's time to wake up!
We — users of products — are at war with the products we’re using and the people that make them. And right now, we’re losing.
The media must realign to fight for how things should be. This doesn't mean that they can't cover things positively, or give credit where credit is due, or be willing to accept what something could be, but what has to change is the evaluation of the products themselves, which have been allowed to decay to a level that has become at best annoying and at worst actively harmful to society.
Our networks are rotten, our information ecosystem poisoned with its pure parts ideologically and strategically concussed, our means of speaking to those we love and making new connections so constantly interfered-with that personal choice and dignity is all but removed.
But there is hope. Those covering the tech industry have one of the most consequential jobs in journalism, if they choose to heed the call. Those willing to guide people through the wasteland — those willing to discuss what needs to change, how bad things have gotten, and what good might look like — have the opportunity to push for a better future by spitting in the faces of those ruining it.
I don’t know where I sit, what title to give myself, if I am legacy (I got my start writing for a print magazine) or independent or an “influencer” or a “content creator,” and I’m not sure I care. All I know is that I feel like I am at war, and we — if I can be considered part of the media — are at war with people that have changed the terms of innovation so that it’s synonymous with value extraction. Technology is how I became a person, how I met my closest friends and loved ones, and without it I would not be able to write, let alone be able to write this newsletter, and I feel poison flow through my veins as I see what these motherfuckers have done and what they will continue to do if they’re not consistently and vigorously interrogated.
Now is the time to talk bluntly about what’s happening. The declining quality of these products, the scourge of growth-hacking, the cancerous growth-at-all-costs mindset, these are all things that need to be raised in every single piece, and judgments must be unrelenting. The companies will squeal that they are being unfairly treated by “biased legacy media,” something which (as I’ve said repeatedly) is already happening.
These companies are poisoning the digital world, and they must be held accountable for the damage they are causing. Readers are already aware, but are — with the help of some members of the media — gaslighting themselves into believing that they “just don’t get it,” when the thing they don’t get is that the tech industry has built legions of obfuscations, legal tricks, and horrifying user interface traps with the intention of making the customer believe they’re the problem.
Things can change, but it has to start with the information sources, and that starts with journalism. The work has already begun, and will continue, but must scale up, and do so quickly.
And you, the user, have power too. Learn to read a privacy policy (yes, there are plenty of people in the tech media who give a shit — the Post has several of them, Bezos be damned). Move to Signal, an encrypted messaging app that works on just about everything. Get a service like DeleteMe (I pay for it; I worked with them like 4 years ago, but have no financial relationship with them today) to remove yourself from data brokers. Molly White, a wonderful friend and even better writer, has written an extremely long guide about what to do next, and it runs through a ton of great things you can do — unionization, finding your communities, dropping apps that collect and store sensitive data, and so on. I also recommend WIRED’s guide to protecting yourself from government surveillance.
I'll leave you with a thought I posted on the Better Offline Reddit on November 6.
The last 24 hours, things have felt bleak, and will likely feel more bleak as the months and years go on. It will be easy to give in to doom, to assume the fight is lost, to assume that the bad guys have permanently won and there will never be the justice or joy we deserve.
Now is the time for solidarity, to crystallize around the ideas that matter, even if their position in society is delayed, even as the clouds darken and the storms brew and the darkness feels all-encompassing and suffocating. Reach out to those you love, and don't just commiserate - plan. It doesn't have to be political. It doesn't even really have to matter. Put shit on your fucking calendar, keep yourself active, and busy, and if not distracted, at the very least animated. Darkness feasts on idleness. Darkness feasts on a sense of failure, and a sense of inability to make change.
You don't know me well, but know that I am aware of the darkness, and the sadness, and the suffocation of when things feel overwhelming. Give yourself mercy today, and in the days to come, and don't castigate yourself for feeling gutted.
Then keep going. I realize it's little solace to think "well if I keep saying stuff out loud things will get better," but I promise you doing so has an effect, and actually matters. Keep talking about how fucked things are. Make sure it's written down. Make sure it's spoken cleanly, and with rage and fire and piss and vinegar. Things will change for the better, even if it takes more time than it should.
2024-11-02 03:29:04
Soundtrack: EL-P - Flyentology
At the core of Microsoft, a three-trillion-dollar hardware and software company, lies a kind of social poison — an ill-defined, cult-like pseudo-scientific concept called "The Growth Mindset" that drives company decision-making in everything from how products are sold, to how your on-the-job performance is judged.
I am not speaking in hyperbole. Based on a review of over a hundred pages of internal documents and conversations with multiple current and former Microsoft employees, I have learned that Microsoft — at the direction of CEO Satya Nadella — has oriented its entire culture around the innocuous-sounding (but, as we’ll get to later, deeply troubling) Growth Mindset concept, and has taken extraordinary measures to institute it across the organization.
One's "growth mindset" determines one’s success in the organization. Broadly speaking, it includes attributes that we can all agree are good things. People with growth mindsets are willing to learn, accept responsibility, and strive to overcome adversity. Conversely, those considered to have a "fixed mindset" are framed as irresponsible, selfish, and quick to blame others. They believe that one’s aptitudes (like their skill in a particular thing, or their intelligence) are immutable and cannot be improved through hard work.
On the face of things, this sounds uncontroversial. The kind of nebulous pop-science that a CEO might pick up at a leadership seminar. But, from the conversations I’ve held and the internal documents I’ve read, it’s clear that the original (and shaky) scientific underpinnings of mindset theory have devolved into an uglier, nastier beast at Redmond.
The "growth mindset" is Microsoft's cult — a vaguely-defined, scientifically-questionable, abusively-wielded workplace culture monstrosity, peddled by a Chief Executive obsessed with framing himself as a messianic figure with divine knowledge of how businesses should work. Nadella even launched his own Bible — Hit Refresh — in 2017, which he claims has "recommendations presented as algorithms from a principled, deliberative leader searching for improvement."
I’ve used the terms “messianic,” “Bible,” and “divine” for a reason. This book — and the ideas within — have taken on an almost religious significance within Microsoft, to the point where it’s actually weird.
Like any messianic tale, the book is centered around the theme of redemption, with the subtitle mentioning a “quest to rediscover Microsoft’s soul.” Although presented and packaged like any bland business book that you’d find in an airport Hudson News and half-read on a red eye to nowhere, its religious framing extends to a separation of dark and enlightened ages. The dark age — Steve “Developers” Ballmer’s Microsoft, with the company stagnant and missing winnable opportunities, like mobile — is contrasted against a brave, bright new era where a newly-assertive Redmond pushes frontiers in places like AI.
Hit Refresh became a New York Times bestseller likely due to the fact that Microsoft employees were instructed (based on an internal presentation I’ve reviewed) to "facilitate book discussions with customers or partners" using talking points provided by the company around subjects like culture, trust, artificial intelligence, and mixed reality.
Side note: Hey, didn’t Microsoft lay off a bunch of people from its mixed reality team earlier this year?
Nadella, desperate to hit the bestseller list and frame himself as some kind of guru, attempted to weaponize tens of thousands of Microsoft employees as his personal propagandists, instructing them to do things like...
Use these questions to facilitate a book discussion with your customers or partners if they are interested in exploring the ideas around leadership, culture and technology in Hit Refresh...
Reflect on each of the three passages about lessons learned from cricket and discuss how they could apply in your current team. (pages 38-40)
"...compete vigorously and with passion in the face of uncertainty and intimidation" (page 38)
"...the importance of putting your team first, ahead of your personal statistics and recognition" (page 39)
"One brilliant character who does not put team first can destroy the entire team" (page 39)
Nadella's campaign was hugely successful, generating years of fawning press about him bringing a "growth mindset" to Microsoft and turning employees from "know-it-alls" into "learn-it-alls." Nadella is hailed as "embodying a growth mindset," with claims that he "pushes people to think of themselves as students as part of how he changed things," the kind of thing that sounds really good but is difficult to quantify.
This is, it turns out, a continual problem with the Growth Mindset itself.
If you're wondering why I'm digging into this so deeply, it's because — and I hate to repeat myself — the Growth Mindset is at the very, very core of Microsoft's culture. It’s both a tool for propaganda and a religion. And it is, in my opinion, a flimsily-founded kind of grift-psychology, one that is deeply irresponsible to implement at scale.
In the late 1980s, American psychologist Carol Dweck started researching how mindsets — or, how a person perceives a challenge, or their own innate attributes — can influence outcomes in things like work and school. Over the coming decades, she further refined and defined her ideas, coining the terms “growth mindset” and “fixed mindset” in 2012, shortly before Nadella took over at Microsoft. In short: a growth mindset holds that abilities can be developed through effort and learning, while a fixed mindset holds that they're innate and immutable.
Mindset theory itself is incredibly controversial for a number of reasons, chief of which is that nobody can seem to reliably replicate the results of Dweck's academic work. For the most part, research into mindset theory has been focused on children, with the idea that if we believe we can learn more we can learn more, and that by simply thinking and trying harder, anything is possible.
One of the weird tropes of mindset theory is that praise for intelligence is bad. Dweck herself said in an interview in 2016 that it's better to tell a kid that they worked really hard or put in a lot of effort rather than telling them they're smart, to "teach them they can grow their skills in that way."
Another is that you should say "not yet" instead of "no," as that teaches you that anything is possible, as Dweck believes that kids are "condition[ed] to show that they have talents and abilities all the time...[and that we should show them] that the road to their success is learning how to think through problems [and] bounce back from failures."
All of this is the kind of Vaynerchuckian flim-flam that you'd expect from a YouTube con artist rather than a professional psychologist, and one would think that it'd be a bad idea to talk about it if it wasn't scientifically proven — let alone shape the corporate culture of a three-trillion-dollar business around it.
The problem, however, is that things like "mindset theory" are often peddled with little regard for whether they're true or not, pushing concepts that make the reader feel smart because they sort of make sense. After all, being open to the idea that we can do anything is good, right? Surely having a positive and open mind would lead to better outcomes, right?
Sort of, but not really.
A study out of the University of Edinburgh from early 2017 found that mindset didn't really factor into a child's outcomes (emphasis mine).
Mindset theory states that children’s ability and school grades depend heavily on whether they believe basic ability is malleable and that praise for intelligence dramatically lowers cognitive performance. Here we test these predictions in 3 studies totalling 624 individually tested 10-12-year-olds.
Praise for intelligence failed to harm cognitive performance and children’s mindsets had no relationship to their IQ or school grades. Finally, believing ability to be malleable was not linked to improvement of grades across the year. We find no support for the idea that fixed beliefs about basic ability are harmful, or that implicit theories of intelligence play any significant role in development of cognitive ability, response to challenge, or educational attainment.
...Fixed beliefs about basic ability appear to be unrelated to ability, and we found no support for mindset-effects on cognitive ability, response to challenge, or educational progress
The problem, it seems, is that Dweck's work falls apart the second that Dweck isn't involved in the study itself.
In a September 2016 study by Education Week's Research Center, 72% of teachers said the Growth Mindset wasn’t effective at fostering high standardized test scores. Another study (highlighted in this great article from Melinda Wenner Moyer) run by Case Western Reserve University psychologist Brooke MacNamara and Georgia Tech psychologist Alexander Burgoyne published in the Psychological Bulletin said that “the apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias.”
In other words, the evidence that supports the efficacy of mindset theory is unreliable, and there’s no proof that this actually improves educational outcomes. To quote Wenner Moyer:
Dr. MacNamara and her colleagues found in their analysis that when study authors had a financial incentive to report positive effects — because, say, they had written books on the topic or got speaker fees for talks that promoted growth mindset — those studies were more than two and a half times as likely to report significant effects compared with studies in which authors had no financial incentives.
Wenner Moyer's piece is a balanced rundown of the chaotic world of mindset theory, counterbalanced with a few studies where there were positive outcomes, and focuses heavily on one of the biggest problems in the field — the fact that most of the research is meta-analyses of other people's data. Again, from Wenner Moyer:
For you data geeks out there, I’ll note that this growth mindset controversy is a microcosm of a much broader controversy in the research world relating to meta-analysis best practices. Some researchers think that it’s best to lump data together and look for average effects, while others, like Dr. Tipton, don’t. “There's often a real focus on the effect of an intervention, as if there's only one effect for everyone,” she said. She argued to me that it’s better to try to figure out “what works for whom under what conditions.” Still, I’d argue there can be value to understanding average effects for interventions that might be broadly used on big, heterogeneous groups, too.
The problem, it seems, is that a "growth mindset" is hard to define, the methods of measuring someone's growth (or fixed) mindset are varied, and the effects of each form of implementation are also hard to evaluate or quantify. It’s also the case that, as Dweck’s theory has grown, it’s strayed away from the scientific fundamentals of falsifiability and testability.
Case in point: In 2016, Carol Dweck introduced the concept of a “false growth mindset.” This is where someone outwardly professes a belief in mindset theory, but their internal monologue says something different. If you’re a social scientist trying to deflect from a growing corpus of evidence casting doubt on the efficacy of your life’s work, this is incredibly useful.
Someone accused of having a false growth mindset could argue, until they’re blue in the face, that they genuinely do believe all of this crap. And the accuser could retort: “Well, you would say that. You’ve got a false growth mindset.”
To quote Wenner Moyer, "we shouldn't pretend that growth mindset is a panacea." To quote George Carlin (speaking on another topic, although pertinent to this post): “It’s all bullshit, and it’s bad for you.”
In Satya Nadella's Hit Refresh, he says that "growth mindset" is how he describes Microsoft's emerging culture, and that "it's about every individual, every one of us having that attitude — that mindset — of being able to overcome any constraint, stand up to any challenge, making it possible for us to grow and, thereby, for the company to grow."
Nadella notes that when he became CEO of Microsoft, he "looked for opportunities to change [its] practices and behaviors to make the growth mindset vivid and real." He says that Minecraft, the game it acquired in 2014 for $2.5bn, "represented a growth mindset because it created new energy and engagement for people on [Microsoft's] mobile and cloud technologies." At one point in the book, he describes how an anonymous Microsoft manager came to him to share how much he loved the "new growth mindset," and "how much he wanted to see more of it," pointing out that he "knew these five people who don't have a growth mindset," with Nadella adding that he believed the manager in question was "using growth mindset to find a new way to complain about others," and that was not what they had in mind.
The problem, however, is that this is the exact culture that Microsoft fosters — one where fixed mindsets are bad, growth mindsets are good, and the definition of both varies wildly depending on the scenario.
One employee related to me that managers occasionally tell reports that they "did not display a growth mindset" after meetings, with little explanation as to what that meant or why it was said. Another said that "[the growth mindset] can be an excuse for anything, like people would complain about obvious engineering issues, that the code is shit and needs reworking, or that our tooling was terrible to work with, and the response would be to ‘apply Growth Mindset’ and continue churning out features."
In essence, the growth mindset means whatever it has to mean at any given time, as evidenced by internal training materials that suggest that individual contributions are subordinate to "your contributions to the success of others," the kind of abusive management technique that exists to suppress worker wages and, for the most part, deprive them of credit or compensation.
One post from Blind, an anonymous social network where you're required to have a company email to post, noted in 2016 that "[the Growth Mindset] is a way for leadership to frame up shitty things that everybody hates in a way that encourages us to be happy and just shut the fuck up," with another adding it was "KoolAid of the month."
In fact, the big theme of Microsoft's "Growth Mindset" appears to be "learn everything you can, say yes to everything, then give credit to somebody else." While this may in theory sound positive — a selflessness that benefits the greater whole — it inevitably, based on conversations with Microsoft employees, leads to managerial abuse.
Managers, from the conversations I've had with Microsoft employees, are the archons of the Growth Mindset — the ones that declare you are displaying a fixed mindset for saying no to a task or a deadline, and frame "Growth Mindset" contributions as core to their success. Microsoft's Growth Mindset training materials continually reference "seeing feedback as more fair, specific and helpful," and "persisting in the face of setbacks," framing criticism as an opportunity to grow.
Again, this wouldn't be a problem if it wasn't so deeply embedded in Microsoft's culture. If you search for the term “Growth Mindset” on the Microsoft subreddit, you’ll find countless posts from people who have applied for jobs and internships asking for interview advice, and being told to demonstrate they have a growth mindset to the interviewer. Those who drink the Kool-Aid in advance are, it seems, at an advantage.
“The interview process works more as a personality test,” wrote one person. “You're more likely to be chosen if you have a growth mindset… You can be taught what the technologies are early on, but you can't be taught the way you behave and collaborate with others.”
Personality test? Sounds absolutely nothing like the Church of Scientology.
Moving on.
Microsoft boasts in its performance and development materials that it "[doesn’t] use performance ratings [as it goes] against [Microsoft's] growth mindset culture where anyone can learn, grow and change over time," meaning that there are no numerical evaluations of what a growth mindset is or how it might be successfully implemented.
There are many, many reasons this is problematic, but the biggest is that the growth mindset is directly used to judge your performance at Microsoft. Twice a year, Microsoft employees have a "Connect" with managers where they must answer a number of different questions about their current and future work at Microsoft, with sections titled things like "share how you applied a growth mindset," with prompts to "consider when you could have done something different," and how you might have applied what you learned to make a greater impact. Once filled out, your manager responds with comments, and then the document is finalized and published internally, though it's unclear who is able to see them.
In theory, they're supposed to be a semi-regular opportunity to reflect on your work and think about how you might do better. In practice? Not so much. The following was shared with me by a Microsoft employee.
First of all, everyone haaaaates filling those out. You need to include half-a-year worth of stuff you've done, which is very hard. A common advice is to run a diary where you note down what you did every single day so that you can write something in the Connect later. Moreover, it forces you into a singular voice. You cannot say "we" in a Connect, it's always "I". Anyone who worked in software (or I would suspect most jobs) will tell you that's idiotic. Almost everything is a team effort. Second, the stakes of those are way too high. It's not a secret that the primary way decisions about bonuses and promotions are done is by looking at this. So this is essentially your "I deserve a raise" form, you fill out one, max two of those a period and that's it.
Microsoft's "Connects" are extremely important to your future at the company, and failing to fill them in in a satisfactory manner can lead to direct repercussions at work. An employee told me the story of Feng Yuan, a high-level software engineer with decades at the company, beloved for his helpful internal emails about working with Microsoft's .NET platform, who was deemed as "underperforming" because he "couldn't demonstrate high impact in his Connects."
He was fired for "low performance," despite the fact that he spent hours educating other employees, running training sessions, and likely saving the company millions in overhead by making people more efficient. One might even say that Yuan embodied the Growth Mindset, selflessly dedicating himself to educating others as a performance architect at the company. Feng's tenure ended with an internal email criticizing the Connect experience.
Feng, however, likely needed to be let go for other reasons. Another user on Blind related a story of Feng calling a junior engineer's code "pathetic" and "a waste of time," spending several minutes castigating the engineer until they cried, relating that they had heard other stories about him doing so in the past. This, clearly, was not a problem for Microsoft, but filling in his Connect was.
One last point: These “Connects” are high-stakes games, with the potential to win or lose depending on how compelling your story is and how many boxes it ticks. As a result, responses to each of the questions invariably take the form of a short essay. It’s not enough to write a couple of sentences, or a paragraph. You’ve really got to sell yourself, or demonstrate — with no margin for doubt — that you’re on board with the growth mindset mantra. This emphasis on long-form writing (whether accidental or intentional) inevitably disadvantages people who don’t speak English (or whatever language is used in their office) natively, or who have conditions like dyslexia.
The problem, it seems, is that Microsoft doesn't really care about the Growth Mindset at all, and is more concerned with stripping employees of their dignity and personality in favor of boosting their managers' goals. Some of Microsoft's "Connect" questions veer dangerously close to "attack therapy," where you are prompted to "share how you demonstrated a growth mindset by taking personal accountability for setbacks, asking for feedback, and applying learnings to have a greater impact."
Your career at Microsoft — a $3 trillion company — is largely defined by the whims of your managers and your ability to write essays of indeterminate length, based on your adherence to a vague, scientifically-questionable "mindset theory." You can (and will!) be fired for failing to express your "growth mindset" — a term as malleable as its alleged adherents — to managers who are themselves interpreting its meaning in real time, likely for their own benefit.
This all feels so distinctly cult-y. Think about it. You have a High Prophet (Satya Nadella) with a holy book (Hit Refresh). You have an original sin (a fixed mindset) and a path to redemption (embracing the growth mindset). You have confessions. You have a statement of faith (or close enough) for new members to the church. You have a priestly class (managers) with the power to expel the insufficiently-devout (those with a sinful fixed mindset). Members of the cult are urged to apply its teachings to all facets of their working life, and to proselytize to outsiders.
As with any scripture, its textual meanings are open to interpretation, and can be read in ways that advantage or disadvantage a person.
And, like any cult, it encourages the person to internalize their failures and externalize their successes. If your team didn’t hit a deadline, it isn’t because you’re over-worked and under-resourced. You did something wrong. Maybe you didn’t collaborate enough. Perhaps your communication wasn’t up to scratch. Even if those things are true, or if it was some other external factor that you have no control over, you can’t make that argument because that would demonstrate a fixed mindset. And that would make you a sinner.
Yet there's another dirty little secret behind Microsoft's Connects.
Microsoft is actively training its employees to generate their responses to Connects using Copilot, its generative AI. When I say "actively training," I mean that there is an entire document — "Copilot for Microsoft 365 Performance and Development Guidance" — that explains, in detail, how an employee (or manager) can use Copilot to generate the responses for their Connects. While there are guidelines about how managers can't use Copilot to "infer impact" or "make an impact determination" for direct reports, they are allowed to "reference the role library and understand the expectations for a direct report based on their role profile."
Side Note: What I can't speak to here is how common using Copilot to fill in a Connect or summarize someone else's actually is. However, the documents I have reviewed — as I'll explain — explicitly instruct Microsoft employees and managers on how to do so, and frame doing so positively.
In essence, a manager can't use Copilot to say how good you were at your job, but they can use it to check whether you're meeting the expectations of your role. Employees are instructed to use Copilot to "collect and summarize evidence of accomplishments" from internal Microsoft sources, and to "ensure [their] inputs align to Microsoft's Performance & Development philosophy."
In another slide from an internal Microsoft presentation, Microsoft directly instructs employees how to prompt Copilot to help them write a self-assessment for their performance review, to "reflect on the past," to "create new core priorities," and find "ideas for accomplishments." The document also names those who "share their Copilot learnings with other Microsoft employees" as "Copilot storytellers," and points them to the approved Performance and Development prompts from the company.
At this point, things become a little insane.
In one slide, titled "Copilot prompts for Connect: Ideas for accomplishments," Microsoft employees are given a prompt to write a self-assessment for their performance review based on their role at Microsoft. It then generates 20 "ideas for success measurements" to include in their performance review. It's unclear if these are sourced from anywhere, or if they're randomly generated. When a source ran the query multiple times, it hallucinated wildly different statistics for the same metrics.
Microsoft's guidance suggests that these are meant to be "generic ideas on metrics" which a user should "modify to reflect their own accomplishments," but you only have to ask it to draft your own achievements to have these numbers — again, generated using the same models as ChatGPT — customized to your own work.
While Copilot warns you that "AI-generated content may be incorrect," it's reasonable to imagine that somebody might use its outputs — either the "ideas" or the responses — as the substance of their Connect/performance review. I have also confirmed that when asked to help draft responses based on things that you've achieved since your last Connect, Copilot will use your activity on internal Microsoft services like Outlook, Teams and your previous Connects.
Side note: How bad is this? Really bad. A source I talked to confirmed that personalized achievements are also prone to hallucinations. When asked to summarize one Microsoft employee’s achievements based on their emails, messages, and other internal documents from the last few quarters, Copilot spat out a series of bullet points with random metrics about their alleged contributions, some of which the employee didn’t even have a hand in, citing emails and documents that were either tangentially related or entirely unrelated to their “achievements,” including one that linked to an internal corporate guidance document that had nothing to do with the subject at hand.
On a second prompt, Copilot produced entirely different achievements, metrics and citations. To quote one employee, “Some wasn't relevant to me at ALL, like a deck someone else put together. Some were relevant to me but had nothing to do with the claim. It's all hallucination.”
To be extremely blunt: Microsoft is asking its employees to draft their performance reviews based on the outputs of generative AI models — the same ones underpinning ChatGPT — that are prone to hallucination.
Microsoft is also — as I learned from an internal document I’ve reviewed — instructing managers to use it to summarize "their direct report's Connects, Perspectives and other feedback collected throughout the fiscal year as a basis to draft Rewards/promotion justifications in the Manage Rewards Tool (MRI)," which in plain English means "use a generative AI to read performance reviews that may or may not be written by generative AI, with the potential for hallucinations at every single step."
Microsoft's corporate culture is built on a joint subservience to abusive pseudoscience and the evaluations of hallucination-prone artificial intelligence. Working at Microsoft means implicitly accepting that you are being evaluated on your ability to adhere to the demands of an obtuse, ill-defined "culture," and the knowledge that whatever you say must fit a format decided by a generative AI model so that it can, in turn, be read by the very same model to evaluate you.
While Microsoft will likely state that corporate policy prohibits using Copilot to "infer impact or make impact determination for direct reports" or "model reward outcomes," there is absolutely no way that instructing managers to summarize people's Connects — their performance reviews — as a means of providing reward/promotion justifications will end with anything other than an artificial intelligence deciding whether someone is hired or fired.
Microsoft's culture isn't simply repugnant, it's actively dystopian and deeply abusive. Workers are evaluated based on their adherence to pseudo-science, their "achievements" — which may be written by generative AI — potentially evaluated by managers using generative AI. While they ostensibly do a "job" that they're "evaluated for" at Microsoft, their world is ultimately beholden to a series of essays about how well they are able to express their working lives through the lens of pseudoscience, and said expressions can be both generated by and read by machines.
I find this whole situation utterly disgusting. The Growth Mindset is a poorly-defined and unscientific concept that Microsoft has adopted as gospel, sold through Satya Nadella's book and reams of internal training material, and it's a disgraceful thing to build an entire company upon, let alone one as important as Microsoft.
Yet to actively encourage the company-wide dilution of performance reviews — and by extension the lives of Microsoft employees — by introducing generative AI is reprehensible. It shows that, at its core, Microsoft doesn't actually want to evaluate people's performance, but to see how well it can hit the buttons that make managers and the Senior Leadership Team feel good, a masturbatory and specious culture built by a man — Satya Nadella — who doesn't know a fucking thing about the work being done at his company.
This is the inevitable future of large companies that have simply given up on managing their people, sacrificing their culture — and ultimately their businesses — to as much automation as is possible, to the point that the people themselves are judged based on the whims of managers that don't do the actual work and the machines that they've found to do what little is required of them. Google now claims that 25% of its code is written by AI, and I anticipate Microsoft isn't far behind.
Side note: This might be a little out of the scope of this newsletter, but the 25% stat is suspect at best.
First, even before generative AI was a thing, developers were using autocomplete to write code. There are a lot of patterns in writing software. Code has to meet a certain format to be valid. And so, the difference between an AI model creating a class declaration and an IDE doing it is minimal. You’ve substituted one tool for another, but the outcome is the same.
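To make that concrete, here's a minimal sketch in Python (the class and its fields are hypothetical, invented purely for illustration). The generated methods are identical whether they come from a decorator, an IDE template, or an LLM, which is why "percentage of code written by AI" says so little:

```python
from dataclasses import dataclass

# A hypothetical example of boilerplate that tooling has generated for years.
# @dataclass writes __init__, __repr__ and __eq__ for us; if an LLM typed out
# the same methods by hand, the resulting code would be functionally identical.
@dataclass
class User:
    name: str
    email: str
    is_active: bool = True

# The mechanically generated constructor and repr work as expected.
u = User("Ada", "ada@example.com")
print(u)  # User(name='Ada', email='ada@example.com', is_active=True)
```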
Second, I’d question how much of this code is actually… you know… high-value stuff. Is Google using AI to build key parts of its software, or is it just writing comments and creating unit/integration tests? Based on my conversations with developers at other companies that have been strong-armed into using Copilot, I’m fairly confident this is the case.
Third, lines of code is an absolute dogshit metric. Developers aren’t judged by how many bytes they can shovel into a text editor, but how good — how readable, efficient, reliable, secure — their work is. To quote The Zen of Python, “Simple is better than complex… Sparse is better than dense.”
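As a rough illustration (both functions are hypothetical, written for this piece), here are two functionally identical implementations. A lines-written metric scores the padded one several times higher, which is exactly backwards:

```python
def sum_even_squares(numbers):
    # Sparse and readable: one comprehension, easy to review.
    return sum(n * n for n in numbers if n % 2 == 0)

def sum_even_squares_padded(numbers):
    # The same logic, padded out the way autocomplete happily generates it.
    total = 0
    for n in numbers:
        if n % 2 == 0:
            square = n * n
            total = total + square
    return total

# Identical behavior, wildly different "productivity" by a lines-of-code metric.
assert sum_even_squares([1, 2, 3, 4]) == sum_even_squares_padded([1, 2, 3, 4]) == 20
```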
This brings me on to my fourth, and last, point: How much of this code is actually solid from the moment it’s created, and how much has to get fixed by an actual human engineer?
At some point, these ugly messes will collapse as it becomes clear that their entire infrastructure is written upon increasingly-automated levels of crap, rife with hallucinations and devoid of any human touch.
The Senior Leadership Team of Microsoft are a disgrace and incapable of any real leadership, and every single conversation I've had with Microsoft employees for this article speaks to a miserable, rotten culture where managers castigate those lacking the "growth mindset," a term that oftentimes means "this wasn't done fast enough, or you didn't give me enough credit."
Yet because the company keeps growing, things will stay the same.
At some point, this house of cards will collapse. It has to. When you have tens of thousands of people vaguely aspiring to meet the demands of a pseudoscientific concept, filling in performance reviews using AI that will ultimately be judged by AI, you are creating a non-culture — a company that elevates those who can adapt to the system rather than service any particular customer.
It all turns my fucking stomach.
2024-10-22 02:35:15
Last week, Prabhakar Raghavan was relieved of duty as Senior Vice President of Search, becoming Google's "Chief Technologist."
An important rule to follow with somebody's title in Silicon Valley is that if you can't tell what it means, it probably doesn't mean anything. The most notorious example of this is when AOL employed "Shingy," a Digital Prophet, and if you have any information about what he did at AOL, please email me at [email protected] immediately.
Anyway, back to Prabhakar.
Although ostensibly less ridiculous, Raghavan has likely been given a ceremonial title and a job that involves "partnering closely with Sundar Pichai and providing technical direction," as opposed to actually leading anything.
Back in April, I published probably my most well-known piece — The Man Who Killed Google Search. Using emails revealed as part of the Department of Justice's antitrust trial against Google over search, it told the tale of how Prabhakar Raghavan, then Google's head of ads, led a coup that began the slow descent of the world's most important website toward its current, half-broken form.
The key event in the piece is a “Code Yellow” crisis declared in 2019 by Google’s ads and finance teams, which had forecast a disappointing quarter. In response, Raghavan pushed Ben Gomes — the erstwhile head of Google Search, and a genuine pioneer in search technology — to increase the number of queries people made by any means necessary.
Though it's not clear what was done to resolve the "query softness" that Raghavan demanded was reversed, I hypothesize one of the moves involved rolling back changes to search that had suppressed spammy content. Google has since denied this, despite the fact that emails revealed as part of the DOJ's trial show Jerry Dischler — Raghavan's deputy at Google Ads at the time — specifically discussing rollbacks. From The Man Who Killed Google Search:
The March 2019 core update to search, which happened about a week before the end of the code yellow, was expected to be “one of the largest updates to search in a very long time.” Yet when it launched, many found that the update mostly rolled back changes, and traffic was increasing to sites that had previously been suppressed by Google Search’s “Penguin” update from 2012 that specifically targeted spammy search results, as well as those hit by an update from August 1, 2018, a few months after Gomes became Head of Search.
Prabhakar Raghavan was made Head of Search a little over a year later in June 2020, and it's pretty obvious how big a decline Google Search has taken since then. Results are filled with Search Engine Optimized spam, ads and sponsored content are bordering on indistinguishable from regular results, and the disastrous launch of Google's AI-powered "summaries" produced results that ranged from hilarious to actively life-threatening.
When Raghavan took over Search (Q3 2020), Google had just experienced its first decline in year-over-year quarterly growth since Q4 2012 — a 1.66% decline that was followed by a remarkable recovery, with double-digit year-over-year growth just as Prabhakar turned the screws on search, cresting to a ridiculous 61.58% year-over-year growth in Q3 2021.
Then things began to slow. Every quarter saw progressively lower growth, reaching a nadir in Q4 2022, when Google experienced a mere 0.96% year-over-year growth — something that one might be able to blame on the end of the opulent post-vaccine spending we saw across the entire economy, or the spiraling rates of inflation seen worldwide. And so, one would assume that growth would recover as the wider global economy did, right?
Ehhh. While Google experienced a recovery in its growth rates, it took until Q3 2023 to hit double digits again (11% year-over-year), hitting a high of 15.41% in Q2 2024 before trending down again in Q3 2024 to 13.59%.
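For anyone who wants to check the math, the arithmetic behind these growth figures is simple. Here's a minimal sketch with hypothetical revenue numbers (illustrative only, not Google's actual filings):

```python
def yoy_growth(current_quarter: float, same_quarter_last_year: float) -> float:
    """Year-over-year growth of a quarter's revenue, as a percentage."""
    return (current_quarter - same_quarter_last_year) / same_quarter_last_year * 100

# Hypothetical revenue, in billions: $65.1bn this quarter vs. $57.3bn a year earlier.
print(f"{yoy_growth(65.1, 57.3):.2f}%")  # prints "13.61%": still growth, but a slowdown if prior quarters printed 15%+
```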
The reason these numbers are important is that growth drives everything, and Prabhakar Raghavan drove the most consistent growth engine in the company, which grew 14% year-over-year in Q1 2024, until he didn’t. This context is key to understanding his “promotion” to Chief Technologist, a title that is most decidedly not a Chief Technology Officer, or any kind of officer at all.
Google has, for the most part, enjoyed one of the most incredible runs in business history, with almost an entire decade of 20% year-over-year growth, save for a few exceptions, such as Q4 2012 (a few months into Raghavan's tenure at Google, where he started in ads) to Q3 2013, a chaotic period in which Google fell behind Amazon in shopping ad revenue, bought Motorola Mobility for $12.5 billion (a 63% premium on its trading price) and saw a 15% year-over-year decline in pricing for its search ads (Google’s earnings also leaked early, which isn't good).
Yet growth is slowing, and isn't showing any signs of returning to the heady days where 17% year-over-year growth was considered a bad quarter. Google has deliberately made its product worse as a means of increasing revenue, spawning a trend of both remarkable revenue growth and worsening search results that started exactly when Raghavan took the wheel of its prime revenue-driver.
The chart tells another story — that this reckless and desperate move only worked for a little bit before growth began to slow again. Recklessness and desperation beget only more recklessness and desperation, and you’ll note that Google’s aggressive push into AI followed its dismal Q4 2022 quarter, where it nearly fell into negative growth (and when you factor in inflation, it did).
If you’ll forgive the mixed metaphors, Google has essentially killed its golden goose — search — and is now in the process of pawning its eggs to buy decidedly non-magical beans, by which I mean data centers and GPUs, with Google increasing its capital expenditures in the financial year 2024 to $50 billion, equivalent to nearly double its average capital expenditures from 2019 to 2023.
Since becoming Head of Search, Raghavan also became the silent leader of most of Google's other revenue centers — Google Ads, Google Shopping, Maps, and eventually Gemini, Google's ChatGPT competitor, which might also explain his newly-diminished position within the company.
2024 was a grim year for Google and a grimmer one for Raghavan, starting in February with its Gemini Large Language Model generating racially diverse Nazis (among other things), a mess that Raghavan himself had to apologize for. A few months later, Google introduced AI-powered search summaries that told users to eat rocks and put glue on pizza, which only caused people to remember exactly how bad Google Search already was, and laugh at how the only way that Google seemed to be able to innovate was to make it worse.
Raghavan is being replaced by Nick Fox, a former McKinsey guy who, in the emails I called attention to in The Man Who Killed Google Search, told Ben Gomes that making Google Search more profitable was "the new reality of their jobs," to which Ben Gomes responded by saying that he was "concerned that growth [was] all that [Google was] thinking about."
Fox has, to quote Google CEO Sundar Pichai, "been instrumental in shaping Google's AI product roadmap," which suggests that Google is going all-in on AI at a time when developers are struggling to justify using its models and are actively mad at both the way it markets them and the way they're integrated into Google’s other products.
I am hypothesizing here, but I think that Google is desperate, and that its earnings on October 30th are likely to make the Street a little worried. The medium-to-long-term prognosis is likely even worse. As the Wall Street Journal notes, Google's ad business is expected to dip below 50% market share in the US in the next year for the first time in more than a decade, and Google's gratuitous monopoly over search (and likely ads) is coming to an end. It’s more than likely that Google sees AI as fundamental to its future growth and relevance.
As part of the Raghavan reorganization, Google is also moving the Gemini App team (the one handling Google's competitor to ChatGPT) under AI research group DeepMind, a move that might be kind of smart in the "hand the AI stuff to the AI people" kind of way, but also suggests that there is a degree of disarray at the company that isn't going to get better in a hurry.
You see, Raghavan was powerful, and for a time successful. He ruled with an iron fist, warning employees to prepare for "a different market reality" because "things [were] not like they were 15-20 years ago," and "shortening the amount of time that his reports would have to work on certain projects" according to Jennifer Elias of CNBC, which is exactly the kind of move you make when things are going poorly.
Replacing Raghavan with Nick Fox — a man who has only worked at either McKinsey or Google, and no, I am not kidding — is something that you do because you don't know what to do, and somebody's head has to roll, even if it's going to roll to the foot of a guy who's most famous for running Google's Assistant business, which is best known for kind of sucking and making absolutely no money.
There is a compelling case to be made that we are watching the slow, painful collapse of Google — a company best known for a transformational and beloved product it chose to ruin, helmed by a management consultant that has, for the most part, overseen the decay of its brand.
Google — like the rest of tech's hyper-scalers — has not had a meaningful new product in over a decade, with its most significant acquisition in years involving it paying $2.7 billion for an AI startup that barely made any money, specifically to hire back a guy who quit because he was mad that Google wouldn't release an early large language model in 2021.
This is a company bereft of vision, incapable of making money without monopolies, and flailing wildly in the hopes that copying everybody else will save itself from perdition — or, I should say, from the Department of Justice breaking it up.
Google is exactly the monster that Sundar Pichai and Prabhakar Raghavan wanted it to be — a lumbering private equity vehicle that uses its crooked money machine to demolish smaller players, except there are no more hyper-growth markets left for it to throw billions at, leaving it with Generative AI, a technology that lacks mass-market utility and burns cash with every prompt.
We are watching the fall of Rome, and it’s been my pleasure to tell you about how much of it you can credit to Prabhakar Raghavan, the Man Who Killed Google Search.