2025-01-30 00:27:45
Soundtrack: The Hives — Hate To Say I Told You So
In the last week or so, but especially over the weekend, the entire generative AI industry has been thrown into chaos.
This won’t be a lengthy, technical write-up — although there will be some inevitable technical complexity, because the nature of the subject demands it. Rather, I will address the elephant in the room, namely why the Western tech giants have been caught so flat-footed.
In short, the recent AI bubble (and, in particular, the hundreds of billions of dollars of spending behind it) hinged on the idea that we need bigger models, which are trained and run on ever-bigger, ever-faster GPUs sold almost entirely by NVIDIA, housed in bigger and bigger data centers owned by companies like Microsoft and Google. There was an expectation that this would always be the case, and that generative AI would always be energy- and compute-hungry, and thus incredibly expensive.
But then, a Chinese artificial intelligence company that few had heard of called DeepSeek came along with multiple models that aren’t merely competitive with OpenAI's, but undercut them in several meaningful ways. DeepSeek’s models are both open source and significantly more efficient — 30 times cheaper to run — and can even be run locally on relatively modest hardware.
As a result, the markets are panicking, because the entire narrative of the AI bubble has been that these models have to be expensive because they're the future, and that's why hyperscalers had to burn $200 billion in capital expenditures for infrastructure to support generative AI companies like OpenAI and Anthropic. The idea that there was another way to do this — that, in fact, we didn't need to spend all that money, had any of the hyperscalers considered a different approach beyond "throw as much money at the problem as possible" — simply wasn’t considered.
And then came an outsider to upend the conventional understanding and, perhaps, dethrone a member of America’s tech royalty — a man who has crafted, if not a cult of personality, then a public image as an unassailable visionary who will lead the vanguard of the biggest technological change since the Internet. I am, of course, talking about Sam Altman.
DeepSeek isn't just an outsider, but a company that emerged as a side project from a tiny, tiny hedge fund — at least, by the standards of hedge funds — and whose founding team has nowhere near the level of fame and celebrity that Altman enjoys. Humiliating.
On top of all of that, DeepSeek's biggest, ugliest insult is that its model, DeepSeek R1, is competitive with OpenAI's incredibly expensive o1 "reasoning" model, yet roughly 96% cheaper to run, and can even be run locally. Of the few developers I've spoken to, one was able to run DeepSeek's R1 model on their 2021 MacBook Pro with an M1 chip. Worse still, DeepSeek’s models are made freely available to use, with the source code published under the MIT license, along with the research on how they were made (although not the training data), which means they can be adapted and used commercially without royalties or fees.
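To give a sense of how modest "relatively modest hardware" can be, here's a minimal sketch of chatting with one of the smaller, distilled R1 variants locally. It assumes you have Ollama installed and running, and that you've pulled a small R1 distillation; the "deepseek-r1:7b" model tag is my assumption, so check Ollama's registry for what's actually published.

```python
# A hedged sketch: talk to a locally hosted, distilled R1 variant via
# Ollama's Python client. Assumes `pip install ollama`, a running Ollama
# server, and `ollama pull deepseek-r1:7b` (the tag is an assumption).
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
# R1-style models emit their "reasoning" inline before the final answer.
print(response["message"]["content"])
```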
By contrast, OpenAI is anything but open, and its last LLM to be released under the MIT license was 2019’s GPT-2.
No, wait. Let me correct that. DeepSeek’s biggest, ugliest insult is that it’s obviously taking aim at every element of OpenAI’s portfolio. As the company was already dominating headlines, it quietly dropped its Janus-Pro-7B image generation and analysis model, which the company says outperforms both Stable Diffusion and OpenAI’s DALL-E 3. And, as with its other models, it’s freely available to both commercial and personal users alike, whereas OpenAI has largely paywalled DALL-E 3.
It’s a cynical, vulgar version of David and Goliath, where a tech startup backed by a shadowy Chinese hedge fund with $5.5 billion under management is somehow the plucky upstart against the lumbering, lossy, oafish $150 billion startup backed by a public tech company with a market capitalization of $3.2 trillion.
DeepSeek's V3 model — which is comparable to (and competitive with) both OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet (which has some reasoning features) — is 53 times cheaper to run when using the company’s own cloud services. And, as noted above, the model is effectively free for anyone to use — locally or in their own cloud instances — and can be taken by any commercial enterprise and turned into a product of their own, should they so desire.
In essence, DeepSeek — and I'll get into its background and the concerns people might have about its Chinese origins — released two models that perform competitively with (and even beat) models from both OpenAI and Anthropic, undercut them in price, and made them open, undermining not just the economics of the biggest generative AI companies, but laying bare exactly how they work. That last point is particularly important when it comes to OpenAI's reasoning model, which specifically hid its chain of thought for fear of "unsafe thoughts" that might "manipulate the customer," then muttered under its breath that the actual reason was that it was a "competitive advantage."
And let's be completely clear: OpenAI's only real competitive advantage against Meta and Anthropic was its "reasoning" models (o1 and o3, the latter currently in research preview). Although I mentioned that Anthropic’s Claude 3.5 Sonnet has some reasoning features, they’re far more rudimentary than those in o1 and o3.
In an AI context, reasoning works by breaking down a prompt into a series of different steps with "considerations" of different approaches — effectively a Large Language Model checking its work as it goes, with no thinking involved, because these models do not "think" or "know" stuff. OpenAI rushed to launch its o1 reasoning model last year because, and I quote Fortune, Sam Altman was "eager to prove to potential investors in the company's latest funding round that OpenAI remains at the forefront of AI development." And, as I noted at the time, it was not particularly reliable, failing to accurately count the number of times the letter ‘r’ appeared in the word “strawberry.”
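For reference, the task that tripped up o1 is one that a single line of deterministic code gets right every time:

```python
# Counting the letter 'r' in "strawberry" is trivially verifiable:
print("strawberry".count("r"))  # prints 3
```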
At this point, it's fairly obvious that OpenAI wasn’t anywhere near the “forefront of AI development,” and now that its competitive advantage is effectively gone, there are genuine doubts about what comes next for the company.
As I'll go into, there are many questionable parts of DeepSeek's story — its funding, what GPUs it has, and how much it actually spent training these models — but what we definitively understand to be true is bad news for OpenAI, and, I would argue, every other large US tech firm that jumped on the generative AI bandwagon in the past few years.
DeepSeek’s models actually exist, they work (at least, by the standards of hallucination-prone LLMs that don’t, at the risk of repeating myself, know anything in the true meaning of the word), they've been independently verified to be competitive in performance, and they are orders of magnitude cheaper than those from both the hyperscalers (e.g. Google's Gemini, Meta's Llama, and Amazon Q) and from OpenAI and Anthropic.
DeepSeek's models don't require massive new data centers (they run on the GPUs currently used to run services like ChatGPT, and can even work on more austere hardware), nor do they require an endless supply of bigger, faster NVIDIA GPUs every year to progress. The entire AI bubble was inflated based on the premise that these models were simply impossible to build without burning massive amounts of cash, straining the power grid, and blowing past emissions goals, and that these were necessary costs to create "powerful AI."
Obviously, that wasn’t true. Now the markets are asking a very reasonable question: “did we just waste $200 billion?”
First of all, if you want a super deep dive into DeepSeek, I can't recommend VentureBeat's writeup enough. I'll be quoting it liberally, because it deserves the credit for giving a very succinct and well-explained background.
First, some background on how DeepSeek got to where it did. DeepSeek, a 2023 spin-off from Chinese hedge fund High-Flyer Quant, began by developing AI models for its proprietary chatbot before releasing them for public use. Little is known about the company’s exact approach, but it quickly open sourced its models, and it’s extremely likely that the company built upon open projects produced by Meta, such as the Llama models, and the machine learning library PyTorch.
To train its models, High-Flyer Quant secured over 10,000 NVIDIA GPUs before U.S. export restrictions kicked in, and reportedly expanded to 50,000 GPUs through alternative supply routes, despite trade barriers. This pales in comparison to leading AI labs like OpenAI, Google, and Anthropic, which operate with more than 500,000 GPUs each.
Now, you've likely seen or heard that DeepSeek "trained its latest model for $5.6 million," and I want to be clear that any and all mentions of this number are estimates. In fact, the provenance of the "$5.58 million" number appears to be a citation of a post made by NVIDIA engineer Jim Fan in an article from the South China Morning Post, which links to another article from the South China Morning Post, which simply states that "DeepSeek V3 comes with 671 billion parameters and was trained in around two months at a cost of US$5.58 million" with no additional citations of any kind. As such, take them with a pinch of salt.
While some have estimated the cost (DeepSeek's V3 model was allegedly trained using 2,048 NVIDIA H800 GPUs, according to its paper), as Ben Thompson of Stratechery made clear, the "$5.58 million" number only covers the literal cost of the official training run of V3 (and this is made fairly clear in the paper!), meaning that any costs related to prior research or experiments on how to build the model were left out.
While it's safe to say that DeepSeek's models are cheaper to train, the actual costs — especially as DeepSeek doesn't share its training data, which some might argue means its models are not really open source — are a little harder to guess at. Nevertheless, Thompson (who I, and a great deal of people in the tech industry, deeply respect) lays out in detail how the specific way that DeepSeek describes training its models suggests that it was working around the constrained memory of the NVIDIA GPUs sold to China (where NVIDIA is prevented by US export controls from selling its most capable hardware over fears they’ll help advance the country’s military development):
Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of computing; that’s because DeepSeek actually programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is actually impossible to do in CUDA. DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense using H800s.
DeepSeek's models — V3 and R1 — are more efficient (and as a result cheaper to run), and can be accessed via its API at prices that are astronomically cheaper than OpenAI's. DeepSeek-Chat — running DeepSeek's GPT-4o competitive V3 model — costs $0.07 per 1 million input tokens (as in commands given to the model) and $1.10 per 1 million output tokens (as in the resulting output from the model), a dramatic price drop from the $2.50 per 1 million input tokens and $10 per 1 million output tokens that OpenAI charges for GPT-4o. DeepSeek-Reasoner — its "reasoning" model — costs $0.55 per 1 million input tokens, and $2.19 per 1 million output tokens compared to OpenAI's o1 model, which costs $15 per 1 million input tokens and $60 per 1 million output tokens.
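To make those per-token prices concrete, here's the arithmetic on a hypothetical workload. The token counts are mine and purely illustrative; the prices are the ones quoted above.

```python
# Cost comparison using the per-million-token prices quoted above.
# The 10M-input / 2M-output workload is an illustrative assumption.
PRICES = {  # model: (USD per 1M input tokens, USD per 1M output tokens)
    "deepseek-chat": (0.07, 1.10),
    "gpt-4o": (2.50, 10.00),
    "deepseek-reasoner": (0.55, 2.19),
    "o1": (15.00, 60.00),
}

def api_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    price_in, price_out = PRICES[model]
    return (input_tokens / 1e6) * price_in + (output_tokens / 1e6) * price_out

for model in PRICES:
    print(f"{model}: ${api_cost(model, 10_000_000, 2_000_000):,.2f}")
# deepseek-chat: $2.90      gpt-4o: $45.00
# deepseek-reasoner: $9.88  o1: $270.00
```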
Now, there's a very obvious "but" here. We do not know where DeepSeek is hosting its models, who has access to that data, or where that data is coming from or going. We don't even know who funds DeepSeek, other than that it’s connected to High-Flyer, the hedge fund that it split from in 2023. There are concerns that DeepSeek could be state-funded, and that DeepSeek's low prices are a kind of geopolitical weapon, breaking the back of the generative AI industry in America.
I have no idea whether that’s the case. It’s certainly true that China has long treated AI as a strategic part of its national industrial policy, and is reported to help companies in sectors where it wants to catch up with the Western world. The Made In China 2025 initiative reportedly saw hundreds of billions of dollars provided to Chinese firms working in industries like chipmaking, aviation, and yes, artificial intelligence. The extent of the support isn’t exactly transparent, and so it’s not entirely out of the realm of possibility that DeepSeek is the recipient of state aid. The good news is that we're gonna find out fairly quickly. American AI infrastructure company Groq is already bringing DeepSeek's models online, meaning that we'll at the very least get confirmation of whether these prices are realistic, or heavily subsidized by whomever it is that backs DeepSeek.
It’s also true that DeepSeek is owned by a hedge fund, which likely isn’t short of cash to pump into the venture.
Aside: Given that OpenAI is the beneficiary of millions in cloud compute credits, and gets reduced pricing for Microsoft’s Azure cloud services, it’s a bit tough for it to complain about a rival being subsidized by a larger entity with the ability to absorb the costs of doing business, should that be the case. And yes, I know Microsoft isn’t a state, but with a market cap of $3.2 trillion and quarterly revenues larger than the combined GDPs of some EU and NATO nations, it’s the next best thing.
Whatever concerns there may be about malign Chinese influence are bordering on irrelevant, outside of the low prices offered by DeepSeek itself, and even that is speculative at this point. Once these models are hosted elsewhere, and once DeepSeek's methods (which I'll get to shortly) are recreated (which won't take long), I believe we'll see that these prices are indicative of how cheap these models are to run.
So why didn't any American AI company do this first? That's a bloody good question, and because I'm me, I have a hypothesis: the companies making foundation models (such as OpenAI and Anthropic) were never incentivized to do more with less. Their chummy relationships with the hyperscalers were focused almost entirely on "make the biggest, most hugest models possible, using the biggest, most hugest chips," and because the absence of profitability never stopped them from raising more money, efficiency was never a major concern.
Let me put it in simpler terms: imagine living on $1,500 a month, and then imagine how you'd live on $150,000 a month, and you have to, Brewster's Millions style, spend as much of it as you can to complete the mission of "live your life." In the former example, your concern is survival — you have a limited amount of money and must make it go as far as possible, with real sacrifices to be made with every dollar you spend. In the latter, you're incentivized to splurge, to lean into excess, to pursue a vague remit of "living" your life. Your actions are dictated not by any existential threats — or indeed future planning — but by whatever you perceive to be an opportunity to "live."
OpenAI and Anthropic are emblematic of what happens when survival takes a backseat to “living.” They have been incentivized by frothy venture capital and public markets desperate for the next big growth market to build bigger models and sell even bigger dreams, like Dario Amodei of Anthropic saying that AI "could surpass almost all humans at almost everything" "shortly after 2027." Both OpenAI and Anthropic have effectively lived their existence with the infinite money cheat from The Sims, with both companies bleeding billions of dollars a year after revenue and still operating as if the money will never run out. If they were worried about it, they would have certainly tried to do what DeepSeek has done, except they didn't have to, because both of them had endless cash and access to GPUs from either Microsoft, Amazon or Google.
OpenAI and Anthropic have never been made to sweat, receiving endless amounts of free marketing from a tech and business media happy to print whatever vapid bullshit they spout, raising money at will (Anthropic is currently raising another $2 billion, valuing the company at $60 billion), all off of a narrative of "we need more money than any company has ever needed before because the things we're doing have to cost this much."
Do I think they were aware that there were methods to make their models more efficient? Sure. OpenAI tried (and failed) in 2023 to deliver a more efficient model to Microsoft. I'm sure there are teams at both Anthropic and OpenAI that are specifically dedicated to making things "more efficient." But they didn't have to do it, and so they didn’t.
As I've written before, OpenAI simply burns money, has been allowed to burn money, and up until recently likely would've been allowed to burn even more money, because everybody — all of the American model developers — appeared to agree that the only way to develop Large Language Models was to make the models as big as humanly possible, and work out troublesome stuff like "making them profitable" later, which I presume is when "AGI happens," a thing they are still in the process of defining.
DeepSeek, on the other hand, had to work out a way to make its own Large Language Models within the constraints of the hamstrung NVIDIA chips that can be legally sold to China. While there is a whole cottage industry of using resellers and other third parties to get restricted silicon into the country, as Thompson over at Stratechery explains, the entire way in which DeepSeek went about developing its models suggests that it was working around very specific memory bandwidth constraints (meaning limits on the amount of data that can be fed to and from the chips). In essence, doing more with less wasn’t something it chose, but something it had to do.
While it's certainly possible that DeepSeek had unrestrained access to American silicon, the actual work it’s done (which is well-documented in the research paper accompanying the V3 model) heavily suggests it was working within the constraints of lower memory bandwidth. Basically, it wasn’t able to move as much data around the chips, which is a problem, because GPUs are useful in AI precisely because they can move a lot of data at once and process it in parallel (running multiple tasks simultaneously). Lower bandwidth means less data moving, which means things like training and inference take longer.
And so, it had to get creative. DeepSeek combined numerous techniques to reduce how much of the model it loaded into memory at any given time. These included a Mixture of Experts architecture (where the model is split into different "experts" that handle different kinds of inputs and outputs — a technique OpenAI's GPT-4o is widely believed to use as well) and multi-head latent attention, where DeepSeek compresses the key-value cache (think of it as the place where a Large Language Model writes down everything it's processed so far from an input as it generates) into something called a "latent vector." Essentially, instead of writing down all of the information, it caches only what it believes is the most important information.
In simpler terms, DeepSeek's approach breaks the Large Language Model into a series of different experts — specialist parts of the model — to handle specific inputs and outputs, and it’s found a way to take shortcuts with the amount of information it caches without sacrificing performance. Yes, there is a more complex explanation here, but this is so you have a frame of reference.
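For those who want slightly more than a frame of reference, here's a toy PyTorch sketch of both ideas. To be clear, this is my illustration of the general techniques, not DeepSeek's actual architecture; its V3 technical report describes the real versions, and every dimension and name below is an assumption for demonstration purposes.

```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Mixture of Experts: a router sends each token to a few small "expert"
    networks, so only a fraction of the model's weights run per token."""
    def __init__(self, dim: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):  # only the top_k chosen experts run
            for e in idx[:, k].unique():
                mask = idx[:, k] == e
                out[mask] += weights[mask, k, None] * self.experts[int(e)](x[mask])
        return out

class ToyLatentKVCache(nn.Module):
    """Latent-attention flavor: squeeze keys/values into a small latent
    vector before caching, re-expanding them at attention time. The cache
    then holds latent_dim floats per token instead of 2 * dim."""
    def __init__(self, dim: int, latent_dim: int):
        super().__init__()
        self.compress = nn.Linear(dim, latent_dim)  # write path: shrink
        self.expand_k = nn.Linear(latent_dim, dim)  # read path: rebuild keys
        self.expand_v = nn.Linear(latent_dim, dim)  # read path: rebuild values

    def write(self, token_states: torch.Tensor) -> torch.Tensor:
        return self.compress(token_states)  # cache only the latent

    def read(self, latents: torch.Tensor):
        return self.expand_k(latents), self.expand_v(latents)

x = torch.randn(16, 64)          # 16 tokens of a 64-dim toy model
print(ToyMoELayer(64)(x).shape)  # torch.Size([16, 64])
```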
There's also the training data situation — and another mea culpa. I've previously discussed the concept of model collapse, and how feeding synthetic data (training data created by an AI, rather than a human) to an AI model can end up teaching it bad habits, but it seems that DeepSeek succeeded in training its models using synthetic data, but specifically for subjects (to quote GeekWire's Jon Turow) "...like mathematics where correctness is unambiguous," and using "...highly efficient reward functions that could identify which new training examples would actually improve the model, avoiding wasted compute on redundant data."
It seems to have worked. Though model collapse may still be a possibility, this approach — extremely precise use of synthetic data — is in line with some of the defenses against model collapse I've heard from the LLM developers I've talked to. This is also a situation where we don't know the exact training data, and it doesn’t negate any of the previous points made about model collapse. Synthetic data might work where the output is something you could figure out on a TI-83 calculator, but when you get into anything a bit more fuzzy (like written text, or anything with an element of analysis), you’ll likely start to encounter unhappy side effects.
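To illustrate the "correctness is unambiguous" point, here's a minimal sketch of a verifiable reward check: generated (problem, answer) pairs are kept only if a mechanical checker agrees with them. It's my toy illustration of the general idea, not DeepSeek's actual reward function, and the arithmetic-only scope is an assumption.

```python
# Keep a synthetic training example only when its answer verifies mechanically.
from fractions import Fraction

def verify_arithmetic(expression: str, claimed_answer: str) -> bool:
    """Reward function for the narrow case where correctness is unambiguous:
    evaluate the expression and compare it exactly to the claimed answer."""
    try:
        truth = Fraction(eval(expression, {"__builtins__": {}}))  # trusted strings only
        return Fraction(claimed_answer) == truth
    except Exception:
        return False

def filter_synthetic_batch(candidates):
    """Discard wrong pairs instead of wasting compute training on them."""
    return [(expr, ans) for expr, ans in candidates if verify_arithmetic(expr, ans)]

batch = [("3 * (4 + 5)", "27"), ("10 / 4", "5/2"), ("7 - 2", "4")]
print(filter_synthetic_batch(batch))  # [('3 * (4 + 5)', '27'), ('10 / 4', '5/2')]
```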
There's also some scuttlebutt about where DeepSeek got this data. Ben Thompson at Stratechery suggests that DeepSeek's models are potentially "distilling" other models' outputs — by which I mean having another model (say, Meta's Llama, or OpenAI's GPT-4o, which is why DeepSeek identified itself as ChatGPT at one point) spit out outputs specifically to train parts of DeepSeek.
Distillation is a means of extracting understanding from another model; you can send inputs to the teacher model and record the outputs, and use that to train the student model. This is how you get models like GPT-4 Turbo from GPT-4. Distillation is easier for a company to do on its own models, because it has full access, but you can still do distillation in a somewhat more unwieldy way via API, or even, if you get creative, via chat clients.
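Mechanically, the API version is about as simple as it sounds. Here's a hedged sketch: the model name, prompt, and output file are my placeholder assumptions, and, as the next paragraph notes, doing this against a commercial API generally violates its terms of service.

```python
# Sketch of API-based distillation: query a "teacher" model, record its
# outputs, and accumulate (prompt, completion) pairs to fine-tune a student.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def collect_teacher_outputs(prompts, teacher_model="gpt-4o"):
    pairs = []
    for prompt in prompts:
        response = client.chat.completions.create(
            model=teacher_model,
            messages=[{"role": "user", "content": prompt}],
        )
        pairs.append({"prompt": prompt,
                      "completion": response.choices[0].message.content})
    return pairs

# Write out the JSONL shape most fine-tuning pipelines expect.
with open("distillation_data.jsonl", "w") as f:
    for pair in collect_teacher_outputs(["Explain KV caching in one paragraph."]):
        f.write(json.dumps(pair) + "\n")
```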
Distillation obviously violates the terms of service of various models, but the only way to stop it is to actually cut off access, via IP banning, rate limiting, etc. It’s assumed to be widespread in terms of model training, and is why there are an ever-increasing number of models converging on GPT-4o quality. This doesn’t mean that we know for a fact that DeepSeek distilled 4o or Claude, but frankly, it would be odd if they didn’t.
OpenAI has reportedly found “evidence” that DeepSeek used OpenAI’s models to train its own, according to the Financial Times, although it has yet to make any formal allegations. It did say that using ChatGPT to train a competing model violates its terms of service. David Sacks, the investor and Trump Administration AI and Crypto czar, says “it’s possible” that this occurred, although he failed to provide evidence.
Personally, I genuinely want OpenAI to point a finger at DeepSeek and accuse it of IP theft, purely for the hypocrisy factor. This is a company that exists purely from the wholesale industrial larceny of content produced by individual creators and internet users, and now it’s worried about a rival pilfering its own goods?
Cry more, Altman, you nasty little worm.
As I've written about many, many, many times, the Large Language Models run by companies like OpenAI, Anthropic, Google and Meta are unprofitable and unsustainable, and the transformer-based architecture they run on has peaked. They're running out of training data, and the actual capabilities of these models were peaking as far back as March 2024.
Nevertheless, I had assumed — incorrectly — that there would be no way to make them more efficient, because I had assumed — also incorrectly — that the hyperscalers (along with OpenAI and Anthropic) would be constantly looking for ways to bring down the ruinous costs of their services. After all, OpenAI lost $5 billion (after $3.7 billion in revenue, too!), and Anthropic just under $3 billion in 2024.
What I didn't wager on was that, potentially, nobody was trying. My mistake was — if you can believe this — being too generous to the AI companies, assuming that they didn’t pursue efficiency because they couldn’t, and not because they couldn’t be bothered.
You see, the pre-DeepSeek status quo was one where several truths allowed the party to keep going: that these models had to be huge, and hugely expensive to train and run; that the only way forward was ever more GPUs and ever bigger data centers; and that profitability was a problem that could be solved later.
Now, I've argued for a while that the last part of that plan was insane — that there was no path to profitability for these Large Language Models, as I believed there simply wasn't a way to make these models more efficient.
In a way, I was right. The current models developed by both the hyperscalers (Gemini, Llama, et al.) and multi-billion-dollar "startups" like OpenAI and Anthropic are horribly inefficient; I had just made the mistake of assuming that they'd actually tried to make them more efficient.
What we're witnessing is the American tech industry's greatest act of hubris — a monument to the barely-conscious stewards of so-called "innovation," incapable of breaking the kayfabe of "competition" where everybody makes the same products, charges about the same amount, and mostly "innovates" in the same direction.
Somehow nobody — not Google, not Microsoft, not OpenAI, not Meta, not Amazon, not Oracle — thought to try, or was capable of creating, something like DeepSeek. That doesn't mean DeepSeek's team is particularly remarkable, or that it found anything new; it means that for all the talent, trillions of dollars of market capitalization and supposed expertise of America's tech oligarchs, not one bright spark thought to try the things that DeepSeek tried, which appear to boil down to "what if we didn't use as much memory" and "what if we tried synthetic data."
And because the cost of model development and inference was so astronomical, they never assumed that anyone would try to usurp their position. This is especially bad considering that China’s focus on AI as a strategic industrial priority was no secret — even if the ways it supported domestic companies were. In the same way that the automotive industry was blindsided by China’s EV manufacturers, the same is now happening in AI.
Fat, happy and lazy, and most of all, oblivious, America's most powerful tech companies sat back and built bigger, messier models powered by sprawling data centers and billions of dollars of NVIDIA GPUs, a bacchanalia of spending that strains our energy grid and depletes our water reserves without, it appears, much consideration of whether an alternative was possible. I refuse to believe that none of these companies could've done this — which means they either chose not to, or were so utterly myopic, so excited to burn so much money and so many parts of the Earth in pursuit of further growth, that they didn't think to try.
This isn't about China — it's so much fucking easier if we let it be about China — it's about how the American tech industry is incurious, lazy, entitled, directionless and irresponsible. OpenAI and Anthropic are the antithesis of Silicon Valley. They are incumbents, public companies wearing startup suits, unwilling to take on real challenges, more focused on optics and marketing than on solving problems, even the problems that they themselves created with their large language models.
By making this "about China" we ignore the root of the problem — that the American tech industry is no longer interested in making good software that helps people.
DeepSeek shouldn't be scary to them, because they should've come up with it first. It uses less memory and fewer resources, employing several quirky workarounds to adapt to the limited compute available — all things that you'd previously associate with Silicon Valley. Except Silicon Valley's only interest, like the rest of the American tech industry, is The Rot Economy. It cares about growth at all costs, even if said costs were readily mitigable, or ultimately self-defeating.
To be clear, if the alternative is that all of these companies simply didn't come up with this idea, that in and of itself is a damning indictment of the valley. Was nobody thinking about this stuff? If they were, why didn't Sam Altman, or Dario Amodei, or Satya Nadella, or anyone else put serious resources into efficiency? Was it because there was no reason to? Was it because there was, if we're honest, no real competition between any of these companies? Did anybody try anything other than throwing as much compute and training data at the model as possible?
It's all so cynical and antithetical to innovation itself. Surely if any of this shit mattered — if generative AI truly was valid and viable in the eyes of these companies — they would have actively worked to do something like DeepSeek.
Don't get me wrong, it appears DeepSeek employed all sorts of weird tricks to make this work, including taking advantage of distinct parts of both the CPU and GPU to create a virtual Data Processing Unit, essentially redefining how data is communicated within the servers running training and inference. It had to do things that a company with unrestrained access to capital and equipment wouldn’t have to do.
Nevertheless, OpenAI and Anthropic both had more than enough money and hiring power to try — and succeed — at creating a model this efficient and capable of running on older GPUs, except what they actually wanted was more rapacious growth and the chance to build even bigger data centers with even more compute. OpenAI has pledged $19 billion to fund the "Stargate" data center project — an amount it is somehow going to raise through further debt and equity, despite the fact that it’s likely already in the process of raising another round as we speak just to keep the company afloat.
OpenAI is as much a lazy, cumbersome incumbent as Google or Microsoft, and it’s just as innovative too. The launch of its "Operator" "agent" was a joke — a barely-functional product that is allegedly meant to control your computer and take distinct actions, but doesn't seem to work. Casey Newton, a man so gratingly credulous that it makes me want to scream, of course wrote that it was a "compelling demonstration" that "represented an extraordinary technological achievement" that also somehow was "significantly slower, more frustrating, and more expensive than simply doing any of these tasks yourself."
Casey, of course, had some thoughts about DeepSeek — that there were reasons to be worried, but that "American AI labs [were] still in the lead," saying that DeepSeek was "only optimizing technology that OpenAI and others invented first," before saying that it was "only last week that OpenAI made available to Pro plan users a computer that can use itself," a statement bordering on factually incorrect.
Let's be frank: these companies aren't building shit. OpenAI and Anthropic are both limply throwing around the idea that "agents are possible" in an attempt to raise more money to burn, and after the launch of DeepSeek, I have to wonder what any investor thinks they're investing in.
OpenAI can't simply "add on" DeepSeek to its models, if only for the optics. It would be a concession. An admission that it slipped and needs to catch up, and not to its main rival, or to another huge tech firm, but to a company that few, before last weekend, had even heard of. And this, in turn, will make any investor think twice about writing the company a blank check — which, as I’ve said ad nauseam, is potentially fatal, as OpenAI needs to continually raise more money than any startup ever has in history, and it has no path to breaking even.
If OpenAI wants to do its own cheaper, more-efficient model, it’ll likely have to create it from scratch, and while it could do distillation to make it "more OpenAI-like" using OpenAI's own models, that's effectively what DeepSeek already did. Even with OpenAI's much larger team and more powerful hardware, it's hard to see how creating a smaller, more-efficient, and almost-as-powerful version of o1 benefits the company, because said version has, well, already been beaten to market by DeepSeek, and thanks to DeepSeek will almost certainly have a great deal of competition for a product that, to this day, lacks any real killer apps.
And, again, anyone can build on top of what DeepSeek has already built. Where is OpenAI's moat? Where is Anthropic's moat? What are the things that truly make these companies worth $60 billion or $150 billion? What is the technology they own, or the talent they have, that justifies these valuations? It's hard to argue that their models are particularly valuable anymore.
Celebrity, perhaps? Altman, as discussed previously, is an artful bullshitter, having built a career out of being in the right places, having the right connections, and knowing exactly what to say — especially to a credulous tech media without the spine or inclination to push back on his more fanciful claims. And already, Altman has tried to shrug off DeepSeek’s rise, admitting that while “deepseek's r1 is an impressive model,” particularly when it comes to its efficiency, “[OpenAI] will obviously deliver much better models and also it's legit invigorating to have a new competitor!”
He ended with “look forward to bringing you all AGI and beyond” — something which, I add, has always been close on the horizon in Altman’s world, although curiously has yet to materialize, or even come close to materializing.
Altman is, in essence, the Muhammad Saeed al-Sahhaf of tech — the Saddam-era Iraqi Minister of Information who, as Abrams tanks entered Baghdad and gunfire could be heard in the background, proclaimed an entirely counterfactual world where the coalition forces weren’t merely losing, but American troops were “committing suicide by the hundreds on the gates of Baghdad.” It’s adorable, and yes, it’s also understandable, but nobody should — or could — believe that OpenAI hasn’t just suffered some form of existential wound.
DeepSeek has commoditized the Large Language Model, publishing both the source code and the guide to building your own. Whether or not someone chooses to pay DeepSeek is largely irrelevant — someone else will take what it’s created and build their own, or people will start running their own DeepSeek instances renting GPUs from one of the various cloud computing firms.
While NVIDIA will find other ways to make money — Jensen Huang always does — it's going to be a hard sell for any hyperscaler to justify spending billions more on GPUs to markets that now know that near-identical models can be built for a fraction of the cost with older hardware. Why do you need Blackwell? The narrative of "this is the only way to build powerful models" no longer holds water, and the only other selling point it has is "what if the Chinese do something?"
Well, the Chinese did something, and they've now proven that they can not only compete with American AI companies, but do so in such an effective way that they can effectively crash the market.
It still isn't clear if these models are going to be profitable — as discussed, it's unclear who funds DeepSeek and whether its current pricing is sustainable — but they are likely going to be a damn sight more profitable than anything OpenAI is flogging. After all, OpenAI loses money on every transaction — even its $200-a-month "ChatGPT Pro" subscription. And if OpenAI cuts its prices to compete with DeepSeek, its losses will only deepen.
And as I’ve said above, this is all so deeply cynical, because it’s obvious that none of this was ever about the proliferation of generative AI, or making sure that generative AI was “accessible.”
Putting aside my personal beliefs for a second, it’s fairly obvious why these companies wouldn’t want to create something like DeepSeek — because creating an open source model that uses fewer resources means that OpenAI, Anthropic and their associated hyperscalers would lose their soft monopoly on Large Language Models.
I’ll explain.
Before DeepSeek, making a competitive Large Language Model — as in one that you can commercialize — required exceedingly large amounts of capital, and making larger ones effectively required you to kiss the ring of Microsoft, Google or Amazon. While it isn’t clear what it cost to train OpenAI’s o1 reasoning model, we know that GPT-4o cost around $100 million, and o1, as a more complex model, would likely cost even more.
We also know that OpenAI’s training and inference costs in 2024 were around $7 billion, meaning that either refining current models or building new ones is quite costly.
The mythology of both OpenAI and Anthropic is that these large amounts of capital weren’t just necessary, but the only way to do this. While these companies ostensibly “compete,” neither of them seemed concerned about doing so as actual businesses that made products that were, say, cheaper and more efficient to run, because in doing so they would break the illusion that the only way to create “powerful artificial intelligence” was to hand billions of dollars to one of two companies, and build giant data centers to build even larger language models.
This is artificial intelligence’s Rot Economy — two lumbering companies claiming they’re startups creating a narrative that the only way to “build the future” is to keep growing, to build more data centers, to build larger language models, to consume more training data, with each infusion of capital, GPU purchase and data center buildout creating an infrastructural moat that always leads back to one of a few tech hyperscalers.
OpenAI and Anthropic need the narrative to say “buy more GPUs and build more data centers,” because in doing so they create the conditions of an infrastructural monopoly, because the terms — forget about “building software” that “does stuff” for a second — were implicitly that smaller players can’t enter the market because “the market” is defined as “large language models that cost hundreds of millions of dollars and require access to more compute than any startup could access without the infrastructure of a public tech company.”
Remember, neither of these companies has ever marketed themselves based on the products they actually build. Large Language Models are, in and of themselves, a fairly bland software product, which is why we’re yet to see any killer apps. This isn’t a particularly exciting pitch to investors or the public markets, because there’s no product, innovation or business model to point to, and if they actually tried to productize it and turn it into a business, it’s quite obvious at this point that there really isn’t a multi-trillion-dollar industry for generative AI.
Indeed, look at the response to Microsoft’s strong-arming of Copilot onto Office 365 users, both personal and commercial. Nobody said “wow, this is great.” Lots of people asked “why am I being charged significantly more for a product that I don’t care about?”
OpenAI only makes 27% of its revenue from selling access to its models — around a billion dollars in annual recurring revenue — with the rest ($2.7 billion or so) coming from subscriptions to ChatGPT. If you ignore the hype, OpenAI and Anthropic are deeply boring software businesses with unprofitable, unreliable products prone to hallucinations, and their new products — such as OpenAI’s Sora — cost way too much money to both run and train to get results that, well, suck. Even OpenAI’s push into the federal government, with the release of ChatGPT Gov, is unlikely to reverse its dismal fortunes.
The only thing that OpenAI and Anthropic could do is sell the market a story about a thing it’s yet to build (such as AI that will somehow double human lifespans), and heavily intimate (or outright say) that the only way to build these made-up things was to keep funnelling billions to their companies and, by extension, that hyperscalers would have to keep funnelling billions of dollars to NVIDIA and into building data centers to crunch numbers in the hope that this wonderful, beautiful and entirely fictional world would materialize.
To make this *more* than a deeply boring software business, OpenAI and Anthropic needed models to get larger, and for the story to always be that there was only one way to build the future, that it cost hundreds of billions of dollars, and that only the biggest geniuses (who all happen to work at the same two or three places) were capable of doing it.
Post-DeepSeek, there isn’t really a compelling argument for investing hundreds of billions of capital expenditures in data centers, buying new GPUs, or even pursuing Large Language Models as they currently stand. It’s possible — and DeepSeek, through its research papers, explained in detail how — to build models competitive with both of OpenAI’s leading models, and that’s assuming you don’t simply build on top of the ones DeepSeek released.
It also seriously calls into question what it is you’re paying OpenAI for in its various subscriptions — most of which (other than the $200-a-month “Pro” subscription) have hard limits on how much you can use OpenAI’s most advanced reasoning models.
One thing we do know is that OpenAI and Anthropic will now have to drop the price of accessing their models, and potentially even the cost of their subscriptions. I’d argue that despite the significant price difference between o1 and DeepSeek’s R1 reasoning model, the real danger to both OpenAI and Anthropic is DeepSeek V3, which competes with GPT-4o.
DeepSeek’s narrative shift isn’t just commoditizing LLMs at large, but commoditizing the most expensive ones run by two monopolists backed by three other monopolists.
Fundamentally, the magic has died. There’s no halo around Sam Altman or Dario Amodei’s head anymore, as their only real argument was “we’re the only ones that can do this,” something that nobody should’ve believed in the first place.
Up until this point, people believed that the reason these models were so expensive was because they had to be, and that we had to build more data centers and buy more silicon because that was just how things were. They believed that “reasoning models” were the future, even if members of the media didn’t really seem to understand what they did or why they mattered, and that as a result they had to be expensive, because OpenAI and their ilk were just so smart, even though it wasn’t obvious what it was that “reasoning” allowed you to do.
Now we’re going to find out, because reasoning is commoditized, along with Large Language Models in general. Funnily enough, the way that DeepSeek may have been trained — using, at least in part, synthetic data — also pushes against the paradigm that these companies even need to use other people’s training data, though their argument, of course, will be that they “need more.”
We also don’t know the environmental effects, because even if these models are cheaper, they still require expensive, energy-guzzling GPUs to run at full tilt.
In any case, if I had to guess, the result will be the markets accepting that generative AI isn’t the future. OpenAI and Anthropic no longer have moats to raise capital with. Sure, they could con another couple of billion dollars out of Masayoshi Son and other gormless billionaires, but what’re they offering, exactly? The chance to continue an industry-wide con? The chance to participate in the capitalist death cult? The chance to burn money at a faster rate than WeWork ever did?
Or will this be the time that Microsoft, Amazon and Google drop OpenAI and Anthropic, making their own models based on DeepSeek’s work? What incentive is there for them to keep funding these companies? The hyperscalers hold all the cards — the GPUs and the infrastructure, and in the case of Microsoft, a non-revocable license that permits it unfettered use of and access to OpenAI’s tech — and there’s little stopping them from building their own models and dumping GPT and Claude.
As I’ve said before, I believe we’re at peak AI, and now that generative AI has been commoditized, the only thing that OpenAI and Anthropic have left is their ability to innovate, which I’m not sure they’re capable of doing.
And because we sit in the ruins of Silicon Valley, with our biggest “startups” all doing the same thing in the least-efficient way, living at the beck and call of public companies with multi-trillion-dollar market caps, everyone is trying to do the same thing in the same way based on the fantastical marketing nonsense of a succession of directionless rich guys that all want to create America’s Next Top Monopoly.
It’s time to wake up and accept that there was never an “AI arms race,” and that the only reason the hyperscalers built so many data centers and bought so many GPUs is that they’re run by people who don’t experience real problems and thus don’t know what problems real people face. Generative AI doesn’t solve any trillion-dollar problems, nor does it create outcomes that are profitable for any particular business.
DeepSeek’s models are cheaper to run, but the real magic trick they pulled is that they showed how utterly replaceable a company like OpenAI (and by extension any Large Language Model company) really is. There really isn’t anything special about any of these companies anymore — they have no moat, their infrastructural advantage is moot, and their hordes of talent irrelevant.
What DeepSeek has proven isn’t just technological, but philosophical. It shows that the scrappy spirit of Silicon Valley builders is dead, replaced by a series of different management consultants that lead teams of engineers to do things based on vibes.
You may ask if all of this means generative AI suddenly gets more prevalent — after all, Satya Nadella of Microsoft invoked Jevons paradox, which posits that as a resource becomes more efficient to use, total consumption of it increases.
Sadly, I hypothesize that something else happens. Right now, I do not believe that there are companies stymied by the pricing that OpenAI and its ilk offer, nor do I think there are many companies or use cases that don’t exist because Large Language Models are too expensive. AI companies took up a third of all venture capital funding last year, and on top of that, it’s fairly easy to try reasoning models like o1 and make a proof of concept without having to build an entire operational company. I don’t think anyone has been “on the sidelines” of generative AI due to costs (and remember, few seemed able to come up with a great use case for o1 or other reasoning models), and DeepSeek’s models, while cheaper, don’t have any new functionality.
Chaos Hypothetical! One way in which this entire facade could fall is if Mark Zuckerberg decides that he wants to simply destroy the entire market for Large Language Models. Meta has already formed four separate war rooms to break down how DeepSeek did it, and apparently, to quote The Information, “In pursuing Llama, CEO Mark Zuckerberg wants to commoditize AI models so that the applications that use such models, including Meta’s, generate more money than the sales of the AI models themselves. That could hurt Meta’s AI rivals such as OpenAI and Anthropic, which are on pace to generate billions of dollars in revenue from such sales.”
I could absolutely see Meta releasing its own version of DeepSeek’s models — it has the GPUs and Zuckerberg can never be fired, meaning that if he decided to simply throw billions of dollars into specifically creating his own deep-discounted LLMs to wipe out OpenAI he absolutely could. After all, last Friday Zuckerberg said that Meta would spend between $60 billion and $65 billion in capital expenditures this year — before the DeepSeek situation hit fever pitch — and I imagine the markets would love a more modest proposal that involves Meta offering a ChatGPT-beater simply to fuck over Sam Altman.
As a result, I don’t really see anything changing, beyond the eventual collapse of the API market for companies like Anthropic and OpenAI. Large Language Models (and reasoning models) are niche. The only reason that ChatGPT became such a big deal is because the tech industry has no other growth ideas, and despite the entire tech industry and public markets screaming about it, I can’t think of any major mass-market product that really matters.
ChatGPT is big because “everybody is talking about AI,” and ChatGPT is the big brand in AI. It’s not essential, and it’s only been treated as such because the media (and the markets) ran away with a narrative they barely understood. DeepSeek pierced that narrative because believing it also required you to believe that Sam Altman is a magician, versus an extremely shitty CEO that burned a bunch of money.
Sure, you can argue that “DeepSeek just built on top of software that already existed thanks to OpenAI,” which raises a fairly obvious question: why didn’t OpenAI do this itself? And another fairly obvious question: why does it matter?
In any case, the massive expense of running generative models hasn’t been the limiter on their deployment or success — you can blame that on the fact that they, as a piece of technology, are neither artificial intelligence nor capable of providing the kind of meaningful outcomes that would make them the next smartphone.
It’s all been a con, a painfully-obvious one, one I’ve been screaming about since February 2024, trying to explain that beneath the hype was an industry that provided modest-at-best outcomes rather than resembling any kind of “next big thing.”
Without “reasoning” as its magical new creation, OpenAI has nothing left. “Agents” aren’t coming. “AGI” isn’t coming. It was all flimflam to cover up how mediocre and unreliable the fundament of the supposed “AI revolution” really was.
All of this money, time, energy and talent was wasted thanks to a media industry that fails to hold the powerful to account, and markets run by executives that don’t know much of anything, and the whole edifice broke in two the moment that a few hundred Chinese engineers decided to compete.
It’s utterly sickening.
2025-01-18 03:47:33
In the last week we've seen the emergence of the true Meta — and the true Mark Zuckerberg — as the company ended its fact-checking program, claiming that (and I quote) "fact checkers have been too politically biased and have destroyed more trust than they've created" on both Instagram and Facebook, the latter of which was shown in a study from George Washington University to, by design, "afford antivaccine content producers several means to circumvent the intent of misinformation removal policies." Meta has also killed its diversity, equity and inclusion programs.
Shortly after announcing the policies, Zuckerberg went on the Joe Rogan Experience and had a full-scale pissfit, claiming that corporations are "culturally neutered" and that companies should have both "more masculine energy" and "[have a culture] that celebrates the aggression a bit more," adding that said culture would "[have] its own merits that are really positive." Zuckerberg believes that modern corporate culture has somehow framed masculinity as bad, something he does not attempt to elaborate upon, frame with any kind of evidence (after all, it's Joe Rogan), or really connect to anything besides a sense of directionless grievance.
This means that Meta has now "[gotten rid of] a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate," which in practice means that Meta now allows you to say that being gay is a mental illness or describe immigrants as "filth." Casey Newton at Platformer — who I have been deeply critical of (and will be later in this piece!) — has done an excellent job reporting on exactly how horrifying these policies are, revealing how Meta's internal guidelines allow Facebook users to say that trans people are both mentally ill and don't exist (to be clear, if you feel this way, stop reading and take a quick trip to the garage with your car keys), and included one of the most wretched things I've ever read: that Alex Schultz, Meta's CMO, a gay man, "suggested in an internal post that people seeing their queer friends and family members abused on Facebook and Instagram could lead to increased support for LGBTQ rights."
This is the kind of social network that Mark Zuckerberg wants — an unrestrained, unfiltered, unrepentantly toxic and noxiously heteronormative one, untethered from the frustrating norms of "making sure that a social network of billions of people doesn't actively encourage hate of multiple different marginalized groups."
Finally, Mark Zuckerberg can do whatever he wants, as opposed to the past 20 years, where it's hard to argue that he's faced an unrelenting series of punishments. Zuckerberg's net worth recently hit $213 billion; he's running a company with a market capitalization of over $1.5 trillion that he can never be fired from; he owns a 1,400-acre compound in Hawaii; and, while dealing with all this abject suffering, he was forced to half-heartedly apologize during a Senate hearing where he was tortured (translation: made to feel slightly uncomfortable), after only having six years to recover from the last time, when nothing happened to him in a Senate hearing.
Sarcasm aside, few living people have had it easier than Mark Zuckerberg, a man who has been insulated from consequence, risk, and responsibility for nearly twenty years. The sudden (and warranted) hysteria around these monstrous changes has an air of surprise, framing Meta (and Zuckerberg's) moves as a "MAGA-tilt" to "please Donald Trump," which I believe is a comfortable way to frame a situation that is neither sudden nor surprising.
Mere months ago, the media was fawning over Mark Zuckerberg's new look, desperate to hear about why he's wearing gold chains, declaring that he had "the swagger of a Roman emperor" and that he had (and I quote the Washington Post) transformed himself from "a dorky, democracy-destroying CEO into a dripped-out, jacked AI accelerationist in the eyes of potential Meta recruits." Zuckerberg was, until this last week, being celebrated for the very thing people are upset about right now — flimsy, self-conscious and performative macho bullshit that only signifies strength to weak men and those credulous enough to accept it, which in this case means "almost every major media outlet." The only thing he did differently this time was come out and say it. After all, there was no punishment or judgment for his last macho media cycle, and if anything he proved that many will accept whatever he says in whatever way he does it.
Yet I want to be clear that what you're seeing with Meta — and by extension Zuckerberg — is not a "sudden" move, but the direct result of a man that has never, ever been held in check. It is a fantasy to describe — or even hint — that these changes are the beginning of some sort of unrestrained Meta, rather than part of the intentional destruction of the market leader in growth-at-all-costs Rot Economics.
As I wrote in the middle of last year, Meta has spent years gradually, and intentionally, making the experience of using its products worse in pursuit of perpetual growth. When I say "intentionally," I mean that product decisions — such as limiting the information in notifications as a means of making users return to the app, and heavily promoting clickbait headlines as a means of keeping people on the site longer — have been made for years, at times in broad daylight, and have led to the deterioration of the core experiences of both Facebook and Instagram.
Some are touting Zuckerberg's current move as some sort of master plan to appease Trump and conservatives, suggesting that this is a "MAGA-fication" of these platforms, where conservatives will somehow be given preferential treatment, such as, say, Facebook's recommendation engine promoting dramatically more conservative media than other sources.
Which is something that already fucking happened!
In 2020, journalist Kevin Roose created an automated Twitter account called "Facebook's Top 10," listing the top-performing (as in what posts were shared, viewed and commented on the most) link posts by U.S. Facebook pages on a daily basis, something he was able to do by connecting to Facebook's "CrowdTangle" data analytics tool, a product created for researchers and journalists to understand what was happening on the world's largest social network.
Roose's reporting revealed that Meta's top-performing links regularly skewed toward right-wing influencers like Dan Bongino, Ben Shapiro and Sean Hannity, outlets like Fox News, and the page of president-elect Donald Trump. Internally, Meta was freaking out, suggesting to Roose that "engagement" was a misleading measurement of what was popular on Facebook, and that the real litmus test was "reach," as in how many people saw it. Roose also reported that internal arguments at Meta led it to suggest it would make a Twitter account of its own, one with a more "balanced" view based on internal data.
Meta even suggested the obvious answer — sharing reach data — would vindicate its position, only for CrowdTangle's CEO to add that "false and misleading news stories also rose to the top of those lists."
These stories danced around one important detail: that these links were likely recommended by Facebook's algorithm, which has reliably and repeatedly skewed conservative for many years. A study in The Economist from September 2020 found that the most popular American media outlets on Facebook in a one-month period were Breitbart and Fox News, and that both Facebook page engagements and website views heavily skewed conservative.
While one could argue that this might be the will of the users, what a user sees on Facebook is almost entirely algorithmic, and thus it's reasonable to assume that said algorithm was deliberately pushing conservative content.
At this time, Meta's Head of Public Policy was Joel Kaplan, a man who previously served as George W. Bush's Deputy Chief of Staff for Policy and handled public policy and affairs for Energy Future Holdings, the vehicle through which three private equity firms bought Texas power company TXU for $45 billion and immediately steered it into bankruptcy under the $38.7 billion in debt taken on to fund the acquisition.
Jeff Horwitz reports in his book Broken Code that Kaplan personally intervened when Facebook's health team attempted to remove the COVID conspiracy movie Plandemic from its recommendation engine, and that Facebook only did so once Roose reported that it was the most-engaged link in a 24-hour period.
Naturally, Meta's choice wasn't to "fix things" or "improve" or "take responsibility." By the end of 2021, Meta had disbanded the team that ran CrowdTangle, and in early 2022, the company stopped registering new CrowdTangle users. In early 2024 — months before the 2024 elections — CrowdTangle was officially shut down, though Facebook's Top 10 had stopped working in the middle of 2023.
Meta hasn't "made a right-wing turn." It’s been an active arm of the right-wing media for nearly a decade, actively empowering noxious demagogues like Alex Jones, allowing him to evade bans and build massive private online groups on the platform to disseminate content. A report from November 2021 by Media Matters found that Facebook had tweaked its news algorithm in 2021, helping right-leaning news and politics pages to outperform other pages using "sensational and divisive content." Another Media Matters report from 2023 found that conservative pages continually earned more total interactions than left-leaning or non-aligned pages between January 1, 2020 and December 31, 2022, even as the company was actively deprioritizing political content.
A 2024 report from non-profit GLAAD found that Meta had continually allowed widespread anti-trans hate content across Instagram, Facebook, and Threads, with the company either claiming that the content didn't violate its community standards or ignoring reports entirely. While we can — and should — actively decry Meta's disgusting new standards, it's ahistorical to pretend that this was a company that gave a shit about any of this stuff, or took it seriously, or sought to protect marginalized people.
It's a testament to the weak and inconsistent approach that the media has taken to both Meta and Mark Zuckerberg that any of these changes are being framed as anything other than Meta formalizing the quiet reality of its platforms: that whatever gets engagement is justified, even if it's hateful, racist, violent, cruel, or bigoted.
To quote Facebook's then-Vice President Andrew Bosworth in an internal email from 2017, "all the work that [Facebook] does in growth is justified," even if it's bullying, or a terrorist attack, and yes, that's exactly what he said. One might think that Bosworth would be fired for sending this email.
He's now Meta's Chief Technology Officer.
Sidenote: On the subject of bullying, my editor just told me something. His Facebook newsfeed is now filled with random posts from companies celebrating their workers of the month, registry offices (government buildings in the UK where you can get married), and so on. These posts, which are pushed to a random global audience, inevitably attract cruelty. One post from a registry office somewhere in Yorkshire attracted hundreds of comments, with many mocking the appearance of the newly-married couple.
But hey, engagement is engagement, right?
It's time to stop pretending that this company was ever something noble, or good, or well-meaning. Mark Zuckerberg is a repressed coward, as far from "manly" as one can get, because true masculinity is a sense of responsibility for both oneself and others, and finding strength in supporting and uplifting them with sincerity and honesty.
Meta, as an institution, has been rotten for years, making hundreds of billions of dollars as it continually makes the service worse, all to force users to spend more time on the site, even if it's because Facebook and Instagram are now engineered to interrupt users' decision-making and autonomy with a constant slew of different forms of sponsored and algorithmically-curated content.
The quality of the experience — something the media has categorically failed to cover — has never been lower on either Facebook or Instagram. I am not sure how anyone writing about this company for the last few years has been able to do so with a straight face. The products suck, they're getting worse, and yet the company has never been more profitable. Facebook is crammed with fake accounts, AI-generated slop, and nonsense groups teeming with boomers posting that they "wish we'd return to a culture of respect" as they're recommended their third racist meme of the day. Instagram is a carousel of screen-filling sponsored and recommended content, and users are constantly battling with these products to actually see the things that they log on to see.
I want to be explicit here: using Facebook has been a nightmare for many, many years.
Sidebar: No, really, I need you to look at this a little harder than you have been.
You log in, immediately see a popup for stories, scroll down and see one post from someone you know, a giant ad, a carousel of "people you may know," a post from a page you don’t follow, a series of recommended "reels" that show a two-second clip on repeat of what you might see so that you have to click through, another ad, three posts from pages you don’t follow, then another ad.
Searching for "Facebook support" leads you to a sponsored post about Facebook "bringing your community together," and then a selection of groups, the first of which is called "Facebook Support" with 18,000 members including a support number (1-800-804-9396) that does not work. The group is full of posts about people having issues with Facebook, with one by an admin called "Oliver Green" telling everyone that this group is where they can "discuss issues and provide assistance and solutions to them." Oliver Green's avatar is actually a picture of writer Oliver Darcy.
One post says "please don't respond to messages from my Facebook, I was hacked," with one responder — "Decker Techfix" — saying "when was it hacked" and asking them to message him now for quick recovery of an account they appear to be posting with.
Another, where a user says "someone hacked my Facebook and changed all password," is responded to by an account called "Ree TechMan," who adds "Inbox me now for help." Another, where someone also says they were hacked, has another account — James Miles — responding saying "message me privately." There are hundreds of interactions like these.
Another group called "Account Hacked" (which has 8500 members and hasn't been updated since late 2023) immediately hits you with a post that says "message me for any hacking services Facebook recovery Instagram recovery lost funds recovery I cloud bypass etc," with a few users responding along with several other scammers offering to help in the same way.
Another group with 6700 members called "Recover an old Facebook account you can't log into" offers another 1-800 number that doesn't work. A post from December 5, 2023 from a user claiming their account was compromised and their email and password were changed has been responded to 44 times, mostly by scammers attempting to offer account recovery services, but a few times by actual users.
Elsewhere, a group promising to literally send you money on PayPal has 24,000 members and 10+ posts a day. Another, called "Paypal problem solution," offers similarly scammy services if you can't get into Paypal. Another called "Cash App Venmo Paypal Zelle Support" has 5800 members.
This is what Facebook is — a decrepit hole of sponsored content and outright scams. Meta has been an atrocious steward of its product, it has been fucking awful for years, and it's time to wake up.
And, to repeat myself, this has been the case for years. Meta has been gradually (yet aggressively) reducing the quality of the Facebook and Instagram experience with utter disregard for user safety, all while flagrantly ignoring its own supposed quality and safety standards.
It's a comfortable lie to say that Meta has "suddenly" done something, because it gives the media (and society at large) air cover for ignoring a gaping wound in the internet. Two of the world's largest social networks are run — and have been run — with a blatant contempt for the user, misinforming and harming people at scale, making the things they want to see harder to find as they're swamped by an endless stream of sponsored and recommended content that either sells them something or gets them to engage further with the platform, with little care for whether that engagement is positive, fun or useful.
Casey Newton of Platformer has done an admirable job covering Meta in the last two weeks, but it's important to note that he was cheerfully covering Zuckerberg's "expansive view of the future" as recently as September 25, 2024, happily publishing how "Zuckerberg was back on the offensive," somehow not seeing anything worrying about Zuckerberg's t-shirt referencing Julius Caesar, the Roman dictator who perpetrated a genocide in the Gallic Wars.
Aside: Personally, I would have said Zuckerberg is more of a Nero type, fiddling (read: chasing AI and the metaverse, and dressing like Kevin Federline) while Rome (read: Facebook and Instagram) burned.
Newton felt it unnecessary to mention the utterly atrocious quality of Facebook while quoting Zuck saying things like "in every generation of technology, there is a competition of ideas for what the future should look like."
Yet the most loathsome thing Newton published was this:
But it left unsaid what seemed to be the larger point, which is that Zuckerberg intends to crush his rivals — particularly Apple — into a fine pulp. His swagger on stage was most evident when he discussed the company's next-generation glasses as the likeliest next-generation computing platform, and highlighted the progress that Meta had made so far in overcoming the crushing technological burdens necessary for that to happen.
And it also failed to capture just how personal all this seems to him. Burned by what he has called the 20-year mistake of the company's reaction to the post-2016 tech backlash, and long haunted by criticisms that Meta has been nothing more than a competition-crushing copycat since it released the News Feed, Zuckerberg has never seemed more intent on claiming for himself the mantle of innovator.
This is paper tiger analysis — stenography for the powerful, masked as deep thoughts. This specific paragraph is exactly where Newton could've said something about how worrying it was that Zuckerberg was modeling himself on a Roman emperor. How this company was, despite oinking about how it’s building the future, letting its existing products deteriorate as it deliberately turned the screws to juice engagement. Casey Newton has regularly and reliably turned his back on the truth — that Meta's core products are really quite bad — in favor of pumping up various artificial intelligence products and vague promises about a future that just arrived and fucking sucks.
The reason I am singling Newton out is that it is very, very important to hold the people that helped Zuckerberg succeed accountable, especially as they attempt to hint that they've always been aggressive advocates for the truth. Newton is fully capable of real journalism — as proven by his recent coverage — but has chosen again and again to simply print whatever Mark Zuckerberg wants to say.
I am also going hard in the paint against Newton because of something else he wrote at the end of last year — "The phony comforts of AI skepticism" — where Newton sloppily stapled together multiple different pieces of marketing collateral to argue that not only was AI the future, but those that critiqued it were doing so in a cynical, corrupt way, singling out independent critic Gary Marcus. I'm not going to go through it in detail — Edward Ongweso Jr. already did so — but I think there are far better comforts for Newton than his ambiguous yet chummy relationship with the powerful.
I also did not like something Newton said at the end of a (paywalled) blog about what he learned from the reaction to his piece about AI skeptics. Newton said, and I quote, that he was "taking detailed notes on all bloggers writing 'financial analyses' suggesting that OpenAI will go bankrupt soon because it's not profitable yet."
I do not like bullies, and I do not like threats. Suggesting one is "taking detailed notes on bloggers" is an attempt to intimidate people that are seriously evaluating the fact that OpenAI burns $5 billion a year and has no path to profitability. I don't know if this is about me, nor do I particularly care.
I have, however, been taking detailed notes on Casey Newton for quite some time.
While Newton's metaverse interview from 2021 was deeply irresponsible in how much outright nonsense it printed, arguably his most disgraceful work was his October 26, 2021 piece called "The Facebook Papers' missing piece" — an interview with an anonymous "integrity worker" that attempted to undermine the Wall Street Journal's Facebook Files (which revealed that "Facebook Inc. knows, in acute detail, that its platforms are riddled with flaws that cause harm, often in ways only the company fully understands"), in a seeming attempt to discredit (by proxy) the bravery of Frances Haugen, the whistleblower that provided the documents that led to this reporting. It's utterly repulsive corporate hand-washing, and it's important context for any criticism he has of Meta going forward, especially if he tries to suggest he's always held it to account.
I think Newton seems a little more comfortable with the powerful's messaging than he should be. He's argued that while AI companies have hit a scaling wall, it's actually okay; that NFTs finally went mainstream in 2022; that Clubhouse was the future; that live audio was Zuckerberg's "big bet on creators" in 2021 due to "the shift in power from institutions to individuals" (Facebook shut down its podcast service a year later); that — and I cannot see the full text here, because it's paywalled — "metaverse pessimists were missing something" because "Meta's grand vision [was] already well on its way" in November 2021; that Meta's leadership changed its mind about its name because "Facebook [was] not the future of the company"; that Axie Infinity — a crypto/web3 Pokémon clone that has created its own form of indentured servitude in the global south, backed by Andreessen Horowitz — was "turning gaming on its head" (the game sucks)...okay, I'll stop.
Casey has also, at times, had dalliances with criticism — sometimes even of Facebook! No, really! — but it's hard to take them seriously when he writes a piece for The Verge about how "Google plans to win its antitrust trial," printing the legal and marketing opinion of one of the largest companies in the world knowing full well that the Department of Justice would not get such an opportunity.
In emails revealed in the Department of Justice's antitrust trial against Google Search, Google specifically mentioned having briefed Newton with the intention of, and I quote, "look[ing] for ways to drive headlines on [Google's] own terms." In Newton's defense, this is standard PR language, but it is, within the context of what I'm writing, hard to ignore.
The reason I am so fucking furious at Newton is he is part of the media machine that has helped repeatedly whitewash Mark Zuckerberg and his cronies, running glossy puff-pieces with scumbags like Nick Clegg, and saying things like "the transition away from Facebook’s old friends and family-dominated feeds to Meta’s algorithmic wonderland seems to be proceeding mostly without incident."
That line, I add, was published in 2023, two years after the release of The Facebook Files, which revealed that the company knew its algorithmic timeline had a propensity to push users into increasingly-radical echo chambers. And one year after Amnesty International published a report accusing Facebook’s algorithms of “supercharging the spread of harmful anti-Rohingya content in Myanmar” amidst a genocide that saw an estimated million people displaced and tens of thousands massacred.
Newton has spent years using his vast platform to subtly defend companies actively making their products worse, occasionally proving he can be a real journalist (his work on the acquisition of Twitter was genuinely great), only to slip back into the comfortable pajamas of "Are We Being Too Mean To Meta?"
The media — especially people like Newton — are extremely influential over public policy and the overall way that society views the tech industry. As an experienced, knowledgeable journalist, Newton has too regularly chosen to frame "fair and balanced" as "let's make sure the powerful get their say too." As I've said — and will continue to say — Casey Newton is fully capable of doing some of the literal best journalism in tech, such as his coverage of the horrible lives of Facebook moderators, yet has, up until alarmingly recently, chosen to do otherwise.
The cost, by the way, is that the powerful have used Newton and others as mouthpieces to whitewash their contemptuous and half-baked product decisions. I don't know Newton's intentions, nor will I attempt to guess at them. What I will say is that as a result of Casey Newton's work, Mark Zuckerberg and his products have received continual promotion of their ideas and air cover for their failures, directly influencing public opinion in the process.
Worse still, Newton has acted for years like nothing was wrong with the quality of the platforms themselves. Had Newton written with clarity and purpose about the erosion of quality in Facebook and Instagram, Zuckerberg would have lost a valuable way to convince the 150,000+ people that read Platformer that things were fine, that the quality was fine, that Meta is an innovative company, and that there was no reason to dislike Mark Zuckerberg.
There was, and there always will be. Putting aside the horrifying person he obviously is, Zuckerberg is a career liar and a charlatan, and has deliberately made the lives of billions of people worse as a means of increasing revenue growth.
And I believe that he's about to inspire a new era of decay, where tech executives, unburdened by their already-flimsy approach to societal norms and customer loyalty, will begin degrading experiences at scale, taking as many liberties as possible.
Meta has fired the starting gun of the Slop Society.
In an interview with the Financial Times from December 2024, Meta's Vice President of Product for Generative AI (Connor Hayes) said that Meta "expect[s] AIs to actually, over time, exist on [Meta's] platforms, kind of in the same way that accounts do," each one having its own bio and profile picture and the ability to "generate and share content powered by AI on the platform." This came hot on the heels of Zuckerberg saying in a quarterly earnings call that Meta would "...add a whole new category of content, which is AI generated or AI summarized content or kind of existing content pulled together by AI in some way," effectively promising to drop AI slop into feeds already filled with recommended and sponsored content that gets in the way of you seeing real human beings.
This led to a scandal where users discovered what they believed to be brand new AI profiles. Karen Attiah of the Washington Post wrote a long thread on Bluesky (and a piece in the Post itself) about her experience talking to a bot she (fairly!) described as "digital blackface," with Meta receiving massive backlash for bots that would happily admit they were trained by a bunch of white people. It turns out these bots had existed in various forms for about a year, but were so unpopular that nobody really noticed until the Financial Times story, leading to Meta deleting the bots, at least for now.
I am 100% sure that these chatbots will return, because it's fairly obvious that Meta intends to fill your feeds with content either entirely generated or summarized by generative AI, be it with fake profiles or content itself. AI-generated slop already dominates the platform, and as discussed earlier, the quality of the platform has fallen into utter disrepair, mostly because Meta's only concern is keeping you on the platform to show you ads or content it’s been paid to show you, even if the reason you're seeing it is because you can't find the stuff you actually want to find.
That's because all roads lead back to the Rot Economy — the growth-at-all-costs mindset that means that the only thing that matters is growth of revenue, which comes from showing as many ads to you as possible. It's almost ridiculous to call us "users" of Facebook at this point. We are the used, the punished, the terrorized, the tricked, the scammed, and the abused, constantly forced to navigate through layers of abstraction between the thing we are allegedly using and the features we'd like to use. It's farcical how little attention has been given to how bad tech products have gotten, and few have decayed as severely as Facebook thanks to its near-monopoly on social media. I, personally, would rather not use Instagram, but there are people I know that I only really speak to there, and I know many people who have the same experience.
As Zuckerberg and his ilk are intimately aware, people really don't have anywhere else to go, which — along with a lack of regulation and a compliant media — has given them permission to adjust their services to increase advertising impressions, which in practice means giving you more reasons to stay on the platforms, which means "put more things in the way of what you want to see" rather than "create more compelling experiences for users."
It's the same thing that happened with Google Search, where the revenue team pushed the search team to make search results worse as a means of increasing the number of times people searched for things on Google — because a user that finds what they're looking for quickly spends less time on the site looking at ads. It's why the Apple App Store is so chaotically organized and poorly curated — Apple makes money on the advertising impressions it gets from you searching — and why so many products on the App Store have expensive and exploitative microtransactions, because Apple takes 30% of all App Store revenue, even if the app sucks.
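To make that incentive concrete, here's a toy illustration. The figures are hypothetical, not Google's actual numbers, but they show why degraded results can be "good" for an ads-driven business:

```python
# Toy illustration: why worse search results can mean more ad impressions.
# All numbers are hypothetical.
ADS_PER_RESULTS_PAGE = 4  # assumed ads shown per search results page

# How many searches it takes a user to complete one task ("find the thing"):
searches_per_task = {"good results": 1, "degraded results": 3}

for quality, searches in searches_per_task.items():
    impressions = searches * ADS_PER_RESULTS_PAGE
    print(f"{quality}: {searches} search(es) -> {impressions} ad impressions per task")
```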
And I believe that Zuckerberg loosening community standards and killing fact checking is just the beginning of tech's real era of decay. Trump, and by extension those associated with him, win in part by bulldozing norms and good taste. They do things that most of us agree are bad (such as being noxious and cruel, which I realize is an understatement) and would never do, moving the Overton window (the range of acceptable things in a society) further and further into Hell in the process.
In this case, we've already seen tech's Overton window shift for years — a lack of media coverage of the actual decay of these products and a lack of accountability for tech executives (both in the media and regulation) has given companies the permission to quietly erode their services, and because everybody made things shittier over time, it became accepted practice to punish and trick customers with dark patterns (design choices to intentionally mislead people) to the point that the FTC found last year that the majority of subscription apps and websites use them.
I believe, however, that Zuckerberg is attempting to move it further — to publicly say "we're going to fill your feeds with AI slop" and "we're firing 5% of our workers because they suck" and "we're going to have AI profiles instead of real people on the site you visit to see real people's stuff" and "we don't really give a shit about marginalized people" and "people are too mean to men" knowing, in his position as the CEO of one of the largest tech companies in the world, that people will follow.
Tech has already taken liberties with the digital experience and I believe the Slop Era will be one where experiences will begin to subtly and overtly rot, with companies proudly boasting that they're making "adjustments to user engagement that will provide better business-forward outcomes," which will be code for "make them worse so that we make more money."
I realize that there's an obvious thing I'm not saying: that the Trump administration isn't going to be any kind of regulatory force against big tech. Trump is perhaps the most transactional human being ever to grace the earth. Big tech, knowing this, has donated generously to the Trump inaugural fund. It's why Trump has completely reversed his position on TikTok: having once wanted to ban the platform, he now wants to give it a reprieve because he "won youth by 34 points" and "there are those that say that TikTok has something to do with that." Tech knows that by kissing the ring, it can do whatever it wants.
Not that previous governments have been effective at curbing the worst excesses of big tech. Outside of the last few years, and specifically the work done by the FTC under Lina Khan, antitrust against big tech has been incredibly weak, and no meaningful consumer protections exist to keep websites like Facebook or Google functional for consumers, or limit how exploitative they can be.
The media has failed to hold them accountable at scale, which has in turn allowed the Overton window to shift on quality. Now that Trump — and the general MAGAfied mindset of "you can say or do whatever you want if you do so confidently or loudly enough" — has risen to power again, so too will an era of outright selfishness and cruelty rise within the products that consumers use every day, except this time I believe they finally have permission to enter their slop era. I also think that Trump gives them confidence that monopolies like Facebook, Instagram, Microsoft 365 (Microsoft's enterprise monopoly over business productivity software), Google Search, and Google Advertising will remain unchallenged (remedies in the Google cases will take some time, and the Trump administration will likely limit any punishment it inflicts).
What I'm describing is an era of industrial contempt for the consumer, a continuation of what I described in Never Forgive Them, where big tech decides that they will do whatever they want to customers within the boundaries of the law, but with little consideration of good taste, user happiness, or anything other than growth.
How this manifests in Facebook and Instagram will be fairly obvious. I believe that the already-decrepit state of these platforms will accelerate. Meta will push as much AI slop as it wants, both created by its generative models and their users, and massively ramp up content that riles up users with little regard for the consequences. Instagram will become more exploitative and more volatile. Instagram ads have been steadily getting more problematic, and I think Meta will start taking ads from just about anyone. This will lead to an initial revenue bump, and then, I hypothesize, a steady bleed of users that will take a few quarters to truly emerge.
Elsewhere, we're already seeing the first signs of abusive practices. Both Google and Microsoft are now forcing generative AI features onto customers, with Google grifting business users by increasing the cost of Google Workspace by $2-per-user-per-month along with AI features they don't want, and Microsoft raising the price of consumer Office subscriptions, justifying the move by adding Copilot AI features that, again, customers really don't want. The Information's Jon Victor and Aaron Holmes add that it remains to be seen what Microsoft does with the corporate customers using Microsoft's 365 productivity suite — adding Copilot costs $30-per-user-per-month — but my hypothesis is it will do exactly the same thing that Google did to its business customers.
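A quick back-of-the-envelope sketch shows what those per-seat price hikes mean at scale. The organization sizes below are hypothetical; the per-seat figures are the ones reported above:

```python
# Back-of-the-envelope math on per-seat AI price hikes. Seat counts are
# hypothetical; per-seat prices are the reported figures.
WORKSPACE_INCREASE = 2  # dollars per user per month (Google Workspace hike)
COPILOT_ADDON = 30      # dollars per user per month (Microsoft 365 Copilot)

for seats in (100, 1_000, 10_000):
    workspace_annual = seats * WORKSPACE_INCREASE * 12
    copilot_annual = seats * COPILOT_ADDON * 12
    print(f"{seats:>6} seats: +${workspace_annual:,}/yr (Workspace), "
          f"+${copilot_annual:,}/yr (Copilot add-on)")
```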
I should also be clear that the reason they're doing this is that they're desperate. These companies must show growth every single quarterly earnings report or see their stock prices crater, and big tech companies have oriented themselves around growth as a result — meaning that they're really not used to having to compete for customers or make products people actually like. For over a decade, tech has been rewarded for creating growth opportunities empowered by monopolistic practices, and I'd argue that the cultures that created the products that people remember actually liking are long-dead, their evangelists strangled by the soldiers of the Rot Economy.
You are, more than likely, already seeing the signs that this is happening: the little features on the products you use that feel broken, like when you try and crop an image on iOS and it sometimes doesn't actually crop it, when the "copy link" button on Google Docs doesn't work, or when a Google Search gives you a page of forum links that don't answer your question. I expect things to get worse, possibly in new and incredibly frustrating ways.
I deeply worry that we're going to enter the most irresponsible era of tech yet — not just in the harms that companies allow or perpetuate, but in the full rejection of their stewardship for the products themselves. Our digital lives are already chaotic and poisonous, different incentives warring for our attention, user interfaces corroded by those who believe everything is justified in pursuit of growth. I fear that the subtle little problems you see every day will both multiply and expand, and that the core services we use will break down, because I believe the most powerful in big tech never really gave a shit and no longer believe they have to pretend otherwise.
2024-12-24 08:33:04
Soundtrack: Spinnerette — The Walking Dead (Alt: Postmodern Jukebox — Radioactive)
Thanks so much to everybody that has supported me in the last year. This newsletter started as a way for me to process the complex feelings I have about the technology industry, and it remains, in a way, a kind of side project, as I still run my PR firm during the day. I also apologize for sending you another email.
Anyway, the newsletter has now grown into something altogether larger and more important to me, both personally and professionally. It has allowed me to meet and get to know some incredible people, as well as deepen friendships with others I’ve known for years, all while helping me understand what’s happening in an industry I find both fascinating and frustrating.
As I hit publish on this email, Where’s Your Ed At has over 49,250 subscribers, and regular read rates of 55-59%, numbers that have stayed consistent for years. Where’s Your Ed At - as a place on the web - had over 1.8 million unique visitors this year. In the last four years I’ve published over 800,000 words, and it is very cool to see it go anywhere at all. I had no plan and still do not have one.
It has been quite a year.
As a direct result of writing the newsletter, I was recruited by Cool Zone Media (best known for Behind The Bastards and It Could Happen Here), and worked with them to create Better Offline, which has been this year’s fastest-growing tech podcast, and was called one of the best podcasts of 2024 by Esquire, New York Magazine and The Information. It has turned into a bizarre mixture of talk radio and spoken word oratory, something truly unique, and I am extremely proud of it. Robert Evans and Sophie Lichterman have been incredible bosses, as have my producers Matt Osowski, Ian Johnson, Danl Goodman and Eva Warrender.
In the latter half of the year, I sold a book to Penguin Random House - the upcoming Why Everything Stopped Working, which should be out sometime in 2026, and I intend it to be the best thing I’ve ever written. I will be honest, I am still shocked this happened. Nevertheless, I will write the shit out of it. My editor (Megan Wenerstrom) and agent (William Callahan) are, much like Robert and Sophie, fully behind me and what I believe in.
Also, if you missed my speech at Web Summit, do watch it.
It is still ridiculous to me that any of this happened. A year ago, I had 22,000 subscribers, had just signed the contract for Better Offline, and felt, if I’m honest, kind of lost, a feeling I’ve had on-and-off for about two years. I liked writing, but I didn’t love writing the newsletter. I’d also, for whatever reason, yet to really feel confident writing about big tech, because I figured there was something I was missing.
So I tried to work it out. I have never been a financial or investigative journalist (I was a games journalist over 16 years ago), so a lot of this was learning on the go. This is why I wrote an investigation about the NFT-turned-AI-doodad company Rabbit out of nowhere, which led to an interview with CoffeeZilla, which was both cool to be on and a sign that I could do “this,” even if “this” was a touch vague. Before this year I had never written any kind of financial analysis - I am very proud of both How Does OpenAI Survive? and OpenAI Is A Bad Business, along with more opinion-analysis pieces like The Subprime AI Crisis and The Other Bubble.
But, yeah, no matter how fancy I get with it, everything you’ve read this year has been me trying to work out what the fuck was going on - why these companies keep making money while their products get worse - which is why you’ve seen me dedicate tens of thousands of words to trying to explain what happened to Google Search and Facebook, Shareholder Supremacy, and then why I’m so anxious about the AI bubble popping - because there’s nothing left afterwards. Every single AI piece I’ve written this year - my biggest being Pop Culture - has been me trying to work out what the hell is going on. This is why my latest AI piece sounds like the crazed scientist at the beginning of a disaster movie - I am alarmed! There will be measurable damage to the stock market, but more importantly tens of thousands of people will lose their jobs and a depression will begin in the tech industry.
Every time I am worried I’ve gone too hard, or been a little too emotional, I get surprised by the outpouring of support, from people who feel the same frustration and outrage. I felt like I’d gone a little too hard on Lost In The Future (and its sister episode of Better Offline, The Rot Society), but the reaction was people saying they felt the same way, something I’m happy to say happened again with Never Forgive Them, which is my favourite thing I’ve ever written, and the hardest I’ve gone.
Anyway, long story short, the format of this newsletter has matured into something weird and cool. I am so glad people enjoy it. It is slightly insane to regularly write 3000 to 5000 words in which I combine financial, cultural and economic analysis with calling people scumbags, but I enjoy doing it. I hope you continue reading. I work very hard on it.
It wouldn’t be my newsletter without me writing “as an aside” and then doing a quote:
While I am not doing a belabored list of thank yous, there is one name that cannot be left out: Matt Hughes, my editor, who has edited over a hundred of my pieces and scripts, as well as being one of my closest friends, somebody who riffed with me for hours and made so many ideas sharper. Whenever I have flinched away from an idea, fearing I’d gone too far, Matt would hold me up and not just reassure me, but walk me through the logical backing of the argument - a true friend makes their friend stronger. Thank you Matt. You make this newsletter both possible and much, much better. I am eternally grateful. Let’s make these fucking people take responsibility for once in their lives.
I love what I do, and I am so lucky to be able to do it. I really am grateful for you reading my work, and though this is me trying to work stuff out as I go, I am also doing my best to deeply research and provide real, meaningful analysis. It is a deeply personal journey for me, one that has allowed me to develop more as a person and be able to speak with more clarity, purpose and vigor, and as wanky as that sounds, all I can tell you is that I’m exactly the same as this in person, except I tell my friends I love them way more.
I sound a little ridiculous writing about a blog and a podcast as if I’m talking about playing the violin (which I cannot do), but I am proud of my work and find it deeply meaningful, as well as something that has enriched my life. I will continue to use it to, at the very least, provide you with some clarity about the world, and promise to do so with sincerity.
I am genuinely so thankful for every minute you give anything I create.
At CES, I’ll be trying something new - I’ll be running a week-long live-to-tape radio show, with two 90-minute episodes a day taking a temperature check of the tech industry, co-hosted by David Roth of Defector and tech critic Edward Ongweso Jr., and joined by a host of different tech reporters, at least one priest, Robert Evans of Behind The Bastards and Gare Davis of It Could Happen Here.
It’s yet another ambitious and weird idea that I intend to, at the very least, make a lot of fun. If you’re a reporter reading this and want to join, email me at [email protected] and let’s make it happen - we’re recording Tuesday through the end of Saturday. We’ll have food and drink.
Similarly, next year I’ll be spending a lot more of my life in New York City, where I’ll be starting up Radio Better Offline, a regular tech talk show within the Better Offline podcast, recorded at iHeartRadio’s NYC studios. Each week I’ll have two or three tech people in the studio, and I want to create a kind of lively, meaningful and exciting talk radio setting for the tech industry.
The tech industry - especially within tech journalism - has such an incredible variety of people, both normal and otherwise, and I want to bring their voices to your ears. I feel like tech reporters regularly feel isolated and crushed by this industry, and I want Better Offline - and Radio Better Offline in particular - to help fight back. Reporters are also regularly robbed of the opportunity to build their own brands while at their publications - come on Better Offline and I'll put you on a great-sounding podcast with a huge audience where people will hear your voice, be directed to your work and social media, and remember you, not just the place you work. I will do my damndest to bring on as many of you as I can.
Anyway. So many people have helped me in so many ways this year, and I'm eternally grateful. Members of the tech, business and political media, software engineers, data analysts, academics, scientists, all endlessly generous and helpful. I will do my best to pay forward, and pay back, the generosity of time, love and support that I've received.
Outside of next week's 3-part year-end series of Better Offline, I am taking a break until mid-January. It has been a long year for all of us. Please take care of yourselves. Thanks for reading and listening to my stuff.
If you somehow haven’t subscribed to my podcast, please subscribe to my podcast, then download every episode. I need download numbers. I need you to help me. I need you to download every single one then force your family and friends to do it too.
Despite the size of this newsletter, email me at [email protected]. I do my best to respond to every reply, DM and email. I am super online.
I also realize I’ve never written out all my social handles.
Ones I actually use:
Bluesky: https://bsky.app/profile/edzitron.com (my least normal social media)
Instagram: http://instagram.com/edzitron (my most normal social media)
Ones that I don’t really touch much:
Twitter: http://www.twitter.com/edzitron
Threads: http://www.threads.net/edzitron
If you see an edzitron it’s probably me. I think I’m ezitron on TikTok?
2024-12-17 01:06:48
In the last year, I’ve spent about 200,000 words on a kind of personal journey where I’ve tried again and again to work out why everything digital feels so broken, and why it seems to keep getting worse, despite what tech’s “brightest” minds might promise. More regularly than not, I’ve found that the answer is fairly simple: the tech industry’s incentives no longer align with the user.
The people running the majority of internet services have used a combination of monopolies and a cartel-like commitment to growth-at-all-costs thinking to make war with the user, turning the customer into something between a lab rat and an unpaid intern, with the goal of juicing as much value from the interaction as possible. To be clear, tech has always had an avaricious streak, and it would be naive to suggest otherwise, but this moment feels different. I’m stunned not just by the extremes tech companies will go to in order to extract value from customers, but also by the insidious way they’ve gradually degraded their products.
To be clear, I don’t believe that this gradual enshittification is part of some grand, Machiavellian long game by the tech companies, but rather the product of multiple consecutive decisions made in response to short-term financial needs. Even if it was, the result would be the same — people wouldn’t notice how bad things have gotten until it’s too late, or they might just assume that tech has always sucked, or they’re just personally incapable of using the tools that are increasingly fundamental to living in a modern world.
You are the victim of a con — one so pernicious that you’ve likely tuned it out, despite the fact it touches almost every part of your life. It hurts everybody you know in different ways, and it hurts people more based on their socioeconomic status. It pokes and prods and twists millions of little parts of your life, and it’s everywhere, so you have to ignore it, because complaining about it feels futile, like complaining about the weather.
It isn’t. You’re battered by the Rot Economy, and a tech industry that has become so obsessed with growth that you, the paying customer, are a nuisance to be mitigated far more than a participant in an exchange of value. A death cult has taken over the markets, using software as a mechanism to extract value at scale in the pursuit of growth at the cost of user happiness.
These people want everything from you — to control every moment you spend working with them so that you may provide them with more ways to make money, even if doing so doesn’t involve you getting anything else in return. Meta, Amazon, Apple, Microsoft and a majority of tech platforms are at war with the user, and, in the absence of any kind of consistent standards or effective regulations, the entire tech ecosystem has followed suit. A kind of Coalition of the Willing of the worst players in hyper-growth tech capitalism.
Things are being made linearly worse in the pursuit of growth in every aspect of our digital lives, and it’s because everything must grow, at all costs, at all times, unrelentingly, even if it makes the technology we use every day consistently harmful.
This year has, on some level, radicalized me, and today I’m going to explain why. It’s going to be a long one, because I need you to fully grasp the seriousness and widespread nature of the problem.
You have, more than likely, said to yourself sometime in the last ten years that you “didn’t get tech,” or that you are “getting too old,” or that tech has “gotten away from you” because you found a service, or an app, or a device annoying. You, or someone you love, have convinced yourself that your inability to use something is a sign that you’re deficient, that you’ve failed to “keep up with the times,” as if the things we use every day should be in a constant state of flux.
Sidenote: I’m sure there are exceptions. Some people really just don’t try and learn how to use a computer or smartphone, and naturally reject technology, or steadfastly refuse to pick it up because “it’s not for them.” These people exist, they’re real, we all know them, and I don’t think anybody reading this falls into this camp. Basic technological literacy is a requirement to live in society — and there is some responsibility on the user. But even if we assume that this is the case, and even if there are a lot of people that simply don’t try…should companies really take advantage of them?
The tools we use in our daily lives outside of our devices have mostly stayed the same. While buttons on our cars might have moved around — and I’m not even getting into Tesla’s designs right now — we generally have a brake, an accelerator, a wheel, and a turn signal. Boarding an airplane has worked mostly the same way since I started flying, other than moving from physical tickets to digital ones. We’re not expected to work out “the new way to use a toilet” every few months because somebody decided we were finishing too quickly.
Yet our apps and the platforms we use every day operate by a totally different moral and intellectual compass. While the idea of an update is fairly noble (and not always negative) — that something you’ve bought can be maintained and improved over time is a good thing — many tech platforms see it as a means to further extract and exploit, to push users into doing things that either keep them on the app longer or take more-profitable actions.
We as a society need to reckon with how this twists us up, makes us more paranoid, more judgmental, more aggressive, more reactionary, because when everything is subtly annoying, we all simmer and suffer in manifold ways. There is no digital world and physical world — they are, and have been, the same for quite some time, and reporting on tech as if this isn’t the case fails the user. It may seem a little dramatic, but take a second and really think about how many little digital irritations you deal with in a day. It’s time to wake up to the fact that our digital lives are rotten.
I’m not talking about one single product or company, but most digital experiences. The interference is everywhere, and we’ve all learned to accept conditions that are, when written out plainly, kind of insane.
Back in 2023, Spotify redesigned its app to, and I quote The Verge, be “part TikTok, part Instagram, and part YouTube,” which in practice meant replacing a relatively clean and straightforward user interface with one made up of full-screen cards (like TikTok) and autoplaying video podcasts (like TikTok), which CEO Daniel Ek claimed would, to quote Sky News, make the platform “come alive” with different content on a platform built and sold as a place to listen to music.
The tech media waved off the redesign without really considering the significance of the fact that at the drop of a hat, hundreds of millions of people’s experience of listening to music would change based on the whims of a multi-billionaire, with the express purpose being to force these people to engage with completely different content as a means of increasing engagement metrics and revenue. By all means try and pretend this is “just an app,” but people’s relationships with music and entertainment are deeply important to their moods and motivations, and adding layers of frustration in an app they interact with for hours a day is consistently grating.
And no matter how you feel, this design was never for the customer. Nobody using Spotify was saying “ah man, I wish I could watch videos on this,” but that doesn’t matter because engagement and revenue must increase. It’s clear that Spotify, a company best-known for exploiting the artists on its platform, treats its customers (both paying and otherwise) with a similar level of contempt.
It’s far from alone. Earlier in the year, smart speaker company Sonos released a redesign of its app that removed accessibility features and the ability to edit song queues or play music from your phone in an attempt to “modernize” the interface, with WIRED suggesting that the changes could potentially open the door to adding a subscription of some sort to help Sonos’ ailing growth. Meta’s continual redesigns of Facebook and Instagram — the latest of which happened in October to “focus on Gen Z” — are probably the most egregious example of the constant chaos of our digital lives.
Sidenote: Some of Meta’s random redesigns are subtle and not announced with any particular fanfare. Try this: Using the iPhone app, go to a friend’s profile, tap “photos,” and then “videos.” Naturally, you’d expect these to be organized in chronological order. If your friend is a prolific uploader, that won’t be the case. You’ll find them organized in a scattershot, algorithmically-driven arrangement that doesn’t make any sense.
What does that mean in practice? Say you’re looking for videos from an important life event — like a birthday or a wedding. You can’t just scroll down until you reach them. You’ve got to parse your way through every single one. Which takes longer, but is presumably great for Facebook’s engagement numbers.
Also, there are two separate tabs that show videos (one on the profile page, another under the photo tab). You’d assume both would show the exact same things, and you’d be wrong. They’ll often show an entirely different selection of videos, with no obvious criteria as to why. And don’t get me started on Facebook’s retrospective conversion of certain older videos — some of which might be a few seconds long, others lasting several minutes — into reels, which also strips the ability to skip certain parts without installing a third-party browser plugin.
As every single platform we use is desperate to juice growth from every user, everything we interact with is hyper-monetized through plugins, advertising, microtransactions and other things that constantly gnaw at the user experience. We load websites expecting them to be broken, especially on mobile, because every single website has to have 15+ different ad trackers and video ads that cover large chunks of the screen, all while demanding our email address or permission to send us notifications.
Every experience demands our email address, and giving out our email address adds another email to inboxes already stuffed with two types of spam — the actual “get the biggest laser” spam that hits the junk folder automatically, and the marketing emails we receive from clothing brands we wanted a discount from or newspapers we pay for that still feel it’s necessary to bother us 3 to 5 times a day. I’ve basically given up trying to fight back — how about you?
Every app we use is intentionally built to “growth hack” — a term that means “moving things around in such a way that a user does things that we want them to do” so they spend more money or time on the platform — which is why dating apps gate your best matches behind $1.99 microtransactions, or why Uber puts “suggestions” and massive banners throughout their apps to try and convince you to use one of its other apps (or accidentally hit them, which gives Uber a chance to get you to try them), or why Outlook puts advertisements in your email inbox that are near-indistinguishable from new emails (they’re at the top of your inbox too), or why Meta’s video carousels intentionally only play the first few seconds of a clip as a means of making you click.
Our digital lives are actively abusive and hostile, riddled with subtle and overt cons. Our apps are ever-changing, adapting not to our needs or conditions, but to the demands of investors and internal stakeholders that have reduced who we are and what we do to an ever-growing selection of manipulatable metrics.
It isn’t that you don’t “get” tech, it’s that the tech you use every day is no longer built for you, and as a result feels a very specific kind of insane.
Every app has a different design, almost every design is optimized based on your activity on said app, with each app trying to make you do different things in uniquely annoying ways. Meta has hundreds of people on its growth team perpetuating a culture that manipulates and tortures users to make company metrics improve, like limiting the amount of information in a notification to make a user browse deeper into the site, and deliberately promoting low-quality clickbait that promises “one amazing trick” because people click those links, even if they suck.
It’s everywhere.
After a coup by head of ads Prabhakar Raghavan in 2019, Google intentionally made search results worse as a means of increasing the number of times that people would search for something on the site. Ever wonder why your workplace uses SharePoint and other horrible Microsoft apps? That’s because Microsoft’s massive software monopoly meant that it was cheaper for your boss to buy all of it in one place, and thus its incentive is to make its products good enough to convince your boss to sign up for all of them, rather than to make an app that makes your life easier or better.
Why does every website feel different, and why do some crash randomly or make your phone burn your hand? It’s because every publisher has pumped their sites full of as much ad tracking software as possible as a means of monetizing every single user in as many ways as possible, helping ads follow you across the entire internet. And why does everybody need your email? Because your inbox is one of the few places that advertisers haven’t found a consistent way to penetrate.
It’s digital tinnitus. It’s the pop-up from a shopping app that you downloaded to make one purchase, or the deceptive notification from Instagram that you have “new views” that doesn’t actually lead anywhere. It is the autoplaying video advertisement on your film review website. It is the repeated request for you to log back into a newspaper website that you logged into yesterday because everyone must pay and nothing must get through. It is the hundredth Black Friday sale you got from a company that you swear you unsubscribed from eight times, and perhaps even did, but there’s no real way to keep track. It’s the third time this year you’ve had to make a new password because another data breach happened and the company didn’t bother to encrypt it.
I’m not writing this to complain, but because I believe — as I hinted at a few weeks ago — that we are in the midst of the largest-scale ecological disaster of our time, because almost every single interaction with technology, which is required to live in modern society, has become actively adversarial to the user. These issues hit everything we do, all the time, a constant onslaught of interference, and I believe it’s so much bigger than just social media and algorithms — though they’re a big part of it, of course.
In plain terms, everybody is being fucked with constantly in tiny little ways by most apps and services, and I believe that billions of people being fucked with at once in all of these ways has profound psychological and social consequences that we’re not meaningfully discussing.
The average person’s experience with technology is one so aggressive and violative that I believe it leaves billions of people with a consistent low-grade trauma. We seem, as a society, capable of understanding that social media can hurt us, unsettle us, or make us feel crazed and angry, but I think it’s time to accept that the rest of the tech ecosystem undermines our wellbeing in an equally-insidious way. And most people don’t know it’s happening, because everybody has accepted deeply shitty conditions for the last ten years.
Now, some of you may scoff at this a little — after all, you’re smart, you know about disinformation, you know about the tricks of these companies, and thus most people do, right?
Wrong! Most people don’t think about the things they’re doing at all and are just trying to get by in a society that increasingly demands we make more money to buy the same things, with our lives both interfered with and judged by social networks with aggressive algorithms that feed us more things based on what we’ll engage with, which might mean said things piss us off or actively radicalize us. They’re nagged by constant notifications — an average of 46 a day — some useful, some advertisements, like Apple telling us there’s a nailbiter college football game regardless of whether we’ve ever interacted with anything football-related, or a Slack message saying you haven’t joined a group you were invited to yet, or Etsy letting you know that you can buy things for an upcoming holiday. It’s relentless, and the more time you invest in using a device, the more of these notifications you get, making you less likely to turn them off. After all, how well are you doing keeping your inbox clean? Oh what’s that? You get 25 emails a day, many of them from a company owned by Williams-Sonoma?
Your work software veers between “shit” and “just okay,” and never really seems to get better, nor does any part seem to smoothly connect to another. Your organization juggles anywhere from five to fifteen different pieces of software — Slack or Microsoft Teams and/or Zoom for communication, Asana or Monday or Basecamp for project management, or Jira, or Trello, or any number of other different ways that your organization or team wants to plan things. When you connect with another organization, you find they’re using a different product, or perhaps they’re using the same one — say, Slack — and that one requires you to join their organization, which may or may not work. I’m not even talking about the innumerable tech infrastructure products that more-technical workers have to deal with, or how much worse this gets if you’ve got a slower device. Every organization does things differently, and some don’t put a lot of thought into how they do so.
Yet beyond the endless digital nags there’s the need to be constantly aware of scams and outright misinformation, both on social networks that don’t really care to stop it and on the chum box advertisements below major news publications — you know, the little weird stories at the bottom promising miracle cures.
It’s easy to assume that it’s natural that you’d know there are entities out there trying to scam you or trick you, and I’d argue most people don’t. To most, a video from Rumble.com may as well be the same thing as a video from CNN.com, and most people would believe that every advertisement on every website is somehow verified for its accuracy, versus “sold at scale all the time to whoever will pay the money.”
And when I say that, I’m really talking about CNN.com, a website that had 594 million visitors in October 2024. At the bottom is the “Paid Partner Content” section, including things from publications like “FinanceBuzz” that tell you about the “9 Dumbest Things Smart People Waste Money On.” FinanceBuzz immediately asks for you to turn your notifications on — you know, so it can ping you when it has new articles — and each bullet point leads to one of its affiliate marketing arms trying to sell you car insurance and credit cards. You’re offered the chance to share your email address to receive “vetted side hustles and proven ways to earn extra cash sent to your inbox,” which I assume includes things like advertorial content telling you that yes, you could make money playing online bingo (such as “Bingo Cash”) against other people.
Papaya Games, developer of Bingo Cash, was sued in March by rival gaming company Skillz for using bots in allegedly skill-based games that are supposed to be between humans, and the Michigan Gaming Control Board issued a cease-and-desist order against the company for violating multiple gaming laws, including the Lawful Internet Gaming Act. To quote the lawsuit, “Papaya’s games are not skill-based and users are often not playing against live, actual opponents but against Papaya’s own bots that direct and rig the game so that Papaya itself wins its users’ money while leading them to believe that they lost to a live human opponent.”
This is a website and its associated content that has prime placement on the front page of a major news outlet. As a normal person, it’s reasonable to believe that CNN would not willfully allow advertisements for websites that are, in and of themselves, further advertisements masquerading as trustworthy third party entities. It’s reasonable that you would believe that FinanceBuzz was a reputable website, and that its intentions were to share great deals and secret tricks with you. If you think you’re not this stupid, you are privileged and need to have more solidarity with your fellow human beings.
Why wouldn’t you think that the content on one of the most notable media outlets in the entire world is trustworthy? Why wouldn’t you trust that CNN, a respected media outlet, had vetted its advertisers and made sure their content wasn’t actively tricking its users? I think it’s fair to say that CNN has likely led to thousands of people being duped by questionable affiliate marketing companies, and likely profited from doing so.
Why wouldn’t people feel insane? Why wouldn’t the internet, where we’re mostly forced to live, drive most people crazy? How are we not discussing the fact that so much of the internet is riddled with poison? How are we not treating the current state of the tech industry like an industrial chemical accident? Is it because there are too many people at fault? Is it because fixing it would require us to truly interrogate the fabric of a capitalist death cult?
Nothing I am writing is polemic or pessimistic or describing anything other than the shit that’s happening in front of my eyes and your eyes and the eyes of billions of people. Dismissing these things as “just how it is” allows powerful people with no real plan and no real goals other than growth to thrive, and sneering at people “dumb enough” to get tricked by an internet and tech industry built specifically to trick them suggests you have no idea how you are being scammed, because you’re smug and arrogant.
I need you to stop trying to explain away how fucking offensive using the internet and technology has become. I need you to stop making excuses for the powerful and consider the sheer scale of the societal ratfucking happening on almost every single device in the world, and consider the ramifications of the difficulty that a human being using the internet has trying to live an honest, dignified and reasonable life.
To exist in modern society requires you to use these devices, or otherwise sacrifice large parts of how you’d interact with other people. You need a laptop or a smartphone for work, for school, for anything really. You need messaging apps otherwise you don’t exist. As a result, there is a societal monopoly of sorts — or perhaps it’s more of a cartel, in the sense that, for the most part, every tech company has accepted these extremely aggressive, anti-user positions, all in pursuit of growth.
The stakes are so much higher than anyone — especially the tech media — is willing to discuss. The extent of the damage, the pain, the frustration, the terror is so constant that we are all on some level numb to its effects, because discussing it requires accepting that the vast majority of people live poisoned digital lives.
We all live in the ruins created by the Rot Economy, where the only thing that matters is growth. Growth of revenue, growth of the business, growth of metrics related to the business, growth of engagement, of clicks, of time on app, of purchases of micro-transactions, of impressions of ads, of things done that make executives feel happy.
I’ll give you a more direct example.
On November 21, I purchased the bestselling laptop from Amazon — a $238 Acer Aspire 1 with a four-year-old Celeron N4500 processor, 4GB of DDR4 RAM, and 128GB of slow eMMC storage (which is, and I’m simplifying here, though not by much, basically an SD card soldered to the computer’s motherboard). Affordable and under-powered, it’s a fairly representative sample of how millions of people interact with the internet.
I believe it’s also a powerful illustration of the damage caused by the Rot Economy, and the abusive, exploitative way in which the tech industry treats people at scale.
It took 1 minute and 50 seconds from hitting the power button for the laptop to get to the setup screen. It took another minute and a half to connect and begin downloading updates, which took several more minutes. After that, I was faced with a licensing agreement where I agreed to binding arbitration to use Windows, then a 24-second pause, and then was shown a screen of different “ways I could unlock my Microsoft experience,” with animations that shuddered and jerked violently.
Aside: These cheap laptops use a version of Windows called “Windows Home in S Mode,” which is a pared-down version of Windows where you can only use apps installed from the Microsoft Store. Microsoft claims that it’s a “streamlined version” of Windows, but the reality is it’s a cheap version of Windows for Microsoft to compete with Google’s Chromebook laptops.
Now, why do I know that? Because you’ll never guess who’s a big fan of Windows S. That’s right: Prabhakar Raghavan, The Man Who Killed Google Search, who said that Microsoft’s Windows S “validated” Google’s approach to cheap laptops back when he was Vice President of Google’s G Suite (and three years before he became Head of Search).
To be clear, Windows Home in S Mode is one of the worst operating systems of all time. It is ugly, slow, and actively painful to use, and (unless you deactivate S Mode) locks you into Microsoft’s ecosystem. This man went on to ruin Google Search by the way. How does this man keep turning up? Is it because I say his name so much?
Throughout, the laptop’s cheap trackpad would miss every few clicks. At this point, I was forced to create a Microsoft account and to hand over my cellphone number — or another email address — to receive a code, or I wouldn’t be able to use the laptop. Each menu screen takes 3-5 seconds to load, and I’m asked to “customize my experience” with things like “personalized ads, tips and recommendations,” with every option turned on by default, then to sign up for another account, this time with Acer. At one point I am simply shown an ad for Microsoft’s OneDrive cloud storage product with a QR code to download it on my phone, and then I’m told that Windows has to download a few updates, which I assume are different to the last time it did that.
Aside: With a normal version of Windows, it’s possible — although not easy — to set up and use the computer without a Microsoft account. On S Mode, however, you’re restricted to downloading apps through the Microsoft Store (which, as you’ve guessed, requires a Microsoft account). In essence, it’s virtually impossible to use this machine without handing over your personal data to Microsoft.
It has taken, at this point, around 20 minutes to get to this screen. It takes another 33 minutes for the updates to finish, and then another minute and 57 seconds to log in, at which point it pops up with a screen telling me to “set up my browser and discover the best of Windows,” including “finding the apps I love from the Microsoft Store” and the option to “create an AI-generated theme for your browser.” The laptop constantly struggles as I scroll through pages, the screen juddering, apps taking several seconds to load.
When I opened the start bar — ostensibly a place where you have apps you’d use — I saw some things that felt familiar, like Outlook, an email client that is not actually installed and requires you to download it, and an option for travel website Booking.com, along with a link to LinkedIn. One app, ClipChamp, was installed but immediately needed to be updated, which did not work when I hit “update,” forcing me to find the updates page, which showed me at least 40 different apps called things like “SweetLabs Inc.” I have no idea what any of this stuff is.
I type “sweetlabs” into the search bar, and it jankily interrupts into a menu that takes up a third of the screen, with half of that dedicated to “Mark Twain’s birthday,” two Mark Twain-related links, a “quiz of the day,” and four different games available for download.
The computer pauses slightly every time I type a letter. Every animation shudders. Even moving windows around feels painful. It is clunky, slow, it feels cheap, and the operating system — previously something I’d considered to be “the thing that operates the computer system” — is actively rotten, strewn with ads, sponsored content, suggested apps, and intrusive design choices that make the system slower and actively upset the user.
Another note: Windows in S Mode requires you to use Edge as your default browser and Bing as your default search engine. While you can download alternatives — like Firefox and Brave, though not Google Chrome, which was removed from the Microsoft Store in 2017 for unspecified terms of service violations — it’s clear that Microsoft wants you to spend as much time in its ecosystem as possible, where it can monetize you.
The reason I’m explaining this in such agonizing detail is that this experience is more indicative of the average person’s experience using a computer than anybody realizes. Though it’s tough to gauge how many units this laptop sold to become a bestseller on Amazon, laptops at this price point, with this specific version of Windows (Windows 11 Home in “S Mode,” as discussed above), dominate Amazon’s bestsellers alongside Apple’s significantly-more-expensive MacBook Air and Pro series. It is reasonable to believe that a large proportion of the laptops sold in America match this price point and spec — there are two similar ones on Best Buy’s bestsellers, and as of writing this sentence, multiple laptops of this spec are on the front of Target’s laptop page.
And if I haven’t made it completely clear, this means that millions of people are likely using a laptop that’s burdensomely slow, and full of targeted advertisements and content baked into the operating system in a way that’s either impossible or difficult to remove. For millions of people — and it really could be tens of millions considering the ubiquity of these laptops in eCommerce stores alone — the experience of using the computer is both actively exploitative and incredibly slow. Even loading up MSN.com — the very first page you see when you open a web browser — immediately hits you with ads for eBay, QVC and QuickBooks, with icons that sometimes simply don’t load.
Every part of the operating system seems to be hounding you to use some sort of Microsoft product, or some sort of product that Microsoft or the laptop manufacturer has been paid to make you see. While one can hope that the people buying these laptops have some awareness of what they’re getting into, the reality is that they’re being dumped into a kind of TJ Maxx version of computing, except TJ Maxx clothes don’t sometimes scream at you to download TJ Maxx Plus or stop functioning because you used them too fast.
Again, this is how most people are experiencing modern computing, and it isn’t because this is big business — it’s because laptop sales have been falling for over a decade, and manufacturers (and Microsoft) need as many ways to grow revenue as possible, even if the choices they make are actively harmful to consumers.
Aside: I swear to god, if your answer here is “get a MacBook Air, they’re only $600,” I beg you — I plead with you — to speak with people outside of your income bracket at a time when an entire election was decided in part because everything’s more expensive.
At that point, said person using this laptop can now log onto the internet, and begin using websites like Facebook, Instagram, and YouTube, all of which have algorithms we don’t really understand, but that have been regularly proven to be actively — and deliberately — manipulative and harmful.
Now, I know reading about “algorithms” and “manipulation” makes some people’s eyes glaze over, but I want you to take a simpler approach for a second. I hypothesize that most people do not really think about how they interact with stuff — they load up YouTube, they type something in, they watch it, and maybe they click whatever is recommended next. They may know there’s an algorithm of sorts, but they’re not really sitting there thinking “okay so they want me to see this,” or they may even be grateful that the algorithm gave them something they like, and reinforce the algorithm with their own biases, some of which they might have gotten from the algorithm.
To be clear, none of this is mind control or hypnosis or voodoo. These algorithms and their associated entities are not sitting there with some vast agenda to execute — the algorithms are built to keep you on the website, even if it upsets you, pisses you off, or misinforms you. Their incentive isn’t really to make you make any one choice, other than one that involves you staying on their platform or interacting with an advertisement for somebody else’s, and the heavy flow of political — and particularly conservative — content is a result of platforms knowing that’s what keeps people doing stuff on the platform. The algorithms are constantly adapting in real time to try and find something that you might spend time on, with little regard for whether that content is good, let alone good for you.
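For the technically inclined, here is a deliberately crude sketch of that incentive: a toy feed built as a simple multi-armed bandit. Everything in it (the categories, the numbers, the tiny algorithm) is invented for illustration, and real recommendation systems are vastly more complicated, but the objective function is the point. Nothing in it asks whether the content is good, or good for you, only whether it held your attention.

```python
import random

# A toy engagement-maximizing "feed," modeled as a multi-armed bandit.
# Categories and numbers are invented for illustration; real systems
# are vastly more complex, but the objective is the same: maximize
# time-on-platform, with no term anywhere for content quality.
CATEGORIES = {
    "friends' posts": 12,  # assumed average seconds of attention
    "cute animals": 25,
    "outrage bait": 48,    # upsetting, but people can't look away
}

def recommend(history, explore=0.1):
    """Mostly show whatever has held this user's attention longest,
    occasionally probing for something even stickier."""
    if history and random.random() > explore:
        return max(history, key=lambda c: sum(history[c]) / len(history[c]))
    return random.choice(list(CATEGORIES))

history = {}
for _ in range(1000):
    choice = recommend(history)
    # The "reward" is a noisy engagement signal: seconds spent watching.
    watch_time = max(0.0, random.gauss(CATEGORIES[choice], 5))
    history.setdefault(choice, []).append(watch_time)

# Print the category the feed has learned to push hardest.
print(max(history, key=lambda c: sum(history[c]) / len(history[c])))
```

Run it and the toy feed reliably converges on the outrage, not because anyone programmed malice, but because that is what the reward signal pays for.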
Putting aside any moral responsibility, the experiences on these apps are discordant. Facebook, as I’ve written about in detail, is a complete nightmare — thousands of people being actively conned in supposed “help groups,” millions of people being scammed every day (with one man killing himself as a result of organized crime’s presence on Facebook), and bizarre AI slop is dominating feeds with Mark Zuckerberg promising that there’s more to come. That’s without mentioning a product experience that continually interrupts you with sponsored and suggested content, as these platforms always do, all algorithmically curated to keep you scrolling, while also hiding content from the people you care about, because Facebook thinks it won’t keep you on the platform for as long.
The picture I am trying to paint is one of terror and abuse. The average person’s experience of using a computer starts with aggressive interference delivered in a shoddy, sludge-like frame, and as the wider internet opens up to said user, already battered by a horrible user experience, they’re immediately thrown into heavily-algorithmic feeds each built to con them, feeding whatever holds their attention and chucking ads in as best they can. As they browse the web, websites like NBCnews.com feature stories from companies like “WorldTrending.com” with advertisements for bizarre toys written in the style of a blog, so intentional in their deceit that the page in question has a huge disclaimer at the bottom saying it’s an ad.
As their clunky, shuddering laptop hitches between every scroll, they go to ESPN.com, and the laptop slows to a crawl. Everything slows to a crawl. “God damnit, why is everything so fucking slow? I’ll just stay on Facebook or Instagram or YouTube. At least that place doesn’t crash half the time or trick me.”
Using the computer in the modern age is so inherently hostile that it pushes us towards corporate authoritarians like Apple, Microsoft, Google and Meta — and now that every single website is so desperate for our email and to show us as many ads as possible, it’s either harmful or difficult for the average person to exist online.
The biggest trick that these platforms played wasn’t any one algorithm, but the convenience of a “clean” digital experience — or, at least, as clean as they feel it needs to be. In an internet so horribly poisoned by growth capitalism, these platforms offer a degree of peace and consistency, even if they’re engineered to manipulate you, even if the experience gets worse seemingly every year, because at least it isn’t as bad as the rest of the internet. We use Gmail because, well, at least it’s not Outlook. We use YouTube to view videos from other websites because other websites are far more prone to crash, have quality issues, or simply don’t work on mobile. We use Google Search, despite the fact that it barely works anymore, to find things because actually browsing the web fucking sucks.
When every single website needs to make as much money as possible because their private equity or hedge fund or massive corporate owners need to make more money every year without fail, the incentives of building the internet veer away from providing a service and toward putting you, the reader, in silent service of a corporation.
ESPN’s app is a fucking mess — autoplaying videos, discordantly-placed scores, menus that appear to have been designed by M.C. Escher — and nothing changes, because Disney needs you to use the app and hunt for what you need, rather than providing information in anything approaching a sensible way. It needs your effort. The paid subscription model for dating apps is so aggressive that there’s a lawsuit filed against Match Group — which owns Tinder and Hinge, and thus a great deal of the market — for “gamifying the platforms to transform users into gamblers locked in a search for psychological rewards,” likely as a means of recouping revenue after user numbers have begun to fall. And if you’re curious why these companies aren’t just making their products less horrible to use, I’m afraid that would reduce revenue, which is the one thing they do care about.
If you’re wondering who else is okay with that, it’s Apple. Both Bumble and Tinder are regularly featured on the “Must-Have Apps” section of the App Store, most of which require a monthly fee to work. Each of these apps is run by a company with a “growth” team, and that team exists, on some level, to manipulate you — to move icons around so that you’ll interact with the things they want you to, see ads, or buy things. This is why HBO Max rebranded to Max and created an entirely new app experience — because the growth people said “if we do this in this way the people using it will do what we want.”
Now, what’s important to accept here is that absolutely none of this is done with any real consideration of the wider effects on the customer, as long as the customer continues doing the things that the company needs them to. We, as people, have been trained to accept a kind of digital transience — an inherent knowledge that things will change at random, that the changes may suck, and that we will just have to accept them because that’s how the computer works, and these companies work hard to suppress competition as a means of making sure they can do what they want.
In other words, internet users are perpetually thrown into a tornado of different corporate incentives, and the less economically stable or technologically savvy you are, the more likely you are to be at the mercy of them. Every experience is different, wants something, wants you to do something, and the less people know about why, the more likely they are to — with good intentions — follow the paths laid out in front of them with little regard for what might be happening, in the same way people happily watch the same TV shows or listen to the same radio stations.
Even if you’re technologically savvy, you’re still dealing with these problems — fresh installs of Windows on new laptops, avoiding certain websites because you’ve learned what the dodgy ones look like, not interacting with random people in your DMs because you know what a spam bot looks like, and so on. It’s not that you’re immune. It’s that you’re instinctually ducking and weaving around an internet and digital ecosystem that continually tries to interrupt you, batting away pop-ups and silencing notifications knowing that they want something from you — and I need you to realize that most people are not like you and are actively victimized by the tech ecosystem.
As I said a few weeks ago, I believe that most people are continually harmed by their daily lives, as most people’s daily lives are on the computer or their smartphones, and those lives have been stripped of dignity. When they look to the media for clarity or validation, the best they’ll get is a degree of “hmm, maybe algorithm bad?” rather than a wholehearted acceptance that the state of our digital lives is obscene.
Yet it’s not just the algorithms — it’s the entirety of the digital ecosystem, from websites to apps to the devices we use every day. The fact that so many people likely use a laptop that is equal parts unfit for the task and stuffed full of growth-hacked poison is utterly disgraceful, because it means that the only way to escape said poison is to simply have more money. Those who can’t afford $300 (at least) phones or $600 laptops are left to use offensively bad technology, and we have, at a societal scale, simply accepted that this is how things go.
Yet even on expensive devices, you’re still the victim of algorithmic and growth-hacked manipulation, even if you’re aware of it. Knowing allows you to fight back, even if it’s just to stop yourself being overwhelmed by the mess, and means you can read things that can tell you what new horror we need to avoid next — but you are still the target, you are still receiving hundreds of marketing emails a week, you are still receiving spam calls, you are still unable to use Facebook or Instagram without being bombarded by ads and algorithmically-charged content.
I’ve written a lot about how the growth-at-all-costs mindset of The Rot Economy is what directly leads big tech companies to make their products worse, but what I’ve never really quantified is the scale of its damage.
Everything I’ve discussed around the chaos and pain of the web is a result of corporations and private equity firms buying media properties and immediately trying to make them grow, each in wildly different ways, all clamouring to be the next New York Times or Variety or other legacy media brand, despite those brands already existing, and the ideas for competing with them usually being built on unsustainably-large staffs and expensive consultants. Almost every single store you visit on the internet has a massive data layer in the background that feeds it data about what’s popular, or where visitors spend the most time on the site, and will in turn change things about its design to subtly encourage you to buy more stuff, all so that more money comes out, no matter the cost. Even if this data isn’t personalized, it’s still powerful, and turns so many experiences into subtle manipulations.
Every single weird thing that you’ve experienced with an app or service online is the dread hand of the Rot Economy — the gravitational pull of growth, the demands upon you, the user, to do something. And when everybody is trying to chase growth, nobody is thinking about stability, and because everybody is trying to grow, everybody sort of copies everybody else’s ideas, which is why we see microtransactions and invasive ads and annoying tricks that all kind of feel the same in everything, though they’re all subtly different and customized just for that one app. It’s exhausting.
For a while, I’ve had the Rot Economy compared to Cory Doctorow’s (excellent) enshittification theory, and I think it’s a great time to compare (and separate) the two. To quote Cory in The Financial Times, Enshittification is “[his] theory explaining how the internet was colonised by platforms, why all those platforms are degrading so quickly and thoroughly, why it matters and what we can do about it.” He describes the three stages of decline:
“First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.”
I agree with Cory on some levels, but I believe he gives far more credit to the platforms in question than they deserve, and sees far more intention or strategy than really exists. I fundamentally disagree about the business customers even being some elevated class in the equation — as we’ve seen with the Google Ads trial, Google didn’t really give a shit about its business customers to begin with, has always sought a monopoly, and made things worse for whoever it needed to as a means of increasing growth.
Perhaps that’s semantics. However, Cory’s theory lacks a real perpetrator beyond corporations that naturally say “alright we’re gonna do Enshittification now, watch this.” Where The Rot Economy separates is that growth is, in and of itself, the force that drives companies to enshittify. While enshittification neatly fits across companies like Spotify and Meta (and their ad-focused business models), it doesn’t really make sense when it comes to things where there isn’t a clear split between business customers and consumers, like Microsoft or Salesforce — because enshittification is ultimately one part of the larger Rot Economy, where everything must grow forever.
And I believe the phenomenon that captures both is a direct result of the work of men like Jack Welch and Milton Friedman. The Rot Economy is selfish and potently neoliberal — corporations are bowed down to like gods, and the powerful only seek more, at all times, at all costs, even if said cost is “the company might eventually die because we’ve burned out any value it actually has” or “people are harmed every time they pick up their phone.” The Rot Economy is neoliberalism’s true innovation: a kind of economic cancer with few reasons to exist beyond “more” and few justifications beyond “if we don’t let it keep growing then everybody’s pensions blow up.”
To be clear, Cory is for the most part right. Enshittification successfully encapsulates how the modern web was destroyed in a way that nothing else really has. I think it applies to a wide range of tech companies and effects.
I, however, believe the wider problem is bigger, and the costs are far greater. It isn’t that “everything is enshittified.” It’s that everybody’s pursuit of growth has changed the incentive behind how we generate value in the world, and software enables a specific kind of growth-lust by creating virtual nation states with their own digital despots. While laws may stop Meta from tearing up people’s houses surrounding its offices on 1 Hacker Way, it can happily reroute traffic and engagement on Facebook and Instagram to make things an iota more profitable.
The Rot Economy isn’t simply growth-at-all-costs thinking — it’s a kind of secular religion, something to believe in, that everything and anything can be more, should be more, must be more, that we are defined only by our pursuit of more growth, and that something that isn’t growing isn’t alive, and is in turn inferior.
No, perhaps not a religion. Religions are, for the most part, concerned with the hereafter, and contain an ethical dimension that says your present actions will affect your future — or your eternity. The Rot Economy is, by every metric, defined by its short-termism. I’m not just talking about undermining the long-term success of a business to juice immediate revenue numbers. I’m thinking in broad ecosystem terms.
The onslaught of AI-generated content — facilitated, in no small part, by Google and Microsoft — has polluted our information ecosystems. AI-generated images and machine-generated text are everywhere, and impossible to avoid, as there is no reliable way to determine the provenance of a piece of content — with one exception, namely the considered scrutiny of a human. This has irreparably damaged the internet in ways I believe few fully understand. This stuff — websites that state falsehoods because an AI hallucinated, or fake pictures of mushrooms and dogs that now dominate Google Images — is not going away. Like microplastics or PFAS chemicals, it’s with us forever, constantly chipping away at our understanding of reality.
These companies unleashed generative AI on the world — or, in the case of Microsoft, facilitated its ascendancy — without any consideration of what that would mean for the Internet as an ecosystem. Their concerns were purely short-term. Fiscal. The result? Over-leverage in an industry that has no real path to profitability, burning billions of dollars — and the environment, both digital and otherwise — along with it.
I’m not saying that this is how everybody thinks, but I am convinced that everybody is burdened by The Rot Economy, and that digital ecosystems allow the poison of growth to find new and more destructive ways to dilute a human being to a series of numbers that can be made to grow or contract in the pursuit of capital.
Almost every corner of our lives has been turned into some sort of number, and increasing that number is important to us — bank account balances, sure, but also engagement numbers, followers, number of emails sent and received, open rates on newsletters, how many times something we’ve seen has been viewed, all numbers set by other people that we live our lives by while barely understanding what they mean. Human beings thrive on ways to define themselves, but metrics often rob us of our individuality. Products that boil us down to metrics are likely to fail to account for the true depth of anything they're capturing.
Sidenote: Here’s a good example: in an internal document I reviewed from 2017, a Facebook engineer revealed that engagement on the platform had started to dive, but because the company had focused so much energy on time spent on the app as a metric, nobody had noticed (and yes, that’s a quote). Years of changes — the consequences of which were felt by billions of people — were made not based on using the product or talking to users, but a series of numbers that nobody had bothered to check mattered.
The change in incentives toward driving more growth actively pushes out those with long-term thinking. It encourages hiring people who see growth as the driver of a company’s success, and steers investment and research and development toward mechanisms for growth — which may sometimes produce things that help you, but that isn’t necessarily why they’re built. Organisational culture and hiring stop prioritising people that fix customer problems, because that is neither the priority nor, sadly, how one makes a business continue to grow.
We are all pushed toward growth — personal growth, professional growth, growth in our network and our societal status — and the terms of this growth are often set by platforms and media outlets that are, in turn, pursuing growth. And as I’ve discussed, the way the terms of our growth are framed is almost entirely through a digital ecosystem of warring intents and different ways of pursuing growth — some ethical, many not.
Societal and cultural pressure is nothing new, but the ways we experience it are now elaborate and chaotic. Our relationships — professional, personal, and romantic — are processed through the funhouse mirror of the platforms, changing in ways both subtle and overt based on the signals we receive from the people we care about, each one twisted and processed through the lens of product managers and growth hackers. Changes to these platforms — even subtle ones — actively change the lives of billions of people, and yet we talk about being online as if it were some hobbyist pursuit rather than something many people do more than they see real people in the real world.
I believe that we exist in a continual tension with the Rot Economy and the growth-at-all-costs mindset. I believe that the friction we feel on platforms and apps between what we want to do and what the app wants us to do is one of the most underdiscussed and significant cultural phenomena, where we, despite being customers, are continually berated and conned and swindled.
I believe billions of people are in active combat with their devices every day, swiping away notifications, dodging around intrusive apps, agreeing to privacy policies that they don’t understand, desperately trying to find where an option they used to use has been moved to because a product manager has decided that it needed to be somewhere else. I realize it’s tough to conceptualize because it’s so ubiquitous, but how much do you fight with your computer or smartphone every day? How many times does something break? How many times have you downloaded an app and found it didn’t really do the thing you wanted it to? How many times have you wanted to do something simple and found that it’s actually really annoying?
How much of your life is dodging digital debris, avoiding scams, ads, apps that demand permissions, and endless menu options that bury the simple things that you’re actually trying to do?
You are the victim of a con. You have spent years of your life explaining to yourself and others that “this is just how things are,” accepting conditions that are inherently exploitative and abusive. You are more than likely not deficient, stupid, or “behind the times,” and even if you are, there shouldn’t be multi-billion dollar enterprises that monetize your ignorance.
And it’s time to start holding those responsible accountable.
I’m fairly regularly asked why this all matters to me so much, so as I wrap up the year, I’m going to try and answer that question, and explain why it is I do what I do.
I spent a lot of time alone as a kid. I didn't have friends. I was insular, scared of the world, I felt ostracised and unnoticed, like I was out of place in humanity. The only place I found any kind of community — any kind of real identity — was being online. My life was (and is) defined by technology.
Had social networking not come along, I am not confident I’d have made many (if any) lasting friendships. For the first 25 or so years of my life, I struggled to make friends in the real world for a number of reasons, but made so many more online. I kept and nurtured friendships with people thousands of miles away, my physical shyness less of an issue when I could avoid the troublesome “hey I’m Ed” part that tripped me up so much.
Without the internet, I’d likely be a resentful hermit, disconnected from humanity, layers of scar tissue over whatever neurodivergence or unfortunate habits I'd gained from a childhood mostly spent alone.
Don't feel sorry for me. Technology has allowed me to thrive. I have a business, an upcoming book, this newsletter, and my podcast. I have so many wonderful, beautiful friends who I love that have come exclusively through technology of some sort, likely a social network or the result of a digital connection of some kind.
I am immensely grateful for everything I have, and grateful that technology allowed me to live a full and happy life. I imagine many of you feel the same way. Technology has found so many ways to make our lives better, perhaps more in some cases than others. I will never lie and say I don't love it.
However, the process of writing this newsletter and recording my podcast has made me intimately aware of the gratuitous, avaricious and intentional harm that the tech industry has caused to its customers, the horrifying and selfish decisions they’ve made, and the ruinous consequences that followed.
The things I have watched happen this year alone — which have been at times an enumeration of over a decade of rot — have turned my stomach, as has the outright cowardice of some people that claim to inform the public but choose instead to reinforce the structures of the powerful.
I am a user. I am a guy with a podcast and a newsletter, but behind the mic and the keyboard I am a person that uses the same services as you do, and I see the shit done to us, and I feel poison in my veins. I am not holding back, and neither should you. What is being done to us isn't just unfair — it's larcenous, cruel, exploitative and morally wrong.
Some may try to dismiss what I'm saying as "just social media" or "just how apps work" and if that's what you truly think, you're either a beaten dog or a willing (or unwilling) operative for the people running the con.
I will never forgive these people for what they’ve done to the computer, and the more I learn about both their intentions and actions, the more certain I am that they are unrepentant and that their greed will never be sated. I have watched them take the things that made me human — social networking, digital communities, apps, and the other connecting fabric of our digital lives — and turn them into devices of torture, profitable mechanisms of abuse, and I find it disgusting how many reporters seem to believe it's their responsibility to thank them and explain to their readers why it's good this is happening.
These are the people in charge. These are the people running the tech industry. These are the people who make decisions that affect billions of people every minute of every day, and their decision-making is so flagrantly selfish and abusive that I am regularly astonished by how little criticism they receive.
These men lace our digital lives with asbestos and get told they’re geniuses for doing so because money comes out.
I don’t know — or care — whether these men know who I am or read my work, because I only care that you do.
I don't give a shit if Sam Altman or Mark Zuckerberg knows my name. I don't care about any of their riches or their supposed achievements, I care that when given so many resources and opportunities to change the world they chose to make it worse. These men are tantamount to war criminals, except in 30 years Mark Zuckerberg may still be seen as a success — though I will spend the rest of my life telling you the damage he's caused.
I care about you. The user. The person reading this. The person that may have felt stupid, or deficient, or ignorant, all because the services you pay for or that monetize you have been intentionally rigged against you.
You aren't the failure. The services, the devices, and the executives are.
If you cannot see the significance of the problems I discuss every week, the sheer scale of the rot, the sheer damage caused by unregulated and unrepentant managerial parasites, you are living in a fantasy world and I both envy and worry for you. You're the frog in the pot, and trust me, the stove is on.
2025 will be a year of chaos, fear and a deficit of hope, but I will spend every breath I have telling you what I believe and telling you that I care, and you are not alone.
For years, I’ve watched the destruction of the services and the mechanisms that were responsible for allowing me to have a normal life, to thrive, to be able to speak with a voice that was truly mine. I’ve watched them burn, or worse, turned into abominable growth vehicles for men disconnected from society and humanity. I owe my life to an internet I've watched turned into multiple abuse factories worth multiple trillions of dollars, while the people responsible get glad-handed and applauded.
I will scream at them until my dying fucking breath. I have had a blessed life, and I am lucky that I wasn't born even a year earlier or later, but the way I have grown up and seen things change has allowed me to fully comprehend how much damage is being done today, and how much worse is to come if we don't hold these people accountable. The least they deserve is a spoken or written record of their sins, and the least you deserve is to be reminded that you are the victim.
I don't think you realise how powerful it is being armed with knowledge — the clarity of what's being done to you and why, and the names of the people responsible. This is an invisible war — and a series of invisible war crimes — perpetrated against billions of people in a trillion different ways every minute of every day, and it's everywhere, a constant in our lives, which makes enumerating and conceptualising it difficult.
But you can help.
You talking about the truth behind generative AI, or the harms of Facebook, or the gratuitous destruction of Google Search will change things, because these people are unprepared for a public that knows both what they’ve done and their sickening, loathsome, selfish and greedy intentions.
I realize this isn’t particularly satisfying to some, because you want big ideas, big changes that can be made. I don’t know what to tell you. I don’t know how to fix things. To quote Howard Beale in the movie Network, I don’t want you to write your Congressman because I don’t know what to tell you to write.
But what I can tell you is that you can live your life with a greater understanding of the incentives of those who control the internet and have made your digital lives worse as a means of making themselves rich. I can tell you to live with more empathy, understanding and clarity into the reasons that people around you might be angry at their circumstances, as even those unrelated to technology are made worse by exploitative, abusive and pernicious digital manipulation.
This is a moment of solidarity, as we are all harmed by the Rot Economy. We are all victims. It takes true opulence to escape it, and I'm guessing you don't have it. I certainly don't. But talking about it — refusing to go quietly, refusing to slurp down the slop willingly or pleasantly — is enough. The conversations are getting louder. The anger is getting too hard to ignore. These companies will be forced to change through public pressure and the knowledge of their deeds.
Holding these people to a higher standard at scale is what brings about change. Be the wrench in the machine. Be the person that explains to a friend why Facebook sucks now, and who chose to make it suck. Be the person to explain who Prabhakar Raghavan is and what his role was in making Google Search worse. Be the person who tells people that Sam Altman burns $5 billion a year on unsustainable software that destroys the environment and is built upon the large-scale larceny of creative works because he's desperate for power.
Every time you do this, you destabilise them. They have succeeded in a decades-long marketing campaign where they get called geniuses for making the things that are necessary to function in society worse. You can change that.
I don't even care if you cite me. Just tell them. Tell everybody. Spread the word. Say what they've done and say their names, say their names again and again and again so that it becomes a contagion. They have twisted and broken and hyper-monetised everything — how you make friends, fall in love, how you bank, how you listen to music, how you find information. Never let their names be spoken without disgust. Be the sandpaper in their veins and the graffiti on their legacies.
The forces I criticize see no beauty in human beings. They do not see us as remarkable things that generate ideas both stupid and incredible, they do not see talent or creativity as something that is innately human, but a commodity to be condensed and monetized and replicated so that they ultimately own whatever value we have, which is the kind of thing you’d only believe was possible (or want) if you were fully removed from the human race.
You deserve better than they’ve given you. You deserve better than I’ve given you, which is why I’m going to work even harder in 2025. Thank you, as ever, for your time.
2024-12-04 04:55:22
Before we get going — please enjoy my speech from Web Summit, Why Are All Tech Products Now Shit? I didn’t write the title.
What if what we're seeing today isn't a glimpse of the future, but the new terms of the present? What if artificial intelligence isn't actually capable of doing much more than what we're seeing today, and what if there's no clear timeline when it'll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media ready and willing to take career-embellishers at their word?
Me, in March 2024.
I have been warning you for the best part of a year that generative AI has no killer apps and had no way of justifying its valuations (February), that generative AI had already peaked (March), and I have pleaded with people to consider an eventuality where the jump from GPT-4 to GPT-5 was not significant, in part due to a lack of training data (April).
I shared concerns in July that the transformer-based architecture underpinning generative AI was a dead end, and that there were few ways we'd progress past the products we'd already seen, in part due to both the limits of training data and the limits of models that use said training data. In August, I summarized the Pale Horses of the AI Apocalypse — events, many that have since come to pass, that would signify that the end is indeed nigh — and again added that GPT-5 would likely "not change the game enough to matter, let alone [add] a new architecture to build future (and more capable) models on."
Throughout these pieces I have repeatedly made the point that — separate to any lack of a core value proposition, training data drought, or unsustainable economics — generative AI is a dead end due to the limitations of probabilistic models that hallucinate, authoritatively stating things that aren't true. The hallucination problem is no closer to being solved — and, at least with the current technology, may never go away — and it makes generative AI a non-starter for a great many business tasks, where you need a high level of reliability.
I have — since March — expressed great dismay about the credulousness of the media in their acceptance of the "inevitable" ways in which generative AI will change society, despite a lack of any truly meaningful product that might justify an environmentally-destructive industry led by a company that burns more than $5 billion a year and big tech firms spending $200 billion on data centers for products that people don't want.
The reason I'm repeating myself is that it's important to note how obvious the problems with generative AI have been, and for how long.
And you're going to need context for everything I'm about to throw at you.
Sidebar: To explain exactly what happened here, it's worth going over how these models work and are trained. I’ll keep it simple as it's a reminder.
A transformer-based generative AI model such as GPT — the technology behind ChatGPT — generates answers using "inference," which means it draws conclusions based on its "training," a process that requires feeding it masses of training data (mostly text and images scraped from the internet). Both of these processes require you to use high-end GPUs (graphics processing units), and lots of them.
The theory was (is?) that the more training data and compute you throw at these models, the better they get. I have hypothesized for a while they'd have diminishing returns — both from running out of training data and based on the limitations of transformer-based models.
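To make "inference" concrete, here is a toy sketch, with a six-row lookup table standing in for billions of learned weights. The tokens and probabilities are invented, and this is emphatically not how GPT is actually implemented, but the mechanism it illustrates (sampling the next token from whatever the training data made probable) is why hallucination is a property of the architecture rather than a bug awaiting a patch.

```python
import random

# A toy "language model": a lookup table standing in for billions of
# learned weights. Every token and probability is invented for
# illustration. The principle of inference is the same as the real
# thing: predict the next token from the previous ones.
MODEL = {
    ("the", "capital"): {"of": 1.0},
    ("capital", "of"): {"france": 0.5, "mars": 0.5},  # junk in the training data
    ("of", "france"): {"is": 1.0},
    ("of", "mars"): {"is": 1.0},
    ("france", "is"): {"paris": 1.0},
    ("mars", "is"): {"olympus": 1.0},
}

def generate(prompt, max_tokens=4):
    tokens = prompt.split()
    for _ in range(max_tokens):
        context = tuple(tokens[-2:])
        dist = MODEL.get(context)
        if dist is None:
            break
        # Inference: sample the next token from the learned distribution.
        # The model has no concept of "true," only "probable," which is
        # why confident nonsense (a hallucination) is always possible.
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(tokens)

print(generate("the capital"))  # "the capital of france is paris"
                                # ...or, half the time, "...of mars is olympus"
```

Scale that table up enormously, wrap it in matrix multiplications, and you have, conceptually, a large language model — along with the same inability to distinguish fact from plausible fiction.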
And there, as they say, is the rub.
A few weeks ago, Bloomberg reported that OpenAI, Google, and Anthropic are struggling to build more advanced AI, and that OpenAI's "Orion" model — otherwise known as GPT-5 — "did not hit the company's desired performance," and that "Orion is so far not considered to be as big a step up" as the jump from GPT-3.5 to GPT-4, its current model. You'll be shocked to hear the reason is that "it’s become increasingly difficult to find new, untapped sources of high-quality, human-made training data that can be used to build more advanced AI systems," something I said would happen in March, while the report also adds that the "AGI bubble is bursting a little bit," something I said more forcefully in July.
I also want to stop and stare daggers at one particular point:
These issues challenge the gospel that has taken hold in Silicon Valley in recent years, particularly since OpenAI released ChatGPT two years ago. Much of the tech industry has bet on so-called scaling laws that say more computing power, data and larger models will inevitably pave the way for greater leaps forward in the power of AI.
The only people taking this as "gospel" have been members of the media unwilling to ask the tough questions and AI founders that don't know what the fuck they're talking about (or that intend to mislead). Generative AI's products have effectively been trapped in amber for over a year. There have been no meaningful, industry-defining products, because, as economist Daron Acemoglu said back in May, "more powerful" models do not unlock new features, or really change the experience, nor what you can build with transformer-based models. Or, put another way, a slightly better white elephant is still a white elephant.
Despite the billions of dollars burned and thousands of glossy headlines, it's difficult to point to any truly important generative-AI-powered product. Even Apple Intelligence, the only thing that Apple really had to add to the latest iPhone, is utterly dull, and largely based on on-device models.
Yes, there are people that use ChatGPT — 200 million of them a week, allegedly, losing the company money with every prompt — but there is little to suggest that there's widespread adoption of actual generative AI software. The Information reported in September that between 0.1% and 1% of Microsoft's 440 million business customers were paying for its AI-powered Copilot, and in late October, Microsoft claimed that "AI is on pace to be a $10 billion-a-year business," which sounds good until you consider a few things:
I must be clear that every single one of these investments and products has been hyped with the whisper that they would get exponentially better over time, and that eventually the $200 billion in capital expenditures would spit out remarkable productivity improvements and fascinating new products that consumers and enterprises would buy in droves. Instead, big tech has found itself peddling increasingly-more-expensive iterations of near-identical Large Language Models — a direct result of them all having to use the same training data, which the industry is now running out of.
The other assumption — those so-called scaling laws — has been that by simply building bigger data centers with more GPUs (the expensive, power-hungry graphics processing units used to both run and train these models) and throwing as much training data at them as possible, they'd simply start sprouting new capabilities, despite there being little proof that they'd do so. Microsoft, Meta, Amazon, and Google have all burned billions on the assumption that doing so would create something — be it a human-level "artificial general intelligence" or, I dunno, a product that would justify the costs — and it's become painfully obvious that it isn't going to work.
As we speak, outlets are already desperate to try and prove that this isn't a problem. The Information, in a similar story to Bloomberg's, attempted to put lipstick on the pig of generative AI, framing the lack of meaningful progress with GPT-5 as fine, because OpenAI can combine its GPT-5 model with its o1 "reasoning" model, which will then do something of some sort, such as "write a lot more very difficult code" according to OpenAI CEO and career liar Sam Altman, who intimated in May that GPT-5 may function like a "virtual brain."
Chief Valley Cheerleader Casey Newton wrote on Platformer last week that diminishing returns in training models "may not matter as much as you would guess," with his evidence being that Anthropic, who he claims "has not been prone to hyperbole," does not think that scaling laws are ending. To be clear, in a 14,000-word op-ed that Newton wrote two pieces about, Anthropic CEO Dario Amodei said that "AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness" — the kind of hyperbole that should have you tarred and feathered in public.
So, let me summarize:
The entire tech industry has become oriented around a dead-end technology that requires burning billions of dollars to provide inessential products that cost them more money to serve than anybody would ever pay. Their big strategy was to throw even more money at the problem until one of these transformer-based models created a new, more useful product — despite the fact that every iteration of GPT and other models has been, well, iterative. There has never been any proof (other than benchmarks that are increasingly easier to game) that GPT or other models would become conscious, nor that these models would do more than they do today, or three months ago, or even a year ago.
Yet things can, believe it or not, get worse.
The AI boom helped the S&P 500 hit record high levels in 2024, largely thanks to chip giant NVIDIA, a company that makes both the GPUs necessary to train and run generative AI models and the software architecture behind them. Part of NVIDIA's remarkable growth has been its ability to capitalize on the CUDA architecture — the software layer that lets you do complex computing with GPUs, rather than simply use them to render video games in increasingly higher resolution — and, of course, continually create new GPUs to sell for tens of thousands of dollars to tech companies that want to burn billions of dollars on generative AI, leading the company's stock to pop more than 179% over the last year.
Back in May, NVIDIA CEO and professional carnival barker Jensen Huang said that the company was now "on a one-year rhythm" in AI GPU production, with its latest "Blackwell" GPUs (specifically the B100, B200 and GB200 models used for generative AI) supposedly due at the end of 2024, though they're now delayed until at least March 2025.
Before we go any further, it's worth noting that when I say "GPU," I don't mean the one you'd find in a gaming PC, but a much larger chip put in a specialized server with multiple other GPUs, all integrated with specialized casing, cooling, and networking infrastructure. In simple terms, the things necessary to make sure all these chips work together efficiently, and also stop them from overheating, because they get extremely hot and are running at full speed, all the time.
The initial delay of the new Blackwell chips was caused by a (now-fixed) design flaw in production, but as I've suggested above, the problem isn't just creating the chips — it's making sure they actually work, at scale, for the jobs they're bought for.
But what if that, too, wasn't possible?
A few days ago, The Information reported that NVIDIA is grappling with the oldest problem in computing — how to cool the fucking things. According to the report, NVIDIA has been asking suppliers to change the design of its 3,000-pound, 72-GPU server racks "several times" to overcome overheating problems, in what The Information calls "the most complicated design NVIDIA had ever come up with." A few months after revealing the racks, engineers found that they...didn't work properly, even with NVIDIA's smaller 36-chip racks, and have been scrambling to fix it ever since.
While one can dazzle investors with buzzwords and charts, physics is a far harsher mistress, and if NVIDIA is struggling mere months before the first installations are due to begin, it's unclear how it practically launches this generation of chips, let alone continues its yearly cadence. The Information reports that these changes were made late in the production process, which is scaring customers that desperately need these chips so that their models can continue to do something they'll work out later. To quote The Information:
Two executives at large cloud providers that have ordered the new chips said they are concerned that such last-minute difficulties might push back the timeline for when they can get their GPU clusters up and running next year.
The fact that NVIDIA is having such significant difficulties with thermal performance is very, very bad. These chips are incredibly expensive — as much as $70,000 apiece — and will be running, as I've mentioned, at full speed, generating an incredible amount of heat that must be dissipated, all while sat next to anywhere from 35 to 71 other chips, which will in turn be densely packed so that you can cram more servers into a data center. New, more powerful chips require entirely new methods to rack-mount, operate and cool them, and all of these parts must operate in sync, because overheating GPUs die. While these units are big, some of their internal components are microscopic in size, and unless properly cooled, their circuits will start to crumble when roasted by a guy typing "Garfield with Gun" into ChatGPT.
Remember, Blackwell is supposed to represent a major leap forward in performance. If NVIDIA doesn't solve its cooling problem — and solve it well — its customers will inevitably run into thermal throttling, where the chip reduces its speed to avoid causing permanent damage. Throttling could eliminate any performance gains from the new architecture and new manufacturing process, despite Blackwell costing much, much more than its predecessor.
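If you want the intuition for what throttling looks like in practice, the control logic is roughly this. To be clear, what follows is a hypothetical sketch with invented numbers, not NVIDIA's firmware:

```python
# Hypothetical sketch of a thermal throttling loop. The thresholds and clock
# steps are invented for illustration; this is not NVIDIA's actual firmware.
MAX_SAFE_TEMP_C = 90     # invented limit above which the silicon is at risk
BASE_CLOCK_MHZ = 1_800   # invented advertised clock speed
MIN_CLOCK_MHZ = 900      # invented floor the chip can fall back to

def next_clock(temp_c: float, clock_mhz: int) -> int:
    """Cut the clock when the die runs hot; creep back up once it cools."""
    if temp_c >= MAX_SAFE_TEMP_C:
        over = temp_c - MAX_SAFE_TEMP_C
        # Shed heat aggressively: every degree over the limit costs clock speed.
        return max(MIN_CLOCK_MHZ, clock_mhz - int(over * 50))
    if clock_mhz < BASE_CLOCK_MHZ:
        return min(BASE_CLOCK_MHZ, clock_mhz + 25)  # recover gradually
    return clock_mhz
```

A GPU pinned at full load in an under-cooled rack gets stuck oscillating below its advertised clocks, which is the whole problem: you paid Blackwell money for pre-Blackwell performance.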
NVIDIA's problem isn't just bringing these thermal performance issues under control, but keeping them under control and teaching its customers how to do the same. NVIDIA has, according to The Information, repeatedly tried to steer its customers' server integrations to follow its designs because it thinks it will "lead to better performance," but in this case, one has to wonder whether NVIDIA's Blackwell chips can be reliably cooled at all.
While NVIDIA might be able to fix this problem in isolation within its racks, it remains to be seen how this works at scale, as customers install hundreds of thousands of Blackwell GPUs starting in the first half of 2025.
Things also get a little worse when you realize how these chips are being installed — in giant "supercomputer" data centers where tens of thousands of GPUs (or as many as a hundred thousand, in the case of Elon Musk's "Colossus" data center) run in concert to power generative AI models. The Wall Street Journal reported a few weeks ago that building these vast data centers creates entirely new engineering challenges, with one expert saying that big tech companies could be spending as much as half of their capital expenditures on replacing parts that have broken down, in large part because these clusters run their GPUs at full speed, at all times.
Remember, the capital expenditures on generative AI and the associated infrastructure have gone over $200 billion in the last year. If half of that’s dedicated to replacing broken gear, what happens when there’s no path to profitability?
In any case, NVIDIA doesn’t care. It’s already made billions of dollars selling Blackwell GPUs — they're sold out for a year, after all — and will continue to do so for now, but any manufacturing or cooling issues will likely be costly.
And even then, at some point somebody has to ask the question: why do we need all these GPUs if we've reached peak AI? Despite the remarkable "power" of these chips, NVIDIA's entire enterprise GPU business model centers around the idea that throwing more power at these problems will finally create some solutions.
What if that isn't the case?
The tech industry is over-leveraged, having doubled, tripled, quadrupled down on generative AI — a technology that doesn't do much more than it did a few months ago and won't do much more than it can do now. Every single big tech company has piled tens of billions of dollars into building out massive data centers with the intent of "capturing AI demand," yet never seemed to ask whether they were actually building things that people wanted, would pay for, or that would somehow make the company money.
While some have claimed that "agents are the next frontier," the reality is that agents may be the last generative AI product — multiple Large Language Models and integrations bouncing off of each other in an attempt to simulate what a human might do, at a cost that won't be sustainable for the majority of businesses. While Anthropic's demo of its model allegedly controlling a few browser windows with a prompt might have seemed impressive to credulous people like Casey Newton, these were controlled demos that Anthropic itself admitted were "slow" and "made lots of mistakes." Hey, almost like it's hallucinating! I sure hope they fix that totally unfixable problem.
Even if it does, Anthropic has now successfully replaced...an entry-level data worker position at an indeterminate and likely unprofitable price. And in many organizations, those jobs had already been outsourced, or automated, or staffed with cheaper contractors.
The obscenity of this mass delusion is nauseating — a monument to bad decision-making and the herd mentality of tech's most powerful people, as well as an outright attempt to manipulate the media into believing something was possible that wasn't. And the media bought it, hook, line, and sinker.
Hundreds of billions of dollars have been wasted building giant data centers to crunch numbers for software that has no real product-market fit, all while trying to hammer it into various shapes to make it pretend that it's alive, conscious, or even a useful product.
There is no path, from what I can see, to turn generative AI and its associated products into anything resembling sustainable businesses, and the only path that big tech appeared to have was to throw as much money, power, and data at the problem as possible, an avenue that appears to be another dead end.
And worse still, nothing has really come out of this movement. I've used a handful of AI products that I've found useful — an AI-powered journal, for example — but these are not the products one associates with "revolutions." They're useful tools that would have been a welcome surprise if they didn't require burning billions of dollars, blowing past emissions targets, and stealing the creative works of millions of people to train them.
I truly don't know what happens next, but I'll walk you through what I'm thinking.
If we're truly at the diminishing returns stage of transformer-based models, it will be extremely difficult to justify buying further iterations of NVIDIA GPUs past Blackwell. The entire generative AI movement lives and dies by the idea that more compute power and more training data makes these things better, and if that's no longer the case, there's little reason to keep buying bigger and better. After all, what's the point?
Even now, what exactly happens when Microsoft or Google has racks' worth of Blackwell GPUs? The models aren't going to get better.
This also makes the lives of OpenAI and Anthropic that much more difficult. Sam Altman has grown rich and powerful lying about how GPT will somehow lead to AGI, but at this point, what exactly is OpenAI meant to do? The only way it’s ever been able to develop new models is by throwing masses of compute and training data at the problem, and its only other choice is to start stapling its reasoning model onto its main Large Language Model, at which point something happens, something so good that literally nobody working for OpenAI or in the media appears to be able to tell you what it is.
Putting that aside, OpenAI is also a terrible business that has to burn $5 billion to make $3.4 billion, with no proof that it’s capable of bringing down costs. The constant refrain I hear from VCs and AI fantasists is that "chips will bring down the cost of inference," yet I don't see any proof of that happening, nor do I think it'll happen quickly enough for these companies to turn things around.
And you can feel the desperation, too. OpenAI is reportedly looking at ads as a means to narrow the gap between its revenues and losses. As I pointed out in Burst Damage, introducing an advertising revenue stream would require significant upfront investment, both in terms of technology and talent. OpenAI would need a way to target ads, and a team to sell advertising — or, instead, use a third-party ad network that would take a significant bite out of its revenue.
It’s unclear how much OpenAI could charge advertisers, or what percentage of its reported 200 million weekly users have an ad-blocker installed. Or, for that matter, whether ads would provide a perverse incentive for OpenAI to enshittify an already unreliable product.
Facebook and Google — as I’ve previously noted — have made their products manifestly worse in order to increase the amount of time people spend on their sites, and thus, the number of ads they see. In the case of Facebook, it buried your newsfeed under a deluge of AI-generated sludge and “recommended content.” Google, meanwhile, has progressively degraded the quality of its search results in order to increase the volume of queries it received as a means of making sure users saw more ads.
OpenAI could, just as easily, fall into the same temptation. Most people who use ChatGPT are trying to accomplish a specific task — like writing a term paper, or researching a topic, or whatever — and then they leave. And so, the number of ads they'd conceivably see per visit would be low compared to a social network or search engine. Would OpenAI try to get users to stick around longer — to write more prompts — by crippling the performance of its models?
Even if OpenAI listens to its better angels, the reality still stands: ads won’t dam the rising tide of red ink that promises to eventually drown the company.
This is a truly dismal situation where the only options are to stop now, or continue burning money until the heat gets too much. It cost $100 million to train GPT-4o, and Anthropic CEO Dario Amodei estimated a few months ago that training future models will cost $1 billion to $10 billion, with one researcher claiming that training OpenAI's GPT-5 will cost around $1 billion.
And that's before mentioning any, to quote a Rumsfeldism, "unknown unknowns." Trump's election, at the risk of sounding like a cliché, changes everything, and in ways we don't yet fully understand. According to the Wall Street Journal, Musk has successfully ingratiated himself with Trump, thanks to his early and full-throated support of his campaign. He's now reportedly living at Mar-a-Lago, sitting in on calls with world leaders, and whispering in Trump's ear as he builds his cabinet.
And, as The Journal claims, his enemies fear that he could use his position of influence to harm them or their businesses — chiefly Sam Altman, who is "persona non grata" in Musk's world, largely due to the new for-profit direction of OpenAI. While it's likely that these companies will fail due to inevitable organic realities (like running out of money, or not having a product that generates a profit), Musk's enemies must now contend with an adversary that has the full backing of the Federal government, and that neither forgives nor forgets.
And, crucially, one that’s not afraid to bend ethical or moral laws to further his own interests — or to inflict pain on those perceived as having slighted him.
Even if Musk doesn't use his newfound political might to hurt Altman and OpenAI, he could still pursue the company as a private citizen. Last Friday, he filed an injunction requesting a halt to OpenAI's transformation from an ostensible non-profit to a for-profit business. Even if he ultimately fails, should Musk manage to drag the process out, or delay it temporarily, it could deal a terminal blow to OpenAI.
That's because, in its most recent fundraise, OpenAI agreed that its $6.6 billion equity investment would convert into high-interest debt should it fail to successfully become a for-profit business within two years. This was a tight deadline to begin with, and it can't afford any delays. The interest payments on that debt would massively increase its cash burn, and it would undoubtedly find it hard to obtain further outside investment.
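The back-of-the-envelope math here is ugly. The actual terms of the conversion aren't public, so the interest rate below is invented purely for illustration:

```python
# Back-of-the-envelope: what converting $6.6 billion of equity into
# high-interest debt could add to OpenAI's annual burn. The 10% rate is
# invented for illustration; the real terms of the conversion aren't public.
principal = 6.6e9          # the equity that would convert to debt
hypothetical_rate = 0.10   # assumed "high-interest" coupon

annual_interest = principal * hypothetical_rate
print(f"Added annual interest: ${annual_interest / 1e9:.2f} billion")
# -> Added annual interest: $0.66 billion
```

Against a company already burning $5 billion a year to make $3.4 billion, an extra several hundred million dollars of annual interest, on its own, meaningfully shortens the runway.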
Outside of a miracle, we are about to enter an era of desperation in the generative AI space. We're two years in, and we have no killer apps — no industry-defining products — other than ChatGPT, a product that burns billions of dollars and that nobody can really describe. Neither Microsoft, Meta, Google, nor Amazon seems able to come up with a profitable use case, let alone one their users actually like; nor have any of the startups that have raised billions of dollars in venture capital for anything with "AI" taped to the side. And investor interest in AI is cooling.
It's unclear how much longer this farce can continue, if only because it isn't obvious what anybody gets by investing in future rounds of OpenAI, Anthropic, or any other generative AI company. At some point they must make money, and the entire dream has been built around the idea that all of these GPUs and all of this money would eventually spit out something revolutionary.
Yet what we have is clunky, ugly, messy, larcenous, environmentally-destructive and mediocre. Generative AI was a reckless pursuit, one that shows a total lack of creativity and sense in the minds of big tech and venture capital, one where there was never anything really impressive other than the amount of money it could burn and the number of times Sam Altman could say something stupid and get quoted for it.
I'll be honest with you, I have no idea what happens here. The future was always one that demanded big tech spend more to make even bigger models that would, at some point, become useful, and that isn't happening. In pursuit of that future, big tech invested hundreds of billions of dollars into infrastructure in service of a single goal, and put AI front and center in their businesses, claiming it was the future without ever considering what they'd do if it wasn't.
The revenue isn't coming. The products aren't coming. "Orion," OpenAI's next model, will underwhelm, as will its competitors' models, and at some point somebody at one of the hyperscalers is going to blink, and the AI era will be over. Almost every single generative AI company you've heard of is deeply unprofitable, and there are few innovations coming to save them from the atrophy of the foundation models.
I feel sad and exhausted as I write this, drained as I look at the many times I've tried to warn people, frustrated at the many members of the media who failed to push back against the overpromises and outright lies of people like Sam Altman, and full of dread as I consider the economic ramifications of this industry collapsing. Once the AI bubble pops, there are no other hyper-growth markets left, which will in turn lead to a bloodbath in big tech stocks as investors realize these companies are out of big ideas to convince the Street that they're going to grow forever.
There are some that will boast about "being right" here, and yes, there is some satisfaction in being so. Nevertheless, that satisfaction is tempered by the knowledge that the result of this bubble bursting will be massive layoffs, a dearth of venture capital funding, and a much more fragile tech ecosystem.
I’ll end with a quote from Bubble Trouble, a piece I wrote in April:
How do you solve all of these incredibly difficult problems? What does OpenAI or Anthropic do when they run out of data, and synthetic data doesn't fill the gap, or worse, massively degrades the quality of their output? What does Sam Altman do if GPT-5 — like GPT-4 — doesn't significantly improve its performance and he can't find enough compute to take the next step? What do OpenAI and Anthropic do when they realize they will likely never turn a profit? What does Microsoft, or Amazon, or Google do if demand never really takes off, and they're left with billions of dollars of underutilized data centers? What does Nvidia do if the demand for its chips drops off a cliff as a result?
I don't know why more people aren't screaming from the rooftops about how unsustainable the AI boom is, and the impossibility of some of the challenges it faces. There is no way to create enough data to train these models, and little that we've seen so far suggests that generative AI will make anybody but Nvidia money. We're reaching the point where physics — things like heat and electricity — are getting in the way of progressing much further, and it's hard to stomach investing more considering where we're at right now is, once you cut through the noise, fairly god damn mediocre. There is no iPhone moment coming, I'm afraid.
I was right then and I’m right now. Generative AI isn’t a revolution, it’s an evolution of a tech industry overtaken by growth-hungry management consultant types that neither know the problems that real people face nor how to fix them. It’s a sickening waste, a monument to the corrupting force of growth, and a sign that the people in power no longer work for you, the customer, but for the venture capitalists and the markets.
I also want to be clear that none of these companies ever had a plan. They believed that if they threw enough GPUs together they would turn generative AI — probabilistic models for generating stuff — into some sort of sentient computer. It's much easier, and more comfortable, to look at the world as a series of conspiracies and grand strategies, and far scarier to see it for what it is — extremely rich and powerful people that are willing to bet insanely large amounts of money on what amounts to a few PDFs and their gut.
This is not big tech’s big plan to excuse building more data centers — it’s the death throes of twenty years of growth-at-all-costs thinking, because throwing a bunch of money at more servers and more engineers always seemed to create more growth. In practice, this means that the people in charge and the strategies they employ are borne not of an interest in improving the lives of their customers, but in increasing revenue growth, which means the products they create aren’t really about solving any problem other than “what will make somebody give me more money,” which doesn’t necessarily mean “provide them with a service.”
Generative AI is the perfect monster of the Rot Economy — a technology that lacks any real purpose sold as if it could do literally anything, one without a real business model or killer app, proliferated because big tech no longer innovates, but rather clones and monopolizes. Yes, this much money can be this stupid, and yes, they will burn billions in pursuit of a non-specific dream that involves charging you money and trapping you in their ecosystem.
I’m not trying to be a doomsayer, just like I wasn’t trying to be one in March. I believe all of this is going nowhere, and that at some point Google, Microsoft, or Meta is going to blink and pull back on their capital expenditures. And before then, you’re going to get a lot of desperate stories about how “AI gains can be found outside of training new models” to try and keep the party going, despite reality flicking the lights on and off and threatening to call the police.
I fear for the future for many reasons, but I always have hope, because I believe that there are still good people in the tech industry and that customers are seeing the light. Bluesky feels different — growing rapidly, competing with both Threads and Twitter, all while selling an honest product and an open protocol.
There are other ideas for the future that aren’t borne of the scuzzy mindset of billionaire shitheels like Sundar Pichai and Sam Altman, and they can — and will — grow out of the ruins created by these kleptocrats.
2024-11-13 21:31:11
Soundtrack: Post Pop Depression — Paraguay
I haven't wanted to write much in the last week.
Seemingly every single person on Earth with a blog has tried to drill down into what happened on November 5 — to find the people to blame, to somehow explain what could've been done differently, by whom, and why so many actions led to a result that will overwhelmingly harm women, minorities, immigrants, LGBTQ people, and lower-income workers. It's a terrifying time.
I feel woefully unequipped to respond to the moment. I don't have any real answers. I am not a political analyst, and I would feel disingenuous dissecting the Harris (or Trump) campaigns, because I feel like this has been the Dunning-Kruger Olympics for takes, where pundits compete to rationalize and intellectualize events in an attempt to ward off the very thing that has buried us in red: a shared powerlessness and desperation.
People don't trust authority, and yes, it is ironic that this often leads them toward authoritarian figures.
Legacy media — while oftentimes staffed by people that truly love their readers, care about their beats and write like their lives depend upon it — is weighed down by a hysterical attachment to the imaginary concept of objectivity and “the will of the markets.”
Case in point: Regular people have spent years watching the price of goods increase "due to inflation," despite the fact that the increase in pricing was mostly driven by — get this — corporations raising prices. Yet some parts of the legacy media spent an alarming amount of time chiding their readers for thinking otherwise, even going against their own reporting as a means of providing "balanced" coverage, insisting again and again that the economy is good, contorting to prove that prices aren't higher even as companies boasted about literally raising their prices. In fact, the media spent years debating with itself whether price gouging was happening, despite years of proof that it was.
People don't trust authority, and they especially don't trust the media — especially the legacy media. It probably didn't help that outlets implored readers and viewers to ignore what they saw at the supermarket or at the pump, and the growing hits to their wallets from the daily necessities of life, gaslighting them into believing that everything was fine.
As an aside: I have used the term "legacy media" here repeatedly, but I don't completely intend for it to come across as a pejorative. Despite my criticisms, there are people in the legacy media doing a good job, reporting the truth, and doing the kind of work that matters and illuminates readers. I read — and pay for — several legacy media outlets, and I think the world is a better place for them existing, despite their flaws.
The problem, as I’ll explain, is the editorial industrial complex, and how those writing about the powerful don’t seem to be able to (or want to) interrogate power. This could be an entire piece by itself, but I don’t think the answer to these failings is to simply discard legacy media entirely, but to implore it to do better and to strive for the values of truth-hunting and truth-telling that once defined the Fourth Estate — and can once again.
To simmer this down, the price of everything has kept increasing as wages stagnated. Simultaneously, businesses spent several years telling workers they were asking for too much and doing too little, telling people they were "quiet quitting" in 2022 (a grotesque term that means "doing the job you are paid to do"), and, a year later, insisting that years of remote work had actually been bad because profits didn't reach the unrealistic expectations set by the post-lockdown boom of 2021. While the majority of people don't work remotely, from talking to the people I know outside of tech and business, there is a genuine sense that the media has allied itself with the bosses, and I imagine it's because of the many articles that literally call workers lazy.
Yet, when it comes to the powerful, the criticisms feel so much more guarded. Despite the fact that Elon Musk has spent years telegraphing his intent to use his billions of dollars to wield power equivalent to that of a nation state, too much of the media — both legacy and otherwise — responded slowly, cautiously, failing to call him a liar, a con artist, an aggressor, a manipulator, and a racist. Sure, they reported stories that might make you think that, but the desperation to guard objectivity was (and is) such that there is never any intent to call Musk what he was (and is) — a racist billionaire using his outsized capital to bend society to his will.
The news — at least outside of the right wing media terrordome — is always separated from opinion, always guarded, always safe, for fear that they might piss off somebody and be declared "biased," something that happens anyway. While there are columnists given some space to have their own thoughts in the newspaper, the stories themselves are delivered with the kind of reserved "hmmm..." tone that often fails to express the consequences of the news and lacks the context necessary to make sense of it.
This isn't to say these outlets are incapable of doing this right — The Washington Post has done an excellent job of analysis in tech, for example — but that they are custom-built to be bulldozed by authoritarianism, a force that exists to crush those desperately attached to norms and objectivity. Authoritarians know that their ideologically-charged words will be quoted verbatim, with the occasional "this could mean..." context that's lost in a headline that repeats exactly what they wanted it to.
We rarely explain the structures of our democracy in ways that let people see how to interact with it, which leaves it instead in the hands of special interests who can bankroll their perspectives, even when they’re actively harmful.
...Little of the gravity of what we’re facing makes it into everyday news coverage in a way that would allow us to have real conversations as a country on how to chart a way forward. Instead, each day, we as an industry — to borrow from John Nichols and Robert McChesney’s book Tragedy and Farce — pummel people with facts, but not the context to make sense of them.
Musk is the most brutal example. He turned Twitter into a website pumped full of racism and hatred that helped make Donald Trump president, and he has spent the best part of a decade lying about what Tesla will do next, yet he was still able to get mostly-positive coverage from the majority of the mainstream media. It doesn't matter that these outlets had accompanying coverage suggesting that the markets weren't impressed by Tesla's robotaxi plans, or its Potemkin robots — Musk is still demonstrably able to use the media's desperation for objectivity against them, knowing that they would never dare combine thinking about stuff with reporting on stuff for fear that someone might say they have "bias" in their "coverage."
This is, by the way, not always the fault of the writers. There are entire strata of editors that have more faith in the markets and the powerful than they do in the people who spend their days interrogating them, and above them entire editorial superstructures that exist to make sure the "editorial vision" never colors too far outside the lines. I'm not even talking about Jeff Bezos, or Laurene Powell Jobs, or any number of billionaires who own any number of publications, but the editors editing business and tech reporters who don't know anything about business and tech, or the senior editors terrified of any byline that might get the outlet "under fire" from somebody who could call their boss.
There are, however, also those who simply defer to the powerful — who assume that "this much money can't be wrong," even if said money has been wrong repeatedly, to the point that there's an entire website about it. They are the people that look at the current crop of powerful tech companies, which have failed to deliver any truly meaningful innovation in years, and coo like newborn babes. Look at the coverage of Sam Altman from the last year — you know, the guy who has spent years lying about what artificial intelligence can do — and tell me why every single thought he has must be uncritically cataloged, his every decision applauded, his every claim trumpeted as certain, his brittle company's obvious problems apologized for, and readers reassured of his obvious victory.
Nowhere is this more obvious right now than in The Guardian's nonsensical decision to abandon Twitter, decrying how "X is a toxic media platform and that its owner, Elon Musk, has been able to use its influence to shape political discourse" mere weeks after printing, bereft of context, Elon Musk's ridiculous lies about his robotaxi plans. There is little moral quality to leaving X if your outlet continues to act as a stenographer for its owner, and doing so suggests a lack of any real interest in change or progress, just the paper tiger of norms and values that will only end up depriving people of good journalism.
On the other side of the tracks, Sam Altman is a liar who's been fired from two companies, including OpenAI, and yet because he's a billionaire with a buzzy company, he's left unscathed. The powerful get a completely different set of rules to live by and exist in a totally different media environment — they're geniuses, entrepreneurs and firebrands, their challenges framed as "missteps" and their victories framed as certainties by the same outlets that told us that we were "quiet quitting" and that the economy is actually good and we are the problem. While it's correct to suggest that the right wing is horrendously ideologically biased, it's very hard to look at the rest of the media and claim they're not.
While it might feel a little tangential to bring technology into this, everybody is affected by the growth-at-all-costs Rot Economy, because everybody is using technology, all the time, and the technology in question is getting worse. This election cycle saw more than 25 billion text messages sent to potential voters, and seemingly every website was crammed full of random election advertising.
Our phones are beset with notifications trying to "growth-hack" us into doing things that companies want, our apps full of microtransactions, our websites slower and harder to use, with endless demands for our emails and our phone numbers and the need to log back in because they couldn't possibly lose a dollar to somebody who dared to consume their content for free. Our social networks are so algorithmically charged that they barely show us the things we want them to anymore, with executives dedicated to filling our feeds with AI-generated slop because despite being the customer, we are also the revenue mechanism. Our search engines do less as a means of making us use them more, our dating apps have become vehicles for private equity to add a toll to falling in love, our video games are constantly nagging us to give them more money, and despite it costing money and being attached to our account, we don't actually own any of the streaming media we purchase. We're drowning in spam — both in our emails and on our phones — and at this point in our lives we've probably agreed to 3 million pages' worth of privacy policies allowing companies to use our information as they see fit.
And these are issues that hit everything we do, all the time, constantly, unrelentingly. Technology is our lives now. We wake up, we use our phone, we check our texts (three spam calls, two spam texts), we look at our bank balance (two-factor authentication check), we read the news (a quarter of the page is blocked by an advertisement asking for our email that's deliberately built to hide the button to get rid of it, or a login screen because we got logged out somehow), we check social media (after being shown an ad every two clicks), and then we log onto Slack (and feel a pang of anxiety as 15 different notifications appear).
Modern existence has become engulfed in sludge, the institutions that exist to cut through it bouncing between the ignorance of their masters and a misplaced sense of duty to objectivity, our mechanisms for exploring and enjoying the world interfered with by powerful forces that are too often left unchecked. Opening our devices means willfully subjecting ourselves to attack after attack from applications, websites and devices built to make us do things, rather than letting us operate with the dignity and freedom that much of the internet was founded upon.
These millions of invisible acts of terror are too often left undiscussed, because accepting the truth requires you to accept that most of the tech ecosystem is rotten, and that billions of dollars are made harassing and punishing billions of people every single day of their lives through the devices we're required to use to exist in the modern world. Most users suffer the consequences, most media fails to account for them, and in turn people walk around knowing something is wrong but not knowing who to blame until somebody provides a convenient excuse.
Why wouldn't people crave change? Why wouldn't people be angry? Living in the current world can be absolutely fucking miserable, bereft of industry and filthy with manipulation, an undignified existence, a disrespectful existence that must be crushed if we want to escape the depressing world we've found ourselves in. Our media institutions are fully fucking capable of dealing with these problems, but it starts with actually evaluating them and aggressively interrogating them without fearing accusations of bias that will happen either way.
The truth is that the media is more afraid of bias than it is of misleading its readers. And while that seems like a slippery slope, and may very well be one, there must be room to inject the writer's voice back into their work, and a willingness to call out bad actors as such, no matter how rich they are, no matter how big their products are, and no matter how willing they are to bark and scream that things are unfair as they accumulate more power.
If you're in the tech industry and reading this and saying that "the media is too critical" of tech, you are flat fucking wrong. Everything we're seeing happening right now is a direct result of a society that let technology and the ultra-rich run rampant, free of both the governmental guardrails that might have stopped them and the media ecosystem that might have held them accountable.
Our default position in interrogating the intentions and actions of the tech industry has become that they will "work it out" as they continually redefine "work it out" as "make their products worse but more profitable." Covering Meta, Twitter, Google, OpenAI and other huge tech companies as if the products they make are remarkable and perfect is disrespectful to readers and a disgusting abdication of responsibility, as their products are, even when they're functional, significantly worse, more annoying, more frustrating and more convoluted than ever, and that's before you get to the ones like Facebook and Instagram that are outright broken.
I don't give a shit if these people have "raised a lot of money," unless you use that as proof that something is fundamentally wrong with the tech industry. Meta making billions of dollars of profit is a sign of something wrong with society, not proof that it's a "good company" or anything that should grant Mark Zuckerberg any kind of special treatment. OpenAI being "worth" $157 billion while burning $5 billion or more a year on a product that destroys our environment and has yet to find any real meaning isn't a sign that it should get more coverage or be taken more seriously. Whatever you may feel about ChatGPT, the coverage it receives is outsized compared to its actual utility and the things built on top of it, and that's a direct result of a media industry that seems incapable of holding the powerful accountable.
It's time to accept that most people's digital life fucking sucks, as does the way we consume our information, and that there are people directly responsible. Be as angry as you want at Jeff Bezos, whose wealth (and the inherent cruelty of Amazon’s labor practices, and the growing enshittification of Amazon itself) makes him an obvious target, but don’t forget Mark Zuckerberg, Elon Musk, Sundar Pichai, Tim Cook and every single other tech executive that has allowed our digital experiences to become rotted out husks dominated by algorithms. These companies are not bound by civic duty, or even a duty to their customers — they have made their monopolies, and they’ll do whatever keeps you trapped in them. If they want me to think otherwise, they should prove it, and the media should stop trying to prove it for them.
Similarly, governments have entirely failed to push through any legislation that might stymie the rot, both in terms of the dominance (and opaqueness) of algorithmic manipulation and the ways in which tech products exist with few real quality standards. We may have (at least for now) consumer standards for the majority of consumer goods, but software is left effectively untouched, which is why so much of our digital lives is such unfettered dogshit.
And if you're reading this and saying I'm being a hater or pessimist, shut the fuck up. I'm so fucking tired of being told to calm down about this as we stare down the barrel of four years of authoritarianism built on top of the decay of our lives (both physical and digital), with a media ecosystem that doesn't do a great job of explaining what's being done to people in an ideologically consistent way. I'm angry, and I don't know why you're not. Explain it to me. Email me. Explain yourself, explain why you do not see the state of our digital lives as one of outright decay and rot, one that robs users of dignity and industry, one that actively harms billions of people in pursuit of greed.
There is an extremely-common assumption in the tech media — based on what, I'm not sure — that these companies are all doing a good job, and that "good job" means having lots of users and making lots of money, and it drives editorial decision-making.
If three-quarters of the biggest car manufacturers were making record profits by making half of their cars with a brake that sometimes doesn't work, it'd be international news, leading to government inquiries and people being put in prison. This isn’t conjecture. After Volkswagen was caught deliberately programming its engines to only meet emissions standards during laboratory testing and certification, lawmakers around the globe responded with civil and criminal action. The executives and engineers responsible were indicted, with one receiving seven years in jail. Its former CEO is currently being tried in Germany, and has been indicted in the US.
And yet so much of the tech industry — consumer software like Google, Facebook, Twitter, and even ChatGPT, and business software from companies like Microsoft and Slack — outright sucks, yet gets covered as if that's just "how things are." Meta, by the admission of its own internal documents, makes products that are ruinous to the mental health of teenage girls. And it hasn't made any substantial changes, nor has it received any significant pushback for failing to do so. It exercises the same reckless disregard for public safety as the auto industry did in the 1960s, when Ralph Nader wrote "Unsafe At Any Speed."
Nader's book actually brought about change. It led to the creation of the Department of Transportation, the passage of seat belt laws in 49 states, and a bunch of other things that get overlooked (possibly because he also led to eight years of George W. Bush as president). But the tech industry is somehow inoculated against any kind of public pressure or shame, because it operates by a completely different rule book, a different set of criteria for success, and a different set of expectations. By allowing the market to become disconnected from the value it creates, we enable companies like NVIDIA to reduce the quality of their services as they make more money, or Facebook to destroy our political discourse or facilitate a genocide in Myanmar, and then celebrate them because, well, they made more money. No, really, that's quite literally what now-CTO Andrew Bosworth said in an internal memo from 2016, where he said that "all the work [Facebook does] in growth is justified," even if that includes — and I am quoting him directly — "somebody dying in a terrorist attack coordinated [using Facebook's tools.]"
The mere mention of violent crime is enough to create reams of articles questioning whether society is safe, yet our digital lives are a wasteland that many still discuss like a utopia. Seriously, putting aside the social networks, have you visited a website on a phone recently? Have you tried to use an app? Have you tried to buy something online starting with a Google Search? Within those experiences, has anything gone wrong? I know it has! You know it has! It's time to wake up!
We — users of products — are at war with the products we’re using and the people that make them. And right now, we’re losing.
The media must realign to fight for how things should be. This doesn't mean that they can't cover things positively, or give credit where credit is due, or be willing to accept what something could be, but what has to change is the evaluation of the products themselves, which have been allowed to decay to a level that has become at best annoying and at worst actively harmful to society.
Our networks are rotten, our information ecosystem poisoned with its pure parts ideologically and strategically concussed, our means of speaking to those we love and making new connections so constantly interfered-with that personal choice and dignity is all but removed.
But there is hope. Those covering the tech industry have one of the most consequential jobs in journalism, if they choose to heed the call. Those willing to guide people through the wasteland — those willing to discuss what needs to change, how bad things have gotten, and what good might look like — have the opportunity to push for a better future by spitting in the faces of those ruining it.
I don’t know where I sit, what title to give myself, if I am legacy (I got my start writing for a print magazine) or independent or an “influencer” or a “content creator,” and I’m not sure I care. All I know is that I feel like I am at war, and we — if I can be considered part of the media — are at war with people that have changed the terms of innovation so that it’s synonymous with value extraction. Technology is how I became a person, how I met my closest friends and loved ones, and without it I would not be able to write, let alone be able to write this newsletter, and I feel poison flow through my veins as I see what these motherfuckers have done and what they will continue to do if they’re not consistently and vigorously interrogated.
Now is the time to talk bluntly about what’s happening. The declining quality of these products, the scourge of growth-hacking, the cancerous growth-at-all-costs mindset, these are all things that need to be raised in every single piece, and judgments must be unrelenting. The companies will squeal that they are being unfairly treated by “biased legacy media,” something which (as I’ve said repeatedly) is already happening.
These companies are poisoning the digital world, and they must be held accountable for the damage they are causing. Readers are already aware, but are — with the help of some members of the media — gaslighting themselves into believing that they “just don’t get it,” when the thing they don’t get is that the tech industry has built legions of obfuscations, legal tricks, and horrifying user interface traps with the intention of making the customer believe they’re the problem.
Things can change, but it has to start with the information sources, and that starts with journalism. The work has already begun, and will continue, but must scale up, and do so quickly.
And you, the user, have power too. Learn to read a privacy policy (yes, there are plenty of people in the tech media who give a shit; the Post has several of them, Bezos be damned). Move to Signal, an encrypted messaging app that works on just about everything. Get a service like DeleteMe to remove yourself from data brokers (I pay for it; I worked for them about four years ago; I have no financial relationship with them beyond being a paying customer). Molly White, a wonderful friend and even better writer, has written an extremely long guide about what to do next, and it runs through a ton of great things you can do — unionization, finding your communities, dropping apps that collect and store sensitive data, and so on. I also recommend WIRED's guide to protecting yourself from government surveillance.
I'll leave you with a thought I posted on the Better Offline Reddit on November 6.
The last 24 hours things have felt bleak, and will likely feel more bleak as the months and years go on. It will be easy to give into doom, to assume the fight is lost, to assume that the bad guys have permanently won and there will never be the justice or joy we deserve.
Now is the time for solidarity, to crystallize around the ideas that matter, even if their position in society is delayed, even as the clouds darken and the storms brew and the darkness feels all-encompassing and suffocating. Reach out to those you love, and don't just commiserate — plan. It doesn't have to be political. It doesn't even really have to matter. Put shit on your fucking calendar, keep yourself active, and busy, and if not distracted, at the very least animated. Darkness feasts on idleness. Darkness feasts on a sense of failure, and a sense of inability to make change.
You don't know me well, but know that I am aware of the darkness, and the sadness, and the suffocation of when things feel overwhelming. Give yourself mercy today, and in the days to come, and don't castigate yourself for feeling gutted.
Then keep going. I realize it's little solace to think "well if I keep saying stuff out loud things will get better," but I promise you doing so has an effect, and actually matters. Keep talking about how fucked things are. Make sure it's written down. Make sure it's spoken cleanly, and with rage and fire and piss and vinegar. Things will change for the better, even if it takes more time than it should.