
Lost In The Future

2024-11-13 21:31:11

Soundtrack: Post Pop Depression - Paraguay


I haven't wanted to write much in the last week.

Seemingly every single person on Earth with a blog has tried to drill down into what happened on November 5 — to find the people to blame, to somehow explain what could've been done differently, by whom, and why so many actions led to a result that will overwhelmingly harm women, minorities, immigrants, LGBTQ people, and lower-income workers. It's a terrifying time.

I feel woefully unequipped to respond to the moment. I don't have any real answers. I am not a political analyst, and I would feel disingenuous dissecting the Harris (or Trump) campaigns, because I feel like this has been the Dunning-Kruger Olympics for takes, where pundits compete to rationalize and intellectualize events in an attempt to ward off the very thing that has buried us in red: a shared powerlessness and desperation.

People don't trust authority, and yes, it is ironic that this often leads them toward authoritarian figures.

Legacy media — while oftentimes staffed by people that truly love their readers, care about their beats and write like their lives depend upon it — is weighed down by a hysterical attachment to the imaginary concept of objectivity and “the will of the markets.” 

Case in point: Regular people have spent years watching the price of goods increase "due to inflation," despite the fact that the increase in pricing was mostly driven by — get this — corporations raising prices. Yet some parts of the legacy media spent an alarming amount of time chiding their readers for thinking otherwise, even going against their own reporting as a means of providing "balanced" coverage, insisting again and again that the economy is good, contorting to prove that prices aren't higher even as companies boasted about literally raising their prices. In fact, the media spent years debating with itself whether price gouging was happening, despite years of proof that it was.

People don’t trust authority, and they especially don’t trust the media — the legacy media above all. It probably didn’t help that they implored readers and viewers to ignore what they saw at the supermarket or at the pump, and the growing hits to their wallets from the daily necessities of life, gaslighting them into believing that everything was fine.

As an aside: I have used the term “legacy media” here repeatedly, but I don’t completely intend for it to come across as a pejorative. Despite my criticisms, there are people in the legacy media doing a good job, reporting the truth, doing the kinds of work that matters and illuminates readers. I read — and pay for — several legacy media outlets, and I think the world is a better place for them existing, despite their flaws. 

The problem, as I’ll explain, is the editorial industrial complex, and how those writing about the powerful don’t seem to be able to (or want to) interrogate power. This could be an entire piece by itself, but I don’t think the answer to these failings is to simply discard legacy media entirely, but to implore it to do better and to strive for the values of truth-hunting and truth-telling that once defined the Fourth Estate — and can once again. 

To simmer this down, the price of everything has kept increasing as wages stagnated. Simultaneously, businesses spent several years telling workers they were asking for too much and doing too little, telling people they were “quiet quitting” in 2022 (a grotesque term that means “doing the job you are paid to do”), and, a year later, insisting that years of remote work was actually bad because profits didn’t reach the unrealistic expectations set by the  post-lockdown boom of 2021. While the majority of people don't work remotely, from talking to the people I know outside of tech or business, there is a genuine sense that the media has allied itself with the bosses, and I imagine it's because of the many articles that literally call workers lazy.

Yet, when it comes to the powerful, the criticisms feel so much more guarded. Despite the fact that Elon Musk has spent years telegraphing his intent to use his billions of dollars to wield power equivalent to that of a nation state, too much of the media — both legacy and otherwise — responded slowly, cautiously, failing to call him a liar, a con artist, an aggressor, a manipulator, and a racist. Sure, they reported stories that might make you think that, but the desperation to guard objectivity was (and is) such that there is never any intent to call Musk what he was (and is) — a racist billionaire using his outsized capital to bend society to his will.

The news — at least outside of the right wing media terrordome — is always separated from opinion, always guarded, always safe, for fear that they might piss off somebody and be declared "biased," something that happens anyway. While there are columnists that are given some space to have their own thoughts in the newspaper, the stories themselves are delivered with the kind of reserved "hmmm..." tone that often fails to express the consequences of the news and lacks the context necessary to make sense of it.

This isn't to say these outlets are incapable of doing this right — The Washington Post has done an excellent job of analysis in tech, for example — but that they are custom-built to be bulldozed by authoritarianism, a force that exists to crush those desperately attached to norms and objectivity. Authoritarians know that their ideologically-charged words will be quoted verbatim with the occasional "this could mean..." context that's lost in a headline that repeats exactly what they wanted it to.

Kendra Pierre-Louis, a senior climate reporter with Gimlet's How To Save A Planet, put it well in a piece on NiemanLab:

We rarely explain the structures of our democracy in ways that let people see how to interact with it, which leaves it instead in the hands of special interests who can bankroll their perspectives, even when they’re actively harmful.

...Little of the gravity of what we’re facing makes it into everyday news coverage in a way that would allow us to have real conversations as a country on how to chart a way forward. Instead, each day, we as an industry — to borrow from John Nichols and Robert McChesney’s book Tragedy and Farce — pummel people with facts, but not the context to make sense of them.

Musk is the most brutal example. Despite turning Twitter into a website pumped full of racism and hatred that helped make Donald Trump president, Musk was still able to get mostly-positive coverage from the majority of the mainstream media, even though he has spent the best part of a decade lying about what Tesla will do next. It doesn't matter that these outlets had accompanying coverage that suggested that the markets weren't impressed by its robotaxi plans, or its Potemkin robots — Musk is still demonstrably able to use the media's desperation for objectivity against them, knowing that they would never dare combine thinking about stuff with reporting on stuff for fear that someone might say they have "bias" in their "coverage."

This is, by the way, not always the fault of the writers. There are entire foundations of editors that have more faith in the markets and the powerful than they do in the people who spend their days interrogating them, and above them entire editorial superstructures that exist to make sure that the "editorial vision" never colors too far outside the lines. I'm not even talking about Jeff Bezos, or Laurene Powell Jobs, or any number of billionaires who own any number of publications, but the editors editing business and tech reporters who don't know anything about business and tech, or the senior editors that are terrified of any byline that might dare get the outlet "under fire" from somebody who could call their boss.

There are, however, also those who simply defer to the powerful — who assume that "this much money can't be wrong," even if said money has been wrong repeatedly to the point that there's an entire website about it. They are the people that look at the current crop of powerful tech companies that have failed to deliver any truly meaningful innovation in years and coo like newborn babes. Look at the coverage of Sam Altman from the last year — you know, the guy who has spent years lying about what artificial intelligence can do — and tell me why every single thought he has must be uncritically cataloged, his every decision applauded, his every claim trumpeted as certain, his brittle company's obvious problems apologized for and readers reassured of his inevitable victory.

Nowhere is this more obvious right now than in The Guardian's nonsensical decision to abandon Twitter, decrying how "X is a toxic media platform and that its owner, Elon Musk, has been able to use its influence to shape political discourse" mere weeks after printing, bereft of context, Elon Musk's ridiculous lies about his plans for cybertaxis. There is little moral quality to leaving X if your outlet continues to act as a stenographer for its leader, and this in fact suggests a lack of any real interest in change or progress, just the paper tiger of norms and values that will only end up depriving people of good journalism.

On the other side of the tracks, Sam Altman is a liar who's been fired from two companies, including OpenAI, and yet because he's a billionaire with a buzzy company, he's left unscathed. The powerful get a completely different set of rules to live by and exist in a totally different media environment — they're geniuses, entrepreneurs and firebrands, their challenges framed as "missteps" and their victories framed as certainties by the same outlets that told us that we were "quiet quitting" and that the economy is actually good and we are the problem. While it's correct to suggest that the right wing is horrendously ideologically biased, it's very hard to look at the rest of the media and claim they're not.


While it might feel a little tangential to bring technology into this, everybody is affected by the growth-at-all-costs Rot Economy, because everybody is using technology, all the time, and the technology in question is getting worse. This election cycle saw more than 25 billion text messages sent to potential voters, and seemingly every website was crammed full of random election advertising.

Our phones are beset with notifications trying to "growth-hack" us into doing things that companies want, our apps full of microtransactions, our websites slower and harder-to-use, with endless demands for our email addresses and our phone numbers and the need to log back in because they couldn't possibly lose a dollar to somebody who dared to consume their content for free. Our social networks are so algorithmically charged that they barely show us the things we want them to anymore, with executives dedicated to filling our feeds with AI-generated slop because despite being the customer, we are also the revenue mechanism. Our search engines do less as a means of making us use them more, our dating apps have become vehicles for private equity to add a toll to falling in love, our video games are constantly nagging us to give them more money, and despite it costing money and being attached to our account, we don't actually own any of the streaming media we purchase. We're drowning in spam — both in our emails and on our phones — and at this point in our lives we've probably agreed to 3 million pages' worth of privacy policies allowing companies to use our information as they see fit.

And these are issues that hit everything we do, all the time, constantly, unrelentingly. Technology is our lives now. We wake up, we use our phone, we check our texts (three spam calls, two spam texts), we look at our bank balance (two-factor authentication check), we read the news (a quarter of the page is blocked by an advertisement asking for our email that's deliberately built to hide the button to get rid of it, or a login screen because we got logged out somehow), we check social media (after being shown an ad every two clicks), and then we log onto Slack (and feel a pang of anxiety as 15 different notifications appear). 

Modern existence has become engulfed in sludge, the institutions that exist to cut through it bouncing between the ignorance of their masters and a misplaced duty to objectivity, our mechanisms for exploring and enjoying the world interfered with by powerful forces that are too-often left unchecked. Opening our devices means willfully subjecting ourselves to attack after attack from applications, websites and devices that are built to make us do things rather than let us operate with the dignity and freedom that much of the internet was founded upon.

These millions of invisible acts of terror are too-often left undiscussed, because accepting the truth requires you to accept that most of the tech ecosystem is rotten, and that billions of dollars are made harassing and punishing billions of people every single day of their lives through the devices that we’re required to use to exist in the modern world. Most users suffer the consequences, and most media fails to account for them, and in turn people walk around knowing something is wrong but not knowing who to blame until somebody provides a convenient excuse.

Why wouldn't people crave change? Why wouldn't people be angry? Living in the current world can be absolutely fucking miserable, bereft of industry and filthy with manipulation, an undignified existence, a disrespectful existence that must be crushed if we want to escape the depressing world we've found ourselves in. Our media institutions are fully fucking capable of dealing with these problems, but it starts with actually evaluating them and aggressively interrogating them without fearing accusations of bias that will happen either way.

The truth is that the media is more afraid of bias than it is of misleading its readers. And while that seems like a slippery slope, and may very well be one, there must be room to inject the writer’s voice back into their work, and a willingness to call out bad actors as such, no matter how rich they are, no matter how big their products are, and no matter how willing they are to bark and scream that things are unfair as they accumulate more power.

If you're in the tech industry and reading this and saying that "the media is too critical" of tech, you are flat fucking wrong. Everything we're seeing happening right now is a direct result of a society that let technology and the ultra-rich run rampant, free of both the governmental guardrails that might have stopped them and the media ecosystem that might have held them accountable.

Our default position in interrogating the intentions and actions of the tech industry has become that they will "work it out" as they continually redefine "work it out" as "make their products worse but more profitable." Covering Meta, Twitter, Google, OpenAI and other huge tech companies as if the products they make are remarkable and perfect is disrespectful to readers and a disgusting abdication of responsibility, as their products are, even when they're functional, significantly worse, more annoying, more frustrating and more convoluted than ever, and that's before you get to the ones like Facebook and Instagram that are outright broken.

I don't give a shit if these people have "raised a lot of money," unless you use that as proof that something is fundamentally wrong with the tech industry. Meta making billions of dollars of profit is a sign of something wrong with society, not proof that it’s a "good company" or anything that should grant Mark Zuckerberg any kind of special treatment. OpenAI being "worth" $157 billion while burning $5 billion or more a year on a product that destroys our environment and has yet to find any real meaning isn't a sign that it should get more coverage or be taken more seriously. Whatever you may feel about ChatGPT, the coverage it receives is outsized compared to its actual utility and the things built on top of it, and that's a direct result of a media industry that seems incapable of holding the powerful accountable.

It's time to accept that most people's digital life fucking sucks, as does the way we consume our information, and that there are people directly responsible. Be as angry as you want at Jeff Bezos, whose wealth (and the inherent cruelty of Amazon’s labor practices, and the growing enshittification of Amazon itself) makes him an obvious target, but don’t forget Mark Zuckerberg, Elon Musk, Sundar Pichai, Tim Cook and every single other tech executive that has allowed our digital experiences to become rotted out husks dominated by algorithms. These companies are not bound by civic duty, or even a duty to their customers — they have made their monopolies, and they’ll do whatever keeps you trapped in them. If they want me to think otherwise, they should prove it, and the media should stop trying to prove it for them.

Similarly, governments have entirely failed to push through any legislation that might stymie the rot, both in terms of the dominance (and opaqueness) of algorithmic manipulation and the ways in which tech products exist with few real quality standards. We may have (at least for now) consumer standards for the majority of consumer goods, but software is left effectively untouched, which is why so much of our digital lives is such unfettered dogshit.

And if you're reading this and saying I'm being a hater or pessimist, shut the fuck up. I'm so fucking tired of being told to calm down about this as we stare down the barrel of four years of authoritarianism built on top of the decay of our lives (both physical and digital), with a media ecosystem that doesn't do a great job of explaining what's being done to people in an ideologically consistent way. I'm angry, and I don't know why you're not. Explain it to me. Email me. Explain yourself, explain why you do not see the state of our digital lives as one of outright decay and rot, one that robs users of dignity and industry, one that actively harms billions of people in pursuit of greed.


There is an extremely-common assumption in the tech media — based on what, I'm not sure — that these companies are all doing a good job, and that "good job" means having lots of users and making lots of money, and it drives editorial decision-making. 

If three-quarters of the biggest car manufacturers were making record profits by making half of their cars with a brake that sometimes doesn't work, it'd be international news, leading to government inquiries and people being put in prison. This isn’t conjecture. After Volkswagen was caught deliberately programming its engines to only meet emissions standards during laboratory testing and certification, lawmakers around the globe responded with civil and criminal action. The executives and engineers responsible were indicted, with one receiving seven years in jail. Its former CEO is currently being tried in Germany, and has been indicted in the US. 

And yet so much of the tech industry — consumer software like Google, Facebook, Twitter, and even ChatGPT, and business software from companies like Microsoft and Slack — outright sucks, and gets covered as if that's just "how things are." Meta, by the admission of its own internal documents, makes products that are ruinous to the mental health of teenage girls. And it hasn’t made any substantial changes. Nor has it received any significant pushback for failing to do so. It exercises the same reckless disregard for public safety as the auto industry did in the 1960s, when Ralph Nader wrote “Unsafe at Any Speed.”

Nader’s book actually brought about change. It led to the creation of the Department of Transportation, the passage of seat belt laws in 49 states, and a bunch of other things that get overlooked (possibly because his presidential run led to eight years of George W. Bush as president). But the tech industry is somehow inoculated against any kind of public pressure or shame, because it operates by a completely different rule book, a different set of criteria for success, and a different set of expectations. By allowing the market to become disconnected from the value it creates, we enable companies like NVIDIA to reduce the quality of their services as they make more money, or Facebook to destroy our political discourse and facilitate a genocide in Myanmar, and then celebrate them because, well, they made more money. No, really, that’s quite literally what now-CTO Andrew Bosworth said in an internal memo from 2016, where he said that “all the work [Facebook does] in growth is justified,” even if that includes — and I am quoting him directly — “somebody dying in a terrorist attack coordinated [using Facebook’s tools].”

The mere mention of violent crime is enough to create reams of articles questioning whether society is safe, yet our digital lives are a wasteland that many still discuss like a utopia. Seriously, putting aside the social networks, have you visited a website on a phone recently? Have you tried to use an app? Have you tried to buy something online starting with a Google Search? Within those experiences, has anything gone wrong? I know it has! You know it has! It's time to wake up!

We — users of products — are at war with the products we’re using and the people that make them. And right now, we’re losing. 

The media must realign to fight for how things should be. This doesn't mean that they can't cover things positively, or give credit where credit is due, or be willing to accept what something could be, but what has to change is the evaluation of the products themselves, which have been allowed to decay to a level that has become at best annoying and at worst actively harmful to society. 

Our networks are rotten, our information ecosystem poisoned with its pure parts ideologically and strategically concussed, our means of speaking to those we love and making new connections so constantly interfered-with that personal choice and dignity are all but removed.

But there is hope. Those covering the tech industry have one of the most consequential jobs in journalism, if they choose to heed the call. Those willing to guide people through the wasteland — those willing to discuss what needs to change, how bad things have gotten, and what good might look like — have the opportunity to push for a better future by spitting in the faces of those ruining it. 

I don’t know where I sit, what title to give myself, if I am legacy (I got my start writing for a print magazine) or independent or an “influencer” or a “content creator,” and I’m not sure I care. All I know is that I feel like I am at war, and we — if I can be considered part of the media — are at war with people that have changed the terms of innovation so that it’s synonymous with value extraction. Technology is how I became a person, how I met my closest friends and loved ones, and without it I would not be able to write, let alone be able to write this newsletter, and I feel poison flow through my veins as I see what these motherfuckers have done and what they will continue to do if they’re not consistently and vigorously interrogated. 

Now is the time to talk bluntly about what’s happening. The declining quality of these products, the scourge of growth-hacking, the cancerous growth-at-all-costs mindset, these are all things that need to be raised in every single piece, and judgments must be unrelenting. The companies will squeal that they are being unfairly treated by “biased legacy media,” something which (as I’ve said repeatedly) is already happening.

These companies are poisoning the digital world, and they must be held accountable for the damage they are causing. Readers are already aware, but are — with the help of some members of the media — gaslighting themselves into believing that they “just don’t get it,” when the thing they don’t get is that the tech industry has built legions of obfuscations, legal tricks, and horrifying user interface traps with the intention of making the customer believe they’re the problem.

Things can change, but it has to start with the information sources, and that starts with journalism. The work has already begun, and will continue, but must scale up, and do so quickly.

And you, the user, have power too. Learn to read a privacy policy (yes, there are plenty of people in the tech media who give a shit, the Post has several of them, Bezos be damned). Move to Signal, an encrypted messaging app that works on just about everything. Get a service like DeleteMe (I pay for it, I worked for them like 4 years ago, I have no financial relationship with them) to remove yourself from data brokers. Molly White, a wonderful friend and even better writer, has written an extremely long guide about what to do next, and it runs through a ton of great things you can do — unionization, finding your communities, dropping apps that collect and store sensitive data, and so on. I also recommend WIRED’s guide to protecting yourself from government surveillance.

I'll leave you with a thought I posted on the Better Offline Reddit on November 6.

The last 24 hours things have felt bleak, and will likely feel more bleak as the months and years go on. It will be easy to give into doom, to assume the fight is lost, to assume that the bad guys have permanently won and there will never be the justice or joy we deserve.

Now is the time for solidarity, to crystalize around the ideas that matter, even if their position in society is delayed, even as the clouds darken and the storms brew and the darkness feels all-encompassing and suffocating. Reach out to those you love, and don't just commiserate - plan. It doesn't have to be political. It doesn't even really have to matter. Put shit on your fucking calendar, keep yourself active, and busy, and if not distracted, at the very least animated. Darkness feasts on idleness. Darkness feasts on a sense of failure, and a sense of inability to make change.

You don't know me well, but know that I am aware of the darkness, and the sadness, and the suffocation of when things feel overwhelming. Give yourself mercy today, and in the days to come, and don't castigate yourself for feeling gutted.

Then keep going. I realize it's little solace to think "well if I keep saying stuff out loud things will get better," but I promise you doing so has an effect, and actually matters. Keep talking about how fucked things are. Make sure it's written down. Make sure it's spoken cleanly, and with rage and fire and piss and vinegar. Things will change for the better, even if it takes more time than it should.

The Cult of Microsoft

2024-11-02 03:29:04

Soundtrack: EL-P - Flyentology

At the core of Microsoft, a three-trillion-dollar hardware and software company, lies a kind of social poison — an ill-defined, cult-like pseudo-scientific concept called "The Growth Mindset" that drives company decision-making in everything from how products are sold, to how your on-the-job performance is judged.

I am not speaking in hyperbole. Based on a review of over a hundred pages of internal documents and conversations with multiple current and former Microsoft employees, I have learned that Microsoft — at the direction of CEO Satya Nadella — has oriented its entire culture around the innocuous-sounding (but, as we’ll get to later, deeply troubling) Growth Mindset concept, and has taken extraordinary measures to institute it across the organization.

One's "growth mindset" determines one’s success in the organization. Broadly speaking, it includes attributes that we can all agree are good things. People with growth mindsets are willing to learn, accept responsibility, and strive to overcome adversity. Conversely, those considered to have a "fixed mindset" are framed as irresponsible, selfish, and quick to blame others. They believe that one’s aptitudes (like their skill in a particular thing, or their intelligence) are immutable and cannot be improved through hard work.

On the face of things, this sounds uncontroversial. The kind of nebulous pop-science that a CEO might pick up at a leadership seminar. But, from the conversations I’ve held and the internal documents I’ve read, it’s clear that the original (and shaky) scientific underpinnings of mindset theory have devolved into an uglier, nastier beast at Redmond. 

The "growth mindset" is Microsoft's cult — a vaguely-defined, scientifically-questionable, abusively-wielded workplace culture monstrosity, peddled by a Chief Executive obsessed with framing himself as a messianic figure with divine knowledge of how businesses should work. Nadella even launched his own Bible — Hit Refresh — in 2017, which he claims has "recommendations presented as algorithms from a principled, deliberative leader searching for improvement." 

I’ve used the terms “messianic,” “Bible,” and “divine” for a reason. This book — and the ideas within — have taken on an almost religious significance within Microsoft, to the point where it’s actually weird.

Like any messianic tale, the book is centered around the theme of redemption, with the subtitle mentioning a “quest to rediscover Microsoft’s soul.” Although presented and packaged like any bland business book that you’d find in an airport Hudson News and half-read on a red eye to nowhere, its religious framing extends to a separation of dark and enlightened ages. The dark age — Steve “Developers” Ballmer’s Microsoft, stagnant and missing winnable opportunities like mobile — is contrasted against a brave, bright new era where a newly-assertive Redmond pushes frontiers in places like AI.

Hit Refresh became a New York Times bestseller likely due to the fact that Microsoft employees were instructed (based on an internal presentation I’ve reviewed) to "facilitate book discussions with customers or partners" using talking points provided by the company around subjects like culture, trust, artificial intelligence, and mixed reality.

Side note: Hey, didn’t Microsoft lay off a bunch of people from its mixed reality team earlier this year?

Nadella, desperate to hit the bestseller list and frame himself as some kind of guru, attempted to weaponize tens of thousands of Microsoft employees as his personal propagandists, instructing them to do things like...

Use these questions to facilitate a book discussion with your customers or partners if they are interested in exploring the ideas around leadership, culture and technology in Hit Refresh...

Reflect on each of the three passages about lessons learned from cricket and discuss how they could apply in your current team. (pages 38-40)

"...compete vigorously and with passion in the face of uncertainty and intimidation" (page 38)

"...the importance of putting your team first, ahead of your personal statistics and recognition" (page 39)

"One brilliant character who does not put team first can destroy the entire team" (page 39)

Nadella's campaign was hugely successful, leading to years of fawning press about him bringing a "growth mindset" to Microsoft and turning employees from "know-it-alls" into "learn-it-alls." Nadella is hailed as "embodying a growth mindset," with claims that he "pushes people to think of themselves as students as part of how he changed things," the kind of thing that sounds really good but is difficult to quantify.

This is, it turns out, a continual problem with the Growth Mindset itself.


If you're wondering why I'm digging into this so deeply, it's because — and I hate to repeat myself — the Growth Mindset is at the very, very core of Microsoft's culture. It’s both a tool for propaganda and a religion. And it is, in my opinion, a flimsily-founded kind of grift-psychology, one that is deeply irresponsible to implement at scale.

In the late 1980s, American psychologist Carol Dweck started researching how mindsets — or, how a person perceives a challenge, or their own innate attributes — can influence outcomes in things like work and school. Over the following decades, she further refined and defined her ideas, coining the terms “growth mindset” and “fixed mindset” in 2012, a mere two years before Nadella took over at Microsoft. These can be explained as follows: 

  • A "fixed" mindset, where one believes that intelligence and skills are innate, and cannot be significantly changed or improved upon.
    • To quote Microsoft's training materials, "A fixed mindset is an assumption that character, intelligence, and creative ability are static givens that can't be altered."
  • A "growth" mindset, where one believes that intelligence and abilities can be improved with enough effort.
    • To quote Microsoft's training materials, "A growth mindset is the belief that abilities and intelligence can be developed through perseverance and hard work."

Mindset theory itself is incredibly controversial for a number of reasons, chief among them that nobody seems to be able to reliably replicate the results of Dweck's academic work. For the most part, research into mindset theory has been focused on children, with the idea that if we believe we can learn more we can learn more, and that by simply thinking and trying harder, anything is possible.

One of the weird tropes of mindset theory is that praise for intelligence is bad. Dweck herself said in an interview in 2016 that it's better to tell a kid that they worked really hard or put in a lot of effort rather than telling them they're smart, to "teach them they can grow their skills in that way." 

Another is that you should say "not yet" instead of "no," as that teaches you that anything is possible, as Dweck believes that kids are "condition[ed] to show that they have talents and abilities all the time...[and that we should show them] that the road to their success is learning how to think through problems [and] bounce back from failures."

All of this is the kind of Vaynerchuckian flim-flam that you'd expect from a YouTube con artist rather than a professional psychologist, and one would think that it'd be a bad idea to talk about it if it wasn't scientifically proven — let alone shape the corporate culture of a three-trillion-dollar business around it. 

The problem, however, is that things like "mindset theory" are often peddled with little regard for whether they're true or not, pushing concepts that make the reader feel smart because they sort of make sense. After all, being open to the idea that we can do anything is good, right? Surely having a positive and open mind would lead to better outcomes, right?

Sort of, but not really. 

A study out of the University of Edinburgh from early 2017 found that mindset didn't really factor into a child's outcomes (emphasis mine).

Mindset theory states that children’s ability and school grades depend heavily on whether they believe basic ability is malleable and that praise for intelligence dramatically lowers cognitive performance. Here we test these predictions in 3 studies totalling 624 individually tested 10-12-year-olds.

Praise for intelligence failed to harm cognitive performance and children’s mindsets had no relationship to their IQ or school grades. Finally, believing ability to be malleable was not linked to improvement of grades across the year. We find no support for the idea that fixed beliefs about basic ability are harmful, or that implicit theories of intelligence play any significant role in development of cognitive ability, response to challenge, or educational attainment.

...Fixed beliefs about basic ability appear to be unrelated to ability, and we found no support for mindset-effects on cognitive ability, response to challenge, or educational progress

The problem, it seems, is that Dweck's work falls apart the second that Dweck isn't involved in the study itself.

In a September 2016 study by Education Week's Research Center, 72% of teachers said the Growth Mindset wasn’t effective at fostering high standardized test scores. Another study (highlighted in this great article from Melinda Wenner Moyer), run by Case Western Reserve University psychologist Brooke MacNamara and Georgia Tech psychologist Alexander Burgoyne and published in Psychological Bulletin, said that “the apparent effects of growth mindset interventions on academic achievement are likely attributable to inadequate study design, reporting flaws, and bias.” 

In other words, the evidence that supports the efficacy of mindset theory is unreliable, and there’s no proof that this actually improves educational outcomes. To quote Wenner Moyer:

Dr. MacNamara and her colleagues found in their analysis that when study authors had a financial incentive to report positive effects — because, say, they had written books on the topic or got speaker fees for talks that promoted growth mindset — those studies were more than two and a half times as likely to report significant effects compared with studies in which authors had no financial incentives.

Wenner Moyer's piece is a balanced rundown of the chaotic world of mindset theory, counterbalanced with a few studies where there were positive outcomes, and focuses heavily on one of the biggest problems in the field — the fact that most of the research is meta-analyses of other people's data. Again, from Wenner Moyer: 

For you data geeks out there, I’ll note that this growth mindset controversy is a microcosm of a much broader controversy in the research world relating to meta-analysis best practices. Some researchers think that it’s best to lump data together and look for average effects, while others, like Dr. Tipton, don’t. “There's often a real focus on the effect of an intervention, as if there's only one effect for everyone,” she said. She argued to me that it’s better try to figure out “what works for whom under what conditions.” Still, I’d argue there can be value to understanding average effects for interventions that might be broadly used on big, heterogeneous groups, too.

The problem, it seems, is that a "growth mindset" is hard to define, the methods of measuring someone's growth (or fixed) mindset are varied, and the effects of each form of implementation are also hard to evaluate or quantify. It’s also the case that, as Dweck’s theory has grown, it’s strayed away from the scientific fundamentals of falsifiability and testability. 

Case in point: In 2016, Carol Dweck introduced the concept of a “false growth mindset.” This is where someone outwardly professes a belief in mindset theory, but their internal monologue says something different. If you’re a social scientist trying to deflect from a growing corpus of evidence casting doubt on the efficacy of your life’s work, this is incredibly useful.

Someone accused of having a false growth mindset could argue, until they’re blue in the face, that they genuinely do believe all of this crap. And the accuser could retort: “Well, you would say that. You’ve got a false growth mindset.”

To quote Wenner Moyer, "we shouldn't pretend that growth mindset is a panacea." To quote George Carlin (speaking on another topic, although pertinent to this post): “It’s all bullshit, and it’s bad for you.”

In Satya Nadella's Hit Refresh, he says that "growth mindset" is how he describes Microsoft's emerging culture, and that "it's about every individual, every one of us having that attitude — that mindset — of being able to overcome any constraint, stand up to any challenge, making it possible for us to grow and, thereby, for the company to grow."

Nadella notes that when he became CEO of Microsoft, he "looked for opportunities to change [its] practices and behaviors to make the growth mindset vivid and real." He says that Minecraft, the game Microsoft acquired in 2014 for $2.5bn, "represented a growth mindset because it created new energy and engagement for people on [Microsoft's] mobile and cloud technologies." At one point in the book, he describes how an anonymous Microsoft manager came to him to share how much he loved the "new growth mindset" and "how much he wanted to see more of it," pointing out that he "knew these five people who don't have a growth mindset." Nadella adds that he believed the manager was "using growth mindset to find a new way to complain about others," and that this was not what he had in mind.

The problem, however, is that this is the exact culture that Microsoft fosters — one where fixed mindsets are bad, growth mindsets are good, and the definition of both varies wildly depending on the scenario. 

One employee related to me that managers occasionally tell them after meetings that they "did not display a growth mindset," with little explanation as to what that meant or why it was said. Another said that "[the growth mindset] can be an excuse for anything, like people would complain about obvious engineering issues, that the code is shit and needs reworking, or that our tooling was terrible to work with, and the response would be to ‘apply Growth Mindset’ and continue churning out features."

In essence, the growth mindset means whatever it has to mean at any given time, as evidenced by internal training materials that suggest that individual contributions are subordinate to "your contributions to the success of others," the kind of abusive management technique that exists to suppress worker wages and, for the most part, deprive them of credit or compensation.

One post from Blind, an anonymous social network where you're required to have a company email to post, noted in 2016 that "[the Growth Mindset] is a way for leadership to frame up shitty things that everybody hates in a way that encourages us to be happy and just shut the fuck up," with another adding it was "KoolAid of the month."

In fact, the big theme of Microsoft's "Growth Mindset" appears to be "learn everything you can, say yes to everything, then give credit to somebody else." While this may in theory sound positive — a selflessness that benefits the greater whole — it inevitably, based on conversations with Microsoft employees, leads to managerial abuse. 

Managers, from the conversations I've had with Microsoft employees, are the archons of the Growth Mindset — the ones that declare you are displaying a fixed mindset for saying no to a task or a deadline, and frame "Growth Mindset" contributions as core to their success. Microsoft's Growth Mindset training materials continually reference "seeing feedback as more fair, specific and helpful," and "persisting in the face of setbacks," framing criticism as an opportunity to grow.

Again, this wouldn't be a problem if it wasn't so deeply embedded in Microsoft's culture. If you search for the term “Growth Mindset” on the Microsoft subreddit, you’ll find countless posts from people who have applied for jobs and internships asking for interview advice, and being told to demonstrate they have a growth mindset to the interviewer. Those who drink the Kool Aid in advance are, it seems, at an advantage. 

“The interview process works more as a personality test,” wrote one person. “You're more likely to be chosen if you have a growth mindset… You can be taught what the technologies are early on, but you can't be taught the way you behave and collaborate with others.”

Personality test? Sounds absolutely nothing like the Church of Scientology.

Moving on.

Microsoft boasts in its performance and development materials that it "[doesn’t] use performance ratings [as it goes] against [Microsoft's] growth mindset culture where anyone can learn, grow and change over time," meaning that there are no numerical evaluations of what a growth mindset is or how it might be successfully implemented.

There are many, many reasons this is problematic, but the biggest is that the growth mindset is directly used to judge your performance at Microsoft. Twice a year, Microsoft employees have a "Connect" with managers where they must answer a number of different questions about their current and future work at Microsoft, with sections titled things like "share how you applied a growth mindset," with prompts to "consider when you could have done something different," and how you might have applied what you learned to make a greater impact. Once the form is filled out, your manager responds with comments, and then the document is finalized and published internally, though it's unclear who is able to see them.

In theory, they're supposed to be a semi-regular opportunity to reflect on your work and think about how you might do better. In practice? Not so much. The following was shared with me by a Microsoft employee.

First of all, everyone haaaaates filling those out. You need to include half-a-year worth of stuff you've done, which is very hard. A common advice is to run a diary where you note down what you did every single day so that you can write something in the Connect later. Moreover, it forces you into a singular voice. You cannot say "we" in a Connect, it's always "I". Anyone who worked in software (or I would suspect most jobs) will tell you that's idiotic. Almost everything is a team effort. Second, the stakes of those are way too high. It's not a secret that the primary way decisions about bonuses and promotions are done is by looking at this. So this is essentially your "I deserve a raise" form, you fill out one, max two of those a period and that's it.

Microsoft's "Connects" are extremely important to your future at the company, and failing to fill them in in a satisfactory manner can lead to direct repercussions at work. An employee told me the story of Feng Yuan, a high-level software engineer with decades at the company, beloved for his helpful internal emails about working with Microsoft's .NET platform, who was deemed as "underperforming" because he "couldn't demonstrate high impact in his Connects."

He was fired for "low performance," despite the fact that he spent hours educating other employees, running training sessions, and likely saving the company millions in overhead by making people more efficient. One might even say that Yuan embodied the Growth Mindset, selflessly dedicating himself to educating others as a performance architect at the company. Feng's tenure ended with an internal email criticizing the Connect experience.

Feng, however, likely needed to be let go for other reasons. Another user on Blind related a story of Feng calling a junior engineer's code "pathetic" and "a waste of time," spending several minutes castigating the engineer until they cried, relating that they had heard other stories about him doing so in the past. This, clearly, was not a problem for Microsoft, but filling in his Connect was.

One last point: These “Connects” are high-stakes games, with the potential to win or lose, depending on how compelling your story is and how many boxes it ticks. As a result, responses to each of the questions invariably take the form of a short essay. It’s not enough to write a couple of sentences, or a paragraph. You’ve really got to sell yourself, or demonstrate — with no margin for doubt — that you’re on-board with the growth mindset mantra. This emphasis on long-form writing (whether accidental or intentional) inevitably disadvantages people who don’t speak English (or whatever language is used in their office) natively, or have conditions like dyslexia. 


The problem, it seems, is that Microsoft doesn't really care about the Growth Mindset at all, and is more concerned with stripping employees of their dignity and personality in favor of boosting their managers' goals. Some of Microsoft's "Connect" questions veer dangerously close to "attack therapy," where you are prompted to "share how you demonstrated a growth mindset by taking personal accountability for setbacks, asking for feedback, and applying learnings to have a greater impact."

Your career at Microsoft — a $3 trillion company — is largely defined by the whims of your managers and your ability to write essays of indeterminate length, based on your adherence to a vague, scientifically-questionable "mindset theory." You can (and will!) be fired for failing to express your "growth mindset" — a term as malleable as its alleged adherents — to managers that are also interpreting its meaning in real time, likely for their own benefit.

This all feels so distinctly cult-y. Think about it. You have a High Prophet (Satya Nadella) with a holy book (Hit Refresh). You have an original sin (a fixed mindset) and a path to redemption (embracing the growth mindset). You have confessions. You have a statement of faith (or close enough) for new members to the church. You have a priestly class (managers) with the power to expel the insufficiently-devout (those with a sinful fixed mindset). Members of the cult are urged to apply its teachings to all facets of their working life, and to proselytize to outsiders.

As with any scripture, its textual meanings are open to interpretation, and can be read in ways that advantage or disadvantage a person. 

And, like any cult, it encourages the person to internalize their failures and externalize their successes. If your team didn’t hit a deadline, it isn’t because you’re over-worked and under-resourced. You did something wrong. Maybe you didn’t collaborate enough. Perhaps your communication wasn’t up to scratch. Even if those things are true, or if it was some other external factor that you have no control over, you can’t make that argument because that would demonstrate a fixed mindset. And that would make you a sinner.  

Yet there's another dirty little secret behind Microsoft's Connects.

Microsoft is actively training its employees to generate their responses to Connects using Copilot, its generative AI. When I say "actively training," I mean that there is an entire document — "Copilot for Microsoft 365 Performance and Development Guidance" — that explains, in detail, how an employee (or manager) can use Copilot to generate the responses for their Connects. While there are guidelines about how managers can't use Copilot to "infer impact" or "make an impact determination" for direct reports, they are allowed to "reference the role library and understand the expectations for a direct report based on their role profile."

Side note: What I can't speak to here is how common it actually is to use Copilot to fill in a Connect or summarize someone else's. However, the documents I have reviewed, as I'll explain, explicitly instruct Microsoft employees and managers on how to do so, and frame doing so positively.

In essence, a manager can't use Copilot to say how good you were at a job, but they can use Copilot to see whether you are meeting expectations. Employees are instructed to use Copilot to "collect and summarize evidence of accomplishments" from internal Microsoft sources, and to "ensure [their] inputs align to Microsoft's Performance & Development philosophy."

In another slide from an internal Microsoft presentation, Microsoft directly instructs employees how to prompt Copilot to help them write a self-assessment for their performance review, to "reflect on the past," to "create new core priorities," and find "ideas for accomplishments." The document also names those who "share their Copilot learnings with other Microsoft employees" as "Copilot storytellers," and points them to the approved Performance and Development prompts from the company.

At this point, things become a little insane.

In one slide, titled "Copilot prompts for Connect: Ideas for accomplishments," Microsoft employees are given a prompt to write a self-assessment for their performance review based on their role at Microsoft. It then generates 20 "ideas for success measurements" to include in their performance review. It's unclear if these are sourced from anywhere, or if they're randomly generated. When a source ran the query multiple times, it hallucinated wildly different statistics for the same metrics. 

Microsoft's guidance suggests that these are meant to be "generic ideas on metrics" which a user should "modify to reflect their own accomplishments," but you only have to ask it to draft your own achievements to have these numbers — again, generated using the same models as ChatGPT — customized to your own work.

While Copilot warns you that "AI-generated content may be incorrect," it's reasonable to imagine that somebody might use its outputs — either the "ideas" or the responses — as the substance of their Connect/performance review. I have also confirmed that when asked to help draft responses based on things that you've achieved since your last Connect, Copilot will use your activity on internal Microsoft services like Outlook, Teams and your previous Connects.

Side note: How bad is this? Really bad. A source I talked to confirmed that personalized achievements are also prone to hallucinations. When asked to summarize one Microsoft employee’s achievements based on their emails, messages, and other internal documents from the last few quarters, Copilot spat out a series of bullet points with random metrics about their alleged contributions, some of which the employee didn’t even have a hand in, citing emails and documents that were either tangentially related or entirely unrelated to their “achievements,” including one that linked to an internal corporate guidance document that had nothing to do with the subject at hand.

On a second prompt, Copilot produced entirely different achievements, metrics and citations. To quote one employee, “Some wasn't relevant to me at ALL, like a deck someone else put together. Some were relevant to me but had nothing to do with the claim. It's all hallucination.”

To be extremely blunt: Microsoft is asking its employees to draft their performance reviews based on the outputs of generative AI models — the same ones underpinning ChatGPT — that are prone to hallucination. 

Microsoft is also — as I learned from an internal document I’ve reviewed — instructing managers to use Copilot to summarize "their direct report's Connects, Perspectives and other feedback collected throughout the fiscal year as a basis to draft Rewards/promotion justifications in the Manage Rewards Tool (MRI)," which in plain English means "use a generative AI to read performance reviews that may or may not be written by generative AI, with the potential for hallucinations at every single step."

Microsoft's corporate culture is built on a joint subservience to abusive pseudoscience and the evaluations of hallucination-prone artificial intelligence. Working at Microsoft means implicitly accepting that you are being evaluated on your ability to adhere to the demands of an obtuse, ill-defined "culture," and the knowledge that whatever you say must both fit a format decided by a generative AI model and then be read by that very same model to evaluate you.

While Microsoft will likely state that corporate policy prohibits using Copilot to "infer impact or make impact determination for direct reports" or "model reward outcomes," there is absolutely no way that instructing managers to summarize people's Connects — their performance reviews — as a means of providing reward/promotion justifications will end with anything other than an artificial intelligence deciding whether someone is hired or fired. 

Microsoft's culture isn't simply repugnant, it's actively dystopian and deeply abusive. Workers are evaluated based on their adherence to pseudo-science, their "achievements" — which may be written by generative AI — potentially evaluated by managers using generative AI. While they ostensibly do a "job" that they're "evaluated for" at Microsoft, their world is ultimately beholden to a series of essays about how well they are able to express their working lives through the lens of pseudoscience, and said expressions can be both generated by and read by machines.

I find this whole situation utterly disgusting. The Growth Mindset is a poorly-defined and unscientific concept that Microsoft has adopted as gospel, sold through Satya Nadella's book and reams of internal training material, and it's a disgraceful thing to build an entire company upon, let alone one as important as Microsoft.

Yet to actively encourage the company-wide dilution of performance reviews — and by extension the lives of Microsoft employees — by introducing generative AI is reprehensible. It shows that, at its core, Microsoft doesn't actually want to evaluate people's performance, but to see how well its people can hit the buttons that make managers and the Senior Leadership Team feel good, a masturbatory and specious culture built by a man — Satya Nadella — that doesn't know a fucking thing about the work being done at his company.

This is the inevitable future of large companies that have simply given up on managing their people, sacrificing their culture — and ultimately their businesses — to as much automation as is possible, to the point that the people themselves are judged based on the whims of managers that don't do the actual work and the machines that they've found to do what little is required of them. Google now claims that 25% of its code is written by AI, and I anticipate Microsoft isn't far behind.

Side note: This might be a little out of the scope of this newsletter, but the 25% stat is suspect at best.

First, even before generative AI was a thing, developers were using autocomplete to write code. There are a lot of patterns in writing software, and code has to meet a certain format to be valid. And so, the difference between an AI model creating a class declaration and an IDE doing it is minimal. You’ve substituted one tool for another, but the outcome is the same.

Second, I’d question how much of this code is actually… you know… high-value stuff. Is Google using AI to build key parts of its software, or is it just writing comments and creating unit/integration tests? Based on my conversations with developers at other companies that have been strong-armed into using Copilot, I’m fairly confident it’s the latter.

Third, lines of code is an absolute dogshit metric. Developers aren’t judged by how many bytes they can shovel into a text editor, but by how good — how readable, efficient, reliable, secure — their work is. To quote The Zen of Python, “Simple is better than complex… Sparse is better than dense.”
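
To make the point concrete, here’s a contrived sketch (in Python, and nothing to do with Google’s actual codebase): both functions do the same job, but a metric that counts lines of code rewards the first one.

```python
# A contrived example: both functions return the names of active users.
# The first is the kind of verbose boilerplate an autocomplete or AI assistant
# happily pads a codebase with; the second does the same thing in one line.

def get_active_user_names_verbose(users):
    # Loop, check, append: lots of lines, no extra value.
    result = []
    for user in users:
        if user.get("active") is True:
            name = user.get("name")
            if name is not None:
                result.append(name)
    return result

def get_active_user_names(users):
    # The same logic as a single comprehension: fewer lines, easier to review.
    return [u["name"] for u in users if u.get("active") and u.get("name")]

if __name__ == "__main__":
    users = [{"name": "Ada", "active": True}, {"name": "Bob", "active": False}]
    assert get_active_user_names_verbose(users) == get_active_user_names(users) == ["Ada"]
```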

This brings me on to my fourth, and last, point: How much of this code is actually solid from the moment it’s created, and how much has to get fixed by an actual human engineer? 

At some point, these ugly messes will collapse as it becomes clear that their entire infrastructure is written upon increasingly-automated levels of crap, rife with hallucinations and devoid of any human touch.

The Senior Leadership Team of Microsoft are a disgrace and incapable of any real leadership, and every single conversation I've had with Microsoft employees for this article speaks to a miserable, rotten culture where managers castigate those lacking the "growth mindset," a term that oftentimes means "this wasn't done fast enough, or you didn't give me enough credit."

Yet because the company keeps growing, things will stay the same.

At some point, this deck of cards will collapse. It has to. When you have tens of thousands of people vaguely aspiring to meet the demands of a pseudoscientific concept, filling in performance reviews using AI that will ultimately be judged by AI, you are creating a non-culture — a company that elevates those who can adapt to the system rather than service any particular customer.

It all turns my fucking stomach.

Requiem for Raghavan

2024-10-22 02:35:15

Last week, Prabhakar Raghavan was relieved of duty as Senior Vice President of Search, becoming Google's "Chief Technologist." 

An important rule to follow with somebody's title in Silicon Valley is that if you can't tell what it means, it probably doesn't mean anything. The most notorious example of this is when AOL employed "Shingy," a Digital Prophet, and if you have any information about what he did at AOL, please email me at [email protected] immediately.

Anyway, back to Prabhakar.

Although ostensibly less ridiculous, Raghavan has likely been given a ceremonial title and a job that involves "partnering closely with Sundar Pichai and providing technical direction," as opposed to actually leading Search.

Back in April, I published probably my most well-known piece — The Man Who Killed Google Search. Using emails revealed as part of the Department of Justice's antitrust trial against Google over search, it told the tale of how Prabhakar Raghavan, then Google's head of ads, led a coup that began the slow descent of the world's most important website toward its current, half-broken form. 

The key event in the piece is a “Code Yellow” crisis declared in 2019 by Google’s ads and finance teams, which had forecast a disappointing quarter. In response, Raghavan pushed Ben Gomes — the erstwhile head of Google Search, and a genuine pioneer in search technology — to increase the number of queries people made by any means necessary. 

Though it's not clear what was done to resolve the "query softness" that Raghavan demanded was reversed, I hypothesize one of the moves involved rolling back changes to search that had suppressed spammy content. Google has since denied this, despite the fact that emails revealed as part of the DOJ's trial involved Jerry Dischler — Raghavan's deputy at Google Ads at the time — specifically discussing rollbacks. From The Man Who Killed Google Search:

The March 2019 core update to search, which happened about a week before the end of the code yellow, was expected to be “one of the largest updates to search in a very long time.” Yet when it launched, many found that the update mostly rolled back changes, and traffic was increasing to sites that had previously been suppressed by Google Search’s “Penguin” update from 2012 that specifically targeted spammy search results, as well as those hit by an update from August 1, 2018, a few months after Gomes became Head of Search.

Prabhakar Raghavan was made Head of Search a little over a year later in June 2020, and it's pretty obvious how big a decline Google Search has taken since then. Results are filled with Search Engine Optimized spam, ads and sponsored content are bordering on indistinguishable from regular results, and the disastrous launch of Google's AI-powered "summaries" produced results that ranged from hilarious to actively life-threatening.

When Raghavan took over Search (Q3 2020), Google had just experienced its first decline in year-over-year quarterly growth since Q4 2012 — a 1.66% decline in growth that was followed by a remarkable recovery, with double-digit year-over-year growth just as Prabhakar turned the screws on search, cresting at a ridiculous 61.58% year-over-year growth in Q3 2021.

Then things began to slow. Every quarter saw progressively lower growth, reaching a nadir in Q4 2022, when Google experienced a mere 0.96% year-over-year growth — something that one might be able to blame on the end of the opulent post-vaccine spending we saw across the entire economy, or the spiraling rates of inflation seen worldwide. And so, one would assume that growth would recover as the wider global economy did, right?

https://www.macrotrends.net/stocks/charts/GOOG/alphabet/revenue

Ehhh. While Google experienced a recovery in its growth rates, it took until Q3 2023 to hit double digits again (11% year-over-year), hitting a high of 15.41% in Q2 2024 before trending down again in Q3 2024 to 13.59%.
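
For clarity on where these percentages come from, here’s a minimal sketch of the calculation — year-over-year growth simply compares a quarter’s revenue with the same quarter a year earlier. The inputs below are placeholder figures, not Alphabet’s actual revenue; the real numbers live at the macrotrends link above.

```python
# A minimal sketch of how year-over-year quarterly growth is calculated:
# compare a quarter's revenue with the same quarter a year earlier.
# The figures below are placeholders, not Alphabet's actual revenue.

def yoy_growth(current_quarter: float, same_quarter_last_year: float) -> float:
    """Year-over-year growth, expressed as a percentage."""
    return (current_quarter / same_quarter_last_year - 1) * 100

# Placeholder revenue figures, in billions of dollars.
print(f"{yoy_growth(50.0, 44.0):.2f}%")   # 13.64% — healthy growth
print(f"{yoy_growth(44.0, 44.7):.2f}%")   # -1.57% — a decline, like the one described above
```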

The reason these numbers are important is that growth drives everything, and Prabhakar Raghavan drove the most consistent growth engine in the company, which grew 14% year-over-year in Q1 2024, until he didn’t. This context is key to understanding his “promotion” to Chief Technologist, a title that is most decidedly not a Chief Technology Officer, or any kind of officer at all.

Google has, for the most part, enjoyed one of the most incredible runs in business history, with almost an entire decade of 20% year-over-year growth, with a few exceptions, such as Q4 2012 (a few months into Raghavan's tenure at Google, where he started in ads) to Q3 2013, a chaotic period where Google fell behind Amazon in shopping ad revenue, bought Motorola Mobility for $12.5 billion (a 63% premium on its trading price) and saw a 15% year-over-year decline in pricing for its search ads (Google’s earnings also leaked early, which isn't good).

https://www.macrotrends.net/stocks/charts/GOOG/alphabet/revenue

Yet growth is slowing, and isn't showing any signs of returning to the heady days where 17% year-over-year growth was considered a bad quarter. Google has deliberately made its product worse as a means of increasing revenue, spawning a trend of both remarkable revenue growth and worsening search results that started exactly when Raghavan took the wheel of its prime revenue-driver.

The chart tells another story — that this reckless and desperate move only worked for a little bit before growth began to slow again. Recklessness and desperation beget only more recklessness and desperation, and you’ll note that Google’s aggressive push into AI followed its dismal Q4 2022 quarter, where it nearly fell into negative growth (and when you factor in inflation, it did). 

If you’ll forgive the mixed metaphors, Google has essentially killed its golden goose — search — and is now in the process of pawning its eggs to buy decidedly non-magical beans, by which I mean data centers and GPUs, with Google increasing its capital expenditures in the financial year 2024 to $50 billion, equivalent to nearly double its average capital expenditures from 2019 to 2023.

Since becoming Head of Search, Raghavan also became the silent leader of most of Google's other revenue centers — Google Ads, Google Shopping, Maps, and eventually Gemini, Google's ChatGPT competitor, which might also explain his newly-diminished position within the company.

2024 was a grim year for Google and a grimmer one for Raghavan, starting in February with its Gemini Large Language Model generating racially diverse Nazis (among other things), a mess that Raghavan himself had to apologize for. A few months later, Google introduced AI-powered search summaries that told users to eat rocks and put glue on pizza, which only caused people to remember exactly how bad Google Search already was, and laugh at how the only way that Google seemed to be able to innovate was to make it worse.

Raghavan is being replaced by Nick Fox, a former McKinsey guy who, in the emails I called attention to in The Man Who Killed Google Search, told Ben Gomes that making Google Search more profitable was "the new reality of their jobs," to which Ben Gomes responded by saying that he was "concerned that growth [was] all that [Google was] thinking about."

Fox has, to quote Google CEO Sundar Pichai, "been instrumental in shaping Google's AI product roadmap," which suggests that Google is going all-in on AI at a time when developers are struggling to justify using its models and are actively mad at both the way it markets them and the way they're integrated into Google’s other products.

I am hypothesizing here, but I think that Google is desperate, and that its earnings on October 30th are likely to make the Street a little worried. The medium-to-long-term prognosis is likely even worse. As the Wall Street Journal notes, Google's ad business is expected to dip below 50% market share in the US in the next year for the first time in more than a decade, and Google's gratuitous monopoly over search (and likely ads) is coming to an end. It’s more than likely that Google sees AI as fundamental to its future growth and relevance. 

As part of the Raghavan reorganization, Google is also moving the Gemini App team (the one handling Google's competitor to ChatGPT) under AI research group DeepMind, a move that might be kind of smart in the "hand the AI stuff to the AI people" kind of way, but also suggests that there is a degree of disarray at the company that isn't going to get better in a hurry.

You see, Raghavan was powerful, and for a time successful. He ruled with an iron fist, warning employees to prepare for "a different market reality" because "things [were] not like they were 15-20 years ago," and "shortening the amount of time that his reports would have to work on certain projects" according to Jennifer Elias of CNBC, which is exactly the kind of move you make when things are going poorly. 

Replacing Raghavan with Nick Fox — a man who has only worked at either McKinsey or Google, and no, I am not kidding — is something that you do because you don't know what to do, and somebody's head has to roll, even if it's going to roll to the foot of a guy who's most famous for running Google's Assistant business, which is best known for kind of sucking and making absolutely no money.

There is a compelling case to be made that we are watching the slow, painful collapse of Google — a company best known for a transformational and beloved product it chose to ruin, helmed by a management consultant that has, for the most part, overseen the decay of its brand. 

Google — like the rest of tech's hyper-scalers — has not had a meaningful new product in over a decade, with its most meaningful acquisition in years involving it paying $2.7 billion for an AI startup that barely made any money specifically to hire back a guy who quit because he was mad that Google wouldn't release an early version of Large Language Models in 2021.

This is a company bereft of vision, incapable of making money without monopolies, and flailing wildly in the hopes that copying everybody else will save itself from perdition — or, I should say, from the Department of Justice breaking it up.

Google is exactly the monster that Sundar Pichai and Prabhakar Raghavan wanted it to be — a lumbering private equity vehicle that uses its crooked money machine to demolish smaller players, except there are no more hyper-growth markets left for it to throw billions at, leaving it with Generative AI, a technology that lacks mass-market utility and burns cash with every prompt.

We are watching the fall of Rome, and it’s been my pleasure to tell you about how much of it you can credit to Prabhakar Raghavan, the Man Who Killed Google Search.

You Can't Make Friends With The Rockstars

2024-10-15 03:29:30

You cannot make friends with the rock stars...if you're going to be a true journalist, you know, a rock journalist. First, you never get paid much, but you will get free records from the record company.

[There’s] fuckin’ nothin' about you that is controversial. God, it's gonna get ugly. And they're gonna buy you drinks, you're gonna meet girls, they're gonna try to fly you places for free, offer you drugs. I know, it sounds great, but these people are not your friends. You know, these are people who want you to write sanctimonious stories about the genius of the rock stars and they will ruin rock 'n' roll and strangle everything we love about it.

Because they're trying to buy respectability for a form that's gloriously and righteously dumb.

Lester Bangs, Almost Famous (2000)

I am tired of hearing about Mark Zuckerberg's "new look," in part because I don't care about it, and in part because I see it as yet another blatant — and successful — attempt to divert attention from the outright decay of the platforms he runs.  

It doesn't matter that Mark Zuckerberg has a gold chain, nor is it of any journalistic importance to ask him why he's wearing it, as any questions around or articles about his new look are, by definition, a distraction from the very real and damaging things that Mark Zuckerberg does every day, like how Facebook and Instagram are intentionally manipulative and harmful platforms or how Meta, as a company, creates far more emissions than it can cover with renewable energy, or that Meta's AI product's only differentiating feature is that its 500 million monthly active users are helping kill our planet by generating meaningless slop.

In a just world, every piece about Mark Zuckerberg would include something about how decrepit Meta's products are, its weak-handed approach to drug cartels and human traffickers, or how Facebook's algorithm regularly boosted and supported anti-vax groups. For over a decade, Mark Zuckerberg and Meta have acted with a complete disregard for user safety, all while peddling a product that actively impedes users from seeing their friends' posts, abusing them with a continual flow of intrusive and annoying ads.

Yet all Mark Zuckerberg has to do to make that go away is wear a shirt that sort of references Julius Caesar for the Washington Post to say that he "has the swagger of a Roman emperor," and publish a glossy feature about Zuckerberg's new "bro-ified" look that asserts he has "raised his stock among start-up founders as Silicon Valley shifts to the right" — a statement that suggests, for whatever reason, that Mark Zuckerberg is some sort of apolitical actor rather than someone that hired a former member of the Bush Administration to run public policy, who in turn intervened to make sure that the COVID conspiracy movie "Plandemic" could run rampant on the platform despite Facebook's internal team trying to block it.

Instead, Mark Zuckerberg has, and I quote the Post, transformed himself from "a dorky, democracy-destroying CEO into a dripped-out, jacked AI accelerationist in the eyes of potential Meta recruits."

I cannot express how little this matters, and how terribly everybody is falling for the bait. The Post's piece has some level of critique, but continually returns to the tropes of Zuckerberg being "unapologetic" for unspecific acts, all while actively helping refurbish his image as somebody unbound by consequence or responsibility. 

While I understand why people might write this up — after all, Zuckerberg is a public figure with immense influence — the only reason that he is dressing differently is so that people will publish a bunch of things talking about how he's dressing differently, and then pontificate about what that means, doing a bunch of free marketing for the unfireable chief executive billionaire of a company built on the back of manipulating and hurting its users.

Yet this keeps happening. Back in August, the Wall Street Journal published an incredibly stupid piece about "Elon Musk's Walk With Jesus," a piece entirely built off of a series of tweets about how Musk suggested that if we don't "stand up for what is fair and right, Christianity will perish." The media simply cannot help itself with people of power. It must decode everything, turning a meaningless and meandering Musk conversation with crybaby demagogue Jordan Peterson into an anecdote about Musk being a "cultural Christian," rather than saying absolutely fucking nothing about it, because every single time guys like Musk say things like this they want people to write it up.

It's branding. When Musk tweets some stupid thing, he's doing so to get a reaction, to get the media to write it up, to create more noise and more attention, and to continually interfere with people's perspectives, in the same way that right wing grifters like Russell Brand convert to Christianity. It's all a fucking grift, and every time the media participates they are helping whitewash the reputations of ultra-rich and ultra-powerful people.

While it may seem (and be) harmless to just talk about Zuckerberg's new clothes, the problem comes when you start interpreting what they mean. There is no reason for the Wall Street Journal to try and connect — and I quote — Zuckerberg's "gilded Zuckaissance" (fuck you, man) with "a resurging period for [Meta's] business." These two things are only connected if you, the journalist, decide to connect them, and connecting them is both what Zuckerberg wants you to do and outright irresponsible. There is absolutely no connection between these two events, other than the desperation of some members of the media to add meaning to an event that is transparently, obviously, painfully built to get headlines that connect his image to the state of his company.

By telling the story of an "evolved" Zuckerberg that's "unapologetic," the media whitewashes a man who has continually acted with disregard for society and exploited hundreds of millions of people in pursuit of eternal growth. By claiming he's "evolving" or "changing" or "growing" or anything like that, writers are actively working to forgive Zuckerberg, all without ever explaining what it is they're forgiving him for, because those analyses almost never happen.

To be clear, Zuckerberg started dressing differently in May, yet he's still getting headlines about it in October. This has been a successful — and loathsome — PR campaign, one where the media has fallen for it hook, line and sinker, all while ignoring the environmental damage of Meta's pursuit of generative AI and the fact that the company fucking sucks.

This is a problem of focus and accountability, and illustrative of a continual failure to catch a conman in the act.

Let me give you an example — somehow there's only one story out there about the massive problems in Meta's ad platform, where ad buyers have seen thousands of dollars of losses from incorrect targeting or ignored budgets, with one buyer reporting "high-five-figure" losses because ads meant for younger audiences exclusively targeted people aged 65 and older. Meta makes around 99% of its revenue from advertising, and it's been just over a year since a massive bug caused thousands of advertisers to run campaigns that were "essentially ineffective" according to CNBC. 

You'd think that ruinous bugs at the heart of the sole revenue stream of a $1.5 trillion company would be more important than its CEO dressing like a 90s rapper, and you'd be incorrect. Zuckerberg's new look is far more important than basically any other story about Facebook or Instagram or Meta this year, because, despite the obvious harms of these executives, there is still a fascination with them — and, I'd argue, a desperation to try and make these men more interesting and deep than they really are.

Elon Musk, Mark Zuckerberg, and Sam Altman are not, despite their achievements, remarkable people. They are dull, and while they might be intelligent, they’re far from intellectual, appearing to lack any real interests, hobbies or joys, other than Zuckerberg's brief dalliance with mixed martial arts. They all read the same shit, they talk the same way, they have the same faux-intellectualism that usually boils down to how they're "big thinkers" that think about "big things" like "intelligence" and "consciousness," when what they mostly do is dance around issues without adding anything substantive, because they don't really believe anything.


At the core of this problem is, in my mind, a distinct unwillingness — perhaps it's a kind of cognitive dissonance — to believe that somebody could be so rich, powerful, and mediocre. It's much easier to see Sam Altman as a "genius master-class strategist" than as just another rich guy that's really good at manipulating other rich guys into doing things for him, or Elon Musk as a "precocious genius"  rather than a boorish oaf that's exceedingly good at leveraging both assets and his personal brand.

It's far more comfortable to see the world as one where these people have "earned" their position, and that they, at the top of their industries, are special people, because otherwise you'd have to consider that for the most part, they're all frightfully average. 

There is nothing special about Elon Musk, Sam Altman, or Mark Zuckerberg. Accepting that requires you to also accept that the world itself is not one that rewards the remarkable, or the brilliant, or the truly incredible, but those who are able to take advantage of opportunities, which in turn leads to the horrible truth that those who often have the most opportunities are some of the most boring and privileged people alive.

The problem isn't so much how dull they are, but how desperate some are to make them exceptional. Sam Altman's rise to power came, in part, from members of the media propping him up as a genius, with the New Yorker saying that "Altman's great strengths are clarity of thought and an intuitive grasp of complex systems," a needless and impossible-to-prove romanticization of a person done in the name of rationalizing his success. Having watched and listened to hours of Altman talking, I can tell you that he's a pretty bright guy, but also deeply mediocre — one of thousands of different "pretty smart Stanford guys" that you'll find in any bar in the Bay Area.

The New Yorker's article is deeply bizarre, because it chooses to simply assume that people like Marc Andreessen and Reid Hoffman, by virtue of being rich, are also smart, and that because they think Sam Altman is smart, he is, indeed, smart. Altman's history is steeped in failure and deceit, yet he knows that all he has to do is say some vague platitude about how superintelligence is "a few thousand days away" to get attention, because the media will not sit and think "hey, is Sam Altman someone that would lie to us?" despite him continually lying about OpenAI's progress toward this very goal.

Silicon Valley exists in a kind of bizarre paradox where the youngest companies are often met with the most scrutiny and attention, while its most powerful figures can destroy their products and lie in public with barely a hint of analysis. Meanwhile, the most powerful companies enjoy a level of impunity, with their founders asked only the most superficial of softball questions — and deflecting anything tougher by throwing out dead cats when the situation demands.  

Note: I am saying that said scrutiny should be equal!

OpenAI is probably the most flagrant example — a horrible business that burns money on a product lacking any mass-market product-market fit, yet one in which most members of the media still have faith because...well, Uber lost a lot of money, I guess? And look at it now, a company that, after fifteen years, occasionally makes a profit. 

There is such unbelievable faith in men who have continually failed to live up to it — an indomitable belief that this much money couldn't be wrong, and that the people running these companies are anything other than selfish opportunists that will say what they need to as a means of getting what they want, and that they got there not through a combination of privilege, luck and connections, but through some sort of superior intellect and guile.

The problem — the real human cost — is that by mythologizing these men, by putting them on a pedestal, we allow them to alter the terms of reality. Generative AI's destructive, larcenous growth exists in large part because people buy Sam Altman's shit, and because they believe that he's somebody special, a genius operator that can do the impossible, which in turn helps them platform his half-truths and outright lies.

Mark Zuckerberg has weathered the storm of his decaying services and the horrifying things that Meta does on a daily basis because people still want him to be some sort of Silicon Valley hero, and he will continue to accumulate wealth and power while actively harming both his products and society at large as long as we keep treating him as such.

I am serious when I say a different approach to these men would have created vastly different outcomes. Had Sam Altman not been mythologized, there is far less of a chance that people — both investors and the media — would've taken OpenAI this seriously, and questions around profitability and sustainability would've been immediate problems to address versus "issues" to be "mitigated." 

Had Mark Zuckerberg been held to account in any measurable way at the beginning, rather than given softball interviews dressed up as criticism, we would've likely had far more immediate and meaningful legislation around our personal data, and far fewer opportunities for Meta to grow based on actively abusing its users.

Had Elon Musk and Tesla been interrogated earlier, I doubt Musk would still be CEO, as the ugliness that we see from him on a daily basis has always been there. Instead, Erin Biba had her life ruined by his acolytes with few standing to defend her. 

Even now, Musk runs rampant, spreading disinformation about hurricane relief efforts and intimating that a presidential candidate should be killed “as a joke,” all while running Twitter into the ground, its revenue cratering by 84%, and lying so often that there’s an entire website chronicling his fibs. Musk has repeatedly and unquestionably proven he’s both a horrible person and a terrible businessman, a craven liar — especially with regard to Tesla products. 

Yet the media still takes the idea that he’d launch a robotaxi network seriously, even though Musk has been lying about this shit since 2019. At Tesla’s We, Robot event — held, bizarrely, in the early hours of Friday, October 11 — Musk once again promised that the Robotaxi was imminent, with the company aiming to launch in 2026 or 2027. 

To the media’s credit, some publications called bullshit. Business Insider spoke to actual industry analysts, who pointed out that: a.) Musk has yet to deliver on full-self-driving technology; b.) Tesla didn’t actually provide evidence for how it would reach the “higher levels of autonomy” needed; c.) Tesla would need to get regulatory approval for literally every part of the vehicle; and d.) Tesla would be competing against established companies like Uber and Lyft, as well as Waymo, who all enjoy a massive head-start.  

Others, regrettably, were all too happy to take Musk at his word. The Guardian, which generally isn’t afraid to go head-to-head with big tech, was almost entirely uncritical in its coverage — save for a few paragraphs buried at the end, where it noted that Tesla faces legal action over its failure to deliver full self-driving technology, and over alleged safety issues with its existing autonomous features. 

The Verge did slightly better, noting the years of “false promises and blown deadlines,” and the regulatory challenges ahead, but it still let itself down with the headline “Elon Musk’s robotaxi is finally here.” It isn’t. You saw a prototype and heard a speech.

Then we have the auto press: you know, the people who are supposed to act as the watchdogs for the auto industry. Top Gear and AutoCar both delivered sickening puff pieces that gushed over the “relaxed, lounge feel” of the interior, or, in the case of the latter, literally could have been a press release. A complete abdication of one’s responsibilities, deferring to a man who has, time and time again, proven himself to be a chronic liar.  

I didn’t even mention the ridiculous “debut” of Tesla’s Optimus “robots” — allegedly “AI-powered” “humanoid robots” that could “assist with household chores.” While there may have been some outlets that were immediately suspicious, far too many simply ran exactly what Musk said and acted as if he’d proven that these were actual robots walking around and doing things of their own accord, as opposed to what was blatantly obvious: that they were controlled remotely by human operators.

In a just world, Musk’s events would be covered like a press conference from a disgraced politician, with the default assumption that he’s going to lie because he almost certainly will. As I previously noted, he already lied about the Robotaxi, saying in 2019 that there would be “more than 1 million robotaxis on the road next year,” with “next year” meaning 2020, which was four years ago. Musk has now updated his prediction to “before 2027.” What are we doing here?

The reason that Elon Musk is able to lie and get away with it is that a far-too-large chunk of the press seems dedicated to swallowing every single lie he tells. No amount of follow-up journalism will ever counterbalance the automatic stenography as his events take place — the thinking being, I assume, that readers “want this” — nor will it help when the majority of articles summarizing what happened blandly repeat exactly what promises Musk has made without calling them out as outright lies. 

And no, it doesn’t really help to bury a statement or two in there about how Musk has “yet to deliver.”  The story should start with the fact he’s a liar and end with the new lies he’s telling. 


The media needs to start folding its arms and refusing to accept the terms that these people set. Every single article about OpenAI should include something about the brittleness of its business, and about the many, many half-truths of Sam Altman — yet more often than not these pieces feel like they simply accept that whatever he’s saying is both reasonable and accurate. The same goes for Musk, or Zuckerberg, or Pichai, or Nadella. All of these men have grown rich and powerful off the back of a lack of pushback by the media. 

You cannot — and should not — separate these men from their actions, nor should reporters fall for their distractions. If a dictator ran a successful tech company every single thing they did would be treated as suspicious, if not outright harmful, and at this point, tech's most powerful companies are effectively nation states.

The real problem children are, of course, people like Kara Swisher, who, more than anything, WANTS to be friends with the rockstars and has done so successfully. Swisher’s book tour involved her being interviewed by Sam Altman and Reid Hoffman — a shameful display of corruption, one so flagrant and stomach-turning that it should have led to an industry-wide condemnation of her legacy, rather than a general silence with a few people saying “I liked the book.” 

Then there are people like Casey Newton, who happily parrot whatever Mark Zuckerberg wants them to under the auspices of “objectivity,” dutifully writing down things like “we will effectively transition from people seeing us as primarily being a social media company to being a metaverse company” and claiming that “no one company will run the metaverse — it will be an ‘embodied internet,’” a meaningless pile of nonsense, marketing collateral written down in service of a billionaire as a means of retaining access. 

I realize that the argument might be that these executives wouldn't give an interview to somebody that won't play nice, and the counterargument is that if nobody accepted those terms, they'd have to start talking to more people. If Zuckerberg couldn't click his heels together and get Alex Heath or Casey Newton to nod approvingly, if every avenue was challenging, he'd be forced to give more honest interviews, or not give any interviews at all.

My frustration with Casey isn’t just that he acts as a stenographer for the powerful — it’s that he’s capable of much, much more, as proven by his brutal reporting on the horrifying conditions that Facebook moderators work under. He can - and should - do better.

As an aside: Some people argue that these "nice" interviews can sometimes "have an edge" where the subject is "hung on their words."

This is, for the most part, false. While Isaac Chotiner has mastered the art of making people bumble into saying something stupid, he is most decidedly not an access journalist.

"Gotcha" moments where you ask a question and they fumble the answer are not gotchas unless you are willing to actually go in for the kill - to say "what does that mean?" and "why don't you have an answer to that question?" If you ask a question and don't even respond to the answer, or quickly move on, you're effectively acting as if it never happened.

One might say that asking a question multiple times or refusing to move on is "aggressive." Please note that every single executive in this piece is worth at least a billion dollars. Suck it up, buttercup.

One might also argue that being this daring in an interview environment means you'll never get another interview.

This is only the case because everybody is accepting the terms of access journalism.

None of this is to suggest that there has been no criticism. As much as I criticize the Post's piece on Zuckerberg, Nitasha Tiku (who wrote the original piece) has published deeply critical investigative journalism about Sam Altman, and CNBC's Lora Kolodny and Bloomberg's Dana Hull have spent over a decade reporting on Musk's failures, as have the reporters at the Wall Street Journal, who have fastidiously and aggressively covered Zuckerberg and his cohort in brutal detail.

The Washington Post's tech desk interrogated OpenAI early and often, covering its unsustainable costs, the jobs it started to take, and why you shouldn't tell ChatGPT your health concerns at a time (early 2023) when some were actively cowering in terror at what a Large Language Model told them to do. Ellen Huet's (Bloomberg-published) "Foundering" series about Sam Altman was insightful (and disturbing), and had the courage to tell the story of Annie Altman, Sam Altman's sister who has accused Altman of multiple acts of abuse. Things can be better.

Sadly, these men are regularly given air cover for their acts and their lies, and opportunities to talk about anything other than the things they've done - or not done, in Musk's case.

Far too often the default position with men like Sam Altman is to give them the benefit of the doubt, to assume that things will work out how they say they will, and to trust what they're saying as well-intentioned and informed. In doing so, readers and viewers are deprived of the truth — that these men are fallible, selfish, and willing to do anything they have to as a means of growing or sustaining their power.

We need to stop wanting to like these people. We need to stop craving to understand them on a deeper level, or finding any fascination in what they do. We cannot take them at their word, nor give them the benefit of the doubt without significant, meaningful proof, the kind that we would expect from literally anybody else.

OpenAI Is A Bad Business

2024-10-03 01:16:40

OpenAI, a non-profit AI company that will lose anywhere from $4 billion to $5 billion this year, will at some point in the next six or so months convert into a for-profit AI company, at which point it will continue to lose money in exactly the same way. Shortly after this news broke, Chief Technology Officer Mira Murati resigned, followed by Chief Research Officer Bob McGrew and VP of Research, Post Training Barret Zoph, leaving OpenAI with exactly three of its eleven cofounders remaining.  

This coincides suspiciously with OpenAI's increasingly-absurd fundraising efforts, where (as I predicted in late July) OpenAI has raised the largest venture-backed fundraise of all time — $6.6 billion — at a valuation of $157 billion.

EDITOR'S NOTE: Not long after this newsletter published, OpenAI's funding round closed. As a result, this newsletter reflects the round as pending, when it has now closed.

Yet despite the high likelihood of the round's success, there are quite a few things to be worried about. The Wall Street Journal reported last week that Apple is no longer in talks to join the round, and while one can only speculate about its reasoning, it's fair to assume that Apple (AAPL), on signing a non-disclosure agreement, was able to see exactly what OpenAI had (or had not) got behind the curtain, as well as its likely-grim financial picture, and decided to walk away. Nevertheless, NVIDIA (NVDA) and Microsoft (MSFT) are both investing, with Microsoft, according to the Wall Street Journal, pushing another $1 billion into the company — though it's unclear whether that's in real money or in "cloud credits" that allow OpenAI to continue using Microsoft's cloud.

Yet arguably the most worrying sign is that SoftBank's Vision Fund will be investing $500 million in OpenAI. While it might seem a little weird to be worried about a half-billion dollar check, SoftBank — best-known for sinking $16 billion or more into WeWork and getting swindled by its founder, and dumping a further €900m into Wirecard, which turned out to be an outright fraud, with one of its executives now a fugitive from justice in Russia — is some of the dumbest money in the market, and a sign that any company taking it is likely a little desperate. 

While SoftBank has had a number of hits — NVIDIA, ARM and Alibaba, to name a few — it is famous for piling cash into terrible businesses, like Katerra (a construction company that died in 2021 despite a $2 billion investment) and Zume Pizza (a robotic pizza company whose product never worked, and which closed after raising more than $400 million, with $375 million coming from SoftBank).

No, really, the SoftBank Vision Fund is in a bad way. Last year SoftBank's Vision Fund posted a record loss of $32 billion after years of ridiculous investments, a year after CEO Masayoshi Son promised investors that there would be a "stricter selection of investments." One might think that three years of straight losses would humble Son, and one would be wrong. He said in June that he was "born to realize artificial superintelligence," adding that he was "super serious about it."

One of Son's greatest inspirations (a man a 16-year-old Son flew to meet, begging simply to see his face) is Den Fujita, the thankfully-dead founder of McDonald's Japan, and the author of a book called "The Jewish Way of Doing Business," which suggested that Jews had taken over the business world and implored businesspeople to copy them, while also suggesting that Jews had settled in Osaka 1000 years ago, making the people there "craftier," a comment that McDonald's had to issue a public apology for.

In any case, OpenAI will likely prevail and raise this round from a cadre of investors that will have to invest a minimum of $250 million to put money behind a company that has never turned a profit, that has no path to profitability, and has yet to create a truly meaningful product outside of Sam Altman's marketing expertise. This round is a farce — a group delusion, one born of one man's uncanny ability to convince clueless idiots that he has some unique insight, despite the fact that all signs point to him knowing about as much as they do, allowing him to prop up an unsustainable, unprofitable and directionless blob of a company as a means of getting billions of dollars of equity in it — and no, I don't care what he says to the contrary.

Last week, the New York Times reported that OpenAI would lose $5 billion in 2024 (which The Information had estimated back in July), and that the company expected to raise the price of ChatGPT's premium product to $44-a-month over the next five years, and intended to increase the price of ChatGPT to $22-a-month by the end of 2024, a pale horse I've warned you of in the past.

Interestingly (and worryingly), the article also confirms another hypothesis of mine — that "fund-raising material also signaled that OpenAI would need to continue raising money over the next year because its expenses grew in tandem with the number of people using its products." In simpler terms, OpenAI will likely raise $6.5 billion in funding, and then have to do so again in short order, likely in perpetuity.

The Times also reports that OpenAI is making estimates that I would describe as "fucking ridiculous." OpenAI's monthly revenue hit $300 million in August, and the company expects to make $3.7 billion in revenue this year (the company will, as mentioned, lose $5 billion anyway), yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029, a statement so egregious that I am surprised it's not some kind of financial crime to say it out loud.

For some context, Microsoft makes about $250 billion a year, Google about $300 billion a year, and Apple about $400 billion a year.

To be abundantly clear, as it stands, OpenAI currently spends $2.35 to make $1.

OpenAI loses money every single time that somebody uses its products, and while it might make money selling premium subscriptions, I severely doubt it’s turning a profit on these customers, and it’s certainly losing money on any and all power users. As I've said before, I believe there's also a subprime AI crisis brewing because OpenAI's API services — which let people integrate its various models into external products — are currently priced at a loss, and increasing prices will likely make this product unsustainable for many businesses currently relying on these discounted rates.

As I've said before, OpenAI is unprofitable, unsustainable and untenable in its current form, but I think it's important to explain exactly how untenable it is, and I'm going to start with a few statements:

  • For OpenAI to hit $11.6 billion of revenue by the end of 2025, it will have to more than triple its revenue.
  • At the current cost of revenue, it will cost OpenAI more than $27 billion to hit that revenue target. Even if it somehow halves its costs, OpenAI will still lose $2 billion (there's a quick sketch of this arithmetic after this list).
  • OpenAI has not had anything truly important since the launch of GPT-3.5, and its recent o1 model has not been particularly impressive. It's also going to be much, much more expensive to run, as the "chain-of-thought" "reasoning" that it does requires a bunch of extra calculations (an indeterminate amount that OpenAI is deliberately hiding), and OpenAI can't even seem to come up with a meaningful use case.
  • OpenAI's products are increasingly-commoditized, with Google, Meta, Amazon and even Microsoft building generative AI models to compete. Worse-still, these models are all using effectively-identical training data (and they're running out!), which makes their outputs (and by extension their underlying technology) increasingly similar.
  • OpenAI's cloud business — meaning other companies connecting their services to OpenAI's API — is remarkably small, to the point that it suggests there are weaknesses in the generative AI industry writ large. It’s extremely worrying that the biggest player in the game only makes $1 billion (less than 30% of its revenue) from providing access to its models.
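
As promised, here’s the quick sketch of that cost arithmetic, using only the figures cited in this piece: an $11.6 billion revenue target for 2025, and roughly $2.35 of cost for every $1 of revenue.

```python
# A rough sketch of the arithmetic behind the second bullet above, using the
# figures cited in this piece: OpenAI spends roughly $2.35 to make $1 of revenue.

revenue_target_2025 = 11.6          # billions of dollars, OpenAI's reported 2025 projection
cost_per_dollar_of_revenue = 2.35

cost_at_current_ratio = revenue_target_2025 * cost_per_dollar_of_revenue
print(f"Cost to hit $11.6B at today's ratio: ${cost_at_current_ratio:.2f}B")  # ~$27.26B

# Even if costs were somehow halved, the loss would still be around $2 billion.
loss_if_costs_halved = cost_at_current_ratio / 2 - revenue_target_2025
print(f"Loss with costs halved: ${loss_if_costs_halved:.2f}B")  # ~$2.03B
```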

And, fundamentally, I can find no compelling evidence that suggests that OpenAI will be able to sustain this growth. In fact, I can find no historical comparison, and believe that OpenAI's growth is already stumbling.

Let's take a look, shall we?

How Does OpenAI Make Money?

To do this right, we have to lay out exactly how OpenAI makes money.

According to the New York Times, OpenAI expects ChatGPT to make about $2.7 billion in revenue in 2024, with an additional $1 billion coming from "other businesses using its technology."

Let's break this down.

ChatGPT Plus, Teams, and Enterprise — 73% of revenue (approximately $2.7 billion).

  • OpenAI sells access to ChatGPT Plus to consumers for $20 a month, offering faster response times, "priority access to new features," and 24/7 access to OpenAI's models, with "5x more messages for GPT-4o," access to image generation, data analysis and web browsing. Importantly, OpenAI can use anything you do as training data, unless you explicitly opt-out.
  • OpenAI sells access to a "Teams" version of ChatGPT Plus, a self-service product that allows you to share chatbots between team users, costing $25-a-user-a-month if paid annually (so $300 a year per-user), and $30-a-user-a-month if paid monthly. From this tier up, your data is excluded by default from that used to train OpenAI's models.
  • OpenAI sells "enterprise" subscriptions that include an expanded context window for longer prompts (meaning you can give more detailed instructions), admin controls, and "enhanced support and ongoing account management."
    • It isn't clear how much this costs, but a Reddit thread from a year ago suggests it's $60-a-user-a-month, with a minimum of 150 seats on an annual contract.
    • I don’t know for certain, but it’s likely OpenAI offers some kind of bulk discount for large customers that buy in volume, as is the case with pretty much every enterprise SaaS business. I’ll explain my reasoning later in this piece. 
    • Assuming this is the case, that’s bad for OpenAI, as generative AI isn’t like any other SaaS product. Economies of scale don’t really work here, as servicing each user has its own cost (namely, the cloud computing power used to answer queries). That cost-per-user doesn’t decrease as you add more customers. You need more servers. More GPUs. 
    • Cutting prices, therefore, only serves to slash whatever meager margins exist on those customers, or to turn those potentially-profitable customers into a loss center.     

Licensing Access To Models And Services — 27% of revenue (approximately $1 billion).

  • OpenAI makes the rest of its money by licensing access to its models and services via its API. One thing you notice, when looking at its pricing page, is the variety of models and APIs available, and the variation in pricing that exists.
  • OpenAI offers a lot of options: its most powerful GPT-4o model; the less-powerful-yet-cheaper GPT-4o-mini model; the "reasoning" model o1 (and its "mini" counterpart); a "text embeddings" API that is used primarily for tasks where you want to identify anomalies or relationships in text, or classify stuff in text; an "assistants API" for building assistants into an application (which in turn connect to one of the other models, which includes things like interpreting code or searching for files); three different image generation models; three different audio models; and a bunch of older legacy APIs and models.
  • In many cases, customers can get a 50% discount by using the Batch API. This delays completion by as much as 24 hours and requires all tasks to be submitted in one batch, rather than as-and-when. This might be useful for using GPT to dig through masses of data.
    • For example, when using the Batch API, the cost of using GPT-4o drops from $5 per 1m input tokens to $2.50, and from $15 per 1m output tokens to $7.50 (there's a rough cost sketch after this list).
    • Batch pricing is not available for o1-preview.
    • Additionally, this discount is not available when buying training tokens for fine-tuning models (although you still get the same discount for input and output tokens).
    • Batch pricing is not available for DALL-E, the Assistants API, or the audio models.
    • It’s also not available for GPT-3.5-turbo-instruct and the latest 4-o model.
  • The pricing of these products gets a little messy, much like it does with basically every cloud company.
  • OpenAI also makes around $200 million a year selling access to its models through Microsoft, according to Bloomberg.
  • In conclusion, this means that OpenAI makes roughly $800 million a year by directly selling access to its API, with a further $200 million coming from an external channel.
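
As promised, here’s a rough sketch of what that Batch API discount means in practice, using the GPT-4o prices quoted above. The workload size is a made-up example, not anything from OpenAI’s disclosures.

```python
# A back-of-the-envelope sketch of the Batch API discount described above,
# using the GPT-4o prices quoted in this piece ($5 / $15 per million input /
# output tokens, halved for batch jobs). The workload size is hypothetical.

input_tokens = 200_000_000    # hypothetical job: 200 million input tokens
output_tokens = 50_000_000    # and 50 million output tokens

standard_cost = (input_tokens / 1_000_000) * 5.00 + (output_tokens / 1_000_000) * 15.00
batch_cost = (input_tokens / 1_000_000) * 2.50 + (output_tokens / 1_000_000) * 7.50

print(f"Standard: ${standard_cost:,.2f}")  # $1,750.00
print(f"Batch:    ${batch_cost:,.2f}")     # $875.00 — half price, in exchange for waiting up to 24 hours
```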

As a result of these numbers, I have major concerns about the viability of OpenAI's business, and the generative AI market at large. If OpenAI — the most prominent name in all of generative AI — is only making a billion dollars a year from this, what does that say about the larger growth trajectory of this company, or actual usage of generative AI products?

I'll get to that in a bit.

First, we've gotta talk about the dollars.

The Revenue Problem

So, as it stands, OpenAI makes the majority — more than 70% — of its revenue from selling premium access to ChatGPT.

A few weeks ago, The Information reported that ChatGPT Plus had "more than 10 million paying subscribers," and that it had 1 million more that were paying for  "higher-priced plans for business teams." As I've laid out above, this means that OpenAI is making about $200 million a month from consumer subscribers, but "business teams" is an indeterminate split between teams ($25-a-user-a-month paid annually) and enterprise (at least $60-a-user-a-month, paid annually, with a minimum of 150 seats).

One important detail: 100,000 of the 1 million business customers are workers at management consultancy PwC, which has also become OpenAI's "first partner for selling enterprise offerings to other businesses."  It isn't clear whether these are enterprise accounts or teams accounts, or whether PwC is paying full price (I'd wager it isn’t).

Here’s how this would play out in revenue terms across several assumed divisions of the customer base, with the assumption that every Teams customer is paying $27.50 (that plan costs either $25 or $30 a month, depending on whether you pay annually or monthly, so for the sake of fairness, I went with the middle ground) and every Enterprise customer is paying $60 a seat. From there, we can run some hypothetical monthly revenue numbers based on a million "higher-priced plans for business teams" (the arithmetic is sketched out after this list).

  • 25% Enterprise, 75% Teams: $35,625,000
  • 50% Enterprise, 50% Teams: $43,750,000
  • 75% Enterprise, 25% Teams: $51,875,000
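
Here’s the arithmetic behind those three scenarios, under the stated assumptions: a million business seats, Teams at $27.50 a month, and Enterprise at the reported $60 a seat.

```python
# The arithmetic behind the three scenarios above, under the stated assumptions:
# one million "business" seats, Teams at $27.50/month (splitting the $25/$30
# price points) and Enterprise at the reported $60/month per seat.

TEAMS_PRICE = 27.50
ENTERPRISE_PRICE = 60.00
BUSINESS_SEATS = 1_000_000

for enterprise_share in (0.25, 0.50, 0.75):
    enterprise_seats = BUSINESS_SEATS * enterprise_share
    teams_seats = BUSINESS_SEATS - enterprise_seats
    monthly_revenue = enterprise_seats * ENTERPRISE_PRICE + teams_seats * TEAMS_PRICE
    print(f"{enterprise_share:.0%} Enterprise: ${monthly_revenue:,.0f} per month")

# 25% Enterprise: $35,625,000 | 50%: $43,750,000 | 75%: $51,875,000
```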

Sadly, I don't think things are that good, and I honestly don't think these would be particularly-impressive numbers to begin with.

We can actually make a more-precise estimate by working backwards from the New York Times' estimates. ChatGPT Plus has 10 million customers, making OpenAI around $2.4 billion a year (ten million users spending $20 each month equates to $200 million. Multiply that by 12, you get $2.4 billion). Subtract that from the roughly $2.7 billion that ChatGPT is expected to bring in, and business users make up about $300 million a year in revenue, or $25 million a month.

That is, to be frank, extremely bad. These are estimates, but even if they were doubled, these would not be particularly exciting numbers.

For all the excitement about OpenAI's revenue — putting aside the fact that it spends $2.35 to make $1 — the majority of the money it makes is from subscriptions to ChatGPT Plus for consumers, though one can fairly say there are professionals that use it under the consumer version too.

While 10 million paying subscribers might seem like a lot, "ChatGPT" is effectively to generative AI what "Google" is to search. Ten million people paying for this is table stakes. 

OpenAI has been covered by effectively every single media outlet, is mentioned in almost every single conversation about AI (even when it's not about generative AI!), and has the backing and marketing push of Microsoft and the entirety of Silicon Valley behind it. ChatGPT has over 200 million weekly users, and the New York Times reports that 350 million people used OpenAI's services each month as of June (though it's unclear if that includes those using the API). Collectively, this means that OpenAI — the most popular company in the industry — can only convert about 3% of its users.

This might be because it's not obvious why anyone should pay for a premium subscription. Paying for ChatGPT Plus doesn't dramatically change the product, nor does it offer a particularly-compelling new use case for anyone other than power users. As a company, OpenAI is flat-out terrible at product. While it may be able to attract hundreds of millions of people to dick around with ChatGPT (losing money with every prompt), it's hard to convert them because you have to, on some level, show the user what ChatGPT can do to get them to pay for it… and there isn't really much you can charge for, other than limiting how many times they can use it.

And, if we're honest, it still isn't obvious why anyone should use ChatGPT in the first place, other than the fact everybody is talking about it. You can ask it to generate something — a picture, a few paragraphs, perhaps a question — and at that point say "cool" and move on. I can absolutely see how there are people who regularly use ChatGPT's natural language prompts to answer questions that they can't quite phrase (a word that's on the tip of their tongue, a question they're not sure how to phrase, or to brainstorm something) but beyond that, there really is no "sticky" part of this product beyond "a search engine that talks back to you."

That product is extremely commoditized. The free version of ChatGPT is effectively identical to the free version of Anthropic's Claude, Meta's AI assistant, Microsoft's Copilot, and even Twitter's "Grok." They all use similar training data, all give similar outputs, and are all free. Why would you pay for ChatGPT Plus when Meta or Microsoft will give you their own spin on the same flavor? Other than pure brand recognition, what is it that ChatGPT does that Copilot (powered by ChatGPT) doesn't? And does that matter to the average user?

I'd argue it doesn't. I'd also argue that those willing to pay for a "Plus" subscription are more likely to use the platform way, way more than free users, which in turn may (as one Redditor hypothesized regarding Anthropic's "Claude Pro" subscription) wipe out the revenue from said premium subscriber. While there's a chance that OpenAI could have a chunk of users that aren't particularly active, one cannot run a business based on selling stuff you hope that people won't use.

A note on “free” products: Some of you may suggest that OpenAI having 350 million free users may be a good sign, likely comparing it to the early days of Facebook, or Google. It’s really important to note how different ChatGPT is to those products. While Facebook and Google had cloud infrastructure costs, they were dramatically lower than OpenAI’s, and both Facebook and Google had (and have) immediate ways to monetize free users.

Both Meta and Google monetize free users through advertising that is informed by their actions on the platform, which involves the user continually feeding the company information about their preferences based on their browsing habits across their platforms. As a result, a “free” user is quite valuable to these companies, and becomes more so as they interact with the platform more.

This isn’t really the case with OpenAI. Each free user of ChatGPT is, at best, a person that can be converted into a paying user. While OpenAI can use their inputs as potential training data, that’s an infinitesimal amount of value compared to its operating costs. Unlike Facebook and Google, ChatGPT’s most frequent free users actually become less valuable over time, and become a burden on a system that already burns money.

I’ll touch on customer churn later, but one more note about ChatGPT Plus users: as with any other consumer-centric subscription product, these customers are far more likely to cut their spending when they no longer feel like they’re getting value from the product, or when their household budgets demand it. Netflix — the biggest name in streaming — lost a million customers in 2022, around the time of the cost-of-living crisis (and, from 2025, it plans to stop reporting subscriber numbers altogether).

ChatGPT Plus is likely, for many people, a “lifestyle product.” And the problem is that, when people lose their jobs or inflation spikes, these products are the first to get slashed from the household budget.

OpenAI also has a unique problem that makes it entirely different to most SaaS businesses — the cost of delivering the product. While a 3% conversion rate from free to paying customers might, for a typical SaaS product, sit on the low side of "good," those products are nowhere near as expensive to run as software built on generative AI.

There's also another wrinkle.

If the majority of OpenAI's revenue — over 70% — comes from people paying for ChatGPT Plus, then that heavily suggests the majority of its compute costs come from what is arguably its least-profitable product. The only alternative is that OpenAI's compute costs are so high that, despite generating over 70% of its revenue, ChatGPT creates so much overhead that it sours the rest of the business.

You see, ChatGPT Plus is not a great business. It's remarkable that OpenAI found 10 million people to pay for it, but how do you grow that to 20 million, or 40 million?

These aren't idle questions, either. At present, OpenAI makes $225 million a month — $2.7 billion a year — by selling premium subscriptions to ChatGPT. To hit a revenue target of $11.6 billion in 2025, OpenAI would need to grow revenue from ChatGPT customers to roughly 310% of what it makes today.

If we consider the current ratio of Plus subscriptions to Teams and Enterprise subscriptions — about 88.89% to 11.11% — OpenAI would need to find 18.29 million new paying users (assuming a price increase of $2 a month), while also retaining every single one of its current ChatGPT Plus users at the new price point, for a total of $7.4 billion, or $616 million or so a month. It would also have to make $933 million in revenue from its business and enterprise clients, which, again, would require OpenAI to more-than-triple its current users.
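To make the scale of that jump concrete, here's a minimal sketch of the arithmetic, using the figures above. The 2025 split between consumer and business revenue is assumed to match today's ratio, and any small rounding differences from the piece's own numbers are mine.

```python
# Rough reconstruction of the arithmetic above. All inputs are the piece's own figures;
# the 2025 Plus/business split is assumed to match today's ~88.89%/11.11% ratio.
current_consumer_revenue = 225_000_000 * 12   # ~$2.7bn/year from ChatGPT subscriptions
current_plus_users = 10_000_000               # paying subscribers today
new_price = 22                                # dollars per month after a $2 increase

consumer_target = 7_400_000_000               # the piece's 2025 figure for consumer revenue
users_needed = consumer_target / (new_price * 12)
print(f"paying users needed: {users_needed / 1e6:.1f}m")                # ~28m total, ~18m of them new
print(f"monthly consumer revenue: ${consumer_target / 12 / 1e6:.0f}m")  # ~$617m a month

plus_share, business_share = 0.8889, 0.1111
business_target = consumer_target * (business_share / plus_share)
print(f"business/enterprise target: ${business_target / 1e6:.0f}m")     # ~$925m; the piece cites $933m

growth = (consumer_target + business_target) / current_consumer_revenue
print(f"ChatGPT revenue must reach ~{growth:.0%} of today's run rate")  # i.e. more than triple
```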

OpenAI's primary revenue source is one of the most easily-commoditized things in the world — a Large Language Model in a web browser — and its competitor is Mark Zuckerberg, a petty king with a huge war chest who can never, ever be fired, even with significant investor pressure. Even if that wasn't the case, the premium product that OpenAI sells is far from endearing, still looking for a killer app a year-and-a-half into its existence, with its biggest competitor being the free version of ChatGPT.

There are ways that OpenAI could potentially turn this around, but even a battalion of experienced salespeople will still need paying, and will have the immediate job of "increase revenue by 300%" for a product that most people have trouble explaining.

No, really. What is ChatGPT? Can you give me an answer that actually explains what the product does? What is the compelling use case that makes this a must-have?

I am hammering this point because this is the majority of OpenAI's revenue. OpenAI lives and dies on the revenue gained from ChatGPT, a product that hasn't meaningfully changed since it launched beyond adding new models that do, for most users, exactly the same thing. While some might find ChatGPT's voice mode interesting, "interesting" just isn't good enough today.

And to drill down further, the majority of OpenAI's revenue is from ChatGPT Plus, not its Enterprise or Teams products, meaning that hiring a sales team is far from practical. How do you sell this to consumers, or professionals? Even Microsoft, which has a vast marketing apparatus and deep pockets, struggled to sell Copilot — which is based on OpenAI’s GPT models — with its weird (and presumably expensive) Super Bowl ads, or with the countless commercials that dotted the 2024 Olympic Games.

To triple its users, ChatGPT must meaningfully change, and do so immediately, or demonstrate multiple meaningful, powerful use cases that are so impressive that 18 million new people agree to pay $22 a month. That is an incredible — and some might say insane — goal, and one that I do not think this company is capable of achieving.

Yet this is far from the most worrying part of the current OpenAI story.

The Cloud Services Problem

What's astounded me about this whole story is how little of OpenAI's revenue comes from providing other companies the means to integrate generative AI into their systems, for two big reasons:

  1. Assuming that OpenAI makes $1 billion a year selling API access (and thus letting you integrate its models into your products), it suggests that even the biggest company in generative AI can't find enough customers to make its cloud services viable.
  2. This in turn suggests that there is a remarkably small amount of demand for generative AI integrations, or considered another way, that the companies connecting to OpenAI aren't making them very much money.
    1. This is when things get confusing. According to The Information, OpenAI is projected to get an annualized $200 million from selling access to its models via Microsoft's Azure OpenAI business, where it gets a 20% cut of all revenue. That suggests that Microsoft was, at the time (in June), on course to make a billion dollars in revenue a year from OpenAI's models. Yet a story from two weeks later suggested that OpenAI was "exceeding what Microsoft makes from an equivalent business." 
    2. Compare that to the story from the New York Times, which suggested that OpenAI is still only on course to make one billion dollars from selling access to its own models — which highly suggests that growth has plateaued for its cloud services, and that Microsoft is making about as much from selling access to OpenAI's models as OpenAI does itself.
    3. It also highlights how ineffective OpenAI’s own sales channels are. If a reseller — which Microsoft effectively is — can match OpenAI’s own sales figures, it suggests that OpenAI isn’t really good at selling its own stuff. Or, perhaps, that the people most likely to use OpenAI’s models and APIs would rather buy access through their existing cloud infrastructure provider than directly from the developer itself. 
    4. And so, it has two options: Either it relies on partnerships and external sales channels, allowing it to potentially increase the gross number of customers, but at the expense of the money it makes, or it can build a proper sales and marketing team.
    5. Both options kinda suck. The latter also promises to be expensive, and comes with no guarantee of success.
  3. There’s very little information about how many developers actively use OpenAI’s models and APIs in their code. At the 2024 OpenAI DevDay event — its developer conference, which took place on October 1 — the company said that over 3,000,000 developers are building apps using OpenAI’s infrastructure. That works out to about $333 per developer (see the sketch after this list for the rough math).
    1. To be fair, any SaaS company that offers API integrations — think Twilio — has a large chunk of users that are either hobbyists, or people just experimenting with a technology out of curiosity. That demographic inevitably skews the average revenue per developer. 
    2. I imagine that OpenAI — by virtue of being the biggest name in generative AI — has a disproportionate number of those non-paying or low-paying users. People who either made an account, but never actually used it, or spent $10 on building some passion project that they can brag about on Twitter or Hacker News, but never actually commercialize.
    3. I also imagine that some companies are using GPT internally, and that these probably account for a huge proportion of its API/model revenue, especially considering that there aren’t many hugely popular consumer apps using GPT. 
    4. PWC — which, as mentioned earlier, recently bought 100,000 ChatGPT Enterprise seats for its own internal use — also has a custom GPT model that it uses in-house called (and I swear I’m not making this up) ChatPWC. According to its 2023 financial review, PWC has around 364,000 employees.
    5. I can easily imagine this company being one of OpenAI’s “whales” — even though the reviews of ChatPWC on the PWC subreddit are mixed at best, with some finding the service actually useful, and others describing it as “absolutely shite.”
    6. Let’s go back to Twilio — a company that makes it easy to send SMS messages and push notifications. Over the past quarter, it made around $1bn in revenue. That’s what OpenAI made from renting out its models/APIs over the past year.
    7. Twilio also made roughly $4bn over the past four quarters — which is more than OpenAI’s projected revenue for the entirety of 2024. OpenAI, I remind you, is the most hyped company in tech right now, and it’s aiming for a $150bn valuation. Twilio’s market cap is, at the time of writing, just under $10bn.
    8. Does this sound like an in-demand technology to you? Does this sound like something with a vast, untapped market?
  4. One last thing: At the latest OpenAI DevDay event, the company said it had reduced the cost of accessing its APIs by 99 percent over the past two years — although TechCrunch noted that this was likely due to competition from Meta and Google. 
    1. Remember when I said that generative AI was an incredibly commoditized product? 
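To make the arithmetic in that list explicit, here's a minimal sketch using the figures cited above; it assumes the 20% Azure revenue cut and the $1 billion annualized API figure are both accurate as reported.

```python
# Rough reconstruction of the API-side arithmetic from the reporting cited above.
azure_payment_to_openai = 200_000_000   # annualized, per The Information
azure_revenue_share = 0.20              # OpenAI's reported cut of Azure OpenAI revenue

implied_microsoft_revenue = azure_payment_to_openai / azure_revenue_share
print(f"implied Azure OpenAI revenue: ${implied_microsoft_revenue / 1e9:.1f}bn/year")  # ~$1bn

openai_api_revenue = 1_000_000_000      # OpenAI's own annualized model/API sales
developers = 3_000_000                  # claimed at DevDay 2024
print(f"average revenue per developer: ${openai_api_revenue / developers:.0f}")        # ~$333

combined = implied_microsoft_revenue + openai_api_revenue
print(f"combined OpenAI + Microsoft model revenue: ${combined / 1e9:.0f}bn/year")      # ~$2bn
```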

In the event that OpenAI and Microsoft are each making about a billion dollars in annualized revenue — again, these are estimates based on current growth trajectories — it heavily suggests that there is...two billion dollars of revenue? For Microsoft and OpenAI combined?

That's god damn pathetic! That's really, really bad! And the fact these "annualized" figures have changed so little since June means that growth has slowed, because otherwise said numbers would have increased (as "annualized" figures track the most recent month's revenue).

Let me explain my reasoning a bit. “Annualized” revenue is a somewhat-vague term that can mean many things, but taking The Information’s definition from mid-June (when annualized revenue was $3.4 billion), annualized revenue is “a measure of the past month’s revenue multiplied by 12.” That suggests that OpenAI’s revenue in that month was $283 million. The New York Times updated that number in its piece, saying that OpenAI made $300 million of revenue in the month of August, which works out to $3.6 billion in annualized revenue, and OpenAI expects to hit $3.7 billion this year, which works out to roughly $308 million a month.

Just so we’re abundantly clear — this means that OpenAI, unquestionably the leader, the most prominent brand, and the first thought anybody will have when integrating generative AI, is only making about $80 million a month selling access to its models?
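For clarity, here is how those monthly and annualized figures relate; a minimal sketch, assuming nothing beyond the definition quoted above and the piece's own revenue figures.

```python
# "Annualized" here means the most recent month's revenue multiplied by 12.
def annualize(monthly_revenue: float) -> float:
    return monthly_revenue * 12

print(annualize(283_000_000))   # ~$3.4bn, The Information's June figure
print(annualize(300_000_000))   # $3.6bn, implied by the NYT's August figure
print(3_700_000_000 / 12)       # ~$308m/month needed to hit $3.7bn for the year
print(1_000_000_000 / 12)       # ~$83m/month from model/API access, the "$80 million" above
```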

This heavily-suggests that generative AI, as a technology, doesn’t necessarily have a product-market fit. According to a survey by Andreessen Horowitz earlier in the year, “the 2023 market share of closed-source models [was estimated at] 80 to 90%, with the majority of share going to OpenAI.” Another survey from IOT Analytics published late last year suggests that number might look a little bit different, with 39% of the market share going to OpenAI and 30% going to Microsoft. 

Assuming that the latter numbers are true, this suggests that the generative AI market is incredibly small. If OpenAI — which dominates with, I’d wager, at least a 30% market share — is only making $1 billion a year from selling access to its models, then, even at this stage in the massively-inflated hype bubble, there may not even be $10 billion of annual revenue from companies integrating generative AI into their products. And let’s be honest, this is where all of the money should be. If this stuff is the future of everything, why is the revenue stream so painfully weak? Why isn’t generative AI in everything — not just big apps and services hoping to ride a wave as it crests, oblivious to the imminent collapse — but all apps, and present in a big way, where it’s core to the app’s functionality?
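A minimal sketch of that market-sizing logic follows. It assumes OpenAI's revenue share roughly tracks the surveyed market shares, which is an assumption on my part, since those surveys measure adoption rather than dollars.

```python
# If OpenAI makes ~$1bn/year from model access and holds a given share of the market,
# the implied total market is revenue divided by share. The shares come from the
# surveys cited above; treating them as revenue shares is an assumption.
openai_api_revenue = 1_000_000_000

for share in (0.30, 0.39, 0.80):
    implied_market = openai_api_revenue / share
    print(f"at {share:.0%} share, implied market: ${implied_market / 1e9:.1f}bn/year")
# Even at a mere 10% share, the implied market would only be $10bn/year.
```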

OpenAI making so little selling access to its models suggests that, despite the hype-cycle, there either isn’t the interest in integrating these products from developers, or these integrations aren’t that useful, attractive or impressive to users. Again, these products are charged on usage — and so, it’s possible for generative AI to be integrated into a service, but not actually drive much revenue for OpenAI.

While ChatGPT has brand recognition, companies integrating OpenAI’s models in their products are far more indicative of the long-term health of both the company and the industry itself, because if OpenAI can’t convince people to integrate and use this shit, do you really think other companies are succeeding? 

There are two counterarguments:

  1. OpenAI is struggling to sign up developers, and this problem is unique to OpenAI. 
    1. This argument is easily-refuted. Microsoft is on course to make a billion dollars selling OpenAI’s models itself this year, which only reinforces my point that the generative AI market is small. Again, if Microsoft can’t make more than a billion dollars on this, how much could the rest of the market be making?
  2. OpenAI has tons of adoption, but it’s deliberately underpricing its models to scale.
    1. This is perhaps the most plausible, as by its own admission, OpenAI has reduced the cost of its APIs by 99 percent over the past two years.
    2. If that’s the case, then OpenAI has to raise prices. Also, Microsoft has to raise prices. And that will likely spook those who have gotten used to these subsidized prices — which I discussed in The Subprime AI Crisis a few weeks ago — and they’ll start to disentangle themselves from generative AI.
    3. If this is scale, it isn’t working very well? Unless, of course, OpenAI has realized that pricing is its only lever to pull that can influence adoption.

While I can’t say for certain, and I’ll happily admit if I’m wrong in the future, the numbers here suggest that OpenAI’s cloud services business — as in integrating its supposedly industry-leading technology into other products — is nowhere near as viable a business as selling subscriptions to ChatGPT, which in turn suggests that there is either a lack of interest in integrating generative AI, a lack of adoption once said integration has happened, or a lack of usage by users of features powered by generative AI itself. 

The only other avenue is that OpenAI isn’t charging what it believes these services are worth.

The “Staying Alive” Problem

As discussed earlier in the piece, OpenAI needs to more-than-triple revenue in the next 15 months to hit $11.6 billion in sales. Furthermore, at its current burn rate, OpenAI is spending $2.35 to make $1 — meaning that $11.6 billion in revenue will cost roughly $27 billion to make.
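A minimal sketch of that burn-rate arithmetic, assuming the cost-per-dollar ratio holds at scale:

```python
# At the piece's estimated burn rate of $2.35 spent per $1 of revenue, hitting the
# 2025 revenue target implies an enormous cost base. Assumes the ratio stays constant.
cost_per_dollar_of_revenue = 2.35
revenue_target_2025 = 11_600_000_000

implied_cost = revenue_target_2025 * cost_per_dollar_of_revenue
print(f"implied cost to generate $11.6bn: ${implied_cost / 1e9:.1f}bn")            # ~$27.3bn
print(f"implied loss: ${(implied_cost - revenue_target_2025) / 1e9:.1f}bn")        # ~$15.7bn
```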

While costs could foreseeably come down, all signs point to them increasing. As noted above, GPT-4 cost $100 million to train, and more complex future models will cost hundreds of millions or even a billion dollars to train. The Information also estimated back in July that OpenAI's training costs would balloon to $3 billion in 2024, and it’s fair to assume that models like o1 and GPT-5 (also known as “Orion”) will be significantly more expensive to train. 

OpenAI also has a popularity problem. While it’s usually great news that a product has hundreds of millions of free users, every single time somebody uses the service costs OpenAI money, and lots of it. The Information estimated OpenAI will spend around $4 billion on server costs in 2024 to run ChatGPT and host other companies running services using GPT and its other models, effectively meaning that every dollar of revenue is immediately eaten by the costs of acquiring it. And that’s before you factor in paying the more-than 1,500 people that work at the company (another $1.5 billion in costs alone, and OpenAI expects to hire another 200 by the end of the year), and other costs like real estate and taxes.
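Stacking the cost estimates above against projected revenue gives a rough sense of the 2024 picture; a minimal sketch, treating the piece's own estimates as the whole cost base (which is, if anything, generous).

```python
# Rough 2024 cost stack from the estimates cited above. All figures are the piece's
# own estimates; treating them as the complete cost base is a simplification.
server_costs = 4_000_000_000     # The Information's estimate for running ChatGPT and hosting
training_costs = 3_000_000_000   # The Information's estimate for 2024 training
payroll = 1_500_000_000          # ~1,500 staff, per the piece

total_costs = server_costs + training_costs + payroll
revenue_2024 = 3_700_000_000     # projected 2024 revenue

print(f"estimated costs: ${total_costs / 1e9:.1f}bn")                       # ~$8.5bn
print(f"implied shortfall: ${(total_costs - revenue_2024) / 1e9:.1f}bn")    # ~$4.8bn
```

That shortfall lands close to the roughly $5 billion annual burn discussed below.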

As a result, I am now updating my hypothesis. I believe that OpenAI, once it has raised its current ($6.5 billion to $7 billion) round, will have to raise another round of at least the same size by July 2025, if not sooner. To quote the New York Times’ reporting, “fund-raising material also signaled that OpenAI would need to continue raising money over the next year because its expenses grew in tandem with the number of people using its products.”

OpenAI could potentially reduce costs, but it has shown little evidence that it can, and its one attempt (the more efficient “Arrakis” model) failed to launch.

One would also imagine that this company — now burning $5 billion a year — is doing all it can to bring costs down, and even if it isn’t, I severely doubt that meaningful cuts are possible. After all, Anthropic is facing exactly the same problem, and is on course to lose $2.7 billion in 2024 on $800 million in revenue, which makes me think the likelihood of this being a quick fix is extremely low.

There is, however, another problem, one caused by its current fundraise.

OpenAI is, at present, raising $6.5 to $7 billion in capital at a $150 billion valuation. Assuming it completes this raise — which is likely to happen — it will mean that all future rounds have to be at $150 billion or higher. A lower valuation (called a down-round) would both upset current investors and send a loud signal to the market that the company is having trouble, which overwhelmingly suggests that OpenAI’s only way to survive is to raise its next round, likely another giant raise of at least $10 billion, at a valuation of $200 billion or more.

For context, the biggest IPO valuation in US corporate history was Alibaba, which debuted on the New York Stock Exchange with a market cap of nearly $170bn. That figure is more than double the runner-up, Facebook, which had a value of $81bn. Are you telling me that OpenAI — a company that burns vast piles of money like the Joker in The Dark Knight, with no obvious path to profitability and a dubious addressable market — is worth more than Alibaba, which, even a decade ago, was a giant of a company?

Further souring the terms are its prior commitments to Microsoft, which owns 75% of all future profits (ha!) until it recoups its $13 billion investment, after which point it receives 49% of all profits until it is paid $1.56 trillion, or 120 times the original amount, though the move to a for-profit structure may remove the limits on OpenAI’s ridiculous “profit participation units” where previous investors are given a slice of (theoretical) profits from the least profitable company in the world.
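To put those terms in perspective, a minimal sketch of the payback arithmetic as described above:

```python
# Microsoft's reported terms: 75% of profits until its $13bn investment is recouped,
# then 49% of profits until it has been paid 120x the original amount.
investment = 13_000_000_000
payback_multiple = 120

total_owed = investment * payback_multiple
print(f"total owed to Microsoft before the cap lifts: ${total_owed / 1e12:.2f}tn")  # $1.56tn
```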

Nevertheless, Microsoft’s iron grip on the future profits of this company — as well as complete access to its research and technology — means that anyone investing is effectively agreeing to help Microsoft out, and makes any stock (or PPUs) that Microsoft owns far more valuable.

There’s also the other problem: how OpenAI converts its bizarre non-profit structure into a for-profit, and whether that’s actually possible. Investors are, according to the New York Times, hedging their bets. OpenAI has two years from the deal’s close to convert to a for-profit company, or its funding will convert into debt at a 9% interest rate.

As an aside: how will OpenAI pay that interest in the event it can't convert to a for-profit business? Will it raise money to pay the interest? Will it get a loan?

As a result, OpenAI now faces multiple challenges:

  1. It must successfully close its current $6.5 billion to $7 billion round within the next month — and the deal has been “almost closing” for a couple of weeks.
  2. It must, on closing said deal, convert itself into a for-profit company within two years.
  3. In the event that it wishes to grow, OpenAI will have to close another round of funding by the middle of 2025, raising both more money (likely $10 billion) and at a higher valuation (at minimum $175 billion, if not $200 billion to $250 billion).

Also, at some point, OpenAI will have to work out a way to go public, because otherwise there is absolutely no reason to invest in this company, unless you are doing so with the belief that you will be able to offload your shares in a future secondary sale. At that point, OpenAI resembles a kind of investment scam where new investors exist only to help old investors liquidate their assets rather than anyone investing in the actual value of the company itself.

And, after that, OpenAI will still need to raise more rounds of funding. ChatGPT’s free version is a poison in its system, a marketing channel that burns billions of dollars to introduce people to a product that only ten million people will actually pay for, and its future depends largely on its ability to continue convincing people to use it. And in a few months, OpenAI’s integration with Apple devices launches, meaning that millions of people will now start using ChatGPT for free on their iPhones, with OpenAI footing the bill in the hopes they’ll upgrade to ChatGPT Plus, at which point Apple will take $6 of the $20 a month subscription. 

How does this continue? How does OpenAI survive? 

What Are We Doing Here?

I realize I’ve been a little repetitive in this piece, but it’s really important to focus on the fact that the leader in the generative AI space does not appear to make that much money (less than 30% of its revenue) helping people put generative AI in their products, and makes most of its money selling subscriptions to a product that mostly coasts on hype.

While it may seem impressive that OpenAI has 10 million paying subscribers, that’s the result of literally every AI story mentioning its name in almost every media outlet, and its name being on the lips of basically everybody in the entirety of the tech industry — and a chunk of the business world at large. 

Worse still, arguably the most important product that OpenAI sells is access to its models, and it’s the least-viable part of its business, despite “GPT” being synonymous with the concept of integrating generative AI into your company. 

Because access to its APIs isn’t a particularly big business, OpenAI’s hope of hitting its revenue goals is almost entirely contingent on ChatGPT Plus continuing to grow. While this might happen in the short term (though research suggests that 11% of customers stop paying after a month and 26% after three months), ChatGPT Plus has no real moat, little product differentiation (outside of its advanced voice mode, which Meta is already working on a competitor to), and faces increasing commoditization from other models, open source platforms, and even on-device models (like on Copilot+-powered PCs).
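To illustrate why that churn matters, here's a minimal cohort sketch using the retention figures cited above. The cohort size is hypothetical, the missing month-two figure is interpolated, and churn beyond month three is assumed to stop entirely, which is generous.

```python
# Simple cohort model using the cited churn figures: 11% of new subscribers stop
# paying after one month and 26% have stopped after three. What happens beyond
# month three is unknown; this sketch assumes no further churn at all.
cohort = 1_000_000                  # hypothetical new ChatGPT Plus subscribers
price = 20                          # dollars per month

retained = [1.0, 0.89, None, 0.74]  # months 0-3; the month-two figure isn't reported
retained[2] = (retained[1] + retained[3]) / 2  # interpolate the missing month

revenue_first_year = sum(cohort * r * price for r in retained) + cohort * 0.74 * price * 8
print(f"first-year revenue from the cohort: ${revenue_first_year / 1e6:.0f}m")  # ~$187m
print(f"versus ${cohort * price * 12 / 1e6:.0f}m if nobody churned")            # $240m
```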

The features that ChatGPT is best known for — generating code, summarizing meetings, brainstorming and generating stuff — can be done on any number of other platforms, in many cases for free, and OpenAI’s biggest competitor to ChatGPT Plus is ChatGPT itself. 

And I cannot express enough how bad a sign it is that its cloud business is so thin. The largest player in the supposedly most important industry ever can only scrounge together $1 billion in annual revenue selling access to the most well-known model in the industry. 

This suggests a fundamental weakness in the revenue model behind GPT, as well as a fundamental weakness in the generative artificial intelligence market writ large. If OpenAI cannot make more than a billion dollars of revenue off of this, then it’s fair to assume that there is either a lack of interest from developers or a lack of interest from the consumers those developers are serving. 

Remember when I said that OpenAI mentioned at its recent DevDay that it had “cut costs for developers to access [their] API by 99% in the last two years” (which Max Zeff of TechCrunch posits may be due to price pressure from Google and Meta)? That suggests that OpenAI really has no room to start charging more for access to its APIs. This technology is commoditized, with few points of differentiation. It can’t, for example, raise prices on the back of a better, more capable product. OpenAI is trapped.

All of this is happening at a time when an astronomical amount of talent is leaving the company — the talent that OpenAI desperately needs to build products that people will actually pay for. 

At this point, it’s hard to see how OpenAI survives.

To continue growing ChatGPT Plus, it will have to create meaningful, mass-market use cases, or hope it can coast on a relatively-specious hype wave, one that will have to be so powerful it effectively triples its users. Even then, OpenAI will have to both find ways to give developers more reasons to integrate its models while making sure that said models provide a service that the end-user will actually appreciate, which is heavily-reliant on both the underlying technology and the ability of developers to create meaningful products with it. 

And OpenAI is getting desperate. According to Fortune, OpenAI’s culture is deeply brittle, with a “relentless pressure to introduce products” rushing its o1 model to market as Sam Altman was “eager to prove to potential investors in the company’s latest funding round that OpenAI remains at the forefront of AI development” despite staff saying it wasn’t ready. 

These aren’t the actions of a company that’s on the forefront of anything — they’re desperate moves made by desperate people burning the candle at both ends.

Yet, once you get past these problems, you run head-first into the largest one: that generative AI is deeply unprofitable to run. When every subscriber or API call loses you money, growth only exists to help flog your company to investors, and at some point investors will begin to question whether this company can stand on its own two feet. 

It can’t. 

OpenAI is a disaster in the making, and behind it sits a potentially bigger, nastier disaster — a lack of any real strength in the generative AI market. If OpenAI can only make a billion dollars as the leader in this market (with $200 million of that coming from Microsoft reselling its models), it heavily suggests that there is neither developer nor user interest in generative AI products. 

Perhaps it’s the hallucination problem, or perhaps it’s just that generative AI isn’t something that produces particularly-interesting interactions with a user. While you could argue that “somebody can work out a really cool product,” it’s time to ask why Amazon, Google, Meta, OpenAI, Apple, and Microsoft have failed to make one in the last two years.

Though ChatGPT Plus is popular, it’s clear that it operates — much like ChatGPT — as a way of seeing what generative AI can do rather than a product that customers love. I see no viable way for OpenAI to grow this product at the rate it needs to be grown, and that’s before considering its total lack of profit.

I hypothesize that OpenAI will successfully close its funding round at a $150 billion valuation in the next few weeks, but that growth is already slowing, and will slow dramatically as we enter the new year. I believe the upcoming earnings from Microsoft and Google will further dampen excitement around generative AI, which in turn will reduce the likelihood that developers will integrate GPT further into their products, while also likely depressing market interest in ChatGPT writ large.

While this bubble can continue coasting for a while, nothing about the OpenAI story looks good. This is a company lost, bleeding money with every interaction with the customer, flogging software that’s at best kind-of useful and at worst actively harmful. Unless something significantly changes — like a breakthrough in energy or compute efficiency — I can’t see how this company makes it another two years.

Worse still, if my hypothesis about the wider generative AI market is true, then there might simply not be a viable business in providing these services. While Meta and OpenAI might be able to claim hundreds of millions of people use these services, I see no evidence that these are products that millions of people will pay for long-term.

For me to be wrong, these companies will have to solve multiple intractable problems, come up with entirely-new use cases, and raise historic amounts of capital. 

If I’m right, we’re watching venture capitalists and companies like Microsoft burn tens of billions of dollars to power the next generation of products that nobody gives a shit about.