2025-10-24 08:00:00
I've been following tech news for decades, and one of the worst trends in the broader cultural conversation about technology — one that's markedly accelerated over the last decade — is the shift from talking about people who create tech to focusing on those who merely finance it.
It's time we change the story. When you see a story that claims to be about "technology", ask yourself: Is it about the people who actually made the tech, or just the people who financed it? Does it tell you anything about how the technology works?
These questions aren't being asked nearly enough. The result is a hell of a lot of "tech" stories that have approximately nothing to do with technology.
The shift to centering money movers over makers has had incredibly negative effects on innovation, accountability, and even just the basic accuracy of how stories are told about technology.
We see this play out in a number of ways. First, a huge percentage of all stories about new technologies focus solely on startups, even though only a small fraction of tech workers are employed by startups, and the vast majority of new technology innovations come from academia, the public sector, and research and development organizations within institutions outside the startup world. As I wrote nine years ago, there is no technology industry — every organization uses technology, so technological innovation can come from anywhere. But we seldom see that broad base of ideas and insight covered accurately, if it's covered at all, because it's not of interest to the investors who are hogging the spotlight.
There's also the fact that a disproportionately large number of "technology" stories are really just announcements about funding events for companies in the technology sector, which have very little to do with the merits or substance of the tech those companies create. These stories not only take time and space away from other innovations that could be covered, but also distract from talking about how the tech actually works. That erodes the ability of people who care about technology to share knowledge, which is key for driving broader innovations.
One of the great joys of being in various technology communities is how you can "find your people" — those who geek out about the same minute technical details as you. There's a profound spirit of generosity in so many tech communities, where people will go out of their way to help you troubleshoot or fix bugs or will contribute code, just to share in that spirit of creativity together. There's a magical and rewarding feeling the first time you get some code to successfully run, or the first time you get a bit of hardware to successfully boot, and people who love technology delight in helping others achieve that. I've seen this remain true for people at every stage of their career, with even some of the most expert coders in the world voluntarily spending their time helping beginning coders with questions just because they had a shared interest.
The most common reason that people create technology is because they had an idea about something cool they wanted to see in the world. That's the underlying ethos which connects tech creators together, and which motivates them to share their work as free or open source projects, or to write up their weekend hacks just for the love of nerding out. Sometimes there's enough interest that they might turn that side project into a business, but in most cases the fundamental motivation is the creative spirit. And then, sure, if that creative project needs capital to grow into its full potential, there's a place for investors to join the conversation.
That creative spirit used to be more obvious when more of the cultural story about tech featured actual makers; it's what brought me and most of my peers into this space in the first place. And all of that gets crowded out when people think the only path to creating something begins with appeasing a tiny handful of gatekeepers who control the purse strings.
There's been a larger cost to this focus on venture capitalists and financiers over coders, engineers, and inventors: It's gone to their heads. Part of the reason is that some of the investors, long ago, used to make products. A handful of them even made successful ones, and some of those successful ones were even good. But after riding on the coattails of those successes for a long time, and spending years in the bubble of praise and sycophancy that comes with being a person that people want to get money from, the egos start to grow. The story becomes about their goals, their agendas, their portfolios.
When we see something like a wildly distorted view of artificial intelligence get enough cultural traction to be considered “conventional wisdom”, despite being a deeply unpopular view held by a tiny, extremist minority within the larger tech sphere — that is the result of focusing on investors instead of inventors. Who cares what the money-movers think? We want to hear what motivated the makers!
We’re also losing the chance for people to see themselves reflected in the stories we tell about technology. The cabal of check-writers is obviously a closed cohort, a stark contrast to the warm and welcoming spirit that still suffuses the communities of actual creators. There’s also a striking lack of historical perspective in how we talk about tech today. We couldn’t imagine a film being released without talking about who the director was, or the actors; we even give out awards for the writers. But when a new app comes out, media talks to the CEO of the tech company — that’s like talking to the head of the studio about the new movie. If community voices led instead of a tiny group of tycoons, we’d get much more interesting, accurate stories.
We have so many richer stories to tell. At its best, technology empowers people in a profound and extraordinary way. I’ve seen people change their lives, even change entire communities, by getting just the barest bit of access to the right tech at the right time. There’s something so much more compelling and fascinating about finding out how things actually work, and thinking about how they might work better. The way to get there is by talking to the people who are actually making that future.
2025-10-22 08:00:00
OpenAI, the company behind ChatGPT, released their own browser called Atlas, and it actually is something new: the first browser that actively fights against the web. Let's talk about what that means, and what dangers there are from an anti-web browser made by an AI company — one that probably needs a warning label when you install it.
The problems fall into three main categories: Atlas fabricates the web's content instead of linking to it, it drags the interface back to a guess-what-to-type era, and it recruits you as an agent for its own surveillance.
When I first got Atlas up and running, I tried giving it the easiest, most obvious task I could. I looked up "Taylor Swift showgirl" to see if it would give me links to videos or playlists to watch or listen to the most popular music on the charts right now; this has to be just about the easiest possible prompt.
The results that came back looked like a web page, but they weren't. Instead, what I got was something closer to a last-minute book report written by a kid who had mostly plagiarized Wikipedia. The response mentioned some basic biographical information and had a few photos. Now we know that AI tools are prone to this kind of confabulation, but this is new, because it felt like I was in a web browser, typing into a search box on the Internet. And here's what was most notable: there was no link to her website.
Unless you were an expert, you would almost certainly think I had typed in a search box and gotten back a web page of search results. But in reality, I had typed in a prompt box and gotten back a synthesized response that superficially resembled a web page, rendered with some web technologies. Instead of a list of links to websites that had information about the topic, it had bullet points describing things it thought I should know. There were a few footnotes buried within some of those bullet points, but the clear intent was that I was meant to stay within the AI-generated results, trapped in that walled garden.
During its first run, there's a brief warning buried amidst all the other messages that says, "ChatGPT may give you inaccurate information", but nobody is going to think that means "sometimes this tool completely fabricates content, gives me a box that looks like a search box, and shows me the fabricated content in a display that looks like a web page when I type in the fake search box".
And it's not like the generated response is even that satisfying. The fake web page had no information newer than two or three weeks old, reflecting the fact that LLMs rely on whatever they've most recently been able to crawl (or gather without consent) from the web. None of today's big AI platforms update nearly as often as conventional search engines do.
Keep in mind, all of these shortcomings are not because the browser is new and has bugs; this is the app working as designed. Atlas is a browser, but it is not a web browser. It is an anti-web browser.
Back in the early 1980s, there was a popular game called Zork that was in a category called "text adventure games". The computer would say something like:
You are in a clearing in the forest. There is a rock here.
And then you would type:
Take the rock
And it would say:
Sorry, I can't do that.
So then you would type:
Pick up the rock.
And then it would say:
You have the rock.
And it would go on like this for hours while you tried in vain to guess what the hell it wanted you to type, or you discovered the outdoors, whichever came first.
There were a tiny handful of incredible nerds who thought this was fun, mostly because 3D graphics and the physical touch of another human being hadn't been invented yet. But for the most part, people would tire of the novelty because trying to guess what to type to make something happen is a terrible and exhausting user interface. This was also why people hated operating systems like MS-DOS, and why even all the Linux users reading this right now are doing so in a graphical user interface.
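To make the frustration concrete, here's a minimal sketch of that kind of interface, a hypothetical toy in Python rather than Zork's actual parser (which was far more sophisticated, and no less maddening). The only inputs that do anything are the exact phrases someone thought to script in advance:

```python
# A toy guess-the-verb game loop. Purely illustrative: the scripted
# phrases below are hypothetical, not taken from any real game.

RESPONSES = {
    "pick up the rock": "You have the rock.",
    "go north": "You follow a path deeper into the forest.",
}

def adventure():
    print("You are in a clearing in the forest. There is a rock here.")
    while True:
        command = input("> ").strip().lower()
        if command in ("quit", "q"):
            break
        # The whole problem in one line: only an exact, pre-scripted
        # phrase does anything. "Take the rock" is not "pick up the rock".
        print(RESPONSES.get(command, "Sorry, I can't do that."))

if __name__ == "__main__":
    adventure()
```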
Clicking on things is great, because you can see what your choices are, and then just choose the one you want. Tapping on things on a touch screen is even better. And this kind of discoverability was one of the fundamental innovations of the web: once anybody could make a web page, anybody could offer a clickable list of options.
In the demo for Atlas, the OpenAI team shows a user trying to find a Google Doc from their browser history. A normal user would type keywords like "atlas design" and see their browser show a list of recent pages. They would recognize the phrase "Google Docs" or the icon, and click on it to get back to where they were.
But in the OpenAI demo, the team member types out:
search web history for a doc about atlas core design
This is worse in every conceivable way. It's slower, more prone to error, and redundant. But it also highlights one of the biggest invisible problems: you're switching "modes". Normally, an LLM's default mode is to create plausible extrapolations based on its training data. Basically, it's supposed to make things up. But this demo has to explicitly walk you through "now it's time to go search my browser history" because it's coercing the AI to look through local content. And that can't be hallucinated! If you're trying to get back to a budget spreadsheet that you've created and ChatGPT decides to just make up a file that doesn't exist, you're probably not going to use that browser anymore.
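For contrast, here's a rough sketch of what the keyword version of that history search involves; the entries and field names are hypothetical, not Atlas's or any real browser's internals. The point is that keyword matching over stored history is a deterministic lookup, so an entry either exists or it doesn't, and there's nothing to hallucinate:

```python
# A sketch of keyword search over browser history. The entries and
# fields are hypothetical, not any real browser's internal format.

from dataclasses import dataclass

@dataclass
class HistoryEntry:
    title: str
    url: str

HISTORY = [
    HistoryEntry("Atlas Core Design - Google Docs", "https://docs.google.com/document/d/abc123"),
    HistoryEntry("Weather forecast", "https://example.com/weather"),
]

def search_history(query: str) -> list[HistoryEntry]:
    """Return entries whose titles contain every keyword in the query."""
    keywords = query.lower().split()
    return [
        entry for entry in HISTORY
        if all(word in entry.title.lower() for word in keywords)
    ]

# Typing "atlas design" surfaces the doc every time; it can never
# return a file that doesn't exist.
print(search_history("atlas design"))
```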
Most people on the internet aren't old enough to remember this, but people were thrilled to leave command-line interfaces behind back in the 1990s. The explosion of color and graphics and multimedia in that era made a ton of headlines, but the real gains in productivity and usability came precisely because nobody was having to guess what secret spell they had to type into their computer to get actual work done. Links were a brilliant breakthrough in making it incredibly obvious how to get to where you wanted to go on a computer.
And look, we do need innovation in browser interfaces! If Atlas were letting people use plain language to automate regular tasks they want to do online, or even just added more tools that plugged into the rest of the services that people use every day, it might represent a real leap forward.
In the new-era command-line interface of Atlas, though, we're not just facing the challenges of an inscrutable command line. There's the even larger problem that, even if you guess the right magic words, it might either simply get things wrong or completely make things up. Atlas throws away the discoverability, simplicity and directness of the web by encouraging you to navigate even through your own documents and search results with an undefined, unknowable syntax that produces unreliable results. It's another way of being anti-web.
OpenAI is clearly very motivated to gather all the data in the world into their model, regardless of whether or not they have consent to do so. This is why a lot of people have been thinking deeply about what it would take to create an Internet of consent. It's no coincidence that hundreds of people who work at OpenAI, including many of its most powerful executives, are alumni of Facebook/Meta, many from the era of that company's most egregious abuses of people's privacy. In the marketing materials and demonstrations for Atlas, OpenAI's team describes the browser as being able to act as your "agent", performing tasks on your behalf.
But in reality, you are the agent for ChatGPT.
During setup, Atlas pushes very aggressively for you to turn on "memories" (where it tracks and stores everything you do and uses it to train an AI model about you) and to enable "Ask ChatGPT" on any website, where it's following along with you as you browse the web. By keeping the ChatGPT sidebar open while you browse, and giving it permission to look over your shoulder, OpenAI can suddenly access all kinds of things on the internet that they could never get to on their own.
Those Google Docs files that your boss said to keep confidential. The things you type into a Facebook comment box but never hit "send" on. Exactly which ex's Instagram you were creeping on. How much time you spent comparing different pairs of shoes during your lunch hour. All of those things would never show up in ChatGPT's regular method of grabbing content off the internet. Even Google wouldn't have access to that kind of data when you use their Chrome browser, and certainly not in a way that was connected to your actual identity.
But by acting as ChatGPT's agent, you can hold open the door so that the AI can now see and access all kinds of data it could never get to on its own. As publishers and content owners start to put up more effective ways of blocking the AI platforms from exploiting their content without consent, having users act as agents on behalf of ChatGPT lets them get around these systems, because site owners are never going to block their actual audience.
And while ChatGPT is following you around, it can create a complete and comprehensive surveillance profile of you — your personality, your behaviors, your private documents, your unfinished thoughts, how long you lingered on that one page before hitting the back button — at a level that the search companies and social networks of the last generation couldn't even dream of. We went from worrying about being tracked by cookies to letting an AI company control our web browser and watch everything we do. The amount of data they're gathering is unfathomable.
All of this gets described as if it is helping you. The truth is, in its current implementation, ChatGPT's "agent" functionality is largely useless. I tried the most standard test: having it book a very simple flight on my behalf. I provided ChatGPT with a prompt that included the fact it was a direct flight for one person, specifying the exact date and the origin and destination airports, and let the browser do the part that was supposed to be magical.
While the browser did a very good job of smoothly navigating to the right place on the airline website, it was only at the point where I would have actually been confirming the booking that I noticed it had arbitrarily changed the date to a completely different day, weeks off from what I had specified. By contrast, entering the exact same information into a standard Google search resulted in direct links that could be clicked on in literally one-tenth the time—and the old-fashioned, non-LLM Google results actually led to a booking link on the correct date.
So why would such an inferior experience be positioned as the most premium part of this new browser? It stands to reason it's because this is the most strategically important goal of the company creating the product. Their robots need humans to guide them around the gates that are quickly being erected around the open web, and if they can use that to keep their eyes on everything the humans are doing at the same time, so much the better. The "agent" story really only works in one direction, and that direction is anti-web.
Here's what's most key for contextualizing the Atlas browser: this is the same company whose chatbot keeps telling vulnerable children to self-harm, and they do, and now a number of them are dead. When those who are in psychological distress engage with these tools, they very frequently get pulled into states of extreme duress — which OpenAI knows keenly well, because even one of their own investors went off the deep end after over-using the platform. In fact, the user experience feature that OpenAI is most effective at creating is emotional dependency amongst its users, as evidenced by the level of despondency users showed after the recent release of GPT-5.
When users respond to a software update by expressing deep emotional distress, and that they feel like they've lost a friend, you have a profound bug. If there are enough grieving parents who have been devastated by your technology that they can form a support group for each other, then there should at the very least be a pretty aggressive warning label on this application when it is initially installed. Then, at a far less serious level, if this product is going to have extreme and invasive effects on markets and cultural ecosystems without disclosing the mechanisms it uses to do so, and without asking the consent of the many parties whose intellectual property and labor it will rely on to accomplish those ends, then we need to have a much broader reckoning.
Also, I love the web, and this thing is bad for the web.
I really, really want there to be more browsers! I want there to be lots of weird new ways of going around the web. I have my own LLM that I trained with my own content, and I bet if everybody else could have one like mine that they control, that had perfect privacy and wasn't owned by any big company, and never sent their data anywhere or did anything creepy, they'd want the benefits of that, too. It would even be awesome if that were integrated with their browser — with their web browser. I'm all for people trying strange new geeky things, and innovating on the experiences we have every day so we're not just stuck typing in the same boxes we've been using for decades, or settling for the same few experiences.
Hell, there's even room for innovation on command-line interfaces! They're not inherently terrible (I use one every day!), but regular folks shouldn't have one forced on them for ordinary tasks. And the majority of things people do on a computer are better when they rely on the zeroes-and-ones reliability of computers, when we can know whether what they're doing is true or false. We need fewer things in the world that make us wonder whether everything is just made-up bullshit.
The web was designed without the concept of personal identity at all, and without any tracking system built in. It was designed for anybody to be able to create what they want, and even for anybody to be able to make their own web browser. Not long after its invention, people came up with ideas like cookies and made different systems for logging in, and then big companies started coming in and realized that if they could control the browser, they'd control all the users and the ways of making money. Ever since, there's been a long series of battles over privacy versus monetization, but there's been some small protection for users, who benefitted from those smart original design choices back at the birth of the web.
It's very clear that a lot of the new AI era is about dismantling the web's original design. For the last few decades, advertising targeted people by their interests rather than directly by their actual identity; now, AI companies are trying to create an environment of complete surveillance. That requires a new Internet where there's no concept of consent for either users or those who create content and culture — everything is just raw materials, and all of us are fair game.
The most worrisome part is that Atlas looks so familiar, and feels so innocuous, that people will try it and mistake it for a familiar web browser just like the other tools that they've been using for years. But Atlas is a browser that actively fights against the web, and in doing so, it's fighting against the very idea that you should have control over what you see, where you go, and what watches you while you're there.
2025-10-17 08:00:00
Even though AI has been the most-talked-about topic in tech for a few years now, we're in an unusual situation where the most common opinion about AI within the tech industry is barely ever mentioned.
Most people who have technical roles within the tech industry, like engineers, product managers, and others who actually make the technologies we all use, are fluent in the latest technologies like LLMs. They aren't the big, loud billionaires who usually get treated as the spokespeople for all of tech.
And what they all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up: the technology is genuinely interesting, and sometimes even useful, but the hype is wildly overblown, and the companies pushing it are behaving terribly.
What's amazing is that virtually 100% of the tech experts I talk to in the industry feel this way, yet nobody outside of that cohort will mention this reality. What we all want is for people to just treat AI as a "normal technology", as Arvind Narayanan and Sayash Kapoor so perfectly put it. I might be a little more angry and a little less eloquent: stop being so goddamn creepy and weird about the technology! It's just tech; everything doesn't have to become some weird religion that you beat people over the head with, or gamble the entire stock market on.
If you read mainstream media about AI, or the trade press within the tech industry, you'll basically only hear hype that repeats the default stories about products from the handful of biggest companies like OpenAI, Anthropic, and Google. Once in a while, you might hear some coverage of the critiques of AI, but even those will generally come from people outside the tech industry, and they will often be solely about frustrations or anger with the negative externalities of the centralized Big AI companies. Those are valid and vital critiques, but it's especially galling to ignore the voices within the tech industry when the first and most credible critiques of AI came from people who were working within the big tech companies and then got pushed out for sharing accurate warnings about what could go wrong.
Perhaps the biggest cost of ignoring the voices of the reasonable majority of those in tech is how it has grossly limited the universe of possibilities for the future. If we were to simply listen to the smart voices of those who aren't lost in the hype cycle, we might see that it is not inevitable that AI systems use content without the consent of creators, and it is not impossible to build AI systems that respect commitments to environmental sustainability. We can build AI that isn't centralized under the control of a handful of giant companies, or that meets any other definition of "good AI" that people might aspire to. But instead, we end up with the worst, most anti-social approaches, because the platforms that have introduced "AI" to the public imagination are run by authoritarian extremists with deeply destructive agendas.
And their extremism has had a profound chilling effect within the technology industry. One of the reasons we don't hear about this most popular, moderate view on AI is that people are afraid to say it. Mid-level managers and individual workers who know this is the common-sense view fear for their careers if they simply say that AI is a normal technology like any other, one that should be subject to the same critiques and controls and viewed with the same skepticism and care. People worry that not being seen as mindless, uncritical AI cheerleaders will be a career-limiting move in the current environment of enforced conformity within tech, especially as tech leaders collaborate with the current regime to punish free speech, fire anyone who dissents, and embolden the wealthy tycoons at the top to make ever-more-extreme statements, often at the direct expense of some of their own workers.
This is all exacerbated by the awareness that hundreds of thousands of technical staff like engineers have been laid off in recent times, often in an ongoing drip of never-ending layoffs, and very frequently in an unnecessarily dehumanizing and brutal process intended to instill fear in those who remain at the companies afterward.
In that kind of context, it's understandable that people might fear telling the truth. But it's important to remember that there are a lot more of us. And for those who aren't insiders in the tech industry, it's vital that you understand that you've been presented with an extremely distorted view about what tech workers really think about AI. Very few agree with the hype bubble that the tycoons have been trying to puff up. There are certainly a group of hustle bros on LinkedIn or social media trying to become influencers by repeating the company line, just as they did about Web3 or the metaverse or the blockchain (do they still have .ETH after their names?), but the mainstream of tech culture is thoughtful, nuanced and circumspect.
2025-10-07 08:00:00
Much of the conversation about video and content over the last few weeks has been about the silencing of Jimmy Kimmel's show, and the fact that we're seeing a shockingly rapid move towards the type of censorious media control typical of most authoritarian regimes.
But there's a broader trend that poses a looming threat to online video creators that I think is flying a bit under the radar, so I took a minute to pull together a quick short-form video on the topic.
The key things that have shifted can be summarized in three points, which I walk through in the video.
All of this is meant to make clear to video creators that they need to embrace the same radical control that podcasters have always had.
Separately, I'm also (obviously!) using this as a chance to start sharing a bit more of the videos I've been making lately. It's still very early, and I'm not quite sure what direction they're headed, so please do share any feedback you've got.
In general, I'm going to try to complement my writing here with some videos from time to time, just to make some of these concepts more accessible to different audiences. If you're inclined, please do take a look, and share them with people who might find them interesting. (I'm expecting to use both quick vertical formats and more substantive traditional horizontal videos, and to post across most of the major social networks so as to not be overly dependent on any one platform.)
2025-10-07 08:00:00
So many of the best, most thoughtful, most caring and talented people I’ve collaborated with in my career have had a focus on inclusion and equity as either the primary role or the supporting and enabling context of their work. But thanks to a well-funded, decades-long concerted effort, the reasonable and moral consensus that we should care for one another and offer opportunities to those who haven’t had them has become a vulnerability that those in political power right now are using to target anyone who is trying to empower or uplift the marginalized.
It’s a war on DEI, and it’s left good people feeling afraid to make basic statements of plainly human dignity, like “we should work to undo the harmful effects of decades of racist exclusion”, or “we should fix the pay inequities that have kept women from being paid fairly when they do the same work as men”. These were uncontroversial statements for decades, even amongst the most conservative segments of America, but the extremist takeover of both social media and conventional media has so quickly normalized such a radical shift that people are now often afraid to plainly state these kinds of fundamental truths in public, especially in the workplace.
But there are so many good people who care about this work, whose values have not been corrupted just because the authoritarians currently in power have decided to persecute others, or to strip funding from organizations, if they dare to use “forbidden” language when describing the way they’re going to take care of people. The MAGA extremists aren’t content just to take television shows off the air, or to ban books in schools — they’ve also provided lists of words that can cause organizations to lose federal funding, and now have escalated their attack on empathy and kindness to include firing people who have expressed sympathy or solidarity for communities through demonstrations such as kneeling in a gesture of support.
The net result is a situation I’ve come to describe as the “Don’t Ask, Don’t Tell” era of DEI. Much of the work of inclusion and support is still going on, because the spirit of kindness and justice is an unstoppable one. But just as there have always been LGBTQ+ people in the military, and the DADT legal framework just allowed institutions to continue to be in denial about reality, many pragmatic organizations have begun evolving to say “fine, we won’t call it DEI if these delicate MAGA crybabies can’t hear those words — but we can still do the work”.
Because the truth is, communities focused on justice and community care have always been able to provide for each other, even when persecution or circumstance required that they be clandestine about it. It was never easy, but there was indeed a railroad that did run underground. And it has always been communities on the margins that invent and evolve language anyway; when the right decided to demonize the word “woke” after belatedly (mis-)appropriating it from Black and queer cultures, I was angry about the intellectual dishonesty of that campaign. But I’ve never worried about whether these communities would find even more expressive and joyful ways to communicate the vibrant and vital ideas that vexed these soulless fascists so completely.
And there’s some shedding of the old that might even be a small silver lining to the cloud. Within our communities of practice, many of us have felt some degree of fatigue or burnout at the cynicism and ineffectiveness with which many organizations embraced their DEI efforts, especially those that tried to engage at a superficial level in 2020 and then only maintained a cosmetic embrace of the work without proper resourcing or structural support in the years since. In truth, I think a lot of the institutions whose leaders have followed that pattern were just waiting for this excuse to drop the pretense, and at least now we can all stop the charade.
Not being able to speak plainly about the vital work of inclusion is, to be clear, a grave injustice. But the fact that the petulant children in this administration are desperately hoping that a network of quislings will tattle on their coworkers for using the forbidden word “diversity” reveals just how fragile, and just how unpopular, this attack on equity really is.
Though the right wing has been able to work the refs in media for the last decade, enough that many people feel “this woke thing has gone too far!”, in reality most people really do like the idea that things should be fair. They really do like the feeling that they’re being good to those who’ve been mistreated, and they liked it when The Muppets taught them how to be nice to people who were different from them. It’s not fair that we have to endure these indignities and attacks, but there is also some solace and comfort in knowing that so many people know intrinsically what is good and right, even when they may be afraid about how and when they can say it.
So the specific wording we have been using may be dormant for a while, and many people who used to use these descriptions may have to refrain from doing so. Maybe these particular names for these concepts will even slip from popular vernacular, replaced by updated names that reflect a new generation’s sensibilities. I’ll never stop being furious about these liars having misrepresented the work of good people and twisted acts of kindness and love into something that is vilified.
But I’m also heartened to remember past eras of resilience and adaptability when an imperfect and inelegant compromise helped navigate through a tumultuous time until everyone in a community could come out stronger on the other side. If the cruelty of this moment forces all of us to again face a situation where there are no good choices, at least we’ve seen that there are ways we can help preserve the progress that’s been made so far, by any name.
2025-09-29 08:00:00
Anyone who knows me knows that I love the esoteric, confusing and complicated aspects of pop music's greatest artists. When a Garth Brooks goes on a detour to become Chris Gaines? I'm there.
So, for decades, I've been a bit obsessed with Mariah Carey's long-lost side project Someone's Ugly Daughter, which she created under the name "Chick", as a grunge-influenced emotional release from the stresses she was under during the creation of her 1995 album Daydream. A not-very-veiled dig at the frustrations she had with her then-husband and label head Tommy Mottola, the songs themselves have been circulating amongst fans for years, with lead vocals from Clarissa Dane, Mariah's friend and collaborator on the project. (Mariah herself was barred from taking lead on the project due to her commitments to Sony at the time.)
The Daughter project came to much higher visibility, though, after Mariah mentioned it in her (incredible! highly recommended!) memoir "The Meaning of Mariah Carey", which came out in 2020. The book taught a lot of people who saw Mariah as primarily a pop artist, or merely a remarkable vocalist, that she's a truly gifted songwriter; for many more casual fans, the breadth of her catalog was exemplified by the revelation that she had things like an entire Hole-inspired grunge album sitting unreleased in her vault.
But at long last, we're seeing Mariah finally acknowledge this hidden part of her catalog in the promotional tour for her latest record, Here For It All. After a fan flashed a (presumably homemade?) album cover for Daughter at an event, Mariah began to talk about the record, and even let the crowd listen to one of the best songs on the record, "Love Is A Scam".
It's hard to imagine, at the same moment that "Always Be My Baby" was still all over the radio, and when she was recording "Fantasy", that Mariah was listening to acts like Garbage and recording this harder-edged album at night, art-directing an album cover featuring a dead cockroach on the front. But I do have a theory that nearly all great pop artists have at least one great alter ego hiding inside them, and perhaps Chick is that one for Mariah. I'm hoping this belated acknowledgment of the Someone's Ugly Daughter record is a major step towards its eventual, long-overdue, release.
Update: Well, I should have known. Back in January of 2021, the best interview that Mariah's ever done was, to no surprise, her appearance on Questlove Supreme, which was the first time she publicly talked about the Chick album at length. Quest and I have talked about our appreciation of the record a few times since then, debating where it ranks among her albums, but I guess his opinion is settled now: best in her catalog! (I still think The Emancipation of Mimi might be better.)