
"Reproducing the conditions that made Sequoia’s hallways electric"

2025-06-18 21:38:00

Collaborative Fund recently launched AIR, a new kind of accelerator for design-led AI products. It draws inspiration from the institutions that reshaped creative possibility in their time, places that brought together unlikely collaborators at key moments of technological and cultural inflection.

As we recruit for the first cohort, we’re talking to people who had a hand in creating those lightning-rod moments. A few weeks ago we asked Nicholas Negroponte, founder of the MIT Media Lab, to reflect on what happens when culture and technology collide to create new ways of thinking.

Today we’re talking to Tom McMurray, former General Partner at Sequoia, who helped shape Silicon Valley’s first golden age. Tom was an early investor in Yahoo, Redback Networks, C-Cube, NetApp—and, importantly, Nvidia. He now serves on multiple boards focused on science and impact. We spoke to him about pattern recognition, capital discipline, and why he’s an investor in AIR.


Craig Shapiro: Tom, you joined Sequoia just as the first Internet wave was forming. Your portfolio reads like a Hall of Fame roster. What were the core filters you used back then?

Tom McMurray: It was very clear where to invest in the networking space—bandwidth was in high demand. It was less clear in the pure Internet space. Our diligence process was well established, so we continually developed more refined filters and leveraged our core capability in chips and enterprise software. In the case of Nvidia we had what I call a Sequoia moment—that wonderful nonlinear diligence process where, after parsing through the business, we reached a point where only a single question was left to decide. Our secret sauce was that we often knew far more about a business than the founders did. We understood where the real risks were.

For companies like Nvidia, our expertise in semiconductors from investments in Cypress, Microchip, LSI Logic, and Cadence, plus our experience with gaming companies, meant the market risks were very low. We could easily do due diligence on the founders because many worked for friends of Sequoia Partners—this was our sweet spot in the 1990s. And in the Internet wave, we had special insight through our investments in Cisco and Yahoo. They were market pioneers who saw the world 3-5 years ahead of us. They pointed the way many times, and we just jumped on it.

Speaking of Nvidia, walk us through what happened when Jensen Huang first pitched Sequoia.

Wilf Corrigan, a Sequoia Technology Partner and CEO at LSI Logic (where Jensen worked before starting Nvidia), told him to talk to Don Valentine at Sequoia about his “chip idea.” Jensen pitched the evolving game world and the need for more performance. Honestly, I had no idea at that point why we should invest in the company. But we asked harder and harder questions about team, competition, distribution, and got solid answers.

About 90 minutes into the pitch, Pierre Lamond asked Jensen how big the chip was. Jensen said “12 mm.” Pierre looked at Don and Mark; they nodded, and we committed to the investment on the spot. We led the Series A and Mark joined the board. The rest is history.

The critical question wasn’t about market size or vision—it was “can they build the chip, hit the performance target, and hit the price point?” That’s why the chip-size question was the deciding factor. The semiconductor partners at Sequoia—Don, Pierre, and Mark—understood its significance immediately.

Fast-forward to today. Collaborative Fund just launched AIR, an AI residency in New York. You are an investor! Why?

Because you’re reproducing the conditions that made Sequoia’s hallways electric in the ’90s—cross-pollination of builders, researchers, and designers who argue, prototype, and iterate in the same room. Great companies are rarely solo acts; they’re jazz ensembles riffing toward a common groove.

What early-stage pattern recognition from Sequoia days should our AIR founders tattoo on their whiteboards?

The secret is to lean on your wins, learn from them, apply those lessons as you expand, iterate, and keep going.

“Stay cheap until it hurts.” Capital efficiency forces clarity. The companies that survived the dot-com crash had burn rates lower than their Series A checks.

Critics worry AI will erase jobs or amplify bias. You’ve seen every tech cycle: where’s your compass pointing?

Every wave starts messy. But history says two truths persist:

  • Jobs evolve faster than they evaporate. Cisco killed some circuit-switch jobs yet birthed the entire network-engineer class.

  • Bias follows data, not silicon. Fix the training data, and you fix 80 percent of the problem. That’s human homework, not machine destiny.

The “force for good” part kicks in when entrepreneurs bake guardrails into the business model, not just the codebase.

Last one. A founder walks into AIR with nothing but conviction. You’ve got one question before you decide to invest—what is it?

I’d look for that “Sequoia Moment”—where after all the questions about team, market, and technology, we identify the single critical factor that determines success. For Nvidia, it was “how many millimeters wide is the chip?” Sometimes, it’s these seemingly simple technical questions that reveal whether a company can execute on its vision.

Golden. Thanks, Tom. Here’s to building AI companies that deserve to exist.

And to founders who remember: progress isn’t inevitable—people make it so.

Very Bad Advice

2025-06-13 02:54:00

A boy once asked Charlie Munger, “What advice do you have for someone like me to succeed in life?” Munger replied: “Don’t do cocaine. Don’t race trains to the track. And avoid all AIDS situations.”

It’s often hard to know what will bring joy but easy to spot what will bring misery. Building a house is complex; destroying one is simple. The same asymmetry holds in most areas of life. When trying to get ahead, it can be helpful to flip things around and focus on how not to fall back.

Here are a few pieces of very bad advice.


Allow your expectations to grow faster than your income.

Envy others’ success without having a full picture of their lives.

Pursue status at the expense of independence.

Associate net worth with self-worth (for you and others).

Mimic the strategy of people who want something different than you do.

Choose who to trust based on follower count.

Associate engagement with insight.

Let envy guide your goals.

Automatically associate wealth with wisdom.

Assume a new dopamine hit is a good indication of long-term joy.

View every conversation as a competition to win.

Assume people care where you went to school after age 25.

Assume the solution to all your problems is more money.

Maximize efficiency in a way that leaves no room for error.

Be transactional vs. relationship driven.

Prioritize defending what you already believe over learning something new.

Assume that what people can communicate is 100% of what they know or believe.

Believe that the past was golden, the present is crazy, and the future is destined for decline.

Assume that all your success is due to hard work and all your failure is due to bad luck.

Forecast with precision, certainty, and confidence.

Maximize for immediate applause over long-term reputation.

Value the appearance of being busy.

Never doubt your tribe but be skeptical of everyone else’s.

Assume effort is rewarded more than results.

Believe that your nostalgia is accurate.

Compare your behind-the-scenes life to others’ curated highlight reel.

Discount adaptation, assuming every problem will persist and every advantage will remain.

Use uncertainty as an excuse for inaction.

Judge other people at their worst and yourself at your best.

Assume learning is complete upon your last day of school.

View patience as laziness.

Use money as a scorecard instead of a tool.

View loyalty (to those who deserve it) as servitude.

Adjust your willingness to believe something by how much you want and need it to be true.

Be tribal; view everything as a battle for social hierarchy.

Have no sense of your own tendency to regret.

Only learn from your own experiences.

Make friends with people whose morals you know are beneath your own.

Different Kinds of Smart

2025-06-12 00:39:00

“The older I get the more I realize how many kinds of smart there are. There are a lot of kinds of smart. There are a lot of kinds of stupid, too.”

– Jeff Bezos

The smartest investors of all time went bankrupt in 1998. They did it during the greatest bull market of all time.

The story of Long Term Capital Management is more fascinating than sad. That investors with more academic smarts than perhaps any group before or since managed to lose everything says a lot about the limits of intelligence. It also highlights Bezos’s point: There are many kinds of smarts. We know – in hindsight – the LTCM team had epic amounts of one kind of smarts, but lacked some of the nuanced types that aren’t easily measured. Humility. Imagination. Accepting that the collective motivations of 7 billion people can’t be summarized in Excel.

“Smart” is the ability to solve problems. Solving problems is the ability to get stuff done. And getting stuff done requires way more than math proofs and rote memorization.

A few different kinds of smarts:

1. Accepting that your field is no more important or influential to other people’s decisions than dozens of other fields, pushing you to spend your time connecting the dots between your expertise and other disciplines.

Being an expert in economics would help you understand the world if the world were governed purely by economics. But it’s not. It’s governed by economics, psychology, sociology, biology, physics, politics, physiology, ecology, and on and on.

Patrick O’Shaughnessy wrote an email to his book club years ago:

Consistent with my growing belief that it is more productive to read around one’s field than in one’s field, there are no investing books on this list.

There is so much smarts in that sentence. Someone with B+ intelligence in several fields likely has a better grasp of how the world works than someone with A+ intelligence in one field but an ignorance of that field just being one piece of a complicated puzzle.

2. A barbell personality with confidence on one side and paranoia on the other; willing to make bold moves but always within the context of making survival the top priority.

A few thoughts on this:

“The only unforgivable sin in business is to run out of cash.” – Harold Geneen

“To make money they didn’t have and didn’t need, they risked what they did have and did need. And that is just plain foolish. If you risk something important to you for something unimportant to you, it just doesn’t make any sense.” – Buffett on the LTCM meltdown.

“I think we’ve always been afraid of going out of business.” – Michael Moritz explaining Sequoia’s four decades of success.

A key here is realizing there are smart people who may perform better than you this year or next, but without a paranoid room for error they are more likely to get wiped out, or give up, when they eventually come across something they didn’t expect. Paranoia gives your bold bets a fighting chance at surviving long enough to grow into something meaningful.

3. Understanding that Ken Burns is more popular than history textbooks because facts don’t have any meaning unless people pay attention to them, and people pay attention to, and remember, good stories.

A good storyteller with a decent idea will always have more influence than someone with a great idea who hopes the facts will speak for themselves. People often wonder why so many unthoughtful people end up in government. The answer is easy: Politicians do not win elections to make policies; they make policies to win elections. What’s most persuasive to voters isn’t whether an idea is right, but whether it tells a story that confirms what they already see and believe in the world.

It’s hard to overstate this: The main use of facts is their ability to give stories credibility. But the stories are always what persuade. Focusing on the message as much as the substance is not just a rare skill; it’s an easy one to overlook.

4. Humility, rooted not in the idea that you could be wrong, but in the recognition that, given how little of the world you’ve experienced, you are likely wrong, especially in knowing how other people think and make decisions.

Academically smart people – at least those measured that way – have a better chance of being quickly ushered into jobs with lots of responsibility. With responsibility they’ll have to make decisions that affect other people. But since many of the smarties experienced a totally different career path than less intelligent people, they can have a hard time relating to how others think – what they’ve experienced, how they see the world, how they solve problems, what kind of issues they face, what they’re motivated by, etc. The clearest example of this is the brilliant business professor whose brain overlaps maybe a millimeter or two with the guy successfully running a local dry cleaning business. Many CEOs, managers, politicians, and regulators have the same flaw.

A subtle form of smarts is recognizing that the intelligence that gave you the power to make decisions affecting other people does not mean you understand or relate to those other people. In fact, you very likely don’t. So you go out of your way to listen and empathize with others who have had different experiences than you, despite having the authority to make decisions for them granted to you by your GPA.

5. Convincing yourself and others to forgo instant gratification, often through strategic distraction.

Everyone knows the famous marshmallow test, where kids who could delay eating one marshmallow in exchange for two later on ended up better off in life. But the most important part of the test is often overlooked. The kids exercising patience often didn’t do it through sheer will. Most kids will take the first marshmallow if they sit there and stare at it. The patient ones delayed gratification by distracting themselves. They hid under a desk. Or sang a song. Or played with their shoes. Walter Mischel, the psychologist behind the famous test, later wrote:

The single most important correlate of delay time with youngsters was attention deployment, where the children focused their attention during the delay period: Those who attended to the rewards, thus activating the hot system more, tended to delay for a shorter time than those who focused their attention elsewhere, thus activating the cool system by distracting themselves from the hot spots.

Delayed gratification isn’t about surrounding yourself with temptations and hoping to say no to them. No one is good at that. The smart way to handle long-term thinking is enjoying what you’re doing day to day enough that the terminal rewards don’t constantly cross your mind.

More on this topic:

Casualties of Your Own Success

Conflicting Skill Sets

An Art Leveraging a Science

The Consumer AI Revolution Won’t Be Technical. It’ll Be Emotional.

2025-05-30 00:02:00

In technology, the obvious revolutions often come last.

The internet was around for decades before it felt personal. It started in labs, crept into offices, and only later landed in our homes. The smartphone wasn’t just a new form of communication – it was the moment computing shifted from corporate IT departments to our front pockets.

AI is accelerating faster than any previous wave of technology. In less than five years, we’ve gone from jaw-dropping demos of GPT-3 to a reality where millions of people interact with AI every day (sometimes without realizing it). Most of the attention so far has focused on the technical layer: model performance, token pricing, enterprise use cases. But the most transformative changes are likely to happen at the level of interface, brand, and emotional resonance.

The tools that stick won’t just be the most accurate. They’ll be the most intuitive, the most culturally fluent, the ones that feel like they belong in your life without demanding to be learned. That’s where the real leverage lies, and where consumer AI is about to get very interesting.

Because something big is happening now. Token costs are falling (OpenAI has slashed prices by more than 90% since 2020), making it cheaper and more feasible for developers to build and ship consumer AI applications. Fine-tuning models is becoming easier. The technical moat (if there ever was one) is evaporating. And as infrastructure becomes cheaper and more accessible, AI’s next act is coming into focus: the consumer.

“What once felt modern now feels immovable. These bedrock apps ossified, and AI is revealing just how brittle they’ve become.”

Because consumers don’t care about your model size. They care whether it helps them get through the day with a little less friction.

Think about the tools we use every day: Gmail. Apple Calendar. WhatsApp. They were built for the web, optimized for mobile, and essentially left to rot. What once felt modern now feels immovable. These bedrock apps ossified, and AI is revealing just how brittle they’ve become. They weren’t designed for a world where your technology listens, adapts, and acts without instruction.

Consumer AI isn’t about chatbots. Not the way we’ve come to know them, anyway. It’s about escaping the tired web of buttons and dropdowns we’ve been clicking through for years. The real opportunity is interface: tools that don’t just respond to commands but anticipate context. It’s not just about better UX. It’s about a new class of interaction altogether.

This could mean a home-buying concierge that doesn’t just show you listings, but understands your daily commute, dog-walking routine, and what kind of light you like in the mornings. A personal finance system that syncs with your partner’s calendar and cash flow, so you plan together without talking about money every day. Or a piano coach that listens as you play and adjusts your practice routine in real time, because you finally have a teacher with infinite patience. 

Chatbots might have kicked off the era of conversational AI, but the revolution is in what comes after: ambient, embedded, invisible tools that behave more like teammates than software.

“The winners will look less like OpenAI and more like Nike or Pixar: emotionally fluent, culturally embedded, behavior-shaping machines hiding in plain sight.”

Some of this is already happening. Well, sort of. Rewind.ai is giving users searchable memory across their digital lives. Rabbit’s R1 promises to remove the “app layer” entirely by turning natural language into actions. Humane’s Ai Pin, while flawed, is at least asking the right question: what does computing look like when you don’t have to look at a screen?

None of these products have nailed it. Most were met with bad reviews. Some feel like punchlines. But they’re reaching for something important. They’re trying to imagine a future where interface is no longer constrained by screens and keyboards. That matters. And while they’re clearly not the final form, they point toward a frontier we haven’t yet fully explored. These are the early missteps of a new genre: not failures of ambition, but signs that something new is struggling to be born.

History reminds us that form factor changes are rarely incremental. The PC didn’t lead to the smartphone – it required a complete reimagining of interface, distribution, and consumer behavior. We are due for another leap like that. Apple’s Vision Pro, Meta’s Ray-Ban smart glasses, and AI-driven wearables like the Oura Ring are early hints of what might come next.

Still, the path to mass adoption isn’t paved in code: it’s paved in trust, usability, and relevance. This is where many investors hesitate. Enterprise AI is easy to underwrite: there’s a sales pipeline, an efficiency metric, an ROI. Consumer AI feels squishier. It relies on taste. On cultural timing. On brand. It’s harder to spreadsheet.

But that’s what makes it so compelling.

“The best consumer AI product of the next five years won’t wow you with its intelligence. It’ll earn your trust. And never ask for your attention.”

Brand, after all, is a proxy for trust. And trust is the most valuable commodity in a world where AI agents will act on your behalf. Within five years, the most beloved AI product won’t have an app, a screen, or a UI, and its brand will be more trusted than your bank. Consumer AI isn’t a technical breakthrough; it’s a cultural one. The winners will look less like OpenAI and more like Nike or Pixar: emotionally fluent, culturally embedded, behavior-shaping machines hiding in plain sight. That’s where the defensibility lies – not in proprietary models, but in emotional resonance and behavioral lock-in. Just like Apple. Just like Spotify. Just like every consumer product that became infrastructure in disguise.

So where does this go?

In the next 18–24 months, most consumer AI experiments will flop. The hardware won’t work. The assistants will misfire. The reviews will be brutal. That’s fine. That’s how interface revolutions always begin; not with polish, but with friction. What matters is that a few teams will get it right. They’ll combine cultural intuition with technical leverage. They’ll design not for features, but for feelings. And when they launch, it won’t be clear whether they’re apps or brands or behaviors – only that they make life easier, more human, more yours. The best consumer AI product of the next five years won’t wow you with its intelligence. It’ll earn your trust. And never ask for your attention.

What Comes Next Isn’t a Product. It’s a Provocation.

2025-05-21 23:51:00

This year marks the 40th anniversary of the MIT Media Lab, a place I helped create not to organize what was already known, but to make space for what wasn’t.

It was interdisciplinary before the word became decor. It was designed for people who didn’t fit into departments. In fact, many of its founding faculty were misfits in the most productive sense: outsiders in their own disciplines, a veritable Salon des Refusés. We weren’t solving problems. We were developing solutions without knowing the problems. That’s how breakthroughs happen, at the edges, where curiosity is free to wander and orthodoxy is politely ignored.

The Lab’s purpose then, as now, was to explore what computers might mean for everyday life. Not computing, but living. It was never only about technology. It was about perspective. As I’ve said before, the best vision is peripheral vision.

The same could be said of Bauhaus, Black Mountain College, and Bell Labs. These were not just institutions of instruction. They were catalysts that attracted visionaries and outsiders, bringing together brilliant minds who might otherwise never have collaborated. They treated form as a form of inquiry. They questioned the boundaries between disciplines and then blurred them deliberately. They approached taste as methodology. And they did it by creating space—not just physical space, but cultural permission—for experimentation without expectation.

Today, we find ourselves in a moment that requires a similar response.

AI is not simply a faster way to do what we’ve already done. It’s a medium shift, a new substrate for thought, interaction, and aesthetics. But instead of treating it that way, we’re drizzling it onto legacy systems in ways that feel fundamentally mismatched. A few assistants here, a couple auto-completes there. More productivity, less imagination.

What we need is not more apps. What we need are new models, ones that challenge assumptions rather than reinforce them.

The environments that foster true innovation combine friction between disciplines, treat computation as a raw material, and allow design to lead rather than lag. One such effort is unfolding in New York: AIR, short for AI Residency, a new kind of accelerator for design-led AI products. Founders are encouraged not to chase metrics, but to ask better questions. What does it feel like to use this? What kind of world does it assume? What kind of relationships does it create?

Too much of today’s AI conversation revolves around scale: How many users? How fast is the inference? How good is the ROI? These are fine questions, but they are not the interesting ones. 

The interesting questions are more fundamental: What new literacies will we need? What new misuses will emerge? Can this technology help us understand ourselves better?

History tells us that significant ideas rarely come from the center. They begin at the margins, where ideas are allowed to be incomplete, even incorrect. Places where success is not the goal, but learning is.

Efforts like AIR matter because they create the conditions where breakthroughs tend to happen. The next big thing often starts as something strange. It does not arrive fully formed, or announce itself with clarity or consensus. It is shaped by small, curious groups, exploring the paths others have overlooked.

The Media Lab taught me that the future doesn’t start with a plan; it starts with an experiment. If history is any guide, the future of technology won’t be determined by incumbents or insiders. It will be authored by small groups with unusual perspectives and the freedom to follow them.

We should support that pursuit. We should build the spaces that make such groups possible.

Because what comes next is not a product. It is a provocation.

A Few Questions

2025-05-09 04:16:00

Which of my strongest beliefs are formed on second-hand information vs. first-hand experience?

If I could not compare myself to anyone else, how would I define a good life?

Whose views do I criticize that I would actually agree with if I lived in their shoes?

Who do I envy that is actually less happy than I am?

Looking back, am I any good at anticipating how I would feel and react to risks that actually occurred?

Is my desire for more money based on the false belief that it will solve personal problems that have nothing to do with money?

How many of my principles are cultural fads?

Whose silence do I mistake for agreement?

What kind of lifestyle would I live if no one other than my immediate family could see it?

What events nearly happened that would have fundamentally changed my life, for better or worse, had they occurred?

What views do I claim to believe that I know are wrong, but repeat because I don’t want to be criticized by my employer or industry?

How much of what I do is internal benchmark (makes me happy) vs. external benchmark (I think it changes what other people think of me)?

Am I thinking independently or going along with the tribal views of a group I want to be associated with?

Whose approval am I auditioning for?

Which of my principles would I abandon if they stopped earning me praise and recognition?

If I could see myself talk, what would I cringe at the most?

What question am I afraid to ask because I suspect I know the answer?

How much have things outside of my control contributed to things I take credit for?

How do I know if I’m being patient (a skill) or stubborn (a flaw)?

What crazy genius that I aspire to emulate is actually just crazy?

What strong belief do I hold that’s most likely to change?

Which future memory am I creating right now, and will I be proud to own it?

Am I addicted to cheap dopamine?

If I were on my deathbed tomorrow, what would I regret most?