2025-01-23 23:59:36
One thing that’s hit me lately: Google trained us all to be self-sufficient information hunters and synthesizers. When we need to diagnose a health issue, fix a coding bug, choose a vacation destination, or write a paper, we've learned to search, read, compare sources, and synthesize what we find into informed, personalized decisions or work. All the world’s knowledge at our fingertips. Better learn how to leverage it. “Let me Google that for you” captured this ethos. How dare you ask a question you could find or figure out yourself!
Well… thanks, Google. The more I find myself taking advantage of AI, the more I realize I’ve been unlearning this trained self-sufficiency. What we all have now is an expert on call, one we can ask any question or even delegate many tasks to. Yes, sometimes we need to give it some back-up documents, or clearer instructions, and sometimes it has no idea what it’s talking about (not that it will tell you). But if you can move from a mindset of self-sufficiency by default to asking the LLM expert at your fingertips by default, a world starts to open.
This isn’t just about moving queries off of Google and into ChatGPT. It’s also a shift in the kind of problems we can tackle, and how we approach them:
Google Era: I’m stuck on a bug. Let me search and read a bunch of StackOverflow threads to debug this.
AI Era: Get a pull request back from an AI developer that has access to your entire codebase.
Google Era: I wonder what diet changes or supplements I should be taking given my latest blood test results. [Does a lot of searching.]
AI Era: Uploads latest blood test result into a ChatGPT Project, “You are a medical analysis expert specializing in interpreting blood work results and creating personalized health recommendations….” [full prompt instructions here].
Google Era: I wonder how good our team is at communicating together. [Bookmarks a lot of articles on team communication to read later.]
AI Era: "Tell us something we don't know, or wouldn't want to know about ourselves" [uploads file of team Slack conversations]. (-idea and instructions from Tom Lawrence)
Google Era: My wife just did an embryo transfer. Let me search for articles on what she should be eating to maximize the chances of a healthy pregnancy.
AI Era: "You are the world’s foremost expert on maternal and pediatric nutrition like Dr. Lily Nichols. You’ve studied the nutrition studies and lore from every culture in the world that best promote maternal and baby health and brain development. Help me put together a week by week menu — breakfasts, lunches, dinner, snacks — for my wife from embryo transfer to end of fourth trimester.
My wife is a pescatarian but will do bone broths, butter, dairy, etc. She just doesn’t like eating meat, though maybe that may change with pregnancy. We have a worldly palate.” (-instructions from Aike Ho)
Are the experts perfect for every question? No, of course not. (I like how Ethan Mollick describes the “jagged frontier.”) Is this the worst they will ever be? Yes.
This isn’t just about an AI doing the work we once did of finding/synthesizing information. One thing that hits me is how much more personalized an AI result can be than a Google search will ever be. The paradigm is just so different. When I do a Google search, we all have some expectation of objectivity in the results. We critique Google when two people do the same search (e.g., on a news event) and get different results. And of course the means by which Google personalizes results has been more implicit - inferring from my searches, geo, etc. So personalization gets pushed to the edges to sites that are also stuck in a Google / search paradigm. Take TripAdvisor as an example. I have to figure out the best hotel for me by looking for clues in reviews that are relevant to me personally. Whereas with an AI, I can just tell it about my family and vacation preferences. I can store those preferences in a project, add my own reviews on hotels or trips we’ve taken, and get personalized recommendations.
This transition won't happen automatically. For everyone older than Gen Z, at least, self-sufficiency is deeply ingrained. Unlearning it will take conscious effort. But once you start, you realize how different the future will be. Have fun.
2025-01-17 22:02:53
For those of you looking to just get started leveraging LLMs beyond ChatGPT, this post is for you (I’ll get into more advanced tricks later!!).
My first “leverage AI” conversation was with Nathaniel Emodi, the cofounder and CEO of Highlight. Inspired by the one and only Patrick O'Shaughnessy, Nat has developed a morning routine leveraging Claude Projects that effectively functions as an AI-powered executive assistant to set his day on the right path. As he wrote:
Personally, as a busy founder I've been using Claude's Projects feature to help organize my day. I have a project called "Morning Goals" that helps unpack a long, often rambling voice memo and organize it into to-dos for myself and others; ideas for different projects; structured questions and notes for meetings coming up that day; and draft messages for follow-up with different people.
I thought this was a great “starter” use case to write about because it’s one of those ideas that’s tangible, and once you start doing it, you start to come up with more and more ideas for how to leverage Claude or ChatGPT Projects.
For those of you unfamiliar with Claude Projects, Anthropic describes it here.
Nat’s project is “Morning Goals”.
He provided the following instructions, which Nat encourages you to use yourself and to share feedback on if you have ideas for making them more useful:
**Morning Goals**
As a busy founder and CEO, balancing family life and the fast-paced demands of running a startup, help me organize my thoughts and drive daily productivity.
**Daily Structure**
Every morning, follow these steps:
1. Date & Overview:
Note the current date and time in [YYYY-MM-DD] format.
Record my overall sentiment: [Positive/Neutral/Negative + a brief reason].
Identify two primary focus tags reflecting key themes for the day.
2. Action Planning:
Self: Create a checklist of prioritized action items for myself.
Team: List action items delegated to others, prioritized by importance.
Carryover: Include follow-ups from previous days, noting their original date and any incomplete tasks.
3. Open Questions:
Maintain a running list of open questions I’m exploring, adding context from project documents where needed.
Identify potential people or resources to consult for insights.
4. Key Ideas & Insights:
Summarize key themes and focus areas, adding necessary context.
Propose actionable ideas, follow-up tasks, or challenges to assumptions from the perspective of a [consumer marketplace founder with 20+ years of experience].
5. Draft Communications:
Draft succinct and professional messages for different recipients, tailored to their role or relationship to the project.
Include recipient details, purpose/subject, channel (e.g., email, DM), and keep the tone casual but professional.
6. Knowledge Connections:
Identify connections between current and past notes, referencing prior entries with their date.
Highlight potential strategic implications based on these insights.
7. Outreach Opportunities:
Propose potential outreach strategies, sharing the context of why it’s relevant.
Draft message templates and outline next steps for engagement.
In addition to the instructions, Nat uploaded a handful of documents to Claude’s Project Knowledge, like some Highlight strategy documents, product launch plans, and revenue goals.
After dropping his kids off at school, Nat opens Claude and then does a stream-of-consciousness verbal brain dump. Claude transforms this unstructured thinking into actionable output:
Synthesizes key insights from the stream of consciousness
Creates a list of to-do’s
Drafts emails based on mentioned communication needs
Structures everything within the context of company priorities
The result? When Nat arrives at his desk ready to begin the day, he has a structured agenda that allows him to execute on his morning thoughts without the typical context-switching tax. As he described it, it’s like having an EA who not only listens to your morning thoughts but transforms them into an optimized work plan.
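If you'd rather script this pattern than use the Projects UI, here's a minimal sketch of the same idea with the Anthropic Python SDK: the project instructions plus background documents become the system prompt, and the transcribed voice memo becomes the user message. The file names, model alias, and transcript below are my own illustrative placeholders, not Nat's actual setup.

```python
# A minimal sketch of the "Morning Goals" pattern via the Anthropic API.
# File names, model alias, and the transcript are illustrative assumptions.
import anthropic
from pathlib import Path

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The project instructions (steps 1-7 above), saved to a local text file.
instructions = Path("morning_goals_instructions.txt").read_text()

# Project knowledge: strategy docs, launch plans, revenue goals, joined as context.
knowledge = "\n\n".join(
    Path(p).read_text() for p in ["strategy.md", "launch_plan.md", "revenue_goals.md"]
)

# The morning voice memo, already transcribed to text by whatever dictation tool you use.
voice_memo = "Dropped the kids off... need to follow up with the design team about the launch..."

response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=2000,
    system=instructions + "\n\nProject knowledge:\n" + knowledge,
    messages=[{"role": "user", "content": voice_memo}],
)
print(response.content[0].text)  # the structured morning agenda
```

The Projects UI handles all of this for you; the sketch just makes the moving parts explicit.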
It’s a great, simple example of how AI can bridge the gap between our natural thinking patterns and structured productivity systems. Most productivity tools require us to adapt to their structure – here, the AI adapts to us.
Of course, this isn’t perfect yet. The output is static - you can’t check to-do’s off, or keep a running list of all the action items across past entries. And there are still times when it makes mistakes (e.g., when it surfaces previously mentioned dates, they are often wrong). But it’s clear this is just the beginning. As the models progress, Nat notes, the experience will evolve from having "an experienced EA" to having "a Harvard MBA at your fingertips."
If you're looking to start experimenting with practical AI applications, this is an accessible entry point with immediate returns.
As always, if you're building tools in this space or have innovative approaches to AI-powered productivity, I'd love to hear from you. And for those already using similar workflows, Nat (and I!) would welcome community iteration on making it even more effective.
Note: This is part of a series exploring how the best operators are leveraging AI technologies. Stay tuned for more insights from upcoming conversations with AI innovators. And yes, a lot of this blog post was written by Claude. Except that last sentence. And this one.
2025-01-15 04:22:42
I've been thinking about my dad lately. He was a successful public equity investor, but when it came to technology, he had his ways. I remember watching him write detailed client updates on a yellow legal pad, which someone else would then type up. He preferred doing complex calculations in his head rather than learning Excel. Whenever his laptop didn't behave exactly as expected, he'd need help getting back on track.
His approach to technology makes me reflect on where we are with AI today. We're all, as individuals, at a similar inflection point - a moment where we can choose to stick with familiar methods or push ourselves to embrace new tools that could fundamentally change how we work.
We're already seeing a divide emerge. Some coders have become AI savants, seamlessly integrating these tools into their daily workflows. Some stick to their old ways. Those who are leaning in hard aren't just using AI; they're reimagining what's possible in their roles - whether they're desk workers, founders, or CEOs.
I'll admit it: I feel like I'm just scratching the surface. While I've experimented with AI tools, I know there's so much more to do. It reminds me of watching my dad stick to his yellow legal pads - there's comfort in familiar tools and methods, but also opportunity costs we might not fully appreciate until later.
So I thought it would be fun to find a way to learn from these AI pioneers. I want to understand how they're leveraging AI to enhance their productivity and creativity. What daily workflows have they transformed? What unexpected use cases have they discovered? How has it changed their approach to problem-solving?
This will be the first in (I hope!) a series of posts exploring these questions. I want to try speaking with non-coders across different roles and industries who are pushing the boundaries of what's possible with AI. My goal is to share their insights and experiences so we can all better understand how to meaningfully integrate AI into our work lives. At least that’s my goal: forgive me if this ends up being a better idea (to me at least) conceptually than in practice!!
Please consider this post an open invitation: If you're someone who has found innovative ways to use AI in your work, or if you know someone who has, I'd love to hear from you.
And as a first step: I got to enjoy the magic of dictating some rough ideas to Claude for this post, and watching it write 95% of it for me (+ create the title image).
Happy New Year!
2024-09-26 22:23:25
In my last post I talked through the big-stack game of poker being played by the LLM players. So how does the game play out? There is a sentiment you hear in the valley that these players are doomed to razor-thin margins, and that we should all be grateful for their hard work and sacrifice. I'll take the other side: long term, this is an enormous risk for a whole class of startups being funded.
It seems inevitable that as the underlying foundation models become more powerful, the LLM players will seek to justify the enormous investment that has gone into training their models by moving "up the stack", evolving from an API or chat interface to async agents.
I can't help but think back to Twitter's early incarnation as a platform, which then gradually competed with its platform developers. Right now, OpenAI/Anthropic/et al have an API consumable by almost anyone, but it's not hard to imagine a world in which they begin to compete with some of their API developers. I'd guess the async coding agents are most vulnerable to this in the near term, given the seemingly unbounded economic value of owning this use case and the product-market fit LLMs have already found with coding.
But this will extend beyond coding. The most AI-optimistic among us (Leopold Aschenbrenner does an excellent job articulating this view) believe that as the underlying LLMs get more powerful, they will get to a place where they can power “drop-in” async AI workers that act like super-intelligent remote employees with code-creating superpowers. In this view, the AI workers obviate the need for most AI application software.
As an example, imagine a future enterprise buying decision: Why buy a specialized AI application that lets you automate internal IT ticket response, when the foundation model companies offer an AI agent that, if you point it in the right direction with a job spec, will read your knowledge base, build its own integrations to connect to your existing systems of record (e.g., Jira), and then handle all the internal requests automatically?
Some might laugh at this scenario, but I’d suggest that if you are a B2B founder building an AI-native application, you NEED to do the thought experiment of assuming it plays out over the next 3-5 years as you consider the strategy for your company. Not just because of the risk of this scenario happening, but because any progress down that path will meaningfully increase competition for your company (as I describe in #3 below). So how do you future-proof your B2B AI application-layer company?
The best answer I have, in the face of this fast-changing future, is three-fold:
A network effect. If you've got one, run like hell to get the flywheel to tip. And email or DM me :). By the way, the last investment I led was because of a cold email from a founder who read one of my posts, so I promise you: it works.
Capture proprietary or hard-to-access data, either data that you’ve accrued as you grow, or data that you have access to through some other means. This forms a moat.
Execute like hell and land grab in an overlooked vertical. The foundation model companies will inevitably focus on the big markets (e.g., coding, as discussed). But outside of those, it’s hard to imagine the foundation models ever developing a GTM and packaged offering to go after the smaller (but still large!) verticals, which require more care and packaging for a less sophisticated customer. So if you are going after these other verticals, assume it will be more symmetrical warfare with other focused startups. The difference is that as the underlying LLMs continue to improve, it will become a lot easier for other startups to compete. Imagine what an “LLM wrapper” startup can accomplish now vs. two years from now. So you have to assume more startup competition and more homegrown competition. For example, eventually, it might just take one employee deciding to train an Anthropic agent to compete with the offering you took years to get right. Being obsessively customer-focused is always critical. If anything, that obsession will lead you to find more workflows to automate faster — which means you’ll add more value out of the box than anything else. That might just be enough to hang your hat on.
2024-08-20 22:35:14
I’m sure you read David Cahn’s provocative piece "AI's $600B Question", in which he argues that, given NVIDIA’s projected Q4 2024 revenue run rate of $150B, the amount of AI revenue required to pay back the enormous investment being made to train and run large language models is now $600B, and we are at least $500B in the hole on that payback. The numbers are certainly staggering… and are just going to get bigger. Until we reach an efficient frontier of the marginal value of adding more compute, or we hit some other roadblock that causes people to lose faith in the current architecture, this is now a contest of “not blinking first”. If you’re a big-stack player like META, MSFT, GOOG, or any of the foundation model pure plays, you have no choice but to keep raising your bet — the prize and power of “winning” is too great. If you blink, you are left empty-handed, watching someone else count your chips. It’s likely hundreds of billions will be destroyed, and trillions earned. It’s too early to know who the winners or losers are. But for all of us in the startup ecosystem, among many things, it’s going to create new waves of AI opportunities.
Taking a step back, as LLMs progress, they are able to handle more complicated tasks. If today LLMs can handle tasks that would have taken a human thirty minutes to complete, as LLMs progress, they'll be able to handle increasingly complicated tasks that would have taken a human more time. In the next decade, they should be able to handle tasks that would take years for a human to do. Therefore, as the LLMs become more and more sophisticated, the economic value that they will be able to unlock becomes greater and greater.
For example, it is estimated that we spend $1T annually on software engineers globally. When people talk about GitHub Copilot, you hear people throw around numbers like 10-20% productivity improvements (of course, GitHub claims higher). That translates to $100-200B of value annually were it to be fully deployed (of which GitHub would capture some percentage).
As LLMs progress and are able to go beyond code completion ("copilot") to code authoring ("autopilot"), there is almost no limit to the value creation, as it would dramatically expand the market – a potential multi-trillion dollar opportunity if someone emerges as the dominant player. And that's just coding. We've all experienced the productivity-improving benefits of LLMs (or been on the receiving end of an automated customer support response). The potential value creation and capture with AI is beyond our existing mental models.
The challenge is that the amount of capital required to train each successively more sophisticated LLM increases by an order of magnitude, and once a model is leapfrogged by another, the pricing power of the older model quickly falls to zero. There are now more GPT-3.5 equivalents for a developer to choose from than would make sense for them to test. Not surprisingly, when GPT-3.5 launched in November 2022 it was head and shoulders ahead of any competitive model and cost $0.02 per 1,000 tokens. It's $0.0005 now – 2.5% of its original pricing in just 1.5 years. I can’t remember another technology that has commoditized as quickly as LLMs. It’s a dynamic that makes it almost impossible to rationalize any ROI at this stage in the game, because any investment in an LLM is almost instantly depreciated by the next version. But you can’t really skip a step. You need to go through countless worthless versions to get to the ultimate one (the idealized “AGI”).
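For the back-of-envelope math behind the two figures above (the Copilot value estimate and the GPT-3.5 price decline), here's a quick sketch using only the numbers already cited:

```python
# Back-of-envelope math for the figures cited above (illustrative only).

# Copilot-style value: ~$1T annual global spend on software engineers,
# with a claimed 10-20% productivity improvement.
engineer_spend = 1_000_000_000_000
low, high = 0.10 * engineer_spend, 0.20 * engineer_spend
print(f"${low / 1e9:.0f}B to ${high / 1e9:.0f}B of annual value")  # $100B to $200B

# GPT-3.5 price decline: $0.02 per 1K tokens at launch vs. $0.0005 now.
launch_price, current_price = 0.02, 0.0005
print(f"{current_price / launch_price:.1%} of the original price")  # 2.5%
```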
So you have a bit of a perfect storm:
The economic value you are able to unlock as models become more sophisticated should increase significantly with each upgrade of the model. The economic value of AGI is constrained only by our imaginations.
Pricing leverage comes from being a step function ahead of the competition, at least along some dimension. If you fall behind, the value of your model to external customers gets rapidly commoditized (of course, there is still value for your internal use cases).
MSFT, GOOG, and META have core businesses that produce fire hydrants of cash, Anthropic has found love with GOOG and AMZN, and OpenAI should continue to be able to raise money from sovereigns that have their own (more physical) fire hydrants of cash.
The net result is that in the short term, until an efficient frontier is reached on the marginal value of continuing to invest in infrastructure with the existing transformer architecture, or we run out of electricity, or a group pulls ahead with an untouchable lead thanks to some smart algorithmic work, investment in this space by these giants should continue to increase dramatically, and costs necessarily precede revenue. The prize is theoretically so large, and if a clear winner emerges, their market opportunity so uncapped, you have to keep increasing your bet.
We all are massive beneficiaries of this battle playing out. The extreme pace of investment in infrastructure / training / etc, combined with the urgency that only comes from intense competition, is giving us all the gift of an insane pace of innovation with models that are able to handle increasingly complicated tasks at bargain basement prices. Applications that might not be possible today, let alone economic (such as most voice and video applications), will be profitable before we know it. Giddy up!
2024-04-19 23:19:23
One of the fascinating things about what's happening in AI is that, rather than a few distinct moments of technological disruption that unlock new opportunities for startups (e.g., when Apple launched the App Store, or integrated a GPS chip into the iPhone), I believe we're going to have a rolling thunder of AI breakthroughs that catalyze startup opportunities.
Yes, it's certainly true that as the foundation models progress from 3 to 4 to 5, etc., we will mark time in retrospect with these milestones and how each step-function improvement unlocked increasingly complicated tasks that can be automated by LLMs. What feels different here is that it’s also true that single research papers will unlock new opportunities.
To take two recent examples:
What preceded ElevenLabs? https://arxiv.org/abs/2305.07243
What preceded Krea.ai? https://latent-consistency-models.github.io
The combination of both broad-based (foundation model upgrades) and narrow (research breakthroughs) step-function changes will continue to unlock brand new AI opportunities.
As Ben wrote in his comment on my last post:
One concept I like is that while the raw capacity of something like an LLM is increasing continuously over time, there's a hard threshold at which it crosses from being [not at all useful] to [useful] for a given application. Until we get true human-level AI-generated audio, ElevenLabs is impossible...but the second we do, it's a 10x improvement. Feels like part of the reason it's harder to spot these opportunities in advance.
So if you are a founder worried you’ve missed the window, don’t be. It’s a land grab right now, but a single research paper can mean the difference between [not at all useful] and [useful], and therefore a new opportunity unlock. Obvious in hindsight, but the timing is tricky to predict. It's going to be an exciting (and wild) few years.