Irrational Exuberance

By Will Larson. CTO at Carta; writes about software engineering and has authored several books, including "An Elegant Puzzle."

Stuff I learned at Carta.

2025-05-23 19:00:00

Today’s my last day at Carta, where I’ve had the chance to serve as CTO for the past two years. I learned so much working there, and I wanted to close this chapter by collecting my thoughts on those lessons. (I am heading somewhere new, and will share that news in a week or two, after firming up the communication plan with my new team.)

The most important things I learned at Carta were:

  • Working in the details – if you took a critical lens to my historical leadership style, I think the biggest issue you’d point at is that I was too comfortable operating at a high level of abstraction. Drawing on the expertise of others to fill your gaps is a valuable skill, but, like any single approach, it’s limiting when used too frequently.

    One of the strengths of Carta’s “house leadership style” is expecting leaders to go deep into the details to get informed and push pace. What I practiced there turned into the pieces on strategy testing and developing domain expertise.

  • Refining my approach to engineering strategy – over the past 18 months, I’ve written a book on engineering strategy (the posts are all in #eng-strategy-book), with the initial chapters becoming available in early release with O’Reilly next month. Fingers crossed, the full book will be released around October.

    Coming into Carta, I already had much of my core thesis about how to do engineering strategy, but Carta gave me a number of complex projects to practice on, and excellent people to practice with: thank you to Dan, Shawna and Vogl in particular! More on this project in the next few weeks.

  • Extract the kernel – everywhere I’ve worked, teams have struggled to understand executives. In every case, the executives could be clearer, but it’s not particularly interesting to frame these problems as something the executives need to fix. Sure, it’s true that they could communicate better, but that framing leaves you powerless, when in fact you have a great deal of power to make sense of confusing communication. After all, even good communicators communicate poorly sometimes.

  • Meaningfully adopting LLMs – a year ago I wrote up notes on adopting LLMs in your products, based on what we’d learned so far. Since then, we’ve learned a lot more, and LLMs themselves have significantly improved. Carta has been using LLMs in real, business-impacting workflows for over a year. That’s continuing to expand into solving more complex internal workflows, and even more interestingly into creating net-new product capabilities that ought to roll out more widely in the next few months (currently released to small beta groups).

    This is the first major technology transition I’ve experienced in a senior leadership role (I was much earlier in my career when the mobile internet went from novelty to commodity). The immense pressure to adopt faster, combined with deep uncertainty about whether this is a lasting change or a brief blip, was a lot of fun, and was the inspiration for this strategy document around LLM adoption.

  • Multi-dimensional tradeoffs – a phrase that Henry Ward uses frequently is that “everyone’s right, just at a different altitude.” That idea resonates with me, and meshes well with the ideas of multi-dimensional tradeoffs and layers of context that I find improve decision making for folks in roles that require making numerous, complex decisions. At Carta, these ideas formalized from something I intuited into something I could explain clearly.

  • Navigators – I think our most successful engineering strategy at Carta was rolling out the Navigator program, which ensured senior-most engineers had context and direct representation, rather than relying exclusively on indirect representation via engineering management. Carta’s engineering managers are excellent, but there’s always something lost as discussions extend across layers. The Navigator program probably isn’t a perfect fit for particularly small companies, but I think any company with more than 100-150 engineers would benefit from something along these lines.

  • How to create software quality – I’ve evolved my thinking about software quality quite a bit over time, but Carta was particularly helpful in distinguishing why some pieces of software are so hard to build despite having little-to-no scale from a data or concurrency perspective. These systems, which I label as “high essential complexity”, deserve more credit for their complexity, even if they have little in the way of complexity from infrastructure scaling.

  • Shaping eng org costs – a few years ago, I wrote about my mental model for managing infrastructure costs. At Carta, I got to refine my thinking about engineering salary costs, with most of those ideas getting incorporated in the Navigating Private Equity ownership strategy, and the eng org seniority mix model.

    The three biggest levers are (1) “N-1 backfills”, (2) requiring a business rationale for promotions into senior-most levels, and (3) shifting hiring into cost-efficient hiring regions. None of these are the sort of inspiring topics that excite folks, but they are all essential to the long-term stability of your organization.

  • Explaining engineering costs to boards/execs – Similarly, I finally have a clear perspective on how to represent R&D investment to boards in the language they speak, which I wrote up here, and I know how to do it quickly, without relying on manually curated internal datasets.

  • Lots of smaller stuff, like the no wrong doors policy for routing colleagues to appropriate channels; how to request headcount in a way that is convincing to executives; Act Two rationales for how people’s motivations evolve over the course of long careers (and my own personal career mission to advance the industry); and why friction isn’t velocity, even though many folks act like it is.

I’ve also learned quite a bit about venture capital, fund administration, cap tables, non-social network products, operating a multi-business line company, and various operating models. Figuring out how to sanitize those learnings to share the interesting tidbits without leaking internal details is a bit too painful, so I’m omitting them for now. Maybe some will be shareable in four or five years after my context goes sufficiently stale.

As a closing thought, I just want to say how much I’ve appreciated the folks I’ve gotten to work with at Carta. From the executive team (Ali, April, Charly, Davis, Henry, Jeff, Nicole, Vrushali) to my directs (Adi, Ciera, Dan, Dave, Jasmine, Javier, Jayesh, Karen, Madhuri, Sam, Shawna) to the navigators (there’s a bunch of y’all). The people truly are always the best part, and that was certainly true at Carta.

systems-mcp: generate systems models via LLM

2025-05-11 21:00:00

Back in 2018, I wrote lethain/systems as a domain-specific language for writing runnable systems models, and introduced it with this blog post modeling a hiring funnel. While it’s far from a perfect system, I’ve gotten a lot of value out of it over the last seven years, because it allows me to maintain systems models in version control.

As I’ve been playing with writing Model Context Protocol (MCP) servers, one I’ve been thinking about frequently is a server to help write systems syntax, and I finally put it together in the lethain/systems-mcp repository.

More detailed installation and usage instructions are in the GitHub repository, so I’ll just share a couple of screenshots and comments here. Let’s start with the load_systems_documentation tool, which loads a copy of lethain/systems/README.md and a file of example systems into the context window.

[Screenshot: Claude drafting a systems model spec for user acquisition in a social network, covering user flow from initial discovery through active contribution.]

The biggest challenge of properly writing DSLs with an LLM is providing enough in-context learning (ICL) examples, and tools designed specifically to load that context strike me as a very interesting idea. Eventually I imagine there will be generalized tools for this, e.g. a search index of the best ICL examples for a wide variety of DSLs. Until then, my guess is that this sort of tool is particularly valuable.

The second tool is run_systems_model, which takes the DSL (and an optional parameter for the number of rounds), runs the model, and returns the result.

[Screenshot: an interactive chart of the user acquisition model, plotting segments like AwareUsers, CasualUsers, EngagedUsers, and PowerUsers across simulation rounds.]

I experimented with interface design here, initially trying to return a rendered chart of the results, but ultimately even multi-modal models are just much better at working with text than with images. This meant I got the best results by returning the run’s output as JSON and then having the LLM build a tool for interacting with it.
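
For concreteness, here’s a minimal sketch of how a server along these lines might be wired up with the official MCP Python SDK. The tool names mirror the ones above, but the file layout and the systems-run invocation are my assumptions rather than the actual systems-mcp source; see the repository for the real implementation.

```python
# Hypothetical sketch of a systems-mcp-style server, not the actual
# lethain/systems-mcp code. Assumes the official `mcp` Python SDK and that
# lethain/systems installs a `systems-run` CLI that reads a model on stdin.
import subprocess
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("systems")
DOCS_DIR = Path(__file__).parent / "docs"  # assumed location of bundled docs

@mcp.tool()
def load_systems_documentation() -> str:
    """Load the systems DSL README and example models into the context window."""
    readme = (DOCS_DIR / "README.md").read_text()
    examples = (DOCS_DIR / "examples.md").read_text()
    return readme + "\n\n" + examples

@mcp.tool()
def run_systems_model(spec: str, rounds: int = 100) -> str:
    """Run a systems model spec for N rounds and return the raw results."""
    result = subprocess.run(
        ["systems-run", "-r", str(rounds)],  # assumed CLI name; swap in the real call
        input=spec,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()  # serve over stdio for clients like Claude Desktop
```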

Altogether, a fun little experiment, and another confirmation in my mind that the most interesting part of designing MCPs today is deciding where to introduce and eliminate complexity from the LLM. Introduce too little and the tool lacks power; eliminate too little and the combination rarely works.

How to provide feedback on documents.

2025-05-11 19:00:00

At Carta, we recently ran a reading group for Facilitating Software Architecture by Andrew Harmel-Law. We already loosely followed the ideas of an architectural advice process (from this 2021 article by the same Andrew Harmel-Law), but in practice we found that internal tech spec and architecture decision record (ADR) authors tended to exclusively share their documents locally within their team rather than more widely.

As we asked authors why they preferred sharing locally, the most common answer was that they got enough feedback from their team that they didn’t want to pay the time overhead of sharing widely. The wider feedback wasn’t necessarily bad or combative. It just wasn’t good enough to compensate for the additional time it cost to process. This made sense from the authors’ perspectives, but didn’t work well for me from the executive perspective, as I was seeing teams make misaligned decisions due to lack of cross-team communication.

As one step in reducing the overhead of sharing documents widely, I wrote up and shared this recommended process for providing feedback on documents:

  1. Before starting, remember that the goal of providing feedback on a document is to help the author. Optimizing for anything else, even if it’s a worthy cause, discourages authors from sharing their future writing.

  2. Start by skimming the document to understand its structure and where various kinds of topics are addressed. Why? This helps you avoid giving feedback about how the document’s structure diverges from the structure you imagined. It also reduces questions about topics that are answered later in the document.

    Both of these sorts of feedback are a distraction during a discussion on a tech spec. In general, it’s better to avoid them. If you notice an author making the same significant structural mistake over several ADRs, it’s worth delivering that feedback separately.

  3. After skimming, reread the document, leaving comments with concerns. Each comment should include these details:

    1. What your suggested change or concern is
    2. Why you believe this is meaningful to address
    3. How important this seems (from ignorable nitpick to critical)
  4. If you find yourself leaving more than three or four issues, then either raise your threshold for commenting or schedule time with the author to talk through the feedback. If the document is unreasonably weak, then it’s appropriate to nudge their leadership to dig into what’s happening on that team.

The most important idea behind these steps is that your goal as a feedback giver is to help the document’s author. It is not to protect your team’s strategy or platform. It is not to optimize for your goals. It’s to help the author. This might feel wrong, but ultimately optimizing for anything else will lead to an environment where sharing widely is an irrational behavior.

As a final aside, I think the user experience around commenting on documents is fundamentally wrong in most document editors. For example, Google Docs treats individual comments as first-order objects, similarly to how old version control systems like CVS tracked changes to individual files without tracking an overall state of the project. Ultimately, you want to collect all your comments into a bundle, then review that bundle for consistency and duplicates, and then submit that bundle as commentary, but editors don’t support that flow particularly well.

Public company comparables.

2025-05-03 22:00:00

A few years ago I wrote about reading a Profit & Loss statement, which is a foundational executive skill. I also subsequently wrote about ways to measure your engineering organization. Despite having written those, I still spend a lot of time wondering about effective ways to represent an engineering organization to your board of directors.

Over the past few years, one of the most useful charts I’ve found for explaining an R&D organization is a scatterplot of R&D spend as a % of margin versus YoY growth of last twelve months (LTM) revenue. Unlike so many other metrics, this is an explicit measure of your R&D organization’s value as an investment, relative to peer organizations.

Until recently, I assumed building this dataset required reading financial filings, but my strategic finance partner at Carta, Tyler Braslow, pointed out that you can get all of this data for the tech sector from Meritech Analytics, for free.

[Screenshot: the Meritech Analytics homepage, which offers free public software company benchmarking, including tools like regression analysis.]

When you log in to Meritech, you’re dropped into a table of public company comparables for tech companies. This is the exact dataset I’d been looking for to build this chart.

[Screenshot: the Meritech Software Index table, showing metrics like market cap, enterprise value, and percentage price changes for companies such as Adobe and AppLovin, with means and medians.]

After logging in, you can copy the contents of that table into Google Sheets, Excel, or whatever you’re most comfortable with.

[Screenshot: the copied data in a Google Sheets spreadsheet titled “PublicComps”, with columns for price changes, market capitalization, enterprise value, and several valuation metrics.]

Within that sheet, the columns you care about are:

  • % YoY Growth LTM Rev (column Q for me) – how much “last twelve months revenue” has grown year over year, as a percentage
  • % LTM Margins for R&D (column U for me) – how much R&D spend is as a percentage of last twelve months margin
  • LTM Revenue (column O for me) – although I don’t show this in the scatterplot, I find this one useful for debugging outlier values

Hiding the other columns gives you a much simpler table.

[Screenshot: the simplified table of LTM revenue, YoY growth, and LTM R&D margin by company; Adobe has the highest LTM revenue at $21,505M, with 11% YoY growth and 14% R&D margin.]

From that table, you’re then able to build the scatterplot. Note that being “higher” means your R&D spend as a percentage of LTM margin is higher, which is a bad thing. The best companies are to the bottom and to the right; the worst companies are to the top and to the left.

[Screenshot: the scatterplot of R&D spend as a percentage of margin versus YoY revenue growth, with most companies clustered in the 10%-30% range on both axes.]
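
If you’d rather script the chart than build it by hand in Sheets, here’s a minimal sketch using pandas and matplotlib. It assumes you’ve exported the trimmed table to a comps.csv with the hypothetical column names below; adjust those to match your sheet.

```python
# Hedged sketch: plot R&D spend as a % of LTM margin against YoY LTM revenue
# growth from an exported comps.csv. Column names below are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("comps.csv")  # columns: company, yoy_growth_pct, rnd_pct_margin

fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(df["yoy_growth_pct"], df["rnd_pct_margin"])
for _, row in df.iterrows():
    ax.annotate(row["company"], (row["yoy_growth_pct"], row["rnd_pct_margin"]),
                fontsize=7, alpha=0.8)

# Lower and further right is better: less R&D spend per unit of growth.
ax.set_xlabel("% YoY growth, LTM revenue")
ax.set_ylabel("R&D spend as % of LTM margin")
ax.set_title("Public company comparables")
plt.tight_layout()
plt.savefig("comps_scatter.png", dpi=150)
```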

With this chart as a starting point, you can then plot your own company in and show where you stand. You could also show how your company’s position in the chart has evolved over time: hopefully improving. Finally, you might want to cull some of these data points to better match your actual public company comparables. The Meritech dataset has 106 entries, but you might prefer a more representative thirty.

How to filter out old email from inbox

2025-05-03 21:00:00

Every few years I take a pass at reducing the chaos in my personal inboxes. There are simply too many emails to deal with, and that generally leads to me increasingly failing to follow up on important email.

Up to this point, my strategy has largely been filtering out emails that I never want to read. But there’s another category: email I often want to read while it’s fresh, but never want to read afterwards. For example, calendar reminders, some mailing lists, some newsletters, and so on.

I decided to figure out how I could set up a system where I could mark a number of things as “filter three days after receipt”. This is a nice compromise, because I do want to see those things, but I don’t want to have to remember to archive them after the fact.

You can write a search query for this in Gmail:

from:(calendar-notification@google.com) older_than:3d

However, if you try to create a Gmail filter using that query, it turns the older_than:3d into a fixed point in time rather than doing what you want.

[Screenshot: the Gmail filter creation dialog, where the older_than:3d clause has been converted into a fixed date range around May 3, 2025.]

It seems this is unsolvable within Gmail itself. However, some quick searching suggested it was possible to solve it with a Google Apps Script, so I asked Claude to write the script for me.

Following those instructions, I went to script.google.com, which I had not visited in many years. I edited the generated script from Claude to use the tag “TempMsg”, to actually archive messages (originally those lines were commented out), and to limit itself to the first fifty items matching that tag. You can find the full code in this gist.

[Screenshot: Claude’s instructions for creating a new Google Apps Script project to filter Gmail messages with a given tag that are older than three days.]

I attempted to run this as is, and got an error message that I needed to grant permissions. That requires three clicks within the Google Scripts UI.

[Screenshot: adding the Gmail API service to the Apps Script project.]

This also requires approving the somewhat scary message that I trust myself.

[Screenshot: Google’s warning that it hasn’t verified this app, with the option to proceed anyway if you trust the developer.]

From there I tried to run the script, and it failed because the TempMsg tag didn’t exist in my inbox.

[Screenshot: the script’s execution log reporting that the “TempMsg” label was not found.]

So I went ahead and created that tag, and set up some filters to assign it to certain email senders.

[Screenshot: a Gmail filter assigning the “TempMsg” label to emails from calendar-notification@google.com.]

After that, I was able to run the script and it worked properly. For a bit I had convinced myself it was failing, because it doesn’t remove messages from the past three days. That is exactly how it’s supposed to work, but I would run it, see messages with the tag still in my inbox, and think it was broken. Whoops.

After convincing myself it was working, I added a periodic trigger to run this.

[Screenshot: a time-driven trigger configured to run filterOldGmailMessages daily between midnight and 1am, with immediate failure notifications.]

I now have this running on a daily basis, and it’s given me a nice new tool for managing my email a bit better. After verifying it, I also used the tag manager to “hide” this tag in the inbox, so I don’t have to see the TempMsg tag everywhere. If I ever need to debug things, I can always make it visible again.
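
If you’d rather skip Apps Script entirely, the same cleanup can be scripted against the Gmail API directly. Here’s a hedged sketch in Python; it’s my illustration rather than the script from the gist, and it assumes you’ve created OAuth client credentials (credentials.json) with the gmail.modify scope.

```python
# Alternative sketch (not the Apps Script from the gist): archive messages
# labeled TempMsg that are older than three days, via the Gmail API.
# Assumes OAuth client credentials in credentials.json with gmail.modify scope.
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.modify"]

def gmail_service():
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    creds = flow.run_local_server(port=0)
    return build("gmail", "v1", credentials=creds)

def archive_old_tempmsg(service, max_items: int = 50) -> int:
    resp = service.users().messages().list(
        userId="me", q="label:TempMsg older_than:3d", maxResults=max_items
    ).execute()
    ids = [m["id"] for m in resp.get("messages", [])]
    if ids:
        # Archiving is just removing the INBOX label; the TempMsg label stays.
        service.users().messages().batchModify(
            userId="me", body={"ids": ids, "removeLabelIds": ["INBOX"]}
        ).execute()
    return len(ids)

if __name__ == "__main__":
    archived = archive_old_tempmsg(gmail_service())
    print(f"archived {archived} messages")
```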

How should Stripe deprecate APIs? (~2016)

2025-04-24 21:00:00

While Stripe is a widely admired company for things like its creation of the Sorbet typer project, I personally think that Stripe’s most interesting strategy work is also among its most subtle: its willingness to significantly prioritize API stability.

This strategy is almost invisible externally. Internally, discussions around it were frequent and detailed, but mostly confined to dedicated API design conversations. API stability isn’t just a technical design quirk; it’s a foundational decision in an API-driven business, and I believe it is one of the unsung heroes of Stripe’s business success.

This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and very early, unpublished drafts.

Reading this document

To apply this strategy, start at the top with Policy. To understand the thinking behind this strategy, read sections in reverse order, starting with Explore.

More detail on this structure in Making a readable Engineering Strategy document.

Policy & Operation

Our policies for managing API changes are:

  • Design for long API lifetime. APIs are not inherently durable. Instead, we have to design thoughtfully to ensure they can support change. When designing a new API, build a test application that doesn’t yet use the API, then migrate it to the new API. Consider how integrations might evolve as applications change. Perform these migrations yourself to understand potential friction with your API. Then think about the future changes that we might want to implement on our end. How would those changes impact the API, and how would they impact the application you’ve developed?

    At this point, take your API to API Review for initial approval as described below. Following that approval, identify a handful of early adopter companies who can place additional pressure on your API design, and test with them before releasing the final, stable API.

  • All new and modified APIs must be approved by API Review. API changes may not be enabled for customers prior to API Review approval. Change requests should be sent to the api-review email group. For prior art, review the api-review archive for earlier requests and the feedback they received.

    All requests must include a written proposal. Most requests will be approved asynchronously by a member of API Review. Complex or controversial proposals will require live discussions to ensure API Review members have sufficient context before making a decision.

  • We never deprecate APIs without an unavoidable requirement to do so. Even if it’s technically expensive to maintain support, we incur that support cost. To be explicit, we define API deprecation as any change that would require customers to modify an existing integration.

    If such a change were to be approved as an exception to this policy, it must first be approved by the API Review, followed by our CEO. One example where we granted an exception was the deprecation of TLS 1.2 support due to PCI compliance obligations.

  • When significant new functionality is required, we add a new API. For example, we created /v1/subscriptions to support subscription workflows rather than extending /v1/charges to add subscription support.

    With the benefit of hindsight, a good example of this policy in action was the introduction of the Payment Intents APIs to maintain compliance with Europe’s Strong Customer Authentication requirements. Even in that case the charge API continued to work as it did previously, albeit only for non-European Union payments.

  • We manage this policy’s implied technical debt via an API translation layer. We release changed APIs as versions, tracked in our API version changelog. However, we maintain only one implementation internally: the latest version of the API. On top of that implementation, a series of version transformations are maintained, which allow us to support prior versions without maintaining them directly. While this approach doesn’t eliminate the overhead of supporting multiple API versions, it significantly reduces complexity by letting us keep just a single, modern implementation.

    All API modifications must also update the version transformation layers to allow the new version to coexist peacefully with prior versions. (A rough sketch of this layering appears after this list.)

  • In the future, SDKs may allow us to soften this policy. While a significant number of our customers integrate directly with our APIs, that share has dropped meaningfully over time. Instead, most new integrations are performed via one of our official API SDKs.

    We believe that in the future, it may be possible for us to make more backwards incompatible changes because we can absorb the complexity of migrations into the SDKs we provide. That is certainly not the case yet today.
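
To make the translation-layer idea concrete, here’s a hedged sketch of how version transformations might stack on top of a single modern implementation. Everything below is illustrative; it is not Stripe’s actual internals.

```python
# Illustrative sketch of an API translation layer: maintain only the latest
# implementation, then downgrade responses through per-version transforms.
from typing import Any, Callable

Payload = dict[str, Any]
Transform = Callable[[Payload], Payload]

# Ordered newest-first. Each entry rewrites a response from the shape used
# after that cutover into the shape clients pinned to older versions expect.
TRANSFORMS: list[tuple[str, Transform]] = []

def transform_for(cutover: str) -> Callable[[Transform], Transform]:
    def register(fn: Transform) -> Transform:
        TRANSFORMS.append((cutover, fn))
        return fn
    return register

@transform_for("2022-08-01")
def customer_as_bare_id(resp: Payload) -> Payload:
    # Hypothetical change: older clients expect `customer` as a bare id,
    # not an expanded object.
    if isinstance(resp.get("customer"), dict):
        resp["customer"] = resp["customer"]["id"]
    return resp

def render_for_version(resp: Payload, pinned: str) -> Payload:
    """Compute the latest shape once, then downgrade step by step until we
    reach the version pinned on the requesting account."""
    for cutover, downgrade in TRANSFORMS:  # newest-first
        if pinned >= cutover:  # date-style versions sort lexicographically
            break
        resp = downgrade(resp)
    return resp

# e.g. render_for_version(latest_charge_payload, pinned="2020-03-02")
```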

Diagnosis

Our diagnosis of the impact of API changes and deprecation on our business is:

  • If you are a small startup composed of mostly engineers, integrating a new payments API seems easy. However, for a small business without dedicated engineers—or a larger enterprise involving numerous stakeholders—handling external API changes can be particularly challenging.

    Even if this is only marginally true, we’ve modeled the effect of minimizing API changes on long-term revenue growth, and it is significant, unlocking our ability to benefit from other churn-reduction work.

  • While we believe API instability directly creates churn, we also believe that API stability directly retains customers by increasing the overhead of migrating away, even when they might otherwise want to change providers. We believe that hypergrowth customers are particularly unlikely to change payments API providers absent a concrete motivation like an API change or a payment plan change.

  • We are aware of relatively few companies that provide long-term API stability in general, and particularly few for complex, dynamic areas like payments APIs. We can’t assume that companies that make API changes are ill-informed. Rather it appears that they experience a meaningful technical debt tradeoff between the API provider and API consumers, and aren’t willing to consistently absorb that technical debt internally.

  • Future compliance or security requirements—along the lines of our upgrade from TLS 1.2 to TLS 1.3 for PCI—may necessitate API changes. There may also be new tradeoffs exposed as we enter new markets with their own compliance regimes. However, we have limited ability to predict these changes at this point.