2025-10-25 19:00:00
On November 3rd, 2023, I posted Thoughts on writing and publishing Primer to celebrate the completion of my work on my prior book, The Engineering Executive’s Primer. Three weeks later, I posted Engineering strategy notes on November 21st, 2023, as I started to pull together thoughts to write my upcoming book, Crafting Engineering Strategy.
Those initial thoughts turned into my first chapter draft, How should you adopt LLMs?, on May 14th, 2024. Writing continued all the way through the Stripe API deprecation strategy, my final draft, completed the afternoon of April 5th, 2025. In between there were another 35 chapters: two of which are Wardley maps, four are systems models, and two are short section introductions.
This post is a collection of notes on writing Crafting Engineering Strategy, how I decided to publish it, ways that I did and did not use foundational models in writing it, and so on.
Buy on Amazon. Read online on craftingengstrategy.com.
One of my decade goals for the 2020s, last updated in my 2024 year in review, is to write three books on some aspect of engineering. I published An Elegant Puzzle in 2019, so that one doesn’t count, but I have two other books published in the 2020s: Staff Engineer in 2021, and The Engineering Executive’s Primer in 2024. So I knew I needed one more to complete this decade goal. At one point, I was planning on finishing Infrastructure Engineering as my fourth book, but honestly I’ve lost traction on that topic a bit over the past few years.
Instead, the thing I’d been thinking about a lot was engineering strategy. I’ve written chapters on this topic in my last two books, and each of those chapters was tortured by the sheer volume of things I wanted to write. By the end of 2024, I’d done enough strategy work myself that I was pretty confident I could take the best ideas from Staff Engineer–anchoring the essays in amazing stories–and the topic I’d spent so much time on over the past three years, and turn it into a pretty good book.
It’s also a topic that, like Staff Engineer in 2020 when I started writing it, was missing an anchor book pulling the ideas together for discussion. There are so many great books about strategy out there. There are some great books on engineering strategy specifically, but few of them are widely read. My aspiration in writing this book was to potentially write that anchor book. It’ll take a decade or two to determine whether or not I’ve succeeded. At a minimum, I think this book will either become that anchor or annoy someone enough that they write a better anchor instead. Either way, I’ll mark it as a win for advancing the industry’s discussion a bit.
Buy the AI Companion to Crafting Engineering Strategy on Amazon.
While I was writing this book, I was also increasingly using foundational models at work. Then I was using them to write tools for managing this book. Over time, I became increasingly fascinated with the idea of having a version of this book optimized for usage with large language models.
For this book in particular, the idea that you could collaborate with it on creating your own engineering strategy was very appealing. I’m excited that this ultimately translated into an LLM-optimized edition that you can also purchase on Amazon, or you can read the AI companion’s text on craftingengineeringstrategy.com, although you’ll have to buy the actual book to get the foundational model-optimized file itself (i.e. a Markdown version of the book that plays well with foundational models).
This is the culmination of my thinking about the advantage of authors in the age of LLMs, where I see a path to books turning into things that are both read and also collaborated with.
Regarding the foundational model-optimized version: at its core, this is just running repomix against the repository of Markdown files, but there are a number of interesting optimizations that make for a better product. For example, the script adds absolute paths to every referenced file and link, including a link to each chapter at its beginning, to help models treat them as references.
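As a rough illustration of the link-rewriting piece, here is a minimal sketch; the base URL, file layout, and output naming are assumptions rather than the actual build script.

```python
import re
from pathlib import Path

# Assumed base URL for the published site; the real script's value may differ.
BASE_URL = "https://craftingengstrategy.com"

# Match Markdown links that are not already absolute URLs.
LINK_RE = re.compile(r"\[([^\]]+)\]\((?!https?://)([^)]+)\)")

def absolutize_links(markdown: str) -> str:
    """Rewrite relative Markdown links into absolute URLs."""
    return LINK_RE.sub(
        lambda m: f"[{m.group(1)}]({BASE_URL}/{m.group(2).lstrip('./')})",
        markdown,
    )

for chapter in Path("chapters").glob("*.md"):
    text = absolutize_links(chapter.read_text())
    # Prepend the chapter's own canonical link so models can cite it.
    text = f"Source: {BASE_URL}/{chapter.stem}\n\n{text}"
    chapter.with_suffix(".llm.md").write_text(text)
```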
It’s also translating images into text descriptions. For example, here’s an image from one of the chapters.

Then here is the representation after I wrote a script to pull out every image and replace it with a description.

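The image-replacement script has roughly the following shape; describe_image here is a hypothetical stand-in for whatever vision-capable model call actually generates the description.

```python
import re
from pathlib import Path

IMAGE_RE = re.compile(r"!\[([^\]]*)\]\(([^)]+)\)")

def describe_image(path: str, alt_text: str) -> str:
    # Hypothetical helper: in practice this would send the image to a
    # vision-capable model and return a prose description of it.
    return f"[Image description for {path}: {alt_text or 'diagram'}]"

def replace_images(markdown: str) -> str:
    """Swap every Markdown image for a text description an LLM can use."""
    return IMAGE_RE.sub(
        lambda m: describe_image(m.group(2), m.group(1)),
        markdown,
    )

for chapter in Path("chapters").glob("*.md"):
    chapter.write_text(replace_images(chapter.read_text()))
```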
My guess is that many books are going to move in the direction of having an LLM-optimized edition, and I’m quite excited to see where this goes. I’ll likely work on an LLM-optimized edition of Staff Engineer at some point in the near-ish future as well.
When non-authors talk about LLMs making it easier to write books, they often think about LLMs literally writing books. As a fairly experienced author, I have absolutely no interest in an LLM writing any part of my book. If I don’t bring something unique to the book, the sort of thing that an LLM would not bring, then generally either it isn’t a book worth writing or I am the wrong author to write it. As a result, I wrote every word in this book. I didn’t use LLMs to expand bullets into paragraphs. I didn’t use LLMs to outline or assemble topics. I didn’t use LLMs to find examples to substantiate my approach. Again, this is what I know how to do fairly well at this point.
However, there are many things that I’m not that good at, where I relied heavily on an LLM. In particular, I used LLMs to copy-edit for typos, grammatical errors and hard-to-read sentences. I also used LLMs to write several scripts that I used in writing this book:
- grammar.py, which sent one or more chapters to an LLM for grammatical and spelling correction, returning the identified errors as regular expressions that I could individually accept or reject (there’s a sketch of this after the list)
- import.py, which translated the blog post version of each chapter into the craftingengstrategy.com optimized version
- links.py, which I used for standardizing the format of chapter references and balancing the frequency with which strategies were referenced
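Here is a minimal sketch of what grammar.py does, assuming the Anthropic Python SDK and a response formatted as (pattern, replacement) pairs; the real script’s prompt, model, and output handling differ in the details.

```python
import json
import re
import sys

import anthropic  # assumes the Anthropic Python SDK is installed

PROMPT = (
    "Find spelling and grammar errors in the following chapter. "
    "Return a JSON list of objects with 'pattern' (a regular expression "
    "matching the error) and 'replacement' (the corrected text).\n\n"
)

client = anthropic.Anthropic()

def review(path: str) -> None:
    text = open(path).read()
    resp = client.messages.create(
        model="claude-3-7-sonnet-latest",  # assumed model name
        max_tokens=4096,
        messages=[{"role": "user", "content": PROMPT + text}],
    )
    for fix in json.loads(resp.content[0].text):
        # Show each proposed fix and accept or reject it individually.
        print(f"{fix['pattern']} -> {fix['replacement']}")
        if input("Apply? [y/N] ").lower() == "y":
            text = re.sub(fix["pattern"], fix["replacement"], text)
    open(path, "w").write(text)

if __name__ == "__main__":
    review(sys.argv[1])
```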
I also generated craftingengstrategy.com’s FAQ by attaching the full LLM-optimized version to Anthropic’s Claude 3.7 with extended thinking and running this prompt:
I want to make a frequently asked questions featuring topics from this book. What are twenty good question and answer pairs based on this book, formatted as markdown (questions should be H2, and answers in paragraphs). Answers should include links to relevant chapters at the end of each answer.
Some example questions that I think would be helpful are:
1. Is engineering strategy a real thing?
2. How is engineering strategy different from strategy in general?
3. What are examples of engineering strategy?
4. What template should I use to create an engineering strategy?
5. Can engineers do engineering strategy? Is engineering strategy only for executives?
6. How to get better at engineering strategy?
7. Are there jobs in engineering strategy?
8. What are other engineering strategy resources?
Please directly answer those 8 questions, and then include another 10-15 question/answer pairs as well.
This worked quite well, in my opinion, generating a better FAQ in a very small amount of time than the one I hand-wrote for Staff Engineer. SEO seems well and truly “cooked” based on this experience.
Just like I don’t see software engineers being replaced by LLMs, I don’t see authors being replaced either.
For Staff Engineer, I put up staffeng.com. For The Engineering Executive’s Primer I just posted things on this blog, without making a dedicated, standalone site. For Crafting Engineering Strategy, I decided to put together a dedicated site again, which maybe deserves an explanation.
My writing over the past few years is anchored in my goal of advancing the industry, and I think that having a well-structured, referenceable version of the book online is a valuable part of that. If people want to recommend someone read Staff Engineer, they can simply point them to staffeng.com, and they can read the whole book. They might also buy it, which is great, but it’s fine if they don’t. The Engineering Executive’s Primer is just much less referenceable online, so someone ultimately has to decide to buy it before testing the content, which undermines its ability to advance the industry.
For the record, that wasn’t an O’Reilly limitation, just poor planning on my part. An Elegant Puzzle didn’t require a standalone site, so I thought that Primer wouldn’t either, but I think that’s more a reflection of An Elegant Puzzle being a collection of this blog’s writing in the 2010s rather than the best way to support the typical book.
At this point, my experience is that most books benefit from having a dedicated site, and that it doesn’t detract from sales numbers. Rather, if properly done with clear calls to action on the site, it supports sales very effectively.
I worked with O’Reilly to publish this book, same as I did for my previous book, The Engineering Executive’s Primer. My continued experience is that it’s harder to create a book with a publisher and the financial outcomes are significantly muted, but the book itself is almost always a much better book than you would have created on your own.
For me, that was the right tradeoff for this book.
I’m not at all sure this is the last book that I’ll ever write, but I’ve completed my decade writing goal for the 2020s, and I’m committed to this being my last book of the 2020s. For the past seven years I have been continually writing, editing or promoting a book, and I am exhausted with that process. I’ve loved getting to do it, and am grateful for having had the chance to do this four times. But, I’m still exhausted with it!
I could imagine eventually having another book to write, but that is definitely not now. Instead I want to spend more time with my family and my son, writing software professionally myself (rather than exclusively leading teams doing the writing), and on other projects that are anything but writing another book.
2025-07-20 19:00:00
Last weekend, I wrote a bit about using Zapier to load Notion pages as prompts to comment on other Notion pages. That worked well enough, but not that well. This weekend I spent some time getting the next level of this working, creating an agent that runs as an AWS Lambda. This, among other things, allowed me to rely on agent tool usage to support both page and block-level comments, and altogether I think the idea works extremely well.
This was mostly implemented by Claude Code, and I think the code is actually fairly bad as a result, but you can see the full working implementation at lethain:basic_notion-agent on GitHub. Installation and configuration options are there as well.
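To give a sense of the mechanics, here is a much-simplified sketch of the Lambda handler and the two commenting tools; the tool names, model, and Notion calls are illustrative assumptions rather than the project’s actual code.

```python
import json
import os

import anthropic
import requests

NOTION_HEADERS = {
    "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}

# Two tools: one for a summary comment on the page, one for inline
# comments anchored to a specific block. (Illustrative schemas.)
TOOLS = [
    {
        "name": "comment_on_page",
        "description": "Leave a summary comment on the whole page.",
        "input_schema": {
            "type": "object",
            "properties": {"page_id": {"type": "string"}, "text": {"type": "string"}},
            "required": ["page_id", "text"],
        },
    },
    {
        "name": "comment_on_block",
        "description": "Leave an inline comment on a specific block.",
        "input_schema": {
            "type": "object",
            "properties": {"block_id": {"type": "string"}, "text": {"type": "string"}},
            "required": ["block_id", "text"],
        },
    },
]

def add_comment(parent: dict, text: str) -> None:
    # Assumes the Notion comments endpoint accepts this parent; block-anchored
    # comments in particular are an assumption of this sketch.
    requests.post(
        "https://api.notion.com/v1/comments",
        headers=NOTION_HEADERS,
        json={"parent": parent, "rich_text": [{"text": {"content": text}}]},
    )

def lambda_handler(event, context):
    page = json.loads(event["body"])  # webhook payload describing the page to review
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-7-sonnet-latest",  # assumed model name
        max_tokens=4096,
        tools=TOOLS,
        messages=[{"role": "user", "content": page["content"]}],
    )
    for block in response.content:
        if block.type == "tool_use" and block.name == "comment_on_page":
            add_comment({"page_id": block.input["page_id"]}, block.input["text"])
        elif block.type == "tool_use" and block.name == "comment_on_block":
            add_comment({"block_id": block.input["block_id"]}, block.input["text"])
    return {"statusCode": 200}
```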
Watch a quick walkthrough of the project I recorded on YouTube.
To give a sense of what the end experience is, here are some screenshots. You start by creating a prompt in a Notion document.

Then it will provide inline comments on blocks within your document.

It will also provide a summary comment on the document overall (although this is configurable if you only want in-line comments).

A feature I particularly like is that the agent is aware of existing comments on the document, and who made them, and will reply to those comments.

Altogether, it’s a fun little project and works surprisingly well, as almost all agents do with enough prompt tuning.
2025-07-20 19:00:00
One of my side quests at work is to get a simple feedback loop going where we can create knowledge bases that comment on Notion documents. I was curious if I could hook this together following these requirements:
Ultimately, I was able to get it working. So a quick summary of how it works, some comments on why I don’t particularly like this approach, then some more detailed comments on getting it working.
Create a Notion database of prompts.

Create a specific prompt for providing feedback on RFCs.

Create a Notion database for all RFCs.

Add an automation into this database that calls a Zapier webhook.

The Zapier webhook does a variety of things that culminate in using the RFC prompt to provide feedback on the specific RFC as a top-level comment in the RFC.

Altogether this works fairly well.
The best thing about this approach is that it actually works, and it works fairly well. However, as we dig into the implementation details, you’ll also see that a series of things are unnaturally difficult with Zapier:
md2notion
Ultimately, I could only recommend this approach as an initial validation. It’s definitely not the right long-term resting place for this kind of approach.
I already covered the Notion side of the integration, so let’s dig into the Zapier pieces a bit. Overall it had eight steps.

I’ve skipped the first step, which was just a default webhook receiver.
The second step was retrieving a statically defined Notion page containing the prompt. (In later steps I just use the Notion API directly, which I would do here too if I were redoing this, but this worked as well. The advantage of the API is that it returns a real JSON object; this step doesn’t, probably because I didn’t specify the content-type header or some such.)

This is the configuration page of step 2, where I specify the prompt’s page explicitly.
Probably because I didn’t set the content-type header, I think I was getting POST-formatted data here, so I just pulled the fields out with a regular expression. It’s a bit sloppy, but hey, it worked, so there’s that.
Here I use the Notion API request tool to retrieve the updated RFC (as opposed to the prompt, which we already retrieved).
The API request returns a JSON object that you can navigate without writing regular expressions, so that’s nice.
Then we send both the prompt as system instructions and the RFC as the user message to OpenAI.
Then I pass the response from OpenAI to json.dumps to encode it for inclusion in an API call. This is mostly solving for newlines needing to be \n rather than literal newlines.
Then I format the response into an API request that adds a comment to the document.

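For comparison, here is a rough sketch of those last few steps as plain Python rather than Zapier steps; the model name and page ID are placeholders rather than values from my actual setup.

```python
import json
import os

import requests
from openai import OpenAI

client = OpenAI()

def review_rfc(prompt: str, rfc_text: str, page_id: str) -> None:
    # Send the prompt as system instructions and the RFC as the user message.
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": rfc_text},
        ],
    )
    feedback = completion.choices[0].message.content

    # In Zapier I had to run the model output through json.dumps so newlines
    # became \n before splicing it into the raw request body; serializing the
    # payload ourselves does the same escaping here.
    body = json.dumps({
        "parent": {"page_id": page_id},
        "rich_text": [{"text": {"content": feedback}}],
    })

    requests.post(
        "https://api.notion.com/v1/comments",
        headers={
            "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
            "Notion-Version": "2022-06-28",
            "Content-Type": "application/json",
        },
        data=body,
    )
```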
Anyway, this wasn’t beautiful, and I think you could do a much better job by just doing all of this in Python, but it’s a workable proof of concept.
2025-07-19 19:00:00
For managers who have spent a long time reporting to a specific leader or working in an organization with well‑understood goals, it’s easy to develop skill gaps without realizing it. Usually this happens because those skills were not particularly important in the environment you grew up in. You may become extremely confident in your existing skills, enter a new organization that requires a different mix of competencies, and promptly fall on your face.
There are a few common varieties of this, but the one I want to discuss here is when managers grow up in an organization that operates from top‑down plans (“orchestration‑heavy roles”) and then find themselves in a sufficiently senior role, or in a bottom‑up organization, that expects them to lead rather than orchestrate (“leadership‑heavy roles”).
You can break the components of solving a problem down in a number of ways, and I’m not saying this is the perfect way to do it, but here are six important components of directing a team’s work:
In an orchestration‑heavy management role, you might focus only on the second half of these steps. In a leadership‑heavy management role, you work on all six steps. Folks who’ve only worked in orchestration-heavy roles often have no idea that they are expected to perform all of these. So, yes, there’s a skill gap in performing the work, but more importantly there’s an awareness gap that the work actually exists to be done.
Here are a few ways you can identify an orchestration‑heavy manager that doesn’t quite understand their current, leadership‑heavy circumstances:
All of these things are still valuable in a leadership‑heavy role, but they just aren’t necessarily the most valuable things you could be doing.
There is a steep learning curve for managers who find themselves in a leadership‑heavy role, because it’s a much more expansive role. However, it’s important to realize that there are no senior engineering leadership roles focused solely on orchestration. You either learn this leadership style or you get stuck in mid‑level roles (even in organizations that lean orchestration-heavy).
Further, the technology industry generally believes it overinvested in orchestration‑heavy roles in the 2010s. Consequently, companies are eliminating many of those roles and preventing similar roles from being created in the next generation of firms. There’s a pervasive narrative attributing this shift to the increased productivity brought by LLMs, but I’m skeptical of that relationship—this change was already underway before LLMs became prominent.
My advice for folks working through the leadership‑heavy role learning curve is:
Think of your job’s core loop as four steps:
If you are not doing these four things, you are not performing your full role, even if people say you do some parts well. Similarly, if you want to get promoted or secure more headcount, those four steps are the path to doing so (I previously discussed this in How to get more headcount).
Ask your team for priorities and problems to solve. Mining for bottom‑up projects is a critical part of your role. If you wait only for top‑down and lateral priorities, you aren’t performing the first step of the core loop.
It’s easy to miss this expectation—it’s invisible to you but obvious to everyone else, so they don’t realize it needs to be said. If you’re not sure, ask.
If your leadership chain is running the core loop for your team, it’s because they lack evidence that you can run it yourself. That’s a bad sign. What’s “business as usual” in an orchestration‑heavy role actually signals a struggling manager in a leadership‑heavy role.
Get your projects prioritized by following the core loop. If you have a major problem on your team and wonder why it isn’t getting solved, that’s on you. Leadership‑heavy roles won’t have someone else telling you how to frame your team’s work—unless they think you’re doing a bad job.
Picking the right problems and solutions is your highest‑leverage work. No, this is not only your product manager’s job or your tech lead’s—it is your job. It’s also theirs, but leadership overlaps because getting it right is so valuable.
Generalizing a bit, your focus now is effectiveness of your team’s work, not efficiency in implementing it. Moving quickly on the wrong problem has no value.
Understand your domain and technology in detail. You don’t have to write all the software—but you should have written some simple pull requests to verify you can reason about the codebase. You don’t have to author every product requirement or architecture proposal, but you should write one occasionally to prove you understand the work.
If you don’t feel capable of that, that’s okay. But you need to urgently write down steps you’ll take to close that gap and share that plan with your team and manager. They currently see you as not meeting expectations and want to know how you’ll start meeting them.
If you think that gap cannot be closed or that it’s unreasonable to expect you to close it, you misunderstand your role. Some organizations will allow you to misunderstand your role for a long time, provided you perform parts of it well, but they rarely promote you under those circumstances—and most won’t tolerate it for senior leaders.
Align with your team and cross‑functional stakeholders as much as you align with your executive. If your executive is wrong and you follow them, it is your fault that your team and stakeholders are upset: part of your job is changing your executive’s mind.
Yes, it can feel unfair if you’re the type to blame everything on your executive. But it’s still true: expecting your executive to get everything right is a sure way to feel superior without accomplishing much.
Now that I’ve shared my perspective, I admit I’m being a bit extreme on purpose—people who don’t pick up on this tend to dispute its validity strongly unless there is no room to debate. There is room for nuance, but if you think my entire point is invalid, I encourage you to have a direct conversation with your manager and team about their expectations and how they feel you’re meeting them.
2025-07-18 19:00:00
I’m turning forty in a few weeks, and there’s a listicle archetype along the lines of “Things I’ve learned in the first half of my career as I turn forty and have now worked roughly twenty years in the technology industry.” How do you write that and make it good? Don’t ask me. I don’t know!
As I considered what I would write to summarize my career learnings so far, I kept thinking about updating my post Advancing the industry from a few years ago, where I described using that concept as a north star for my major career decisions. So I wrote about that instead.
Adopting advancing the industry as my framework for career decisions came down to three things:
The opportunity to be more intentional: After ~15 years in the industry, I entered a “third stage” of my career where neither financial considerations (1st stage) nor controlling pace to support an infant/toddler (2nd stage) were my highest priorities. Although I might not be working wholly by choice, I had enough flexibility that I could no longer hide behind “maximizing financial return” to guide, or excuse, my decision making.
My decade goals kept going stale. Since 2020, I’ve tracked against my decade goals for the 2020s, and annual tracking has been extremely valuable. Part of that value was realizing that I’d made enough progress on several initial goals that they weren’t meaningful to continue measuring.
For example, I had written and published three professional books. Publishing another book was not a goal for me. That’s not to say I wouldn’t write another—in fact, I have—but it would serve another goal, not be a goal in itself. As a second example, I set a goal to get twenty people I’ve managed or mentored into VPE/CTO roles running engineering organizations of 50+ people or $100M+ valuation. By the end of last year, ten people met that criteria after four years. Based on that, it seems quite likely I’ll reach twenty within the next six years, and I’d already increased that goal from ten to twenty a few years ago, so I’m not interested in raising it again.
“Advancing the industry” offered a solution to both, giving me a broader goal to work toward and a way to reframe my decade and annual goals.
That mission still resonates with me: it’s large, broad, and ambiguous enough to support many avenues of progress while feeling achievable within two decades. Though the goal resonates, my thinking about the best mechanism to make progress toward it has shifted over the past few years.
Roughly a decade ago, I discovered the most effective mechanism I’ve found to advance the industry: learn at work, write blog posts about those learnings, and then aggregate the posts into a book.

An Elegant Puzzle was the literal output of that loop. Staff Engineer was a more intentional effort but still the figurative output. My last two books have been more designed than aggregated, but still generally followed this pattern. That said, as I finish up Crafting Engineering Strategy, I think the loop remains valid, but it’s run its course for me personally. There are several reasons:
First, what was energizing four books ago feels like a slog today. Making a book is a lot of work, and much of it isn’t fun, so you need to be really excited about the fun parts to balance it out. I used to check my Amazon sales standing every day, thrilled to see it move up and down the charts. Each royalty payment felt magical: something I created that people paid real money for. It’s still cool, but the excitement has tempered over six years.
Second, most of my original thinking is already captured in my books or fits shorter-form content like blog posts. I won’t get much incremental leverage from another book. I do continue to get leverage from shorter-form writing and will keep doing it.
Finally, as I wrote in Writers who operate, professional writing quality often suffers when writing becomes the “first thing” rather than the “second thing.” Chasing distribution subtly damages quality. I’ve tried hard to keep writing as a second thing, but over the past few years my topic choices have been overly pulled toward filling book chapters instead of what’s most relevant to my day-to-day work.
My current thinking on how to best advance the industry rests on four pillars:
The fourth pillar is my current focus and likely will remain so for the upcoming decade, though who knows—your focus can change a lot over ten years.
Why now? Six years ago, I wouldn’t have believed I could influence my company enough to make this impact, but the head of engineering roles I’ve pursued are exactly those that can. With access to such roles at companies with significant upward trajectories, I have the best laboratory to validate and evolve ways to advance the industry: leading engineering in great companies. Cargo-culting often spreads the most influential ideas—20% time at Google, AI adoption patterns at Spotify, memo culture at Amazon, writing culture at Stripe, etc. Hopefully, developing and documenting ideas with integrity will be even more effective than publicity-driven cargo-culting. That said, I’d be glad to accept the “mere” success of ideas like 20% time.
Most importantly for me personally, focusing on modeling ideas in my own organization aligns “advancing the industry” with something I’ve been craving for a long time now: spending more time in the details of the work. Writing for broad audiences is a process of generalizing, but day-to-day execution succeeds or fails on particulars. I’ve spent much of the past decade translating between the general and the particular, and I’m relieved to return fully to the particulars.
Joining Imprint six weeks ago gave me a chance to practice this: I’ve written/merged/deployed six pull requests at work, tweaked our incident tooling to eliminate gaps in handoff with Zapier integrations, written an RFC, debugged a production incident, and generally been two or three layers deeper than at Carta. Part of that is that Imprint’s engineering team is currently much smaller—40 rather than 350—and another part is that industry expectations in the post-ZIRP retrenchment and LLM boom pull leaders towards the details. But mostly, it’s just where my energy is pulling me lately.
2025-07-06 19:00:00
There’s a lot of excitement about what AI (specifically the latest wave of LLM-anchored AI) can do, and how AI-first companies are different from the prior generations of companies. There are a lot of important and real opportunities at hand, but I find that many of these conversations occur at such an abstract altitude that they border on meaningless. Sort of like saying that your company could be much better if you merely adopted more software. That’s certainly true, but it’s not a particularly helpful claim.
This post is an attempt to concisely summarize how AI agents work, to apply that summary to a handful of real-world use cases for AI, and to generally make the case that agents are a multiplier on the quality of your software and system design. If your software or systems are poorly designed, agents will only cause harm. If there’s any meaningful definition of an AI-first company, it must be companies whose software and systems are designed with immaculate attention to detail.
By the end of this writeup, my hope is that you’ll be well-armed to have a concrete discussion about how LLMs and agents could change the shape of your company, and to avoid getting caught up in the needlessly abstract discussions that are often taking place today.
At its core, using an LLM is an API call that includes a prompt. For example, you might call Anthropic’s /v1/messages endpoint with the prompt: How should I adopt LLMs in my company? That prompt is used to fill the LLM’s context window, which conditions the model to generate certain kinds of responses.
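Concretely, that call looks something like the following sketch, using Python’s requests library (the model name is a placeholder):

```python
import os

import requests

response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-7-sonnet-latest",  # placeholder model name
        "max_tokens": 1024,
        "messages": [
            {"role": "user", "content": "How should I adopt LLMs in my company?"}
        ],
    },
)
# The generated text comes back as content blocks in the response body.
print(response.json()["content"][0]["text"])
```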
This is the first important thing that agents can do: use an LLM to evaluate a context window and get a result.
Prompt engineering, or context engineering as it’s being called now, is deciding what to put into the context window to best generate the responses you’re looking for. For example, In-Context Learning (ICL) is one form of context engineering, where you supply a bunch of similar examples before asking a question. If I want to determine if a transaction is fraudulent, then I might supply a bunch of prior transactions and whether they were, or were not, fraudulent as ICL examples. Those examples make generating the correct answer more likely.
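As a small sketch of what that looks like in practice (the labeled examples are invented for illustration):

```python
# A handful of labeled prior transactions, supplied as in-context examples.
examples = [
    ("$12.50 coffee shop, card present, home city", "not fraudulent"),
    ("$980.00 wire transfer, new payee, 3am local time", "fraudulent"),
    ("$64.99 online retailer, saved card, typical amount", "not fraudulent"),
]

new_transaction = "$1,200.00 gift cards, first purchase at merchant, foreign IP"

prompt = "Classify each transaction as fraudulent or not fraudulent.\n\n"
for description, label in examples:
    prompt += f"Transaction: {description}\nLabel: {label}\n\n"
prompt += f"Transaction: {new_transaction}\nLabel:"

# `prompt` now fills the context window with labeled examples before the
# question, making the correct classification more likely.
```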
However, composing the perfect context window is very time intensive, benefiting from techniques like metaprompting to improve your context. Indeed, the human (or automation) creating the initial context might not know enough to provide the relevant context. For example, if you prompt, Who is going to become the next mayor of New York City?, then you are in no position to include the answer to that question in your prompt. To do that, you would need to already know the answer, which is why you’re asking the question to begin with!
This is where we see model chat experiences from OpenAI and Anthropic use web search to pull in context that you likely don’t have. If you ask a question about the new mayor of New York, they use a tool to retrieve web search results, then add the content of those searches to your context window.
This is the second important thing that agents can do: use an LLM to suggest tools relevant to the context window, then enrich the context window with the tool’s response.
However, it’s important to clarify how “tool usage” actually works. An LLM does not actually call a tool. (You can skim OpenAI’s function calling documentation if you want to see a specific real-world example of this.) Instead there is a five-step process to calling tools that can be a bit counter-intuitive:
- Generated text, as any other call to an LLM might provide
- A recommendation to call a specific tool with a specific set of parameters, e.g. an LLM that knows about a get_weather tool, when prompted about the weather in Paris, might return this response:
```json
[{
    "type": "function_call",
    "name": "get_weather",
    "arguments": "{\"location\":\"Paris, France\"}"
}]
```
The important thing about this loop is that the LLM itself can still only do one interesting thing: taking a context window and returning generated text. It is the broader program, which we can start to call an agent at this point, that calls tools and sends the tools’ output to the LLM to generate more context.
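A minimal version of that broader program, sketched here with OpenAI’s chat-completions tool-calling interface and a stubbed-out weather lookup:

```python
import json

from openai import OpenAI

client = OpenAI()

def get_weather(location: str) -> str:
    # Stubbed tool; a real implementation would call a weather API.
    return f"18C and cloudy in {location}"

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

while True:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=messages,
        tools=TOOLS,
    )
    message = response.choices[0].message
    if not message.tool_calls:
        # No tool recommendation: the LLM has produced its final text.
        print(message.content)
        break
    # The agent, not the LLM, executes the tool and feeds the result back
    # into the context window for the next generation.
    messages.append(message)
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })
```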
What’s magical is that LLMs plus tools start to really improve how you can generate context windows. Instead of having to have a very well-defined initial context window, you can use tools to inject relevant context to improve the initial context.
This brings us to the third important thing that agents can do: they manage flow control for tool usage. Let’s think about three different scenarios:
LLMs themselves absolutely cannot be trusted. Anytime you rely on an LLM to enforce something important, you will fail. Using agents to manage flow control is the mechanism that makes it possible to build safe, reliable systems with LLMs. Whenever you find yourself dealing with an unreliable LLM-based system, you can always find a way to shift the complexity to a tool to avoid that issue. As an example, if you want to do algebra with an LLM, the solution is not asking the LLM to directly perform algebra, but instead providing a tool capable of algebra to the LLM, and then relying on the LLM to call that tool with the proper parameters.
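Following the loop above, the algebra example reduces to exposing a tool along these lines (using sympy; the tool name and schema are illustrative):

```python
import sympy

def solve_equation(equation: str, variable: str) -> str:
    """Tool the agent exposes: solve an equation exactly instead of
    asking the LLM to do the algebra itself."""
    lhs, rhs = equation.split("=")
    solutions = sympy.solve(
        sympy.Eq(sympy.sympify(lhs), sympy.sympify(rhs)),
        sympy.Symbol(variable),
    )
    return ", ".join(str(s) for s in solutions)

SOLVE_TOOL = {
    "type": "function",
    "function": {
        "name": "solve_equation",
        "description": "Solve an algebraic equation for a variable.",
        "parameters": {
            "type": "object",
            "properties": {
                "equation": {"type": "string"},
                "variable": {"type": "string"},
            },
            "required": ["equation", "variable"],
        },
    },
}

# e.g. solve_equation("3*x + 7 = 22", "x") returns "5"
```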
At this point, there is one final important thing that agents do: they are software programs. This means they can do anything software can do to build better context windows to pass on to LLMs for generation. This is an infinite category of tasks, but generally these include:
Alright, we’ve now summarized what AI agents can do down to four general capabilities. Recapping a bit, those capabilities are:
Armed with these four capabilities, we’ll be able to think about the ways we can, and cannot, apply AI agents to a number of opportunities.
One of the first scenarios that people often talk about deploying AI agents is customer support, so let’s start there. A typical customer support process will have multiple tiers of agents who handle increasingly complex customer problems. So let’s set a goal of taking over the easiest tier first, with the goal of moving up tiers over time as we show impact.
Our approach might be:
Note that even when you’ve moved “Customer Support to AI agents”, you still have:
You absolutely can replace each of those downstream steps (reviewing performance statistics, etc) with its own AI agent, but doing that requires going through the development of an AI product for each of those flows. There is a recursive process here, where over time you can eliminate many human components of your business, in exchange for increased fragility as you have more tiers of complexity. The most interesting part of complex systems isn’t how they work, it’s how they fail, and agent-driven systems will fail occasionally, as all systems do, very much including human-driven ones.
Applied with care, the above series of actions will work successfully. However, it’s important to recognize that this is building an entire software pipeline, and then learning to operate that software pipeline in production. These are both very doable things, but they are meaningful work, turning customer support leadership into product managers and requiring an engineering team building and operating the customer support agent.
When an incident is raised within your company, or when you receive a bug report, the first problem of the day is determining how severe the issue might be. If it’s potentially quite severe, then you want on-call engineers immediately investigating; if it’s certainly not severe, then you want to triage it in a less urgent process of some sort. It’s interesting to think about how an AI agent might support this triaging workflow.
The process might work as follows:
This is another AI agent that will absolutely work as long as you treat it as a software product. In this case, engineering is likely the product owner, but it will still require thoughtful iteration to improve its behavior over time. Some of the ongoing validation to make this flow work includes:
The role of humans in incident response and review will remain significant, merely aided by this agent. This is especially true in the review process, where an agent cannot solve the review process because it’s about actively learning what to change based on the incident.
You can make a reasonable argument that an agent could decide what to change and then hand that specification off to another agent to implement it. Even today, you can easily imagine low risk changes (e.g. a copy change) being automatically added to a ticket for human approval.
Doing this for more complex, or riskier changes, is possible but requires an extraordinary degree of care and nuance: it is the polar opposite of the idea of “just add agents and things get easy.” Instead, enabling that sort of automation will require immense care in constraining changes to systems that cannot expose unsafe behavior. For example, one startup I know has represented their domain logic in a domain-specific language (DSL) that can be safely generated by an LLM, and are able to represent many customer-specific features solely through that DSL.
Expanding the list of known-safe feature flags to make incidents remediable. Doing this widely will require enforcing very specific requirements for how software is developed. Even doing it narrowly will require changes to ensure the known-safe feature flags remain safe as software is developed.
Periodically reviewing incident statistics over time to ensure mean-time-to-resolution (MTTR) is decreasing. If the agent is truly working, this should decrease. If the agent isn’t driving a reduction in MTTR, then something is rotten in the details of the implementation.
Even a very effective agent doesn’t relieve the responsibility of careful system design. Rather, agents are a multiplier on the quality of your system design: done well, agents can make you significantly more effective. Done poorly, they’ll only amplify your problems even more widely.
If you accept my definition that AI agents are any combination of LLMs and software, then I think it’s true that there’s not much this generation of AI can express that doesn’t fit this definition. I’d readily accept the argument that LLM is too narrow a term, and that perhaps foundational model would be a better term. My sense is that this is a place where frontier definitions and colloquial usage have deviated a bit.
LLMs and agents are powerful mechanisms. I think they will truly change how products are designed and how products work. An entire generation of software makers, and company executives, are in the midst of learning how these tools work.
For everything that AI agents can do, there are equally important things they cannot. They cannot make restoring a database faster than the network bandwidth supports. Access to text-based judgment does not create missing tools, nor does it solve access controls, immediately make absent documents exist, or otherwise solve the many real systems problems that exist in your business today. It is only the combination of agents, great system design, and great software design that will make agents truly shine.
As it’s always been, software isn’t magic. Software is very logical. However, what software can accomplish is magical, if we use it effectively.