2026-01-26 04:00:00
While Staff Engineer was first and foremost an attempt to pull the industry towards my perspective on staff-plus engineering roles, writing it also changed my opinions in a number of ways. Foremost, it solidified my belief that the industry too often treats engineers like children to be managed, rather than adults to be involved, and that I needed to change some of my own leadership practices that I’d inadvertently adopted.
When I started writing it, I had already shifted my opinion about reporting hierarchy, believing that it was important to have engineers reporting directly to senior leaders rather than to leaf managers. But I hadn’t gone the whole distance of including them in my core leadership meeting. However, for the last six years I have had active engineers in my senior-most leadership group. I’m glad I adopted this practice, and I intend to continue it going forward.
The core approach is a simple formula, but it has worked well for me for the past decade. There absolutely are topics that some people don’t care much about, but I try to push my leadership team to care widely, even when things don’t impact them directly.
It’s easy for managers to get into a mode where they are managing the stuff around reality, without managing reality itself. That is much harder when engineers who write and maintain the company’s software are in the room, which is the biggest benefit of including them. Similarly, there are many decisions and discussions that would have to be punted to another forum without effective engineering representation in the room. You might not be able to finalize the technical approach to a complex problem with only a few engineers in the room, but you can at least evaluate the options much further.
Another major benefit is that these engineers become a second propagation mechanism for important context. Sometimes you’ll have a manager who isn’t communicating effectively with their team, and while long-term that has to be solved directly, having these engineers means that information can flow down to their teams through the engineers instead.
Finally, this sets an expectation for the managers in the room that they should be in the details of the technical work, just as the engineers in the room should understand the managerial work.
There are relatively few downsides to this approach, but it does serve as a bit of a filter for folks who have a misguided view of what senior leadership entails. For example, there are some engineers who think senior leadership is having veto power over others without being accountable for the consequences of that veto. Those folks don’t survive long in this sort of meeting. Some would argue that’s a downside, but I wouldn’t.
While I can conceive of working in some future role where I am simply not allowed to implement this pattern, I really can’t otherwise imagine not following it. It’s just been transformative in better anchoring me in the actual work we do as engineers, rather than the “work around work” that is so much of management.
2026-01-26 03:00:00
Remotion is having a bit of a moment, and I decided to play around with its Claude Code integration. Here are a couple of videos I was able to make in under ten minutes, summarizing data from my blog.
First, here’s a video of my published posts over time. I had Claude write some scripts to generate the dataset, and then did a series of prompts to get the right visual. It was pretty straightforward, worked well, and I imagine I could have gotten to the right video much faster if I’d had a clearer destination in mind when I started.
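As a rough sketch of what these compositions look like, here is a minimal Remotion component that animates a bar chart of posts per year. The component name and dataset are made up for illustration; the real version Claude produced was more involved, but this shows the shape of the approach:

```tsx
import React from 'react';
import {AbsoluteFill, interpolate, useCurrentFrame, useVideoConfig} from 'remotion';

// Hypothetical dataset; the real one was generated by scripts from my blog's archives.
const POSTS_PER_YEAR = [
  {year: 2018, count: 12},
  {year: 2019, count: 30},
  {year: 2020, count: 45},
  {year: 2021, count: 38},
  {year: 2022, count: 52},
];

export const PostsOverTime: React.FC = () => {
  const frame = useCurrentFrame();
  const {durationInFrames, height} = useVideoConfig();

  // Map the current frame to a 0..1 progress value, then reveal bars left to right.
  const progress = interpolate(frame, [0, durationInFrames - 1], [0, 1], {
    extrapolateRight: 'clamp',
  });
  const visibleBars = Math.ceil(progress * POSTS_PER_YEAR.length);
  const maxCount = Math.max(...POSTS_PER_YEAR.map((d) => d.count));

  return (
    <AbsoluteFill
      style={{
        backgroundColor: 'white',
        flexDirection: 'row',
        alignItems: 'flex-end',
        padding: 60,
        gap: 12,
      }}
    >
      {POSTS_PER_YEAR.slice(0, visibleBars).map((d) => (
        <div
          key={d.year}
          style={{
            flex: 1,
            height: (d.count / maxCount) * (height - 200),
            backgroundColor: '#3b82f6',
          }}
        />
      ))}
    </AbsoluteFill>
  );
};
```

You register a component like this as a Composition in your Remotion root and render it with npx remotion render; the Claude Code integration mostly automates writing and iterating on components like this one.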
Second, here’s a video showing published blog posts over time, my most frequently used tags at each point in time, and how many posts I published at each employer along the way.
Altogether, it was really fascinating to see how effectively Claude and Remotion together were able to generate fairly complex videos. This is definitely something I could imagine using again.
2026-01-26 01:00:00
Despite my best efforts, I have been wrong a lot over the years. I’ve been wrong about technology patterns (in 2014, I thought microservices would take over the world), I’ve been wrong about management techniques (I used to think systems thinking was the ultimate technique, but I’ve since seen so many mistakes rooted in over-reliance on it), and I’ve been wrong about a bunch of other things as well.
Early on, I spent a lot of time thinking about how to be wrong less frequently. That’s a noble endeavor, and one I still aim to improve at today. However, a lot of the problems you encounter later in your career are deeply ambiguous, and it simply isn’t possible to eliminate bad outcomes.
In those sorts of problems, you know upfront that you simply don’t have all the information you’d like to have, and still need to make a decision to move forward. As a result, these days I spend far more time thinking about how to make being wrong cheap rather than how to avoid being wrong.
Of everything I’ve tried, demonstrating curiosity is consistently the best technique I’ve found to reduce the cost of being wrong. These days, if I regret being wrong about something, it’s almost always because I engaged in problem solving before exercising curiosity. I feel this so strongly that “curiosity is the first step of problem solving” has become a steadfast engineering value in the organizations that I lead.
Some examples of demonstrating curiosity well and poorly:
Someone thinks we shouldn’t hire a candidate that I’ve worked with closely.
(Bad) Assume they are wrong.
(Good) Explain your mental model of why you think the candidate would work well, and ask them where they do or don’t agree with that mental model.
(Best) Spend time upfront aligning with interviewers on the specific fit you’re focused on for this specific role.
Someone is asking for help logging into an internal dashboard whose login steps are extensively documented in your internal wiki.
(Bad) Get slightly snippy with them about not reading the documentation.
(Good) Ask them if they are running into something that isn’t covered by the documentation.
(Best) Replace this with a chat bot that uses the (Good) approach automatically instead of having a human do this.
Someone proposes introducing a new programming language for your internal stack.
(Bad) Tell them that they need to follow the existing architecture document.
(Good) Mention that this seems in conflict with the current architecture doc, and ask them how they’re thinking about that conflict.
(Best) Make sure new-hire onboarding includes links to those materials, and create an LLM-driven RFC reviewer that directs RFC writers to existing materials they should reference in their RFC.
Someone doesn’t show up for an incident that they are on-call for.
(Bad) Tell them they failed to meet on-call expectations.
(Good) Ask them what happened that kept them from meeting their on-call expectations.
(Best) Create automation to ensure folks going on-call are notified ahead of time, and to detect anyone going on-call whose notification mechanisms aren’t appropriately configured.
In each of these cases, showing curiosity is not about being unwilling to hold folks accountable, and it’s not about consensus-based decision making. Instead, it’s about starting each discussion by leaving space for the chance that you’re missing important information. Often you’re not missing anything, and then the next step is to hold folks accountable. But demonstrating curiosity helps you avoid applying accountability without context, which damages relationships without providing any benefit.
2026-01-26 00:00:00
I did a lot of hiring at Uber; some days I would do back-to-back 30-minute phone screens for several hours in a row. That said, while Uber taught me how to hire at scale, it was Stripe that taught me how to hire creatively.
Some of that was learning fundamental mechanics like cold sourcing, optimizing hiring funnels, and designing interview loops, but Stripe also had some fairly unique ideas that I haven’t heard discussed much elsewhere. One of those was Stripe’s Bring Your Own Team (BYOT) concept, which invited teams to apply together as a sort of lightweight acquihire.

The BYOT approach captured folks’ attention, but it was ultimately more effective as a demonstration of Stripe’s originality than as a hiring practice. By the time I left, zero teams had been hired through the BYOT mechanism, although we did talk to some. On the other hand, one of the hiring ideas that Stripe didn’t blog about publicly, but which worked extremely well in practice, is Lighthouse Hiring.
Lighthouse Hiring is the idea that hiring well-connected folks makes subsequent hires both easier and higher quality. Sure, hiring those well-known folks is difficult, but if you’re trying to hire ten great people, then spending more time hiring one “lighthouse hire” can improve the quality and velocity of the overall hiring push.
Stripe made a number of lighthouse hires, Julia and Raylene among them, and there are many other examples you could pick from. Importantly, not all of them were widely known, just widely connected: Julia is a tech internet mainstay, while Raylene operated more from personal networks than from social media networks. Any sort of high-quality network can be the underpinning of a successful lighthouse hire.
This mechanism worked exceptionally well, but there is a complicated underside to lighthouse hires: strong networks, particularly publicly visible ones, create a power dynamic that some managers struggle to navigate. If you’re relying on a public personality and they get frustrated at work, your lighthouse hiring strategy is going to implode on you. Similarly, hiring them in the first place can be a challenge if you don’t have an interesting role, but carving out a uniquely interesting role for one hire will come across as biased internally, undermining your relationship with the broader team.
These are all navigable, and I think Stripe would have been less successful if it hadn’t used this pattern, but it takes some nuance to deploy effectively.
2026-01-25 23:00:00
When we launched Digg v4, the old site turned off, but the new site didn’t turn on. There was a lot of pressure to get things working, but no one knew what to do about it. It took almost a month to get the site wholly functioning. It was not a pleasant month, with many false starts as we tried to dig out from launching an unfinished product in desperate circumstances.
That launch was a foundational early-career experience for me. However, it was not a unique one, as many leaders inject that sort of pressure into their teams as a routine management technique. Some examples of the “pressure without a plan” pattern that I’ve seen:
The aforementioned Digg v4 launch was pressure without a plan in two ways. First, a fixed launch date was set to motivate engineering, but we had no clear mechanism to launch on that date. Second, fixing things after the launch was itself pressure without a plan.
Uber’s service migration had a well-defined platform as the destination for things removed from the Python monolith, but initially did not have a clear plan for leaving the monolith other than escaping the top-down pressure.
Pressure without a plan is almost the defining characteristic of 2010s-era service decomposition projects. Calm’s and Carta’s initial approaches had some elements of this as well, which is why I ended up quickly pausing both after joining.
Metrics-only AI adoption at many companies falls into this bucket, with a focus on chiding non-participation without understanding the reasons behind it. (At Imprint, we’ve tried hard to go the other direction.)
Pressure itself is often useful, but pressure without a plan is chaos, unless your team has an internal leader who knows how to reshape that energy into pressure with a plan. Your aspiration as a leader should be to figure out how to become that internal leader.
Being that person is not only the best way to help your team succeed (convincing folks who love “pressure without a plan” not to use it is… hard), it’s also some of the most interesting work out there. Crafting Engineering Strategy’s chapter on strategy testing describes my approach to iteratively refining a plan before committing to it, and how I’ve learned to turn “pressure without a plan” into a situation that makes sense.
As an ending aside, I think the origin of this pattern is the misapplication of management by objectives. The theory of management by objectives is that the executing team is involved in both defining the goal and the implementation. That sounds good, and is close to what Escaping the Build Trap recommends. In practice, however, it’s very common to see executives set targets and then manage a low-agency team toward goals the team believes are impossible, which leads precisely to the “pressure without a plan” pattern.
2026-01-20 01:00:00
One of the relatively few AI-native products I use is Cora.computer, which summarizes my personal inbox. It’s not perfect, but it’s done a much better job than my collection of filters at managing the ever-growing onslaught of spam and unsolicited email that flows in.
I’ve run into a few issues with Cora, which led to me following folks at Every to report those issues, and more recently that’s how I came across their work on compound engineering, specifically the compound-engineering-plugin.

Compound Engineering is two extremely well-known patterns, one moderately well-known pattern, and one pattern that I think many practitioners have intuited but have not found a consistent mechanism to implement. Those patterns are:
Plan is decoupling implementation from research. This is well understood, e.g. Claude’s plan mode, although it can certainly be done better or worse by being more specific about which resources to consult (specs, PRDs, RFCs, issues, etc.).
Work is implementing a plan. This is well understood, and is the core of agentic coding. Again, this can be done better or worse, but much of that depends more on the quality of your codebase, tests, and continuous integration harness than on the agent itself.
Review is asking the agent to review the changes against your best-practices, and identify ways it could be improved. I think most practitioners have some version of this, but standardization is low, even within a given company.
Compound is asking the agent to summarize its learnings from a given task into a well-defined, structured format (basically a wiki) which is consulted by future iterations of the plan pattern. This interplay between the compound and plan steps creates the compounding mechanism.
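To make the compound step concrete, here is a sketch of what a single captured learning might look like. The schema below is my own illustration rather than the plugin’s actual format, but the key property is real: it’s structured enough that a future plan step can find and apply it.

```markdown
# Learning: integration tests require seeded fixtures

- Source task: add pagination to the billing endpoints (hypothetical example)
- Problem: generated tests failed because the local database had no seed data
- Resolution: run the seed script before running the integration suite
- Plan guidance: any plan touching billing endpoints should include a seeding step
```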
Many practitioners are implicitly compounding, but it’s often done manually through their own work. For example, I’d often ask the agent to update our AGENTS.md or skills based on a specific problem encountered in a task, but it required my active attention to notice the issue and suggest incorporation.
Taken together, these four steps are not shocking, but they are an extremely effective way to convert these intuited best practices into something specific, concrete, and largely automatic within a company by adding a few commands (e.g. workflow:plan, workflow:review, …) and updating your AGENTS.md to instruct the agent when and how to use those commands.
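As a sketch of that wiring, the AGENTS.md addition can be short. The command names below echo the examples above (workflow:compound is my assumed name for the final step), and the exact wording and file layout will vary with the plugin version:

```markdown
## Workflow commands

- Before starting non-trivial work, run workflow:plan and save the output
  as .claude/plan-<task>.md.
- After implementing, run workflow:review against the diff and address its findings.
- Before closing out a task, run workflow:compound to record any new learnings
  so that future plans can consult them.
```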
Implementing this within Imprint’s frontend and backend monorepos was straightforward, taking about an hour. Most of that was iterating on the last mile of details (for example, we want our plans in .claude/plan-*.md format to match our existing .gitignore pattern), and none of it was complex.
Most importantly, this settles a topic that many of our engineers (including me) had been trying to find a standard approach for. Now we have one, and we can move on to the next problem.
If recent history is our guide, it’s a solid guess that many of the practices in compound engineering will get absorbed into the Claude Code and Cursor harnesses over the next couple of months, at which point folks using these techniques explicitly will be indistinguishable from folks who are entirely unaware they’re using them. But we’ll see. Until then, this is a cheap, useful experiment that you can implement in an hour.