2025-10-16 14:59:22
As companies grow, there is a stark shift from having a team of generalists to a team of specialists.
A young startup might ask an engineer to play roles ranging from engineer to PM to interim manager to solution consultant and more. A young startup might ask a PM to be a PM and pricing expert and a product marketing expert and a documentation writer and an ersatz designer and an interim engineering manager.
In bigger companies, all of these roles are done by specialists.
The upside of this transition is that, most of the time, you get higher quality output for each of these vocational tasks as you specialize. A dedicated pricing specialist is way better than a pinch-hitting PM. A PM is way better than the average engineer doing PM stuff.
There are major downsides though.
The number one downside is that all of these specialists must coordinate to get anything done. Coordination effort skyrockets. This can lead to dramatically increased times to deliver projects. And, sometimes, the coordination effort is so significant that the project slows to a crawl and lacks time for iteration, leading to lower quality.
In a world where slowness kills companies, this can be existential.
Another major issue is that as the number of required specialists goes up, the probability of an underperforming member of a project team goes to 100%. Most companies have 0 ability to react in realtime to an underperforming member of a project (perf management is too slow; embedded roles lack visibility). This can lead to serious deficiencies, slow delivery, and morale issues.
Finally, an amazing engineer is actually better than an average PM at product management. An amazing PM is actually better than an average pricing expert on pricing. As a result, deference to specialists can be deadly. If a great PM says that the design sucks, I don’t care if a middling designer has more experience, the design probably sucks.
Headcount growth is part of any successful company’s trajectory. But, to avoid these pitfalls, ensure that:
2025-10-10 14:59:22
The traditional advice for tech companies is that you should ignore your competitors. Just talk to customers and put your energy towards making yourself as great as you can be. Hit the gym, practice gratitude, focus on yourself king.
At least in SaaS, I believe that this is wrong. If you ignore your competition, you’ll get to experience capitalism in its rawest form: competitors who are more aggressive, more vicious, or more desperate than you will pillage your customers and put you out of business. This is bad for your startup and your wallet.
Don’t let this happen to you. Having seen many, many rounds of direct SaaS competition, here are a few of the main lessons to keep in mind.
Competition is everywhere. If you don’t see competitors for your product, it’s often a sign that you’ve picked a fundamentally unviable or impractical market to attack. All good markets are warzones.
Most markets settle into a long-term stable system where 1-3 players gain effectively all of the market share. The ultimate goal of competing is to be one of the top players in your space who are going to get all of the value.
In order to be one of the top players in your space, you typically need to either be the #1 player in a reliable subset of the market, or you must be the overall category leader. Examples of a defensible niche include: The low-cost player. The very premium player. A player in a very specific industry (“We’re Salesforce, but for life sciences”). The only local player in a market (“We’re Shopify for Asia”). But regardless, you must be considered the best in some discrete market at a minimum.
The way that you become the best in a discrete market is by dealing knockdown blows to your competitors, causing them to eventually retreat.
You do this by incentivizing them to change their product-market fit such that it doesn’t overlap with yours: typically by beating them in deals (or winning over their customers), thereby forcing them to change product, marketing, or sales strategy to avoid you. This is how you carve out your niche; eventually if the niche gets big enough, you become the leader of the category – the Microsoft, Salesforce, Intuit, or Palantir of your industry.
Note that the goal of competition is not to put your competitors out of business. Generally speaking, you should assume that any competitor with >$25m ARR is essentially unkillable – expect to compete against them effectively forever. SaaS margins are too high and SaaS products are too sticky (which is why it’s such a great business).
Knockdown blows come in a few forms, such as:
Essentially, you’re looking to force competitors to take painful steps that make them open to resetting their strategy, with the new strategy being something that is further from your areas of overlap. Knockdown blows can take many forms but they are all downstream of the same cause: a competitor losing the will to fight you for market share.
Knockdown blows are solidified by a very concrete first step: a company’s revenue operations team (who handles sales quotas, territory mapping, and general Go-To-Market (GTM) investment) raises the flag that they view a particular business area as unviable. When this happens, their next step is to either invest more to win, or divest to cut their losses – reallocate resources, spin down investment, stop the bleeding. Your role is to make them reach the conclusion that divesting is necessary as quickly and confidently as possible.
Your only point of leverage against a SaaS competitor is your ability to negatively impact their top sales representatives’ quota attainment. This is the point where your business directly contacts theirs. Everything else – the LinkedIn influencer posts, the Magic Quadrants, the billboards on 101 – is just noise compared to the direct grappling of head-to-head sales.
You want all of the best sales reps who compete against you to be nervous and force internal change: this is how you need to break their will. The general way that this works is:
Every company’s timetable varies, but generally speaking you should expect that startups will only allow you to pummel them for 2-3 months, scaleups for 2-3 quarters, and enterprise companies (say, >1000 employees) for 2-3 years before they’re forced to react. Mega-cap companies (Salesforce, Intuit, Adobe, ServiceNow) will hold out the longest – you may need to compete for a decade.
Morale and momentum matter tremendously in enterprise sales. At the end of the day, SaaS is a luxury good – nobody starves tonight if they don’t buy Workday. Sales teams thrive on the confidence to make sales in the absence of urgency, and shaking that confidence can break a team.
As a result, knockdown blows will be visible on Glassdoor and Blind 2-6 months before their effects are public. A key metric that you can iterate your competitive strategy towards is Glassdoor reviews that are well-written, well-reasoned, typically quite long (5+ paragraphs), and highly negative. These reviews should call out competitive pressures or flagging product-market fit as the cause of dissatisfaction, as well as strategic gaps (“we don’t have a plan” or “we’re completely reactive”). In some cases they will literally call your company out by name.
The key trait for competing effectively is not cleverness, but persistence. You need to apply competitive pressure with very little positive feedback for many quarters (or years) to get any sort of return. This is why founder-led companies, or companies with founder DNA, are the best at competition in the long run.
But many leaders do not have the stomach for real competition. Competing means admitting that your competitors are strong; it means admitting that you aren’t infallible. You must be willing to show vulnerability, if only internally, in order to take the required competitive steps with the required intensity.
Most steps that are necessary to compete are obvious. Some startups psyche themselves out, looking for that one “differentiated product” (read: magic trick) that will flip the market in their favor. You should just assume that this trick does not exist, and not overthink things. If you just lost 3 deals because you didn’t have feature X, you should just build X rather than wondering whether you’d be better off digging in your magician’s cap for a rabbit one more time.
There’s an order to how you compete:
The scariest competitors will always be companies that can build quickly, because even if your GTM is stronger, they can threaten your product-market fit by leapfrogging your technology. Companies that build fast are disproportionately startups, and the best competitive strategists are very alert to startup competitors.
In some cases, a knockdown blow isn’t enough to make a competitor leave you alone. In some cases the only path to success for a competitor is to beat you in the markets where you compete – they can’t be discouraged from going right after you, because it’s existential for them. These companies will fight you to the bitter end. These desperate competitors can be the most dangerous, because they’ll often use sketchy tactics (negative gross margin deals; unlimited liability; lying about you) to try to win. If they’re product-oriented and founder-led, this is also when they will build the fastest and potentially leapfrog you.
2025-09-14 14:59:22
One of the biggest mistakes is shielding people from the pain they need to learn from.
I first became a manager when I was 26. I first had to fire someone when I was 26.
I spent the whole week anxious. The night before I couldn’t sleep. On the day of the firing I was all nerves. On the way to the conference room I felt sick to my stomach. While telling him that he no longer had a job, my hands were shaking under the table. When he reacted to the news, I felt emotional.
I walked him to the elevator and said goodbye. Then I went back to my desk where my team was. I asked everyone to gather and let them know that he was no longer with the team. Everyone sat down and I looked at my screen for an hour without doing anything, mindlessly clicking around, pretending to work. I took off early and got several drinks. For days I kept thinking about what he was doing and how he was doing.
It was traumatic.
But, it was supposed to be. When I saw how terrible it was to fire someone, I deeply understood how important it was to hire great people and performance manage well. Every subsequent firing had the same effect - I will hire better, I will performance manage better.
These convictions were not held lightly. In rooms of 10 people all pushing to hire someone because they were “good enough,” I would say absolutely not. Good enough isn’t good enough. Underneath that conviction was a deep knowledge of the experience of firing. I will not be lazy here if it risks that happening again.
The agony of firing was part of what made me much better at management. The problem with many big company environments is that managers are shielded from pain:
And the kicker? It’s often remote. Talking into a screen is a joke. It removes the humanity from an interaction like nothing else can. You’re essentially playing a video game.
So what happens? You remove the suffering and people don’t learn. They don’t learn to hire better. They don’t learn to performance manage better. The worst case isn’t a real-life horror show of an experience - it’s a 15-minute make-believe session and then you’re back to sitting in your home office eating bon bons.
Shipping bugs is another place that you are supposed to learn from pain. As companies get big and cultures get blameless, engineers are often very, very isolated from the impact of their bugs. You write a bug, it causes issues for some amorphous customers you keep hearing about, and you have to scramble in an incident to fix it up. No muss no fuss. Sprint planning happens on Tuesday just like always.
That’s nonsense. That’s hiding from the pain.
Engineers would do better to engage with customers, see their reactions, explain the bug. When you see one of your users distraught over the time and effort they’ve spent managing the fallout of a bug you wrote, when you see how they’re worried about how they vouched for your software, when you hear that their team was doing data fixes all weekend, that hot feeling of shame and apology should sink into your bloodstream.
Those visceral feelings from seeing customer impact of bugs are what can help engineering teams radicalize on quality. It’s what can help teams overcome the discomfort of upholding standards and going the extra mile.
People should be exposed to the right trauma. There are certain transformational experiences that are created by the outcomes of your failures. Ignoring them, hiding from them, or otherwise letting anything deflect them from you blocks a critical signal in your reward/accountability function.
Find ways to lean into the experiences that help you learn about your failures. That means doing more of the firing yourself, not offloading to another. It means engaging with the customers you let down.
These experiences are not meant to make you desensitized. This is a common misconception. Good managers can fire people without having as much anguish, not because they’re used to it, but because they learned from their previous experiences and know they did everything they could to hire well and manage fairly.
Finally, if you ever have a cultural issue of people simply not living up to your values or standards, find the people that the failure impacts and make interacting with them a required consequence of future failures. Things then often fix themselves quickly.
2025-08-21 14:59:22
One of the most consistent observations I’ve had in my time in startups, scale-ups, and public companies is that smart people with context on a tricky situation almost always know exactly what they need to do.
Teams obsess about company strategy, about data, and about frameworks for decision-making. But I’ve found that for most operational decisions the right course of action is actually pretty obvious to people with a lot of business context.
The simple fact is that people hate confronting difficult decisions. And instead, most come up with increasingly painful coping mechanisms to avoid conflict. Eventually they wind up incurring much more aggregate pain than if they had simply bitten the bullet as soon as they recognized the realities of their situation.
Some of the more notable instances where this plays out:
The fact that the right business decision is so often obvious leads to a few actionable operational steps.
If you’re torn between multiple choices, the path that makes you most uncomfortable is unfortunately almost always the right one. There’s a very strong human tendency to avoid hard decisions, so the errors you make will only go in one direction. Pay attention when you hear any of the tells below – in particular watch out for the word “just” which is a calling card for an argument that’s really an excuse:
Since people close to a problem almost always know what to do, teams must be open to difficult feedback from inconvenient sources. Especially as organizations grow, the difference in comprehension between an expert who is close to a situation vs. someone who’s further away or not an expert can be night and day. Favor proximity (and expertise) over title.
A common type of error that one sees is a friendly, charismatic, highly-visible executive who is very popular with more junior members of other teams but seen as incompetent by his peers and senior staff. The senior veterans who are closest to the situation know what to do, but they avoid providing the harsh feedback because it’s awkward and seen as not worth the effort.
The cult of being data-driven, particularly at huge consumer companies with non-intuitive user behavior patterns, has led many people to believe that there are non-obvious nuances to every situation. In reality, most business and management situations do not have a lot of nuance.
Being data-driven has many merits, but its most debilitating flaws are that it favors certainty over speed and trains teams to look for subtle nuances that don’t exist 99% of the time. If you know what to do, you should just do it as soon as possible rather than waiting for more information. If you’re a smart and experienced executive, don’t let the siren call of more data gaslight you into re-running the analysis on a situation where you have certainty.
Having worked with teams as an advisor/investor in the past, I’ve also found that external advisors can be very helpful to simply confirm what teams already know. Importantly, these external forces can also help informed teams to assess severity. In a pinch, one of the easier ways to get a gut check is to ask a trusted friend to rubber duck a difficult management decision with you. Just explain the challenging call to them, and have them ask occasional questions until it’s clear what to do. Some of the most useful discussions that I’ve had with companies begin with someone simply asking “is this normal?”
Finally, you actually owe it to your team to make the hard calls that you know to be correct. Many leaders fall into a state of paralysis because they don’t want their team to blame them for making the wrong decision. They would do better to respect their team’s macro analysis more: everyone who joins your team is ultimately doing so because they trust your judgment on some level. Well-run companies are benevolent dictatorships, not democracies, and your team will respect you more for decisive action than they will for a theoretically higher hit-rate on your decisions.
2025-08-02 14:59:22
Metrics can be incredibly powerful. But you have too many of them.
Let’s talk about how and when to use metrics.
The golden rule of metrics is this: any metric you maintain should directly drive action if outside expected bounds.
The reason this is an important rule is:
The longer I'm an exec the more confident I become that 80% of metrics dashboards are adult pacifiers for managers with poor strategic sense and anxiety disorders
— staysaasy (@staysaasy) March 24, 2025
A direct corollary - because the cost of setting up, maintaining, and actioning on metrics is high, you shouldn’t have that many metrics.
Let’s talk about setting up metrics and how to use them, as well as how to not use them.
Setting up a metric takes time. You have to:
Depending on your company and the metric, this could be a couple hours of work or a couple months of work via multiple tickets. Some tools give certain metrics by default, but that’s also not free (you’re paying for it).
The takeaway is that setting up metrics costs time and money.
Good metrics:
Regular review of metrics is critical, because any metric you don’t regularly review will eventually become inaccurate.
Many people come back to a metric a year after setting it up and realize it doesn’t look quite right. Upon investigation they discover that a week ago someone changed the definition in middleware and it started double counting. This kind of thing happens all the time. The most pernicious version is when the metric makes it look like things are working well, while in reality it’s over-reporting and things are broken.
Regular review means scrutinizing the metric’s value and checking it against other information to make sure it seems right. As a general milepost, expect to find something broken and in need of repair in each metric about once a year.
Metrics without expectations are just gossip prompts.
Some metrics only matter if they move more than 10%. Some matter if they’re 0.1% off target.
State explicitly what your expectation is for a metric and the conditions of action in either direction. Otherwise, metric review will become an exercise in taking a really long time to realize people don’t know what matters.
If a metric is outside the bounds of certain expectations, you must take action. This is the biggest cost to maintaining metrics - you actually have to do something if the metrics indicate something you’ve said you don’t want to happen.
Too many people have 25 metrics in their dashboard and don’t do a damn thing about anything. They just go back to the same backlog they were working on and put away the shiny metrics for powerpoints when they need to convince someone of something.
Metrics should have expectations. If those expectations are not being met, action must be taken.
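To make this concrete, here’s a minimal sketch of what a metric with explicit expectations and a pre-agreed action could look like. The `MetricSpec` structure, field names, owner, and bounds are all hypothetical illustrations, not the API of any particular tool; the point is only that the expected range and the action on breach are declared alongside the metric itself rather than improvised later in a review meeting.

```python
# Hypothetical sketch: a metric declared together with its owner, expected
# bounds, and the action that fires when it falls outside those bounds.
from dataclasses import dataclass
from typing import Callable


@dataclass
class MetricSpec:
    name: str
    owner: str                                 # who investigates when the metric breaks
    lower_bound: float                         # expected floor
    upper_bound: float                         # expected ceiling
    on_breach: Callable[[str, float], None]    # pre-agreed action, e.g. open an investigation


def review(spec: MetricSpec, value: float) -> None:
    # Inside bounds: note it and move on. Outside bounds: the pre-agreed action fires.
    if spec.lower_bound <= value <= spec.upper_bound:
        print(f"{spec.name}={value}: within expectations, no action")
    else:
        spec.on_breach(spec.name, value)


# Example (made-up numbers): weekly activation rate expected to stay between 35% and 60%.
activation = MetricSpec(
    name="weekly_activation_rate",
    owner="growth-pm",
    lower_bound=0.35,
    upper_bound=0.60,
    on_breach=lambda name, v: print(f"ACTION: {name}={v} is out of bounds, open an investigation"),
)
review(activation, 0.31)
```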
Let’s talk through a couple examples.
An ambitious PM has a bunch of metrics in a dashboard. The PM uses them for things like showing how well their team is doing, asking for a raise, and asking for more resources.
The problem is that they only review those metrics in preparation for those activities. There is actually no regular review, no owner, and no clear expectation of what the metrics should or shouldn’t be.
Then one day the PM’s boss wakes up and says: wait, this isn’t quite right. And that’s when you find out that not only is the usage data wrong, but even if it were right, the definition of appropriate growth was never aligned upon. So in fact, all of these metrics were, for their entire lifetime, worse than useless.
Instead of showing the micro-cohort CSAT for 7 different customer profiles across 3 products, their manager should have asked to see one or two metrics, and should have scrutinized them regularly, with clear expectations on growth (not just that up and to the right is good).
It’s common practice to regularly review infrastructure cost at companies with the finance team. Oftentimes these processes have two things correct: they review metrics and each piece of infrastructure has an owner. The common failure, however, is to not discuss what good and bad actually looks like.
When do we actually have to do something about a cost changing in a certain way?
This question is often never asked, and so people debate minor cost fluctuations and lack clarity on whether any changes are needed. The simple answer is to set up a working agreement: we’ll review the cost to make sure it all makes sense, but we’re only actioning if it’s more than 2% above the quarterly forecast or 5% over the yearly forecast. And, if it’s above that, you must put in the effort to reduce it.
With this agreement, you gain efficiency when things don’t require action, and direct clarity when things do.
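As a rough sketch of that working agreement in code, the check itself is just a couple of comparisons against the forecast. The 2% quarterly and 5% yearly thresholds come from the agreement above; the forecast and actual spend figures are made up purely for illustration.

```python
# Hypothetical sketch of the infra-cost working agreement: only act when spend
# exceeds the forecast by more than the agreed threshold.
def needs_action(actual: float, forecast: float, threshold: float) -> bool:
    """Return True when actual spend is more than `threshold` above forecast."""
    return actual > forecast * (1 + threshold)


# Made-up numbers for illustration.
quarterly_forecast, quarterly_actual = 500_000, 515_000
yearly_forecast, yearly_actual = 2_000_000, 2_040_000

if needs_action(quarterly_actual, quarterly_forecast, 0.02) or needs_action(yearly_actual, yearly_forecast, 0.05):
    print("Over the agreed threshold: the owner must propose a cost-reduction plan")
else:
    print("Within the agreement: review it, note it, and move on")
```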
Having a few great metrics is much better than having a bunch of trash metrics. Metrics require intense focus to keep accurate and to have high integrity. Metrics require expectations and action to be worth spending all of the time it takes to make them accurate and have high integrity.
If someone says “I got metrics” ask them the last time they did something because of a metric. If they don’t immediately know, they don’t have metrics, they have a dashboard of graphs that they use to persuade people and soothe themselves.
2025-07-06 14:59:22
Forming a new software team is easy to get wrong in many ways, including:
Here, however, we’ll focus on one of the most important and often bungled aspects of team formation - the team name.
Let’s review how to avoid the common trap of naming teams poorly.
Let’s assume that you have a team with a good mission (this talk is good inspiration for crafting high-quality team missions).
Now you’re ready to name the new team, which is when things commonly go awry. Even if you have a great team mission, you can shoot yourself in the foot with the name. A strong name:
Names are sticky. Missions live in docs nobody reads; names live in brains.
It is incredibly easy for a broad team name to lead to mission expansion down the line. When asked if it is appropriate to upscope a team’s mission, people always say “well we are the _____ team after all!”
People often mildly upscope their name to be more ambitious than their mission. Then, over time, two teams who have upscoped their missions end up believing they both own some hot new area of technology.
Then people fight viciously.
A good team name doesn’t overlap with any other team and doesn’t allow for major upscoping of ownership.
People in large organizations burn thousands of hours just finding the right Slack channel to get answers they need. Asking questions to the proper team, finding the right people for projects and incidents, and all other routing activities are a massively expensive set of endeavors when compounded over hundreds or thousands of people. If a team is named properly, you can create major efficiency every time someone needs to figure out who owns what.
There are a couple big failures in team naming as it pertains to routing:
Let’s make these ideas real with an example. Let’s create a fake team that has the following properties:
Here are good and bad team names.
Bad
Good
Teams often want to avoid tightly scoped team names so they can expand in the future. They’ll also claim that it allows them to think more broadly about a problem space. However, in general:
Team names are contracts that define who does what. Getting them right up front is an incredibly important piece of scaling software organizations.