
RSS preview of Blog of HackerNoon

North Star Metrics: Focusing on What Drives Business Growth

2026-01-26 05:30:08

There is a dirty secret in product management.

Most features that get shipped do nothing. Absolutely nothing.

These features might sound great, sound fancy, and get a lot of praise internally. But when it comes to impacting the business, they do F all.

They do nothing, not because they aren’t designed well, but because they don’t drive your business model in any meaningful way.

The only way to impact your business is to have sustained focus on the things that matter.

Without defining what matters, it’s easy for teams to scatter changes around the product, get lost in rabbit holes, or chase micro-optimizations that waste time.

This quote from Alice in Wonderland captures it perfectly:

Alice: “Would you tell me, please, which way I ought to go from here?”

The Cat: “That depends a good deal on where you want to get to.”

Alice: “I don’t much care where—”

The Cat: “Then it doesn’t matter which way you go.”

Don’t be Alice. Direction matters.

What is a North Star Metric?

All the great software development companies solve this problem by defining a North Star Metric.

We did this at Uber and Codecademy. Meta does it, Google does it, Slack does it, and you should do it too.

A North Star Metric is a way of quantifying the combined user and business value that your product produces.

Users come to your product to solve a problem. Your business makes money somehow.

You need to find a way of measuring the leading indicator of both.

Let’s look at some examples and what their north star metrics could be:

  • Facebook - “Daily Active Users” - FB is designed to be a daily use product, so this measures successful user engagement. FB makes money on ads, so users have to show up to see the ads.
  • Uber - “Trips Completed” - They’re a marketplace, so they make money when a ride or delivery happens. “Completed” is key here because it drives value to both sides of the marketplace and captures liquidity + quality.
  • AirBnB - “Nights Booked” - Also a marketplace, so this measures value to both the host and the guest. Their revenue will go up if this number goes up.
  • Slack - “Messages Sent per Team” - B2B, so revenue is driven by accounts. The more active the account, the higher the retention. They also charge per seat, so more messages usually means more active people in the account, which means higher revenue.
  • Notion - “Documents Created” - Notion has both solo users and team users. In team environments, the real value is the creation of a document for others to read. The more documents, the more valuable the workspace, the higher the retention.

No metric is perfect, but all of these orient product development towards growing the business and helping their users.

Why Use a North Star Metric and Not MRR?

Product development, if not measured carefully, can be insanely wasteful.

We learned this painful lesson at Codecademy.

It was roughly in 2018, and we hired our first data scientist (shout out to Hillary), who correctly calculated our retention numbers for the first time.

I remember staring at our fancy new dashboards and seeing that the last 2 years of our product development did absolutely nothing to improve our usage retention.

Nothing. A flat line.

We probably spent millions of dollars on engineers, PMs, designers, office space, snacks, etc., to get no movement in our retention, which is the main thing that matters.

This taught me a valuable lesson.

Good product development isn’t just shipping things. It also isn’t just shipping things that your users like.

It is getting the highest ROI for the dollars you put into developing your product.

North star metrics give you a way to denominate the value of your work.

Without a clear and thoughtfully chosen North Star Metric, you can’t quantify this ROI.

Just optimizing around MRR can be very misleading, as MRR is a lagging indicator.

If you remember the concept of the growth ceiling from a few posts ago, your MRR might still be growing just from past momentum and not your current work.

MRR can still go up even if your current features aren’t working. It can also go down when you’re shipping things that work.

If you follow this as a signal, it’s really easy to go astray.

How to Set a North Star Metric

There are 2 basic choices you have to make here.

  1. What are you going to measure?
  2. How are you going to measure it?

For “what to measure”, you should try to capture “one complete unit of value” from the user’s perspective.

Typically, value exists at a few levels, and it’s tricky to find the right altitude.

For example, at Duolingo, this could be the completion of a question, a pack of questions, a collection of packs, etc.

The first time you do this, pick something that you can easily measure that signals user value, and you can get more specific with time.

For “How to measure it”, there are 3 traditional ways of doing it.

  1. “Count of Something” - e.g., Uber’s total completed trips per month
  2. “Average of Something” - e.g., Slack’s average number of messages per team.
  3. “X per Y” - e.g., Facebook’s 7 friends in 10 days metric for new users.

There are no perfect metrics, and all of these involve tradeoffs.

  1. “Count of Something” - This number is sensitive to new user acquisition. So it might go up, just because you have more users, not because you are improving the product. That might still be a good thing, but be aware of that.
  2. “Average of Something” - This is great at quantifying user-level health, but it also hides outliers/power users. This is important in certain models like content creation, marketplaces, or anything involving usage-based pricing.
  3. “X per Y” - Setting thresholds is really powerful, but you need a lot of data to know you’re setting this threshold correctly. Optimize for the wrong level, and you can go off course.
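To make the three formulations concrete, here is a minimal pandas sketch (the event table and the threshold of 2 messages are hypothetical, purely for illustration) showing that they are just different aggregations of the same event log:

```python
import pandas as pd

# Hypothetical event log: one row per user action (names are illustrative)
events = pd.DataFrame({
    'team_id': [1, 1, 1, 2, 2, 3],
    'user_id': ['a', 'a', 'b', 'c', 'c', 'd'],
    'action':  ['message'] * 6,
})

# 1. "Count of Something": total events across the product
total_messages = len(events)

# 2. "Average of Something": mean events per team
avg_per_team = events.groupby('team_id').size().mean()

# 3. "X per Y": share of teams clearing a threshold (here, >= 2 messages)
share_over_threshold = (events.groupby('team_id').size() >= 2).mean()

print(total_messages, avg_per_team, share_over_threshold)  # 6 2.0 0.666...
```

Which aggregation you pick determines whose behavior the metric is sensitive to: totals reward acquisition, averages reward per-account depth, and thresholds reward activation.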

Advanced Topic: North Star Metrics Drive Strategy

The details of how you define this metric will also impact what strategy you take.

Airbnb’s metric of “Nights Booked” has only a loose relationship to profit.

They could drive much more profit if that was factored into their metric.

Not focusing on profit means the teams can focus on expanding geographically, as all nights booked anywhere in the world are equal under this metric, which means they can take more market share.

If they set a metric of:

  • “Profit per Night Booked” - This would likely favor optimizing the cities where they already have density.
  • “Avg Nights Booked Per User” - This would focus on high-frequency travelers and address their specific needs.

When I was at UberEats, we actively optimized for profit.

To do that, we shut down countries that we didn’t think would be profitable (Italy, Hong Kong, Argentina).

We definitely lost market share doing this, but it was clearly the right thing to do for the business.

So What Do You Do With This Information?

The North Star Metric is a simple framework, but it is deceptively hard to get right.

  1. Go from Broad to Narrow - Start with a metric that has broad coverage and narrow your focus with time.
  2. Make Sure It’s Easy To Explain - Your team can’t optimize for it if they don’t understand how it’s calculated. Simple is better.
  3. Stay focused - in my experience, the life cycle of a North Star metric is somewhere between 1-3 years. If you are struggling to move your north star metric, it might not be the metric’s fault; you might be working on the wrong things.

Additionally, here are some good gut checks:

  1. Can everyone in the company impact this metric in some form?
  2. If you are seeing this metric improve, does your MRR eventually go up?
  3. If you see this metric improve, is your usage retention improving?
  4. Is this based on a user action of some kind? E.g., not just landing in your product.

As in most things, simple is typically better.

Good luck out there.

P.S. - Are you looking for help setting a metric like this? Reply to this email with “NORTH STAR,” and we will chat.

About Me

Dan has helped drive 100M+ of business growth across his years as a product manager.

He ran the growth team at Codecademy from $10M ARR to $50M ARR, which was acquired for $525M in 2022. After that, he was a product manager at Uber.

Now, he advises and consults with startups & companies who are looking to increase subscription revenue.

Learn more about consulting >>>


Mom’s Classroom, Session 2: Cybersecurity

2026-01-26 04:47:10

This is a cybersecurity class for senior citizens.

I Built a Causal AI Model to Find the Real Reasons Stocks Fall

2026-01-26 04:14:22

Every investor talks about risk, but few can explain what truly drives it.

We measure volatility, beta, or max drawdown, and assume we understand exposure. Yet these numbers are only outcomes. They tell us how much pain existed, not why it happened.

Sometimes a high P/E stock crashes harder. Sometimes a low-margin business stays resilient. Correlation alone can’t answer what actually causes those reactions. That’s where most models stop, and where this story begins.

This experiment tries to answer one deceptively simple question:

Do fundamentals and market traits directly cause deeper drawdowns, or do they just move in sync with them?

To find out, I built a causal AI framework that analyzes how valuation, volatility, and profitability affect a stock’s downside. The data comes from EODHD’s historical APIs, covering ten years of S&P 500 stocks, which is more than enough to see how company characteristics shape real risk, not just statistical noise.

The goal isn’t to forecast crashes or design a trading signal. It’s to understand what truly makes a portfolio fragile, and how causal inference can separate reason from coincidence.

From Correlation to Causation

It’s easy to assume that if two things move together, one must cause the other.

High valuations, rising volatility, falling prices; the patterns often look convincing enough. I used to take those relationships at face value until I realized that correlation is mostly storytelling. It looks logical, it feels logical, but it rarely holds up when conditions change.

That’s where causal inference comes in.

Instead of asking what usually happens together, it asks what would have happened if something were different.

For instance, if two companies were identical except for their valuation, would the higher-valued one experience a deeper drawdown? That’s the kind of “what-if” question correlation can’t answer, but causal models can.

In this study, I treated valuation, volatility, and profitability as the key “treatments.” The idea was to see how each factor, when changed, affects a company’s downside risk while holding everything else constant. It’s less about prediction and more about simulation, i.e., building alternate realities to see which traits consistently lead to more pain.

I used EODHD’s data to make this possible: ten years of S&P 500 history, packed with price, volume, and fundamental data. The quality and consistency of that feed made it possible to treat this like a real-world experiment instead of just another backtest.
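Before the real data, a synthetic sanity check helps show why the distinction matters. In this sketch (toy numbers, not EODHD data), a hidden "sector risk" factor drives both high valuations and deep drawdowns; the naive comparison reports a large effect even though the true causal effect of valuation is zero here, and adjusting for the confounder recovers that:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden common cause: risky sectors are both pricier and more fragile
sector_risk = rng.binomial(1, 0.5, n)
high_pe = rng.binomial(1, 0.2 + 0.6 * sector_risk)  # risky sectors skew high-PE
# True causal effect of high_pe on drawdown is zero by construction
drawdown = -0.10 - 0.20 * sector_risk + rng.normal(0, 0.05, n)

# Naive comparison picks up the confounder, not a causal effect
naive = drawdown[high_pe == 1].mean() - drawdown[high_pe == 0].mean()

# Stratifying on the confounder recovers an effect near zero
adjusted = np.mean([
    drawdown[(high_pe == 1) & (sector_risk == s)].mean()
    - drawdown[(high_pe == 0) & (sector_risk == s)].mean()
    for s in (0, 1)
])

print(round(naive, 3), round(adjusted, 3))  # naive ~ -0.12 (spurious), adjusted ~ 0.0
```

The estimators used later (IPW and doubly robust) are more sophisticated versions of this same adjustment idea.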

The Setup: Getting the Data Right

Before getting into the causal analysis, I needed a proper foundation: clean, consistent data. Everything starts here.

Importing Packages

I first imported the essential packages. Nothing fancy, just what’s needed to pull data, shape it, and run the models later.


import pandas as pd
import numpy as np
import requests
import time
from datetime import datetime, timedelta
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
import matplotlib.pyplot as plt

This covers the basics: pandas and numpy for handling data, requests for API calls, sklearn for causal modeling, and matplotlib for plotting.

Config and Extracting Tickers

Next, I configured the API and built a small universe of S&P 500 tickers. Instead of manually typing them out, I pulled the current list directly from the EODHD Fundamental Data endpoint.


api_key = 'YOUR EODHD API KEY'
base_url = 'https://eodhd.com/api'

tickers_url = f'{base_url}/fundamentals/GSPC.INDX?api_token={api_key}&fmt=json'
tickers_raw = requests.get(tickers_url).json()
tickers = list(pd.DataFrame(tickers_raw['Components']).T['Code'])
tickers = [item + '.US' for item in tickers]

start_date = '2018-01-01'
end_date   = datetime.today().strftime('%Y-%m-%d')

EODHD’s /fundamentals/GSPC.INDX endpoint provides the full list of S&P 500 constituents. By appending .US to each code, I ensured compatibility with the /eod/ endpoint for fetching daily prices.

Fetching Historical Data

Once the tickers were ready, I moved on to fetching price data and calculating returns.

The function below retrieves daily OHLC values using EODHD’s Historical Data endpoint for each symbol between the start and end date, then stitches everything together.

def fetch_eod_prices(ticker, start, end):
    url = f'{base_url}/eod/{ticker}?from={start}&to={end}&api_token={api_key}&fmt=json'
    r = requests.get(url).json()

    if not isinstance(r, list) or not r:
        return pd.DataFrame(columns=['date','close','ticker'])

    df = pd.DataFrame(r)[['date','close']]
    df['ticker'] = ticker

    print(f'{ticker} FETCHED DATA')
    time.sleep(1)

    return df

frames = [fetch_eod_prices(t, start_date, end_date) for t in tickers]

prices_df = pd.concat(frames, ignore_index=True).dropna(subset=['close'])

prices_df['date'] = pd.to_datetime(prices_df['date'])
prices_df = prices_df.sort_values(['ticker','date'])
prices_df['ret'] = prices_df.groupby('ticker')['close'].pct_change().fillna(0.0)
prices_df = prices_df[['ticker','date','close','ret']]

prices_df = prices_df.reset_index(drop=True)
prices_df

This creates a unified dataset of over a million rows across all S&P 500 tickers, each with the closing price and daily return. This is what the final dataframe looks like:

The output confirms everything is aligned:

  • Each ticker is ordered by date.
  • The first row per ticker shows a return of zero (which validates the group logic).
  • The data is dense, balanced, and ready for feature engineering in the next step.

Measuring Stress: Calculating Drawdowns

Once the return data was ready, I needed to turn it into something that represents risk.

Returns tell how a stock moved, but not how it behaved under pressure. Drawdown captures exactly that, which is how deep a stock fell from its recent peak during stress.

For this, I defined two major market stress windows:

  • COVID crash (Feb–Apr 2020)
  • Rate-hike shock (Aug–Oct 2022)

Each window captures a different macro shock, one liquidity-driven, the other inflation-driven. This gives a good contrast for causal inference later.

Calculating Drawdowns


def max_drawdown(series: pd.Series) -> float:
    cummax = series.cummax()
    dd = series / cummax - 1.0
    return float(dd.min())

stress_windows = [
    ('2020-02-15', '2020-04-30'), 
    ('2022-08-01', '2022-10-31')
]

rows = []
for t in prices_df['ticker'].unique():
    df_t = prices_df.loc[prices_df['ticker'] == t, ['date','close']].set_index('date').sort_index()
    for i, (start, end) in enumerate(stress_windows, start=1):
        s = df_t.loc[start:end, 'close']
        if s.empty:
            continue
        rows.append({
            'ticker': t,
            'window_id': i,
            'start': start,
            'end': end,
            'max_dd': max_drawdown(s)
        })

drawdowns_df = pd.DataFrame(rows)
drawdowns_df

The max_drawdown() function tracks the running peak of a stock’s price and measures the percentage drop from that peak.

Then, for each ticker, I extract prices within the defined stress windows and compute the worst drawdown in that range.

By iterating through two different market crises, the result shows how each company handled external shocks.

Each stock now has up to two entries, one per stress event, along with its corresponding maximum drawdown.
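As a quick sanity check of the drawdown logic, the `max_drawdown()` function behaves as expected on a toy series (illustrative prices only; the function is restated here so the snippet runs standalone):

```python
import pandas as pd

def max_drawdown(series: pd.Series) -> float:
    # Running peak, then the worst percentage drop from that peak
    cummax = series.cummax()
    dd = series / cummax - 1.0
    return float(dd.min())

# Peak at 120, trough at 90 afterwards: 90/120 - 1 = -25%
prices = pd.Series([100.0, 120.0, 90.0, 110.0])
print(max_drawdown(prices))  # -0.25
```

Note the later recovery to 110 does not change the result: drawdown measures the worst peak-to-trough loss, not where the series ends.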

Fetching Fundamentals & Defining Treatments

With the price and drawdown data ready, the next step was to define the “treatments.”

In causal inference, a treatment represents an event or characteristic whose effect we want to measure. Here, I wanted to understand how certain financial traits, like valuation, volatility, and profitability, affect a company’s risk profile during market stress.

The three treatments I picked were:

  1. High PE ratio: stocks that are priced expensively relative to their earnings.
  2. High beta: stocks that move aggressively with the market.
  3. Low profit margin: stocks with thinner profitability cushions.

To build these, I used EODHD’s Fundamentals API, which provides all the key metrics I needed.

Fetching Fundamentals Data

def fetch_fundamentals(ticker):
    url = f'{base_url}/fundamentals/{ticker}?api_token={api_key}&fmt=json'
    r = requests.get(url).json()
    if not isinstance(r, dict):
        return pd.DataFrame()

    data = r.get('Highlights', {})
    general = r.get('General', {})
    val = r.get('Valuation', {})

    row = {
        'ticker': ticker,
        'sector': general.get('Sector', np.nan),
        'market_cap': data.get('MarketCapitalization', np.nan),
        'beta': r.get('Technicals', {}).get('Beta', np.nan),
        'eps': data.get('EarningsShare', np.nan),
        'div_yield': data.get('DividendYield', np.nan),
        'net_margin': data.get('ProfitMargin', np.nan),
        'pe_ratio': val.get('TrailingPE'),
        'pb_ratio': val.get('PriceBookMRQ'),
    }

    print(f'{ticker} FETCHED DATA')
    time.sleep(1)

    return pd.DataFrame([row])

fund_frames = [fetch_fundamentals(t) for t in tickers]
fundamentals_df = pd.concat(fund_frames, ignore_index=True)

num_cols = ['market_cap','beta','eps','div_yield','net_margin','pe_ratio','pb_ratio']
fundamentals_df[num_cols] = fundamentals_df[num_cols].apply(pd.to_numeric, errors='coerce')

fundamentals_df

Each call to /fundamentals/{ticker} returns multiple nested blocks: General, Highlights, Valuation, and Technicals. I extracted what I needed from each:

  • General: Sector
  • Highlights: Earnings per share, profit margin, dividend yield, market cap
  • Valuation: PE and PB ratios
  • Technicals: Beta

All fields are converted into a single flat record per ticker and combined into one large DataFrame.

The structure is wide enough to hold all core features and still compact enough to merge easily with returns later.

Merging data and creating treatments

This step creates the dataset used for causal inference, combining financial performance with firm-level traits.

# merge fundamentals with drawdowns per ticker-window
combined_df = drawdowns_df.merge(fundamentals_df, on='ticker', how='left')

# basic controls
combined_df['log_mcap'] = np.log(combined_df['market_cap'].replace({0: np.nan}))
combined_df['sector'] = combined_df['sector'].fillna('Unknown')

combined_df.head()


Each row here represents a stock-window pair: its drawdown during a specific stress period, valuation profile, and sector context.

Log-transforming market cap helps stabilize the data since firm sizes vary wildly, while sector labels later help balance comparisons.

Defining Treatment Flags

With the merged dataset ready, I defined what qualifies as “treated.” For example, a stock is treated for a high PE if it falls above the 70th percentile PE within its own sector. That ensures comparisons happen within similar industries instead of globally.

def sector_pe_thresholds(df, q=0.70):
    th = df.groupby('sector')['pe_ratio'].quantile(q).rename('pe_thr').to_dict()
    global_thr = df['pe_ratio'].quantile(q)
    return {s: (thr if not np.isnan(thr) else global_thr) for s, thr in th.items()}

pe_thr_map = sector_pe_thresholds(fundamentals_df)

combined_df['high_pe'] = combined_df.apply(
    lambda r: 1 if pd.notna(r['pe_ratio']) and r['pe_ratio'] > pe_thr_map.get(r['sector'], np.nan) else 0, axis=1
)

combined_df['high_beta'] = (combined_df['beta'] > 1.2).astype(int) # market sensitivity proxy
combined_df['low_margin'] = (combined_df['net_margin'] < 0).astype(int) # weak profitability

treat_cols = ['high_pe', 'high_beta', 'low_margin']
ctrl_cols = ['log_mcap', 'sector']
model_df = combined_df[['ticker','window_id','start','end','max_dd','pe_ratio','beta','net_margin','market_cap'] + treat_cols + ctrl_cols].copy()

model_df.head(10)

The logic is simple but deliberate:

  • High PE stocks represent overvalued firms within each sector.
  • High beta stocks are those that move more than 20% above market sensitivity.
  • Low margin stocks reflect profitability risk.

The resulting dataset looks like this:


Each stock now carries its drawdown, financial metrics, and binary treatment flags. We have exactly what’s needed for causal analysis in the next section.

Estimating Causal Effects

With all the inputs ready, drawdowns, fundamentals, and treatment flags, it was time to move beyond correlations and actually test for causation.

The question was straightforward:

How much more (or less) do high-PE, high-beta, and low-margin stocks fall during market stress after accounting for factors like size and sector?

That’s a classic causal inference problem. Instead of using traditional regressions, I decided to rely on two modern estimators:

  • Inverse Probability Weighting (IPW) reweights observations to mimic a randomized experiment.
  • Doubly Robust (DR) Estimator combines regression and reweighting to reduce bias.

Implementing the Estimators

The first function computes the IPW estimate. It fits a logistic model to estimate the propensity score, which is the probability of being “treated” (for example, being a high-PE stock) given the controls.

Then it adjusts the sample weights to make the treated and control groups comparable.

The second function goes a step further by combining a regression model with weighting to create a doubly robust estimate. Even if one of the models is slightly misspecified, the results tend to remain stable.


def causal_ipw(data, treatment, outcome, controls):
    df = data.dropna(subset=[treatment, outcome] + controls).copy()
    X = pd.get_dummies(df[controls], drop_first=True)
    T = df[treatment]
    y = df[outcome]

    # propensity score model
    ps_model = LogisticRegression(max_iter=500)
    ps_model.fit(X, T)
    p = ps_model.predict_proba(X)[:,1].clip(0.01, 0.99)

    # inverse weighting
    weights = T/p + (1-T)/(1-p)
    ate = np.mean(weights * (T - p) * y) / np.mean(weights * (T - p)**2)

    return ate

def causal_dr(data, treatment, outcome, controls):
    df = data.dropna(subset=[treatment, outcome] + controls).copy()
    X = pd.get_dummies(df[controls], drop_first=True)
    T = df[treatment].values
    y = df[outcome].values

    # models
    m_y = RandomForestRegressor(n_estimators=200, random_state=42)
    m_t = LogisticRegression(max_iter=500)

    m_y.fit(np.column_stack([X, T]), y)
    m_t.fit(X, T)
    p = m_t.predict_proba(X)[:,1].clip(0.01,0.99)
    y_hat_1 = m_y.predict(np.column_stack([X, np.ones(len(X))]))
    y_hat_0 = m_y.predict(np.column_stack([X, np.zeros(len(X))]))

    dr_scores = (T*(y - y_hat_1)/p) - ((1-T)*(y - y_hat_0)/(1-p)) + (y_hat_1 - y_hat_0)
    ate = np.mean(dr_scores)
    return ate

results = []
for t in ['high_pe','high_beta','low_margin']:
    ate_ipw = causal_ipw(model_df, t, 'max_dd', ['log_mcap','sector'])
    ate_dr = causal_dr(model_df, t, 'max_dd', ['log_mcap','sector'])
    results.append({'treatment': t, 'ate_ipw': ate_ipw, 'ate_dr': ate_dr})

effects_df = pd.DataFrame(results)
effects_df

Results

Interpretation

This table is where things start getting interesting.

Each row represents one treatment, and the two columns (ate_ipw and ate_dr) show how that characteristic influences the maximum drawdown on average.

Here’s what stands out:

  • High PE stocks tend to fall about 0.21 more (IPW) during stress periods, i.e., roughly 21 percentage points of additional drawdown. The DR estimator shows a smaller effect, but the direction is consistent. Overvalued stocks feel the heat first.
  • High beta stocks show a 0.25 deeper drawdown, which is not surprising. They amplify market moves both ways.
  • Low-margin companies are hit the hardest, with nearly 0.37 additional downside. This suggests that profitability cushions truly help during panic phases.

While the magnitudes differ slightly across estimators, the pattern is clear. Fragile fundamentals amplify stress.

And that’s exactly the kind of causal evidence I wanted to see. Not just which stocks correlate with volatility, but which ones actually cause higher risk exposure when the market turns.

Visualizing and Validating the Effects

Once I had the causal estimates, I wanted to see what they actually looked like. Numbers can tell you the “what,” but visuals often reveal the “how.”

At this point, the objective wasn’t to add more math, but to see whether the intuition from the earlier steps holds up when plotted.

This stage also acts as a sanity check. If the visual patterns contradict the statistical results, something’s usually off in the setup. But if both align, you can be more confident that the model’s logic is sound.

Plotting the Causal Effects

The first visualization compares the estimated effects from the two causal models, IPW and DR.

This helps check whether both estimators point in the same direction and how strongly each treatment affects the drawdown.

effects_plot = effects_df.melt(id_vars='treatment', value_vars=['ate_ipw','ate_dr'], var_name='method', value_name='ate')

plt.figure(figsize=(6,4))
for m in effects_plot['method'].unique():
    subset = effects_plot[effects_plot['method']==m]
    plt.bar(subset['treatment'], subset['ate'], alpha=0.7, label=m)

plt.axhline(0, color='black', linewidth=0.8)
plt.ylabel('Estimated Causal Effect on Max Drawdown')
plt.title('Causal Effect Estimates (IPW vs DR)')
plt.legend()
plt.tight_layout()
plt.show()


The bar chart above captures the story behind all those computations.

Each bar represents the estimated causal effect of a financial trait on maximum drawdown.

Across the three treatments, the trend is consistent. High PE, high beta, and low margin all deepen downside risk, with low-margin stocks showing the sharpest impact.

The difference between the blue and orange bars (IPW and DR) is minor, which is exactly what you want to see. It means both models are converging toward the same conclusion.

The takeaway is clear: fundamental fragility doesn’t just coincide with volatility. It drives it.

Checking Overlap Between Treated and Control Groups

Before trusting any causal estimate, it’s essential to check whether the treated and control samples are comparable.

In theory, causal inference assumes overlap, meaning every treated stock has a comparable control stock with similar characteristics, except for the treatment itself.

If there’s no overlap, the results become meaningless because we’re comparing two entirely different worlds.
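One way to quantify that requirement before plotting is a common-support check: how much of the treated group falls inside the control group's observed propensity range. This sketch uses illustrative synthetic scores standing in for the fitted model's output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative propensity scores (stand-ins for predict_proba output):
# treated units skew toward high scores, controls toward low ones
ps_treated = rng.beta(4, 2, 500)
ps_control = rng.beta(2, 4, 2000)

# Share of treated units whose score lies within the control group's range
lo, hi = ps_control.min(), ps_control.max()
in_support = ((ps_treated >= lo) & (ps_treated <= hi)).mean()

print(f'{in_support:.1%} of treated units have controls in their score range')
```

Treated units outside this range have no comparable controls, so their counterfactuals are pure extrapolation; trimming or clipping (as the `.clip(0.01, 0.99)` call in the estimators does) is a common mitigation.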

Propensity Score Distribution

To test this, I plotted the propensity scores for each treatment.

The score represents the probability of a stock being “treated” (for instance, having a high PE ratio) given its market cap and sector.

for t in ['high_pe','high_beta','low_margin']:
    df = model_df.dropna(subset=[t,'log_mcap','sector']).copy()
    X = pd.get_dummies(df[['log_mcap','sector']], drop_first=True)
    T = df[t]

    ps_model = LogisticRegression(max_iter=500)
    ps_model.fit(X, T)
    ps = ps_model.predict_proba(X)[:,1]

    plt.figure(figsize=(6,4))
    plt.hist(ps[T==1], bins=20, alpha=0.6, label='treated', color='tab:blue')
    plt.hist(ps[T==0], bins=20, alpha=0.6, label='control', color='tab:orange')
    plt.title(f'Propensity Score Overlap — {t}')
    plt.xlabel('propensity score')
    plt.ylabel('count')
    plt.legend()
    plt.tight_layout()
    plt.show()


The histograms above show how well the model managed to balance the treated and control groups across the three traits.

  • For high PE stocks, there’s a healthy overlap between blue and orange bars, which means the weighting process has done its job fairly well.
  • For high beta, the distributions are more scattered, hinting that these stocks behave differently enough to make perfect matching difficult.
  • Low margin companies show the least overlap, which is expected since unprofitable firms often cluster in specific sectors like biotech or early-stage tech.

While the overlaps aren’t perfect, they’re sufficient to move forward. The key point is that there’s enough shared ground between treated and control groups to make causal comparison meaningful.

Verifying Covariate Balance

Even though the visual overlaps looked decent, I wanted a numerical check to confirm that the weighting step did what it was supposed to.

The idea is simple: after reweighting, the treated and control groups should look nearly identical in their key characteristics, except for the treatment itself.

To test this, I compared the mean difference in log market capitalization between high-beta stocks and their controls, both before and after weighting.


t = 'high_beta'
df = model_df.dropna(subset=[t,'log_mcap','sector']).copy()
X = pd.get_dummies(df[['log_mcap','sector']], drop_first=True)
T = df[t]

ps_model = LogisticRegression(max_iter=500)
ps_model.fit(X, T)
ps = ps_model.predict_proba(X)[:,1]
w = T/ps + (1-T)/(1-ps)

m_treat_raw = df.loc[T==1,'log_mcap'].mean()
m_ctrl_raw = df.loc[T==0,'log_mcap'].mean()
m_treat_w = np.average(df['log_mcap'], weights=T*w)
m_ctrl_w = np.average(df['log_mcap'], weights=(1-T)*w)

print('Covariate balance for log_mcap (high_beta)')
print(f'Raw diff: {m_treat_raw - m_ctrl_raw:.4f}')
print(f'Weighted diff: {m_treat_w - m_ctrl_w:.4f}')


Before weighting, the average difference in log market cap between high-beta and low-beta stocks was around 0.19, which means the two groups weren’t perfectly comparable.

After applying the weights, that difference dropped to 0.0008 (nearly zero).

This confirms that the balancing process worked as intended. By reweighting observations according to their propensity scores, the treated and control groups now share almost identical distributions in their control variables.

That alignment is what gives the causal estimates credibility. Without it, we would simply be comparing random subsets of the market instead of isolating the effect of specific traits like valuation, volatility, or profitability.

What This Means for Risk Management

After spending hours inside Python, it’s easy to forget what this all connects to: the real market. The value of causal modeling is that it reframes how we think about risk. Instead of assuming or guessing which factors drive volatility, it gives us a measurable way to quantify how much each one truly contributes.

In traditional analysis, drawdowns are treated as outcomes of randomness or sentiment. But once we look through a causal lens, the randomness begins to fade. We start to see structure and the patterns that repeat when certain traits combine.

A portfolio manager can use this insight to design more resilient allocations. For instance, if low-margin firms consistently amplify downside during stress windows, those positions can be hedged or sized down ahead of similar conditions. If high-beta stocks display asymmetric losses even after accounting for size and sector, then they deserve a risk premium or tighter stop rules.

The same principle applies to stress testing.

Rather than simulating arbitrary shocks, we can construct counterfactual scenarios:

What would have happened if the portfolio held fewer high-PE names during a sell-off?

How much of the observed risk was structural, and how much was avoidable?

Causal models turn these questions into measurable answers. They allow risk managers to move from intuition to evidence, from correlation to attribution.
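To make one of those counterfactual questions concrete, here is a minimal sketch: reweight a toy portfolio away from high-PE names and recompute the return over a stress window. All tickers, weights, and returns below are invented for illustration and are not from the article:

```python
import numpy as np
import pandas as pd

# Hypothetical portfolio during a sell-off window (all numbers illustrative).
port = pd.DataFrame({
    'ticker': ['A', 'B', 'C', 'D'],
    'weight': [0.30, 0.25, 0.25, 0.20],
    'high_pe': [True, True, False, False],
    'ret': [-0.18, -0.15, -0.06, -0.04],   # window returns
})

# Realized portfolio return over the window.
actual = float(np.dot(port['weight'], port['ret']))

# Counterfactual: halve the high-PE sleeve, renormalize, recompute.
cf = port.copy()
cf.loc[cf['high_pe'], 'weight'] *= 0.5
cf['weight'] /= cf['weight'].sum()
counterfactual = float(np.dot(cf['weight'], cf['ret']))

print(f'Actual window return:      {actual:.4f}')
print(f'Counterfactual return:     {counterfactual:.4f}')
print(f'Loss attributable to tilt: {actual - counterfactual:.4f}')
```

The gap between the two numbers is the "avoidable" part of the drawdown; the counterfactual return is the structural part the portfolio would have suffered anyway.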

Closing Thoughts

This entire workflow, from raw data to counterfactual reasoning, was never meant to predict the next crash or rally.

It was meant to understand why portfolios behave the way they do when markets turn unstable.

By combining fundamental traits, stress windows, and causal estimators, we built a framework that explains structural drivers of risk rather than chasing signals.

Each piece of code served a purpose: defining clean treatments, balancing controls, estimating effects, and validating the logic through overlap and weighting.

Causal inference will not replace forecasting or technical analysis. But it adds a new dimension. The one focused on understanding rather than prediction. When markets fall, it helps separate what was inevitable from what was amplified by exposure to specific traits.

Markets don’t just move randomly. They react to structure, and causal AI helps us see that structure clearly.

Inside Brevity AI: The Architecture Powering Real-Time, HIPAA-Compliant Clinical Documentation

2026-01-26 04:04:53

Modern healthcare technology demands more than incremental improvements—it requires fundamental architectural innovation that balances performance, security, and clinical usability. Brevity AI, Inc. has developed a groundbreaking clinical documentation platform that transforms how healthcare professionals interact with patient data, compressing hours of preparation and documentation into minutes via Artificial Intelligence. As Co-Founder and CTO, Purv Rakeshkumar Chauhan designed and implemented the entire software architecture, demonstrating exceptional software engineering capabilities by building a production-grade healthcare system from concept to reality.

Engineering a Complex Healthcare Solution: Full-Stack Development Leadership

The Brevity AI platform represents a sophisticated technical achievement requiring mastery across the entire software development lifecycle. Purv independently architected and developed the complete system, from initial requirements analysis through production deployment and ongoing maintenance. This comprehensive technical ownership encompasses frontend user interfaces optimized for clinical workflows, backend services processing massive medical datasets, AI/ML pipeline integration for intelligent document analysis, and secure data infrastructure managing sensitive healthcare information.

\ The platform's architecture solves a complex technical challenge: processing hundreds of pages of unstructured medical records in real-time while maintaining HIPAA compliance and clinical accuracy. This necessitated designing scalable data processing pipelines capable of ingesting diverse medical document formats, implementing natural language processing workflows that extract clinically relevant information with precision, and creating intuitive user interfaces that seamlessly integrate into existing clinical workflows. Each architectural decision reflects careful consideration of healthcare's unique technical constraints—from latency requirements for real-time transcription to data residency requirements for patient privacy.

Advanced System Architecture: Building for Healthcare Scale

The technical architecture of Brevity AI demonstrates sophisticated software engineering principles applied to healthcare's demanding requirements. The system implements a microservices-based architecture that enables independent scaling of compute-intensive AI processing, document parsing, and real-time transcription services. This modular design allows the platform to handle variable workloads—from processing extensive patient histories during visit preparation to managing real-time documentation during patient encounters—without performance degradation.

\ Purv's implementation includes advanced caching strategies that optimize response times for frequently accessed patient data, asynchronous processing queues that manage resource-intensive document analysis tasks, and intelligent load balancing that ensures consistent performance across clinical environments.

The platform's data layer employs optimized database schemas specifically designed for medical record structures, enabling sub-second query performance even when analyzing patient histories spanning decades with hundreds of documents.

Real-Time AI Integration: Technical Innovation in Clinical Documentation

The platform's real-time documentation capabilities showcase sophisticated software engineering in AI integration. Purv architected a speech-to-text pipeline for recognizing medical terminology in clinical conversations. This system processes audio streams in real-time, implementing advanced noise reduction algorithms that filter ambient noise in clinical environments while preserving conversational clarity.

\ The AI-powered note generation system represents particularly complex technical work—transforming unstructured conversations into structured clinical notes requires sophisticated natural language understanding, medical entity recognition, and template generation algorithms. Purv designed this pipeline to process extended patient conversations (often 20-30 minutes) and generate concise, standards-compliant clinical notes within seconds after the conversation concludes. The system employs intelligent chunking strategies to manage long conversations, contextual analysis to maintain clinical coherence across conversation segments, and validation algorithms ensuring generated notes meet documentation standards.
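As a generic illustration of the chunking idea (not Brevity AI's actual implementation), a sliding window with overlap is a common way to segment a long transcript while preserving context across segment boundaries:

```python
# Generic sliding-window chunker for long transcripts (illustrative only).
# Overlapping words at each boundary carry context between chunks.
def chunk_transcript(words, chunk_size=400, overlap=50):
    """Split a word list into overlapping chunks for downstream NLP."""
    if overlap >= chunk_size:
        raise ValueError('overlap must be smaller than chunk_size')
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(words[start:start + chunk_size])
        if start + chunk_size >= len(words):
            break
    return chunks

words = [f'w{i}' for i in range(1000)]  # stand-in for a 20-30 min transcript
chunks = chunk_transcript(words)
print(len(chunks), len(chunks[0]))
```

Each chunk can then be summarized independently, with the overlap region letting a downstream model stitch the segments into one coherent note.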

Visit Preparation Intelligence: Processing Complex Medical Histories

The platform's visit preparation capabilities solve one of healthcare's most time-consuming challenges through advanced document processing architecture. By analyzing a patient's complete medical history—often 300+ pages spanning multiple healthcare systems—the system extracts clinically relevant information, identifies key medical conditions and trends, synthesizes laboratory results across different formats, and presents findings in a clinically useful format.

\ Purv engineered a multi-stage processing pipeline that handles this complexity: document ingestion and format normalization for diverse medical record types, intelligent page analysis using computer vision to identify document types, natural language extraction of clinical entities and relationships, temporal analysis to track condition progression and treatment outcomes, and intelligent summarization that prioritizes information by clinical relevance. This pipeline processes massive document sets in minutes, a task that traditionally requires hours of manual review.

Security-First Development: Integrating Compliance into Architecture

While Purv's cybersecurity background provides foundational expertise, the platform's security implementation demonstrates how security principles translate into practical software architecture. Rather than treating security as an afterthought, the system implements security-by-design principles throughout the technical stack. This includes implementing a Zero Trust Architecture requiring authentication and authorization at every system boundary, designing data encryption protocols for data at rest and in transit, architecting audit logging systems that track all data access for compliance reporting, and building automated security testing into the continuous integration pipeline.

\ The HIPAA compliance framework extends beyond security to encompass comprehensive technical controls: data retention policies enforced through automated lifecycle management, access control mechanisms with role-based permissions at granular levels, and automated compliance reporting systems that generate audit trails for regulatory review. These technical implementations ensure the platform meets healthcare's stringent regulatory requirements while maintaining the performance necessary for clinical workflows.

Measurable Platform Impact: Technical Achievement Driving Clinical Outcomes

The platform's technical sophistication translates directly into measurable healthcare improvements. Reducing visit preparation time from hours to minutes demonstrates the document processing pipeline's efficiency. The elimination of after-hours documentation through real-time transcription showcases the speech processing system's accuracy and reliability. Improved clinical accuracy from AI-assisted summarization validates the natural language processing algorithms' effectiveness. These outcomes reflect successful software engineering—building systems that solve real problems through technical excellence.

From Concept to Production: Complete Development Lifecycle Management

Purv's responsibilities encompass the entire development process—defining technical requirements through collaboration with clinicians, designing system architecture and selecting appropriate technology stacks, implementing features through hands-on software development, conducting comprehensive testing including unit, integration, and end-to-end validation, managing production deployment and monitoring system performance, and iterating based on user feedback and performance metrics. This comprehensive technical ownership ensures cohesive architecture where every component serves the platform's clinical mission.

Technical Foundation Enabling Healthcare Innovation

Brevity AI's platform represents a significant software engineering achievement—a production-grade healthcare system built through exceptional architectural design and development expertise. Purv Rakeshkumar Chauhan's comprehensive technical leadership, spanning from low-level implementation details to high-level system architecture, has created a platform that fundamentally improves clinical workflows through intelligent automation. With his strong foundation in security research and academic excellence from Arizona State University informing the platform's robust security architecture, Purv continues advancing healthcare technology through software engineering innovation that prioritizes both technical excellence and clinical utility.

About Purv Rakeshkumar Chauhan

Purv Rakeshkumar Chauhan serves as Co-Founder and CTO of Brevity AI, Inc., where he leads all aspects of software development and system architecture for the company's clinical documentation platform. With academic credentials from Arizona State University and a foundation in cybersecurity research, including DARPA-funded projects, he brings comprehensive technical expertise to healthcare AI development. As the architect and principal developer of the Brevity AI platform, Purv demonstrates exceptional software engineering capabilities across the full technology stack, from user interface design to backend infrastructure, security implementation, and AI/ML integration.

\

:::tip This story was distributed as a release by Sanya Kapoor under HackerNoon’s Business Blogging Program.

:::

\

Bet on Yourself: The Expected Value Is Far Higher Than You Think

2026-01-26 03:56:27

You’re not risk-averse. You’re math-averse.

The reason you haven’t started that business, published that work, or made that career move isn’t because you’re scared. It’s because you’re running the wrong calculation.

Most people think about risk like accountants. They see a potential move - start a podcast, launch a product, quit their job - and immediately calculate what they could lose.

Time. Money. Reputation. Stability.

The numbers add up to “too risky.”

So they don’t move.

They stay in jobs they’ve outgrown. They sit on ideas that could change their lives. They watch other people - often less talented, less prepared people - build the things they dreamed about building.

And they tell themselves they’re being smart. Being prudent. Being realistic.

But here’s what nobody tells you: the math you’re using to justify staying stuck is fundamentally wrong.

You’re optimizing for the wrong variable. You’re calculating downside while ignoring asymmetry. You’re treating all risks as equal when they’re not even in the same universe.

And that miscalculation is costing you everything.

This isn’t one of those letters about “following your dreams” or “just believe in yourself.”

This is about the actual mathematics of asymmetric upside. Why the downside of trying and failing is almost always survivable. Why the downside of never trying is guaranteed regret. And why betting on yourself has a higher expected value than you think.

Let’s begin.

I - The Risk Calculation That Keeps Smart People Stuck

I spent two years hesitating on my podcast.

I knew I wanted to do it. I had the ideas. I knew what I would talk about. I’d written out episode concepts. I’d even recorded test episodes that nobody would ever hear.

But I kept finding reasons not to launch.

“I need better equipment first.” “I should have more of a following before I start.” “I need to be more of an expert in my topics.” “I should wait until I have a clear monetization strategy.”

Every reason sounded rational. Every excuse felt legitimate.

And I was very good at performing preparation. I researched microphones for weeks. I studied successful podcast formats. I planned out 20 episodes in detail. I told people at parties “I’m thinking about starting a podcast” and enjoyed the validation of being someone with ambitious plans.

But preparation became my drug of choice. The dopamine hit of planning without the risk of failing.

You know exactly what I’m talking about. Because you do this too.

You’ve been “thinking about” starting that thing for how long now? Six months? Two years?

You have the Notion doc with the perfect structure. The bookmarks folder labeled “Research.” The screenshot folder of inspiration. Maybe you’ve even bought a course or two. Told yourself you’re building a foundation.

But here’s what was actually happening in my brain - and yours:

My brain was running a calculation.

“What if I invest six months and it doesn’t work?” “What if I look stupid?” “What if I fail and prove I’m not capable?” “What if I lose the money on equipment?”

The potential losses felt massive. Concrete. Guaranteed.

The potential gains felt distant. Uncertain. Unlikely.

So I waited. I told myself I needed more preparation, more certainty, more proof that it would work.

What I didn’t realize then - but understand clearly now - is that I wasn’t being careful. I was being mathematically illiterate.

Because the calculation I was running - “What could I lose?” - is only half the equation.

The complete equation is: “What could I lose?” versus “What could I gain?” versus “What’s the probability of each outcome?” versus “What’s the cost of never finding out?”

And when you run the complete calculation, the math changes entirely.

You want to know what actually happened when I finally launched the podcast?

Nothing catastrophic. No embarrassment. No public failure.

I published the first episode. Twenty-three people listened in the first week. Most of them were friends. The audio quality was mediocre. I stumbled over words. I said “um” too much.

And… that was it.

No one mocked me. No one called me out for not being an expert. The world didn’t end.

Within three months, I had 200 regular listeners. Within six months, one of those listeners introduced me to someone who became a client. Within a year, the podcast became a primary way I connected with my audience and developed ideas that became newsletter content.

All of the disaster scenarios I’d imagined for two years? None of them happened.

The worst case was so much less bad than I’d feared. The upside was so much better than I’d imagined.

That’s the pattern. The fear is always worse than the reality.

Let me show you with real numbers.

Starting a newsletter (same principle applies to podcasts, YouTube, any content):

The Actual Downside:

You invest 3-6 months writing consistently. Maybe 100 hours total over that period — roughly 45 minutes a day, 5 days a week, for six months.

You promote it to your existing network and on social platforms.

Nobody reads it. Your first few posts get 3 views. Your mom and your two best friends.

You feel embarrassed. You imagine people seeing you try and fail. You experience the discomfort of being visible while not being validated.

That’s the downside. Let’s assess: Is it survivable?

You’re out 100 hours spread over six months. In exchange for those hours, you:

  • Developed clarity on what you think about important topics
  • Built a skill (writing for an audience)
  • Created a portfolio of work that demonstrates your thinking
  • Learned what doesn’t resonate (valuable market research)
  • Proved to yourself that you can finish what you start
  • Made progress while others stayed stuck in planning mode

Your ego takes a hit for maybe a week. Then life moves on. Nobody actually cares that much. Everyone’s too busy thinking about their own lives.

Downside: Survivable.

The Actual Upside:

Now let’s say it works. Not perfectly. Not viral-level success. Just… it works.

The newsletter connects with 1,000 people over a year. That’s less than 3 new subscribers per day. Completely achievable.

Those 1,000 people care about what you think. They read what you write. They respond with their own thoughts. They remember your name.

That audience of 1,000 becomes:

  • A distribution channel for anything you want to create
  • A testing ground for new ideas and products
  • A network of people who can introduce you to opportunities
  • Social proof that you can build something from scratch
  • A source of income if you ever want to monetize
  • Conversations you couldn’t have had otherwise
  • Relationships that wouldn’t have formed

One of those 1,000 people sends your newsletter to someone influential in your field. That person reaches out. One conversation leads to an opportunity you never would have encountered.

Or: Ten of those 1,000 people buy a product you create. $100 each. That’s $1,000. For a first attempt at monetization. Which validates that you can create value people will pay for. Which gives you confidence to build more.

Or: Your writing clarifies your thinking so much that you become better at your day job. You get promoted. The writing skills transfer. The audience gives you leverage.

These aren’t hypotheticals. This is the actual, documented pattern that happens to people who start creating and stick with it.

But here’s what nobody tells you about the year you spend “preparing” instead of starting:

You don’t spend that year in neutral. You spend it moving backwards.

Every month you wait to start, someone else publishes their first post. Someone else sends their first email. Someone else builds their first 10 subscribers.

You’re not standing still while you prepare. You’re watching a gap open up between you and the people who started before they felt ready.

And that gap isn’t just about audience size. It’s about confidence, skill, resilience, and the compounding knowledge that comes from actually doing the thing.

By the time you finally start - if you ever do - you’re not competing with who you are now. You’re competing with a version of yourself who started a year ago and has been improving ever since.

That’s asymmetry.

Same effort. Same time investment. But the upside and downside aren’t even in the same galaxy.

The downside is capped (100 hours of learning, temporary ego hit, valuable skills developed). The upside is uncapped (career transformation, financial freedom, opportunities you can’t predict).

Most of the bets you’re not taking have a capped, survivable downside and an uncapped, transformative upside.

That’s not risk in the traditional sense. That’s asymmetry. And asymmetry is mathematically smart.
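That asymmetry can be written as plain expected-value arithmetic. The probabilities and payoffs below are invented for illustration — the point is the shape of the math: a capped loss against a small chance of a large gain:

```python
# Toy expected-value calculation for an asymmetric bet (all numbers invented).
# Payoffs are in abstract "units" of time/money/opportunity.
outcomes = [
    (0.60, -100),   # most likely: modest, capped loss of time and ego
    (0.30,  200),   # it works a little: skills, audience, small income
    (0.10, 5000),   # it works well: outsized, compounding payoff
]
ev_try = sum(p * v for p, v in outcomes)

ev_wait = 0  # never trying pays nothing, before even pricing in regret

print(f'EV of taking the bet: {ev_try:.0f}')
print(f'EV of waiting:        {ev_wait}')
```

Even with failure as the most likely single outcome, the bet has strongly positive expected value, because the downside is bounded and the upside is not. That is the calculation loss aversion prevents you from running.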

Yet smart people don’t take these bets.

Why? Because they’re running the wrong calculation.

They see the downside clearly - the time, the money, the potential embarrassment.

They discount the upside heavily - “That probably won’t happen to me.”

And they completely ignore the fourth variable: the cost of never trying.

Here’s what the miscalculation reveals: You’re not actually trying to make a smart decision. You’re trying to find a reason to stay comfortable.

The math is just cover. The spreadsheet is just permission to avoid risk while feeling rational about it.

If you actually cared about making the optimal choice, you’d run the complete equation. You’d see the asymmetry. You’d move.

But you don’t want to see it. Because seeing it would mean you have no more excuses.

So why does your brain work this way? Why do smart people systematically miscalculate risk?

Because your brain isn’t running math. It’s running something far more primitive.

II - Why Your Brain Gets The Math Wrong

“We suffer more often in imagination than in reality.”

— Seneca

Your brain isn’t doing math. It’s running ancient survival software that was calibrated for a world that no longer exists.

The Mismatch Between Ancestral Risk and Modern Opportunity

Ten thousand years ago, risk assessment was brutally simple:

Will this kill me? Yes or no.

Trying a new hunting ground: Might encounter predators. Risk of death: High.

Challenging the tribe leader for status: Might get exiled or killed. Risk of death: High.

Eating unknown berries: Might be poisonous. Risk of death: High.

In that environment, loss aversion made perfect evolutionary sense.

The cost of being wrong once was death. The benefit of being right ten times was… not being dead.

Natural selection strongly favored caution. The humans who took too many risks didn’t survive to pass on their genes. The ones who avoided uncertainty, who stuck with what was known and safe, lived long enough to reproduce.

Your brain is descended from those cautious survivors. It’s running their software. And that software has one primary directive: avoid anything that feels like it might threaten your survival.

The problem?

Your amygdala - the almond-shaped cluster of neurons responsible for processing threats - can’t tell the difference between “there’s a predator in the grass” and “someone might not like my business idea.”

Both trigger the same neurological response. Both flood your system with the same stress hormones. Both generate the same behavioral impulse: avoid, retreat, stay safe.

But the actual stakes are completely different.

Starting a podcast won’t kill you. It can’t. The worst-case scenario is temporary embarrassment.

Launching a product won’t exile you from the tribe. Modern society doesn’t work that way. The worst-case scenario is you learn what customers don’t want.

Publishing your writing won’t poison you. It’s not possible. The worst-case scenario is some people don’t resonate with your ideas.

All survivable. All recoverable. All ultimately beneficial in terms of learning and growth.

But your brain can’t process that distinction. When you consider making a bold move - starting a business, publishing creative work, changing careers - your threat detection system treats it like mortal danger.

The anxiety you feel isn’t proportional to the actual risk. It’s proportional to how your ancient brain categorizes the situation.

And it categorizes all uncertainty as potential death.

The Loss Aversion Trap

This gets amplified by a well-documented cognitive bias called loss aversion.

Psychological research consistently shows that humans feel losses approximately 2-2.5x more intensely than equivalent gains.

Losing $100 feels much worse than gaining $100 feels good. The pain of rejection hits harder than the pleasure of acceptance. The fear of failure weighs more heavily than the excitement of potential success.

This made sense for survival. Missing one meal was more immediately dangerous than finding extra food was beneficial. Losing social status in a small tribe had severe consequences. Avoiding threats was more critical than pursuing opportunities.

But in the modern world, where most risks are survivable and most losses are recoverable, loss aversion becomes a systematic source of bad decisions.

It causes you to:

  • Overweight small, temporary losses (time, money, ego)
  • Underweight large, lasting gains (career transformation, financial freedom, personal growth)
  • Avoid actions with asymmetric upside because you can’t stop fixating on the downside
  • Stay in situations that are “okay” because changing feels like risking a loss

Your brain is optimized for surviving threats, not capitalizing on opportunities.

And in a world where threats are mostly non-fatal and opportunities are abundant, that optimization makes you systematically worse at assessing risk.

Here’s where it gets insidious.

You don’t just feel the fear. You rationalize it.

You’re smart. You’re analytical. So your brain provides you with sophisticated-sounding reasons for why you shouldn’t try:

“I need to be more prepared first.” “I should wait for better market conditions.” “I don’t have the resources yet.” “I need to research more before I commit.”

These sound like rational calculations. They feel like smart risk management.

But they’re not. They’re your threat detection system generating cover stories for the fear you don’t want to admit you’re feeling.

You’re not being careful. You’re being scared and dressing it up in the language of prudence.

And because you’re smart, you’re very good at this. You can build an entire intellectual framework around why now isn’t the right time. You can create spreadsheets that “prove” the risk is too high. You can cite examples of people who failed to validate your caution.

But strip away the sophistication and here’s what’s actually happening:

Your amygdala sees uncertainty. It generates fear. Your prefrontal cortex - the part that handles reasoning - works backwards to justify that fear with logic.

You’re using math to rationalize emotion, not to make optimal decisions.

And you know this is true because of what happens when you see other people take the exact bet you’re avoiding:

You don’t think “Good, they confirmed my analysis was correct.”

You think “Damn, I should have done that.”

The Illusion of Safety

Here’s where it gets really insidious.

Not only does your brain overestimate the risk of action, it also creates an illusion that inaction is safe.

Staying at your job feels safe because it’s familiar. The pattern is established. You know what to expect.

Not starting the business feels safe because you’re not exposing yourself to potential failure. You’re not risking embarrassment.

Keeping your ideas private feels safe because you’re not subjecting them to judgment or rejection.

But that feeling of safety is an illusion.

Your job can disappear tomorrow. Companies restructure. Industries shift. The “stability” of employment is often just the absence of immediate threat, not actual security. You think you’re safe because you’re on payroll, but you’re actually maximally exposed - your entire income depends on one decision-maker who doesn’t owe you anything.

Not starting the business means you’re entirely dependent on someone else’s decisions about your income. That’s not safe. That’s vulnerable. That’s putting all your risk into one basket labeled “employment” and hoping nobody drops it.

Keeping your ideas private means you never develop them, never test them, never build anything of your own. That’s not safe. That’s guaranteeing you’ll never have the resources or leverage that come from building in public.

The safe path isn’t actually safe. It’s just familiar.

And your brain confuses familiarity with security.

Here’s the thought that should haunt you: The riskiest thing you can do is to spend your entire life avoiding risk.

Because while you’re busy playing it safe, the world is changing around you. The job you think is secure becomes obsolete. The industry you thought was stable gets disrupted. The career path you thought was guaranteed disappears.

And when that happens - when the “safe” choice reveals itself as an illusion - you have nothing. No skills from attempting bold things. No network from building in public. No track record of betting on yourself. No resilience from failing and recovering.

You optimized for safety and ended up with fragility.

The people who took the “risky” bets? They built anti-fragility. They developed the capacity to survive failure. They created options. They’re not dependent on one income source, one employer, one path.

You’ve been calculating risk backwards your entire life.

This is why people stay in jobs they hate for decades. Why they remain in relationships that aren’t working. Why they never pursue the things they claim they want to do.

Not because they’re lazy or stupid. Because their threat detection system is wired to prefer the familiar danger over the unfamiliar opportunity.

The devil you know feels safer than the devil you don’t - even when the devil you know is slowly destroying your potential and the devil you don’t is actually just an opportunity in disguise.

You’re not playing it safe. You’re playing not to lose. And playing not to lose is the only guaranteed way to lose.

When you make that calculation conscious and explicit, you can see past the emotional response to the underlying reality:

In a world where failure isn’t fatal, the asymmetry favors action.

The biggest risk isn’t trying and failing. It’s letting an outdated survival mechanism dictate your entire life trajectory.

The people who win aren’t smarter than you. They’re not more talented. They’re not luckier. They just have a higher tolerance for feeling scared while doing the thing anyway.

That’s it. That’s the only difference.

Now let’s talk about what that faulty calculation is actually costing you.

III - The Guaranteed Cost of Never Trying

“Twenty years from now you will be more disappointed by the things you didn’t do than by the ones you did.”

— Mark Twain

But let’s say you don’t take the bet.

You decide the risk is too high. You stay where you are. You keep preparing, keep planning, keep waiting for certainty.

What’s the cost?

Most people think the cost is zero. They think “no decision” means “no risk.”

They’re catastrophically wrong.

The cost of not trying isn’t zero. It’s regret. And regret isn’t survivable. It’s permanent.

I need to be very direct about this because it’s the part of the equation that people systematically ignore until it’s too late.

When you try something and fail, here’s what actually happens:

You feel disappointed for a period of time. Days, maybe weeks.

Then you process it. You extract the lessons. You identify what didn’t work. You develop skills from the attempt. You have concrete data about what to avoid next time.

You move forward with more information, more experience, and more resilience than you had before the attempt.

The failure resolves. It becomes a data point. Eventually, it becomes a story you tell about your journey.

I’ve failed at multiple business ideas. A clothing brand that never sold a single item. A software product that took six months to build and zero people wanted. Content projects that went nowhere.

Do those failures haunt me? No.

They taught me what markets don’t want. They showed me what approaches don’t work. They built my tolerance for rejection and uncertainty. They’re now just part of the path that led to the things that did work.

Failure is temporary data collection.

But when you don’t try - when you stay in the safe lane, when you keep preparing indefinitely, when you wait for certainty that will never come - here’s what happens:

Nothing.

And that nothing compounds.

You’re 30. You have an idea for a business. You don’t start because you’re “not ready.” You tell yourself you’ll do it when you have more savings, more skills, more certainty.

You’re 35. The idea is still there. So is the hesitation. You’re busier now. The excuse shifts to “I don’t have time.” You watch someone else launch something similar. It hurts, but you convince yourself they probably got lucky or had advantages you don’t have.

Here’s what you don’t say out loud: You spend 10 hours a week on Netflix, social media, and activities that don’t matter. You “don’t have time” the same way people “can’t afford” the gym membership but can afford daily coffee. Time isn’t the constraint. Courage is.

You’re 40. The idea has been with you for a decade. You’ve refined it in your head. You know exactly how you’d do it. But now the excuse is “I’m too old to start over” or “I have too many responsibilities.”

The truth you won’t admit: You’re more afraid now than you were at 30. Because now you have a decade of evidence that you’re someone who doesn’t follow through. And starting would mean confronting that identity.

You’re 50. You don’t think about the idea as much. It’s been relegated to the “could have been” section of your mind. But every time you see someone doing what you wanted to do, something twists in your chest.

And you tell yourself a story: “I had too many obligations.” “The timing was never right.” “I couldn’t afford to take the risk.”

But the real story is simpler: You were scared. And you let that fear make your decisions for you for 20 years.

You’re 60. The idea is a ghost. You tell younger people “I always wanted to do something like that” with a wistful tone that carries two decades of accumulated regret.

You’re 70. You’re talking to your grandkids. They ask about your life. You have the job history, the stability, the safety. But there’s an absence. A path not taken. A version of yourself that never got to exist.

I’ve had this exact conversation with dozens of people in their 40s, 50s, 60s. The details change but the pattern is always the same: an idea they saw clearly, a move they knew they should make, years of rational-sounding excuses, and now a quiet weight they carry everywhere.

Here’s the truth that should shatter your entire risk calculation: You’re not avoiding the bet. You’re already taking it.

Every day you don’t try is a bet. A bet that staying the same is better than trying and failing. A bet that comfort today is worth regret tomorrow. A bet that your fear is giving you accurate information about the future.

You’re betting on yourself either way. The only question is what version of yourself you’re betting on.

The one who tries and maybe fails? Or the one who never tries and definitely wonders?

That’s the real cost.

Not failure. Not embarrassment. Not lost time or money.

Permanent uncertainty about who you could have become. A life spent wondering “what if?”

And unlike a failed business or a rejected pitch or a product that didn’t sell, you can’t recover from that. You can’t A/B test it. You can’t iterate. You can’t go back and try.

The opportunity window closes. The market shifts. Your energy changes. Your circumstances evolve. And the version of your life where you found out what was possible disappears forever.

Those people - the ones in their 40s, 50s, and 60s - all say the same thing, in different words:

“I had this idea when I was younger. I knew what I wanted to build. I saw the opportunity clearly. But I was scared. I told myself it was too risky. I convinced myself I needed to be more prepared. Now I watch other people - often less capable people - doing exactly what I wanted to do. And I can’t stop thinking about what my life would look like if I had just tried.”

The regret isn’t just about the missed opportunity. It’s about not knowing who they could have been. It’s about a door that’s now permanently closed.

Here’s the mathematical truth nobody wants to confront:

The probability of regret from not trying approaches 100% over a sufficient time horizon.

The probability of catastrophic, unrecoverable failure from trying approaches 0%.

Let me unpack that.

If you don’t try:

  • You will never know if it could have worked (100% certainty of permanent uncertainty)
  • You will watch others succeed in spaces you wanted to enter (compounds regret)
  • You will accumulate “what if” thoughts that become more painful over time
  • You will reach a point where the opportunity is no longer available
  • You will face the reality that you let fear dictate your choices

All of those outcomes are guaranteed. That’s not risk. That’s certainty.

If you do try and it fails:

  • You learn what doesn’t work (valuable)
  • You develop skills from the attempt (transferable)
  • You build resilience and reduce future fear (compounds capability)
  • You have data for your next attempt (actionable)
  • You move forward without the weight of wondering

The “catastrophic failure” scenarios people imagine - complete financial ruin, total social ostracism, permanent career damage - almost never happen in reality.

What actually happens: You’re out some time and money. Your ego bruises. You feel disappointed. Then you recover and try something else.

That’s it.

You’re not avoiding risk by staying where you are. You’re just choosing a different, more painful, more permanent form of risk.

The risk of action: Temporary discomfort, recoverable losses, valuable learning.

The risk of inaction: Permanent regret, compounding opportunity cost, lifelong wondering.

When you frame it correctly, the choice is obvious.

This is what people mean when they say “the greatest risk is no risk at all.”

Not because safety is impossible. But because the risk of a life unlived - the risk of reaching the end and realizing you never let yourself try - is worse than any survivable failure you could experience along the way.

And here’s the cruel irony: the people who take the calculated bets, who try and sometimes fail, who build the tolerance for uncertainty - they’re the ones who end up with fewer regrets.

Not because everything they try works. But because they know. They have data. They don’t wonder. They tried, they learned, they moved forward.

The people who spend their lives avoiding risk? They’re the ones carrying the heaviest burden. The weight of all the unlived possibilities. All the versions of themselves they never let exist.

That’s not safety. That’s a different kind of prison.

IV - How To Actually Bet On Yourself (And Why You Won’t)

But before I give you the steps, let me tell you what’s about to happen.

You’re going to read these steps. You’re going to think “This is useful.” You might even screenshot them.

And then you’re going to close this newsletter and do exactly what you were doing before you opened it.

You’ll tell yourself “I’ll come back to this later.” You’ll save it in a folder labeled “Important.” You’ll add it to your ever-growing list of things you’re going to do “when you have time.”

But later never comes. And you know it.

Because this isn’t the first article you’ve read about taking action. This isn’t the first framework you’ve saved. This isn’t the first time you’ve felt motivated to start.

It’s just the latest iteration of a pattern you’ve been running for years: consume information about change, feel inspired, do nothing.

So let me be very clear about something:

These steps only work if you actually do them. And you’re not going to do them unless you admit something uncomfortable first.

You don’t have an information problem. You have a courage problem.

You don’t need more frameworks. You need to stop pretending that one more system will be the thing that finally makes you feel ready.

You’re not going to feel ready. Ever. The only way forward is to act before you feel ready.

So here’s what to actually do - not because you need it, but because if you’re going to ignore it like you’ve ignored every other framework, I want you to at least be conscious about what you’re choosing.

Start With The Smallest Asymmetric Bet

You don’t have to quit your job tomorrow. You don’t have to bet everything on one idea. You don’t have to make a dramatic, all-or-nothing move.

Start with the smallest bet that still has asymmetric upside.

The kind of bet where the downside is so small it’s almost negligible, but the upside could be transformative.

Publish one piece of writing online.

Downside: 2 hours to write and edit. Maybe nobody reads it. You feel slightly vulnerable.

Upside: It resonates with someone. They share it. It reaches people you don’t know. It opens a conversation. It demonstrates your thinking. It’s the first step toward building an audience.

Reach out to one person you admire.

Downside: 10 minutes to write a thoughtful email. They don’t respond. Your ego feels a small sting.

Upside: They respond. You have a conversation. They share an insight that shifts your entire approach. They introduce you to someone in their network. They become a mentor. They hire you.

Ship one small product or service.

Downside: 20-40 hours to build something simple. You put a landing page up. Nobody buys. You feel disappointed.

Upside: 10 people buy. You make $500. You prove you can create value people will pay for. You validate an idea. You have a foundation to build on. You gain confidence to create more.

Record one video or podcast episode.

Downside: 3 hours to plan, record, and publish. It gets 20 views. You cringe at the sound of your own voice.

Upside: 100 people watch. Some of them subscribe. One person shares it. It starts a conversation. You develop the skill. You realize you enjoy the medium. It becomes a weekly practice that builds into something meaningful over time.

Do you see the pattern?

The downside is so small - a few hours, temporary discomfort, maybe $100 - that it barely matters.

The upside is completely disproportionate - skills, connections, opportunities, proof of concept, momentum.

That’s asymmetry at the micro level.
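If you like seeing the math written down, here is a toy expected-value calculation in the spirit of the examples above. Every number in it is hypothetical - invented for illustration, not taken from the newsletter - but the shape of the result is the point: a mostly-losing bet with a capped downside and an uncapped upside can still have positive expected value, while "not betting" carries a certain cost.

```python
# Toy expected-value comparison of a small asymmetric bet vs. not betting.
# All probabilities and payoffs below are hypothetical illustrations.

def expected_value(outcomes):
    """Sum of probability * payoff over all (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Small bet: ship a simple product (payoffs in rough dollar-equivalents).
# 80% chance it flops: ~40 hours of effort lost, call it -$1,000.
# 18% chance of a modest win: sales plus skills, call it +$2,000.
#  2% chance it takes off: call it +$50,000.
bet = [(0.80, -1_000), (0.18, 2_000), (0.02, 50_000)]

# Not betting: $0 today, but a certain long-run regret/opportunity
# cost; call it -$500 in dollar-equivalent terms.
no_bet = [(1.00, -500)]

print(f"EV of taking the bet: {expected_value(bet):+,.0f}")   # +560
print(f"EV of not betting:    {expected_value(no_bet):+,.0f}")  # -500
```

Even though the bet loses 80% of the time, its expected value is positive, because the rare win is disproportionately large and the common loss is small and recoverable. Change the hypothetical numbers however you like; as long as the downside is capped and the upside is not, the asymmetry does the work.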

Before you take any of these bets, run this exercise. Tim Ferriss calls it “fear-setting” - a systematic way to define your fears instead of letting them define you.

Take the bet you’re considering. Write it at the top of a page. Then create three columns:

Column 1: Define - What’s the worst that could realistically happen? Be specific. Not vague anxiety, but concrete outcomes. “I waste 3 months and $500.” “Nobody reads it and I feel stupid.” Write down every fear.

Column 2: Prevent - For each fear, what could you do to reduce the likelihood? “Start with a smaller version - 1 month and $100.” “Share with a small group first.” Most fears can be mitigated with basic precautions.

Column 3: Repair - If the worst case happens, how would you recover? “I could save $500 in two months.” “I could use what I learned to try a different approach.” Be specific about the recovery path.

What you’ll discover when you write it out is that the worst case is: unlikely to happen, not that bad, and recoverable.

Most of your fears dissolve when you define them specifically.

The vague anxiety of “what if it doesn’t work” feels insurmountable. But “I’ll be out $500 and some time, and I can recover both in two months” is manageable.

That’s the difference between imagined catastrophe and actual risk.

Pick one small bet. The smallest one that scares you just enough to matter.

And do it in the next 7 days.

Not eventually. Not when you’re ready. In the next 7 days.

Do the thing. Ship the thing. Hit publish. Send the email. Launch the offer.

Then take another bet. Then another. Build the pattern.

Each bet de-risks the next one because you’re developing skills, building resilience, and accumulating proof that you can survive uncertainty.


Let me tell you what’s really happening right now.

You’re reading this newsletter. You’re nodding along. You’re thinking “This makes sense. The math is clear. I should take more asymmetric bets.”

And tomorrow, you’ll do nothing.

You’ll go back to your job. You’ll scroll social media. You’ll think about starting that project. You’ll tell yourself “soon.”

You’ll make the same calculation you’ve been making for years: The risk is too high. I’m not ready. I need more time.

And the days will stack into weeks. The weeks into months. The months into years.

And five years from now, you’ll be reading another article about asymmetric upside and thinking “This makes sense” and doing nothing.

Because here’s what you haven’t realized yet: understanding the math changes nothing.

You already knew most of this before you started reading. You already knew that the downside of trying is survivable. You already knew that regret is worse than failure. You already knew that smart people stay stuck not because they’re stupid but because they’re scared.

You didn’t need me to explain asymmetric upside. You needed me to give you permission to keep waiting.

And I’m not giving you that permission.

The asymmetry of upside makes bold moves mathematically smart. That’s not inspiration. That’s fact.

The downside of trying and failing is usually survivable. You recover. You learn. You move forward. That’s data.

The downside of never trying is guaranteed regret. You never know. You never build. You spend decades wondering what could have been. That’s certainty.

Bet on yourself more often. The expected value is higher than you think. That’s math.

But none of that matters if you don’t actually do the thing.

And you know what the thing is. You’ve known for months. Maybe years.

It’s the project you keep planning but never starting.

It’s the email you keep drafting but never sending.

It’s the business idea you keep refining but never launching.

It’s the content you keep researching but never creating.

It’s the bet you keep calculating but never taking.

You know exactly what it is. The thing that would change everything if you just did it. The thing that scares you precisely because it matters.

Here’s the uncomfortable truth you’ve been avoiding:

You’re not actually trying to make a smart decision. You’re trying to find a mathematical justification for your fear.

The spreadsheet is cover. The research is procrastination. The “one more course” is avoidance.

You don’t need more information. You don’t need better tools. You don’t need perfect conditions.

You need to stop lying to yourself about why you haven’t started.

And the biggest lie you tell yourself?

“I’ll do it when I’m ready.”

But here’s what “ready” actually means in your internal calculus: “When it doesn’t feel scary anymore. When I’m guaranteed not to fail. When I’m certain it will work.”

That moment never comes.

You will never feel ready. The fear will never fully go away. The uncertainty will never completely resolve.

The people who succeed at the things you want to do aren’t more ready than you. They’re not more talented. They’re not more prepared.

They just started before they felt ready. They took the bet while it still scared them. They understood that the feeling of readiness comes AFTER you start, not before.

So here’s the only question that matters:

Are you going to make the same excuses next year?

Because that’s the actual decision you’re making right now. Not “should I start this thing.” You already know you should.

The decision is: Am I the kind of person who lets fear make my decisions? Or am I the kind of person who runs the math and makes the bet?

You can’t be both.

The math is clear. The asymmetry is obvious. The only question is: what are you going to do about it?

Thanks for reading,

— Scott

\

Aravind Barla Rearchitects Enterprise Service Delivery for Over 100,000 Employees

2026-01-26 03:50:25

In the increasingly complex landscape of enterprise service management, the remarkable transformation led by Aravind Barla at a global Fortune 100 company stands as a compelling testament to visionary leadership and strategic technical execution. This next-generation Employee Experience Platform (EXP) implementation, serving over 100,000 employees across multiple continents, has redefined industry standards for service delivery integration and AI-driven automation while establishing new benchmarks for what's possible in large-scale digital transformation initiatives.

Inheriting a Fragmented Ecosystem

When Aravind Barla stepped into the technical leadership role for this ambitious project in early 2023, the organization was struggling with deeply entrenched challenges that had evolved over decades of technological evolution. Service delivery across IT, HR, and Workplace teams operated in isolated silos, creating inconsistent user experiences, redundant processes, and frustrating inefficiencies that affected every level of the organization. Employee satisfaction scores had reached concerning lows, with internal surveys revealing that employees spent an average of 5.2 hours per week navigating various service portals and tracking request statuses.

\ The financial impact of this fragmentation was equally troubling. The operational overhead of maintaining multiple disconnected systems was draining approximately $4.7 million in annual resources, while the productivity loss from inefficient service delivery was estimated at over $9 million yearly. Previous attempts to address these issues had resulted in only incremental improvements, with departmental boundaries and technical debt presenting seemingly insurmountable obstacles.

\ The complexity of unifying service delivery across an organization of this scale presented formidable challenges that had derailed similar initiatives. Two previous implementation attempts had stalled due to cross-departmental resistance, technical integration challenges, and the sheer complexity of standardizing processes across global operations. What the company needed was not just technical expertise, but a leader who could align diverse stakeholders around a unified vision while delivering a technically robust solution that would scale across the enterprise.

Architectural Vision and Implementation Strategy

At the heart of Aravind's approach was a commitment to fundamentally reimagine how employees interacted with enterprise services. Rather than pursuing incremental improvements to existing systems, he championed a comprehensive transformation built on ServiceNow's platform capabilities, enhanced by custom AI implementations, virtual agents, and intelligent workflow automation. His vision extended beyond technical architecture to encompass the entire employee journey – from initial need identification through to resolution and feedback.

\ Aravind Barla assembled an initial discovery team comprising business analysts, UX researchers, and service delivery leaders to conduct a thorough assessment of the current state. This six-week investigation revealed that employees were navigating an average of 12 different portals for various service needs, with inconsistent terminology, conflicting interfaces, and redundant data entry requirements creating significant friction. Armed with these insights, Aravind developed a comprehensive transformation roadmap that balanced quick wins with long-term structural improvements.

\ The cornerstone of this strategy was a unified intake layer that would serve as the foundation for enterprise service delivery. Aravind personally architected this sophisticated system, consolidating over 15 legacy intake points into a single, intuitive portal accessible across all devices and locations. He pioneered the implementation of natural language processing capabilities that could interpret employee requests regardless of the terminology used, effectively breaking down the language barriers between departments.

\ Perhaps most impressively, he introduced a revolutionary new taxonomy for service categorization that created a common language across previously siloed departments. This standardized approach to service classification required extensive collaboration with subject matter experts from each functional area, with Aravind facilitating numerous workshops to build consensus around the new framework. His diplomatic skills proved as valuable as his technical expertise during this critical phase, as he navigated competing priorities and entrenched departmental perspectives.

\ The technical implementation phase showcased Aravind Barla's exceptional leadership capabilities under pressure. He assembled and guided a cross-functional team of 18 developers, UX specialists, and business analysts, fostering a collaborative environment despite aggressive timelines and complex integration challenges. His hands-on approach to designing end-to-end workflow automation ensured that the theoretical architecture translated into practical, efficient solutions that would deliver measurable business value.

\ Throughout the development process, Aravind instituted weekly stakeholder reviews and bi-weekly user testing sessions, creating a feedback loop that allowed for continuous refinement. This iterative approach enabled the team to identify and address potential issues early, resulting in a solution that not only met technical requirements but truly resonated with end users. His insistence on maintaining this rigorous review process, despite timeline pressures, proved instrumental in delivering a platform that achieved exceptional adoption rates.

Breakthrough Results and Organizational Impact

The results of Aravind Barla's implementation were both immediate and substantial, exceeding expectations across all key performance indicators. Within the first quarter of launch, the platform had achieved a remarkable 70% increase in self-service adoption – a metric that directly translated to significant operational efficiencies and enhanced user satisfaction. Average resolution times for employee requests plummeted from 72 hours to under 24 hours, representing a transformative improvement in service delivery speed and quality.

\

The financial impact was equally impressive and provided clear validation of the investment in this initiative. By automating more than 65% of Tier-1 service requests through intelligent virtual agents and workflow automation, the platform enabled annual cost savings of $3.2 million in operational overhead. This achievement alone would have justified the implementation, but the qualitative improvements were perhaps even more significant in terms of organizational impact.

\ Employee satisfaction scores rose by 40% within the first quarter, creating a measurable improvement in workforce experience across the global organization. This improvement was particularly notable among remote and hybrid workers, who reported feeling more connected to organizational support systems through the unified platform. The implementation also yielded unexpected benefits in terms of data visibility and analytics, providing leadership with unprecedented insights into service demand patterns and operational bottlenecks.

\ The platform's success during a period of significant organizational change further demonstrated its resilience and value. During a major acquisition that brought 12,000 new employees into the organization, Aravind Barla's system seamlessly expanded to accommodate the increased load and new service requirements, maintaining performance standards throughout the integration process. This scalability validated his architectural decisions and reinforced the strategic value of the implementation.

Industry Recognition and Career Advancement

The exceptional performance and innovative approach demonstrated by Aravind quickly garnered attention both within and beyond the organization. His implementation became the subject of an in-depth industry case study highlighting best practices in enterprise service management and AI-driven automation. A prominent technology publication featured the project in a press article showcasing innovative applications of AI in employee services, positioning both Aravind and the organization as thought leaders in digital workplace transformation.

\ At technology conferences across North America, the implementation was referenced as a model example of successful enterprise service integration, with several competitors subsequently launching similar initiatives based on the framework Aravind had pioneered. This industry recognition not only elevated the organization's reputation for innovation but created valuable recruitment advantages in attracting top technical talent.

\ Within the organization, Aravind's leadership earned him a promotion to technical lead for enterprise automation strategy – a role that expanded his influence across the company's global technology initiatives and positioned him as a key advisor to the CIO on future digital transformation efforts. His team members also benefited from their association with the successful project, with several receiving promotions and recognition for their contributions.

\ Most significantly, the framework Aravind developed has since been adapted by multiple business units worldwide, creating a lasting legacy that extends far beyond the initial implementation. The modular architecture he designed has proven flexible enough to accommodate regional variations while maintaining the core benefits of standardization and integration, allowing for global scaling without sacrificing local relevance.

A Blueprint for Enterprise Transformation

For Aravind Barla, this project represented more than just a successful implementation – it marked his evolution from technical practitioner to thought leader and innovator in the enterprise automation space. The approach he pioneered demonstrated how thoughtful architecture, combined with strategic leadership, could overcome entrenched challenges that had previously seemed insurmountable in large-scale enterprise environments.

\ The success validated Aravind Barla's fundamental philosophy toward technology implementation: that truly transformative solutions must balance technical excellence with deep understanding of human experience. Throughout the project, he maintained a dual focus on system optimization and user-centered design, recognizing that even the most sophisticated automation would fail without genuine adoption by employees at all levels of the organization.

\ His commitment to measuring impact through both quantitative and qualitative metrics established a new standard for evaluating technology investments within the organization. By consistently connecting technical decisions to tangible business outcomes, Aravind changed the conversation around enterprise technology from cost center to value driver, securing executive support for continued innovation in the employee experience space.

\ The impact of Aravind's work continues to resonate throughout the organization and beyond its boundaries. By creating a blueprint for intelligent service integration, he has influenced how enterprises approach employee experience design and service delivery automation. His success validates a fundamental principle: that technology implementations, when guided by clear vision and executed with technical excellence, can transform not just systems but the everyday experiences of thousands of employees.

\

As organizations worldwide continue to prioritize digital employee experience in an increasingly distributed work environment, Aravind Barla's implementation stands as a powerful example of how leadership excellence can drive transformative outcomes in enterprise technology. The ripple effects of his innovation continue to expand, inspiring a new generation of technical leaders to pursue similarly ambitious visions for workplace transformation.

About Aravind Barla

Aravind Barla is a distinguished leader in enterprise technology innovation, specializing in the intersection of automation, AI, and employee experience solutions. Based in Dublin, California, Aravind combines deep technical expertise with strategic vision to deliver transformative implementations that redefine how organizations approach service delivery and digital employee experience. With dual Master's degrees in Computer Science and Information Technology & Management from prestigious institutions, he brings a unique blend of technical proficiency and business acumen to complex enterprise challenges.

\ Throughout his career, Aravind has been recognized for his ability to bridge the gap between advanced technology capabilities and practical business applications. His technical expertise spans ServiceNow platform architecture, AI/ML implementation, workflow automation, and integration of disparate enterprise systems. Beyond his technical skills, Aravind has demonstrated exceptional leadership in building high-performing teams and guiding organizations through complex digital transformations.

\ His career has been defined by a commitment to creating systems that not only enhance operational efficiency but fundamentally improve how people experience work in the digital age. Through his leadership on high-impact initiatives for Fortune-ranked organizations, Aravind has established himself as a thought leader and innovator in the enterprise automation space, consistently driving strategic transformation that delivers measurable business outcomes while enhancing human experiences.

\ Aravind Barla regularly contributes to industry conversations through speaking engagements, technical publications, and participation in professional communities focused on the future of work and enterprise technology. His forward-thinking approach to technology implementation continues to influence how organizations conceptualize and execute digital workplace strategies in an increasingly complex business environment.

\

:::tip This story was distributed as a release by Sanya Kapoor under HackerNoon’s Business Blogging Program.

:::

\