2026-04-16 05:58:29
Backtesting is crucial in developing quantitative trading strategies, offering a data-driven way to validate ideas and evaluate performance under historical market conditions. However, many underestimate the complexity of replicating real execution, often ignoring impacts like slippage, bid-ask spread, and trading costs. These omissions can inflate performance metrics and reduce robustness when moving from simulation to live trading.
Including execution frictions and market microstructure effects ensures backtest results reflect realistic, practical strategies rather than idealized models. The most important components, which we often overlook and which we are going to analyze in this article, are commissions, the bid-ask spread, and slippage.
In this article, we will use Python to backtest a relatively simple strategy, experiment with the above components, and finally present and discuss the results of our backtesting.
First, we will perform our imports and provide the functions to retrieve the prices using EODHD’s Historical Intraday API.
from typing import Dict, List, Any, Callable
import pandas as pd
import itertools
from datetime import datetime, timedelta
import requests
import numpy as np
from itertools import product
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
token = 'YOUR EODHD API KEY'
def get_prices(symbol: str, from_date: str, to_date: str) -> pd.DataFrame:
    url = f'https://eodhd.com/api/intraday/{symbol}'
    from_ts = int(pd.to_datetime(from_date, utc=True).timestamp())
    to_ts = int(pd.to_datetime(to_date, utc=True).timestamp())
    params = {'api_token': token, 'fmt': 'json', 'interval': '5m',
              'from': from_ts, 'to': to_ts}
    resp = requests.get(url, params=params)
    df = pd.DataFrame(resp.json())
    if df.empty:
        return pd.DataFrame(columns=['open', 'low', 'high', 'close', 'volume'])
    df['datetime'] = pd.to_datetime(df['datetime'])
    df.sort_values(by='datetime', inplace=True)
    df.set_index('datetime', inplace=True)
    df = df[['open', 'low', 'high', 'close', 'volume']]
    return df
def get_prices_day_by_day(symbol: str, start_date: str, end_date: str) -> pd.DataFrame:
    start_dt = pd.to_datetime(start_date).normalize()
    end_dt = pd.to_datetime(end_date).normalize()
    all_prices = []
    current_dt = start_dt
    while current_dt <= end_dt:
        from_str = current_dt.strftime('%Y-%m-%d')
        day_df = get_prices(symbol, from_str, end_date)
        if day_df.empty:
            break
        day_df = day_df[day_df.index.normalize() <= end_dt]
        day_df = day_df[day_df.index.normalize() >= current_dt]
        if not day_df.empty:
            all_prices.append(day_df)
        max_dt = day_df.index.max().normalize() if not day_df.empty else None
        if max_dt is None or max_dt <= current_dt:
            break
        current_dt = max_dt + timedelta(days=1)
    if not all_prices:
        return pd.DataFrame(columns=['open', 'low', 'high', 'close', 'volume'])
    df = pd.concat(all_prices)
    df = df[~df.index.duplicated(keep='first')]
    df.sort_index(inplace=True)
    return df
df_1min = get_prices_day_by_day("AAPL.US", "2025-01-01", "2026-01-01")
df_1min
Note: Replace YOUR EODHD API KEY with your actual EODHD API key. If you don’t have one, you can obtain it by opening an EODHD developer account.
You will notice that after our imports, we define two separate functions.
get_prices: fetches the 5-minute candles of a stock for a specific period using the intraday price data API.
get_prices_day_by_day: loops over the days between a defined start and end date and collects all the results into a single dataframe.
The final dataframe should look like this:

The reason we retrieve the data this way is that historical data APIs typically cap the number of rows returned per request. In our case, we will backtest the whole of 2025, which means roughly 20K five-minute candles, far more than a single call can return. That is why we page through the period with successive calls.
The other reason for using the 5-minute timeframe is that we will treat it as a live price feed. Our strategy, however, will operate on the 4-hour timeframe, so we will use pandas' resample function to derive the 4H bars as well.
resampled = df_1min.resample('4H').agg({
    'open': 'first',
    'high': 'max',
    'low': 'min',
    'close': 'last',
    'volume': 'sum'
}).dropna()

resampled['ma_fast_4h'] = resampled['close'].rolling(window=10, min_periods=1).mean()
resampled['ma_slow_4h'] = resampled['close'].rolling(window=50, min_periods=1).mean()

close_4h = resampled['close']
volume_4h = resampled['volume']
ma_fast_4h = resampled['ma_fast_4h']
ma_slow_4h = resampled['ma_slow_4h']

# Forward-fill the 4-hour values onto the 5-minute index
df_1min['close_4h'] = close_4h.reindex(df_1min.index, method='ffill')
df_1min['volume_4h'] = volume_4h.reindex(df_1min.index, method='ffill')
df_1min['MA_fast'] = ma_fast_4h.reindex(df_1min.index, method='ffill')
df_1min['MA_slow'] = ma_slow_4h.reindex(df_1min.index, method='ffill')

df_1min['signal'] = np.where(df_1min['MA_fast'] > df_1min['MA_slow'], 1,
                             np.where(df_1min['MA_fast'] < df_1min['MA_slow'], -1, 0))
df_1min
In order to keep everything clean and easy to follow, we will keep all the data we need in the initial 5-minute dataframe. Before we explain the code, let’s talk about the strategy.
It will be a simple crossover strategy with two moving averages, a fast one (period 10) and a slow one (period 50). When the fast is above the slow, we consider the market to be uptrending and go long. In the opposite scenario, we go short.
Now, let’s explain the code: we resample the 5-minute candles into 4-hour bars, compute the two moving averages on the 4-hour closes, forward-fill those values back onto the 5-minute index, and derive the signal from the moving-average comparison.
Our dataframe should look like the following:

As you will notice, we still have the 5-minute candles plus the extra 4-hour columns. That is why the 4-hour columns look repetitive: their values only change every 4 hours.
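The repetition comes from the forward-fill alignment. A minimal, self-contained sketch with synthetic timestamps and values (not the AAPL data) shows the behavior of reindexing with forward-fill:

```python
import pandas as pd

# Synthetic example: two "coarse" closes forward-filled onto a 5-minute index
fine_idx = pd.date_range('2025-01-02 09:30', periods=6, freq='5min')
coarse = pd.Series([100.0, 101.0],
                   index=pd.to_datetime(['2025-01-02 09:30',
                                         '2025-01-02 09:45']),
                   name='close_4h')

# reindex(..., method='ffill') repeats each coarse value until the next one
aligned = coarse.reindex(fine_idx, method='ffill')
print(aligned.tolist())   # [100.0, 100.0, 100.0, 101.0, 101.0, 101.0]
```

Each fine-grained row simply carries the last known coarse value, which is exactly why the 4-hour columns repeat within each bar.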
Now that we have our data to run our strategy, it is time to develop the actual backtesting engine.
def backtest(df, initial_capital=100000, commission_pct=0.001,
             spread_pct=0.0005, slippage_base_pct=0.0005):
    capital = initial_capital
    position = 0.0
    entry_price = 0.0
    entry_time = None
    trades = []

    def trade_costs(price, size):
        return abs(size * price * commission_pct)

    def adjust_price(price, direction, volume_4h, spread_pct, slippage_base_pct):
        # Volume-dependent slippage: thin 4-hour bars slip more, capped at 0.5%
        vol_norm = max(volume_4h / 20_000_000, 0.1)
        slippage_pct = slippage_base_pct / vol_norm
        slippage_pct = min(slippage_pct, 0.005)
        half_spread = price * spread_pct / 2
        slippage = price * slippage_pct
        # Buy high (pay the ask), sell low (receive the bid)
        if direction > 0:   # Buy / long entry
            return price + half_spread + slippage
        else:               # Sell / short entry or long exit
            return price - half_spread - slippage

    sig = 0
    for idx, row in df.iterrows():
        prev_sig = sig
        sig = row['signal']
        price_open = row['open']

        # Exit when the signal flips against the open position
        if position != 0 and sig * position < 0:
            direction_out = -1 if position > 0 else 1
            adj_exit = adjust_price(row['close'], direction_out, row['volume_4h'],
                                    spread_pct, slippage_base_pct)
            pnl_gross = position * (adj_exit - entry_price)
            costs = trade_costs(adj_exit, abs(position))
            pnl_net = pnl_gross - costs
            capital += pnl_net
            trades.append({
                'entry_time': entry_time, 'exit_time': idx,
                'entry_price': entry_price, 'exit_price': adj_exit,
                'position_size': position, 'pnl_gross': pnl_gross,
                'costs': costs, 'pnl_net': pnl_net, 'capital_after': capital
            })
            position = 0

        # Enter a new position when the signal changes to a non-zero value
        if position == 0 and sig != prev_sig and sig != 0:
            direction = sig
            adj_entry = adjust_price(price_open, direction, row['volume_4h'],
                                     spread_pct, slippage_base_pct)
            position_size = capital / adj_entry  # Fixed sizing: full capital
            entry_costs = trade_costs(adj_entry, abs(position_size))
            if capital > entry_costs:
                position = position_size * direction
                entry_price = adj_entry
                entry_time = idx
                capital -= entry_costs

    # Close any position still open at the end of the data
    if position != 0:
        last_row = df.iloc[-1]
        direction_out = -1 if position > 0 else 1
        adj_exit = adjust_price(last_row['close'], direction_out, last_row['volume_4h'],
                                spread_pct, slippage_base_pct)
        pnl_gross = position * (adj_exit - entry_price)
        costs = trade_costs(adj_exit, abs(position))
        pnl_net = pnl_gross - costs
        capital += pnl_net
        trades.append({
            'entry_time': entry_time, 'exit_time': df.index[-1],
            'entry_price': entry_price, 'exit_price': adj_exit,
            'position_size': position, 'pnl_gross': pnl_gross,
            'costs': costs, 'pnl_net': pnl_net, 'capital_after': capital
        })

    trades_df = pd.DataFrame(trades)

    # Summary metrics
    total_return = (capital - initial_capital) / initial_capital * 100
    num_trades = len(trades_df)
    win_rate = (trades_df['pnl_net'] > 0).mean() * 100 if num_trades > 0 else 0
    avg_win = trades_df[trades_df['pnl_net'] > 0]['pnl_net'].mean() if win_rate > 0 else 0
    avg_loss = trades_df[trades_df['pnl_net'] < 0]['pnl_net'].mean() if (100 - win_rate) > 0 else 0
    profit_factor = (abs(avg_win * (win_rate / 100) / (abs(avg_loss) * ((100 - win_rate) / 100)))
                     if avg_loss != 0 else float('inf'))
    metrics = {
        'final_capital': capital,
        'total_return_pct': total_return,
        'num_trades': num_trades,
        'win_rate_pct': win_rate,
        'avg_win': avg_win,
        'avg_loss': avg_loss,
        'profit_factor': profit_factor,
        # Note: per-trade 'costs' records the exit-side commission;
        # entry commissions were deducted from capital directly.
        'total_costs': trades_df['costs'].sum()
    }
    return trades_df, metrics
As you will notice, the function takes our initial capital and the percentages of the components we discussed earlier: commission, spread, and slippage.
The code walks through the dataframe row by row, checks whether a position is already open, and depending on the signal it opens a trade, closes one, or does nothing if nothing has changed.
In the end, it returns a detailed dataframe with the trades, plus some summary metrics of the strategy, such as total return, number of trades, win rate, and so on.
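To make the cost model concrete, here is a small standalone recomputation of the engine's price adjustment, using the same arithmetic as adjust_price above; the price, volume, and cost levels here are illustrative, not taken from the AAPL run:

```python
# Worked example of the fill-price adjustment (illustrative numbers)
price = 200.0               # quoted price
spread_pct = 0.0002         # 0.02% full spread
slippage_base_pct = 0.0001  # base slippage before the volume scaling
volume_4h = 10_000_000      # 4-hour bar volume

vol_norm = max(volume_4h / 20_000_000, 0.1)              # 0.5: a thinner bar
slippage_pct = min(slippage_base_pct / vol_norm, 0.005)  # scaled up to 0.02%
half_spread = price * spread_pct / 2                     # $0.02 per share
slippage = price * slippage_pct                          # $0.04 per share

buy_fill = price + half_spread + slippage    # a long entry pays up
sell_fill = price - half_spread - slippage   # an exit/short receives less
print(round(buy_fill, 2), round(sell_fill, 2))   # 200.06 199.94
```

On a thin bar, the normalized volume falls below 1, inflating slippage; on a very liquid bar it shrinks toward the base rate, with a hard cap at 0.5%.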
Let’s run it with some basic values.
trades_df, metrics = backtest(
df_1min,
initial_capital=50000,
commission_pct=0.0005,
spread_pct=0.0002,
slippage_base_pct=0.0001
)
metrics
The metrics returned are the following:
{'final_capital': np.float64(33776.36077208968),
'total_return_pct': np.float64(-32.44727845582063),
'num_trades': 29,
'win_rate_pct': np.float64(37.93103448275862),
'avg_win': np.float64(1132.900075678812),
'avg_loss': np.float64(-1560.2231972177665),
'profit_factor': np.float64(0.44373639954880745),
'total_costs': np.float64(602.5884081034602)}
We see that our strategy took 29 trades, lost money with a return of about minus 32%, and accumulated roughly $602 in costs. We should clarify that the strategy is clearly not a winning one; the aim of this article is to demonstrate the impact of costs on a strategy, not to showcase a profitable one.
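As a quick sanity check, the reported profit factor can be reproduced from the win rate and average win/loss, with the values copied from the output above:

```python
# Reproduce profit_factor from the metrics printed above
win_rate = 37.93103448275862 / 100   # 11 winners out of 29 trades
avg_win = 1132.900075678812
avg_loss = -1560.2231972177665

profit_factor = (avg_win * win_rate) / (abs(avg_loss) * (1 - win_rate))
print(round(profit_factor, 4))   # 0.4437
```

A profit factor below 1 confirms that, on average, the winners do not pay for the losers.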
Now is the time to run this backtesting engine with numerous possible cost combinations.
param_grid = {
    'commission_pct': [0.0, 0.0005, 0.001, 0.002],
    'spread_pct': [0.0, 0.0002, 0.0005, 0.001],
    'slippage_base_pct': [0.0, 0.0002, 0.0005, 0.001]
}

# Generate all combinations
scenarios = list(itertools.product(
    param_grid['commission_pct'],
    param_grid['spread_pct'],
    param_grid['slippage_base_pct']
))
We create a parameter grid with costs starting from zero (to establish a baseline for comparison). Note that numbers such as a 0.2% commission or a 0.1% spread and slippage do exist in the market, so every scenario is plausible depending on your broker.
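Since each of the three components has four levels, the grid expands to 4³ = 64 scenarios; a quick standalone check of the combination count:

```python
# Sanity check on the grid size: 4 levels per cost component -> 4**3 scenarios
import itertools

levels = [
    [0.0, 0.0005, 0.001, 0.002],    # commission_pct
    [0.0, 0.0002, 0.0005, 0.001],   # spread_pct
    [0.0, 0.0002, 0.0005, 0.001],   # slippage_base_pct
]
combos = list(itertools.product(*levels))
print(len(combos))   # 64
```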
Now it is time to run the backtesting of all the scenarios:
results_list = []
for i, (comm_pct, spread_pct, slip_pct) in enumerate(scenarios):
    trades_df, metrics = backtest(
        df_1min,
        initial_capital=10000,
        commission_pct=comm_pct,
        spread_pct=spread_pct,
        slippage_base_pct=slip_pct
    )
    scenario_metrics = {
        'commission_pct': comm_pct,
        'spread_pct': spread_pct,
        'slippage_base_pct': slip_pct,
        **metrics
    }
    results_list.append(scenario_metrics)
    print(f"#{i+1:2d}: C{comm_pct:.4f} S{spread_pct:.4f} Slip{slip_pct:.4f} → "
          f"{metrics['total_return_pct']:+6.1f}% ({metrics['num_trades']:2d} trades)")

metrics_df = pd.DataFrame(results_list)
metrics_df
The code runs all the scenarios one by one, stores each scenario's metrics in a list, and finally converts the list into a dataframe.

The initial observation is that the trade count remains identical across all runs. This makes sense: the only way costs could change the number of trades is by altering the signals, and the signals here are computed from prices alone.
Let’s first compare the zero-cost scenario with a scenario whose costs sit around the middle of our range.
zero_cost = metrics_df[(metrics_df['commission_pct'] == 0) &
                       (metrics_df['spread_pct'] == 0) &
                       (metrics_df['slippage_base_pct'] == 0)]
realistic = metrics_df[(metrics_df['commission_pct'] == 0.001) &
                       (metrics_df['spread_pct'] == 0.0005) &
                       (metrics_df['slippage_base_pct'] == 0.0005)]

print(f"Gross edge: {zero_cost['total_return_pct'].iloc[0]:+.1f}%")
print(f"Net realistic: {realistic['total_return_pct'].iloc[0]:+.1f}%")
print(f"Cost drag: {zero_cost['total_return_pct'].iloc[0] - realistic['total_return_pct'].iloc[0]:+.1f}%")
This will return:
Gross edge: -28.5%
Net realistic: -40.5%
Cost drag: +12.0%
This indicates that when realistic levels of commission (0.1%), spread (0.05%), and slippage (0.05%) are included, performance declines from -28.5% to -40.5%, a drag of 12.0 percentage points. Even for a losing strategy, execution frictions worsen results and erode capital faster than a frictionless PnL calculation would suggest.
Let’s now create three visualizations from the backtest results: a scatter plot showing how higher total costs are associated with lower returns, a line chart of the decline as commission rates increase, and a bar chart comparing average returns at different friction levels (ideal, retail, worst).
df = metrics_df.copy()
df['total_friction'] = df['commission_pct'] + df['spread_pct'] + df['slippage_base_pct']

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

# 1. Cost drag: scatter of total costs vs net return
axes[0].scatter(df['total_costs'], df['total_return_pct'])
axes[0].set_xlabel('Total Costs ($)'); axes[0].set_ylabel('Net Return (%)')
axes[0].set_title('Higher Costs = Lower Returns')

# 2. Commission sensitivity
comm_means = df.groupby('commission_pct')['total_return_pct'].mean()
axes[1].plot(comm_means.index * 100, comm_means.values, marker='o')
axes[1].set_xlabel('Commission (%)'); axes[1].set_title('Commission Impact')

# 3. Friction buckets
friction_bins = ['Ideal (0%)', 'Retail (0.2%)', 'Worst (0.3%)']
friction_means = df.groupby(pd.cut(df['total_friction'],
                                   bins=[0, 0.001, 0.002, 1],
                                   labels=friction_bins))['total_return_pct'].mean()
axes[2].bar(friction_bins, friction_means.values)
axes[2].set_ylabel('Net Return (%)'); axes[2].set_title('Friction Impact')

plt.tight_layout()
plt.savefig('cost_impact_charts.png')
plt.show()

The charts clearly show an inverse relationship between execution costs and strategy performance. Even small costs greatly increase losses for this marginal strategy, with commission displaying a strong linear sensitivity. Total friction buckets indicate that “retail” levels (0.2%) can reduce returns by over five percentage points compared to optimal conditions.
We will now determine how sensitive returns are to each cost component via a linear regression across all scenarios: we convert the cost fractions to percentages, fit a model that predicts total return from commission, spread, and slippage, and read each cost's marginal impact per percentage point off the coefficients.
# Scale costs to percent in new columns (avoid overwriting the originals)
metrics_df['comm_scaled'] = metrics_df['commission_pct'] * 100
metrics_df['spread_scaled'] = metrics_df['spread_pct'] * 100
metrics_df['slip_scaled'] = metrics_df['slippage_base_pct'] * 100

X = metrics_df[['comm_scaled', 'spread_scaled', 'slip_scaled']]
y = metrics_df['total_return_pct']
model = LinearRegression().fit(X, y)

coefs = pd.DataFrame({
    'Cost': ['Commission', 'Spread', 'Slippage'],
    'Impact_per_%': model.coef_
})
print(coefs.sort_values('Impact_per_%'))
And the results are:
Cost Impact_per_%
2 Slippage -97.184237
0 Commission -36.458293
1 Spread -18.401344
The regression shows that slippage has the greatest damaging effect, reducing returns by 97.2 percentage points per 1% of slippage, followed by commissions (-36.5) and spreads (-18.4). This ordering highlights that slippage dominates in low-liquidity conditions, illustrating why execution quality often matters more than headline broker fees for systematic strategies.
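To read the coefficients, we can estimate the marginal drag implied by a given cost mix. The coefficient values below are copied from the regression output above; since the intercept is omitted, this is a relative figure under the linear fit, not an absolute return:

```python
# Marginal return drag implied by the fitted coefficients
# (impact per percentage point of each cost, from the output above)
coef = {'commission': -36.458293, 'spread': -18.401344, 'slippage': -97.184237}

# Example mix: 0.1% commission, 0.05% spread, 0.05% slippage
drag = (coef['commission'] * 0.10
        + coef['spread'] * 0.05
        + coef['slippage'] * 0.05)
print(round(drag, 1))   # -9.4
```

The linear model estimates roughly a 9-point drag for this mix, in the same ballpark as the 12-point drag we measured directly for the "realistic" scenario; the gap is the approximation error of the linear fit.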
The next step is to estimate each cost component's real-world drag by comparing average returns when only that cost is active (the other two zeroed out) against the zero-cost baseline, ranking their practical impact across the tested ranges.
# Zero-cost baseline (all three components at 0)
base_return = metrics_df[(metrics_df['commission_pct'] == 0.0) &
                         (metrics_df['spread_pct'] == 0.0) &
                         (metrics_df['slippage_base_pct'] == 0.0)]['total_return_pct'].iloc[0]

# Average drag per cost type, isolating each one (other two zeroed out)
comm_drag = metrics_df[(metrics_df['spread_pct'] == 0.0) &
                       (metrics_df['slippage_base_pct'] == 0.0)]['total_return_pct'].mean() - base_return
spread_drag = metrics_df[(metrics_df['commission_pct'] == 0.0) &
                         (metrics_df['slippage_base_pct'] == 0.0)]['total_return_pct'].mean() - base_return
slip_drag = metrics_df[(metrics_df['commission_pct'] == 0.0) &
                       (metrics_df['spread_pct'] == 0.0)]['total_return_pct'].mean() - base_return

print(f"""
REAL COST IMPACT RANKING:
Spread {metrics_df['spread_pct'].max()*100:.1f}% → {spread_drag:.1f}% drag
Commission {metrics_df['commission_pct'].max()*100:.1f}% → {comm_drag:.1f}% drag
Slippage {metrics_df['slippage_base_pct'].max()*100:.1f}% → {slip_drag:.1f}% drag
""")
With the following results:
REAL COST IMPACT RANKING:
Spread 10.0% → -6.1% drag
Commission 0.2% → -8.5% drag
Slippage 0.1% → -6.1% drag
Commissions stand out as the main factor, reducing performance by about 8.5 percentage points despite a modest maximum rate of 0.2%, while spread and slippage each contribute roughly a 6.1-point drag. This shows that commissions can have a disproportionately large effect, highlighting broker selection as a key leverage point for systematic strategies.
The key takeaways from the above analysis can be summarized as follows:
In this article, we built a complete backtesting pipeline from the ground up. We extracted intraday price data for AAPL using EODHD’s Historical Intraday API, resampled it to a 4-hour timeframe, and implemented a simple moving average crossover strategy.
We then stress-tested that strategy across 64 cost scenarios, varying commission, bid-ask spread, and slippage to quantify their individual and combined drag on performance. The results were clear: realistic frictions turned an already marginal strategy into a significantly worse one, with cost drag alone accounting for over 12 percentage points of additional loss.
There is plenty of room to extend this framework further. A natural next step is to incorporate position-sizing constraints and market-impact models that scale slippage dynamically with order size relative to average daily volume. You could also introduce walk-forward validation to avoid overfitting cost assumptions to a single period, or expand the asset universe to compare execution frictions across equities, ETFs, and futures. Adding financing costs for overnight positions and borrowing costs for short trades would make the model even more representative of live trading conditions.
Ultimately, the lesson here is not specific to any one strategy. Every systematic trader faces the same fundamental challenge: the gap between gross edge and net reality. Execution costs are not a footnote; they are a defining variable in whether a strategy survives production. Model them early, model them honestly, and let that discipline carry through every stage of your development process.
2026-04-16 05:42:34
I recall some basic principles regarding the relationship between a Layer 1 blockchain and a Layer 2 rollup. I take Arbitrum as a concrete example because it is a case I use very frequently for demonstration at Invarians. Arbitrum anchors the finality of its transactions on Ethereum, a Layer 1 blockchain.
2026-04-16 04:57:32
It was a Tuesday, around 2:30 PM. I was in the middle of migrating a .NET MVC app to a CQRS model at the company I was working at when I heard the familiar ping from my personal email.

I alt-tabbed. A recruiter. From company X.
Oh lord. Alright, chill. Probably a rejection. I read it again. It wasn’t.
The job description was perfect. The exact tech stack I wanted. The kind of compensation that meant I could finally stop eating generic-brand cereal. My stomach did that weird drop-and-flip thing, terror disguised as excitement.
I replied, confirmed a time for the technical screen, and then just sat there staring at the migration work I was supposed to be doing.
I was done for the day. My brain switched instantly into panic mode. I needed to study.
I opened Chrome and went straight to LeetCode. Pavlovian at this point.
I started grinding dynamic programming problems, my weakest area. Twenty minutes in, I was ready to quit engineering entirely and become a goose farmer. I caught myself staring at some “optimal solution” for a substructure problem I knew I would never, ever use in real life as a software developer.

Then I had what I thought was a smart idea.
Why not see what the company actually asks?
I clicked on the “Company” tab. Locked. Paywall.
Thirty-five dollars a month just to see what problems a company is asking right now. It felt off. It felt like a scam. Like they were holding the candidate’s success hostage.
You know what, let’s try something else.
I opened a new terminal, fired up Python, and spent the rest of the afternoon in VS Code.

Over the next few days, while I was supposed to be working, I scraped raw interview reports. I parsed over 1,500 recent technical screens across nearly 500 companies. I wrote scripts to categorize topics, analyze difficulty levels, and compute actual statistical frequencies.
And when the raw JSON finally started turning into real patterns, it hit me:
The whole “just grind LeetCode” advice is statistically stupid.
The interview process is a black box, but black boxes still have patterns.
I used to think FAANG was synonymous with “impossible.” But the data on Apple tells a different story.
Only 15% of their problems are rated Hard. A massive 62% are Medium. They are mostly testing solid fundamentals.
But there’s a quirk, an algorithmic quirk. Apple has an obsession. They test Binary Search Trees at 4.5 times the global average, the highest multiplier in my entire dataset for any major tech company. If you’re grinding obscure Hard DP problems for Apple, you’re preparing for the wrong fight. The data says you’re far more likely to pass by mastering tree traversals than by memorizing niche tricks.
Netflix is just as weird. They have a 10.7× multiplier on bucket sort. Let that sink in. About 8.3% of their problems involve bucket sort, compared to just 0.8% globally. If you haven’t mastered that one specific algorithm, you’re exposed.

Graphs aren’t your optimal study topic if you’re preparing for Yelp. The numbers show that 92.9% of their interview problems revolve entirely around math, string manipulation, and raw logic. They basically do not ask traditional data structure questions. Accenture is the same: 65% math and logic. If you are grinding Dijkstra for them, you’re wasting your time.
And then there’s the real boss.
Not Google.
Sprinklr.
\ Their interview process is brutal. Nearly half of their problems (47.4%) are rated Hard. They focus on unconventional logic puzzles and rarely ask Easy questions. They’re the actual endgame.
I didn’t build this scraper just for myself.
I turned the dataset, the topic multipliers, the frequency breakdowns, and the company-specific patterns for 458 companies into a searchable frontend.
It’s all online. Free. No login. No paywall. https://crackr.dev/companies
Stop studying blindly. Understand the statistical bias of your interviewer before you take the call.
2026-04-16 03:09:17
Risk adjustment software is a multi-billion-dollar market, and it’s still growing. That growth has attracted dozens of vendors, all claiming AI-powered accuracy and audit readiness. Most of those claims are noise.
What’s changed in 2026 isn't technology. It’s the consequences. The Department of Justice collected $117.7 million from Aetna in March for submitting unsupported diagnosis codes and failing to remove them. Kaiser Permanente settled for roughly $1 billion on similar allegations. OIG audits published that same month found error rates between 81% and 91% across three Medicare Advantage organizations. The government isn’t warning anymore. It’s collecting.
If you’re evaluating risk adjustment software right now, the question isn’t “which tool finds the most codes?” It’s “which tool helps me prove every code I submit?”
This article breaks down what risk adjustment software does, what capabilities actually matter in the current enforcement climate, and where the category is headed.
Risk adjustment software analyzes clinical documentation, claims data, and patient records to identify, validate, and manage diagnosis codes used for reimbursement in value-based care programs. In Medicare Advantage alone, CMS pays private insurers over $530 billion annually, with payments adjusted up or down based on how sick each member is. The accuracy of those risk scores determines whether a health plan gets paid fairly, overpaid, or underpaid.
The core workflow looks like this: ingest medical records and claims data, use AI to identify diagnosis codes (specifically HCC codes, or Hierarchical Condition Categories), validate those codes against clinical evidence in the documentation, and present results to human coders for final review and submission.
Good software does this faster and more accurately than manual processes. Great software does it in a way that survives an audit.
For years, risk adjustment was treated as a revenue function. Health plans hired vendors to mine charts, find missed diagnoses, and add codes that increased their Risk Adjustment Factor (RAF) scores. The more codes you added, the more CMS paid. Simple math, and it worked for a long time.
That model is now a liability.
The DOJ’s case against Aetna is instructive. Aetna ran a chart review program in payment year 2015 where it hired coders to review medical records. Those reviews turned up additional codes (which Aetna submitted) and unsupported codes (which Aetna kept in place). The government’s argument: using chart reviews to add codes while ignoring results that showed overcoding is evidence of intent to inflate payments.
This is the pattern that should worry every health plan running a retrospective program: if your software only adds codes and never removes them, you’ve built exactly the kind of system regulators are targeting.
Three OIG audits released in March 2026 reinforce the point. One audit of a major southeastern Medicare Advantage organization (A-07-22-01207) found a 91% error rate on high-risk diagnosis codes, with 100% error rates for acute stroke and acute myocardial infarction. Most errors came from history-of conditions coded as active diagnoses. A second audit found 84% error rates. A third, 81%.
These aren’t outliers. They’re the baseline.
After working with health plans, ACOs, and provider organizations processing millions of patient records, here’s what separates useful risk adjustment software from expensive shelfware.
This is the single most important capability to evaluate, and one that’s hard to find in the market.
Two-way coding means the software identifies codes that should be added (legitimate diagnoses missed in claims) AND codes that should be removed (diagnoses in claims that lack clinical support). Every compliance-first program needs both directions.
The Aetna settlement makes the stakes clear: $106.2 million of the $117.7 million penalty traced back to a chart review program that only worked in one direction. The enforcement pattern leaves little room for interpretation: supplemental data submission is expected to work in both directions. Your software should too.
At RAAPID, two-way coding isn’t an optional feature. It’s the default. Our Neuro-Symbolic AI identifies underclaimed codes (potential adds), overclaimed codes (potential deletes), and properly claimed codes in a single review cycle. This isn’t a philosophical choice; it’s a direct response to how enforcement actually works.
“AI-powered” is now table stakes. Every vendor in the market claims some form of artificial intelligence. The question that matters: can the AI show its work?
When a diagnosis code gets flagged for review, the software should trace that recommendation back to specific clinical evidence in the medical record. This means linking each code to documentation that meets MEAT criteria (Management, Evaluation, Assessment, and Treatment), the standard CMS uses to validate HCC diagnoses.
If the AI can’t explain why it suggested a code, that code is indefensible under audit. Full stop.
RAAPID’s Neuro-Symbolic AI combines neural networks with symbolic reasoning, including knowledge graphs and rule-based clinical logic. This architecture produces a transparent decision trail for every code suggestion: here’s the evidence in the chart, here’s how it maps to the diagnosis, and here’s why it meets or fails MEAT criteria. Unlike pure NLP or generative AI approaches, neuro-symbolic systems are far less prone to hallucination because outputs are constrained by knowledge graphs and clinical rules. They don’t require a human to reverse-engineer the AI’s reasoning after the fact.
CMS launched payment year 2020 RADV audits in February 2026, with audits now running on a quarterly cadence. The agency restored its five-month medical record submission window and is using variable sample sizes (35 to 200 enrollee-years per audit). CMS has also announced plans to scale its coding workforce from roughly 40 to approximately 2,000 certified coders and is using AI as a support tool for its own audit process.
Translation: audits are faster, bigger, and more frequent than ever.
Software that isn’t purpose-built for RADV preparation leaves health plans scrambling when the audit notice arrives. Look for platforms that offer centralized audit management, real-time progress tracking, CMS-compliant report generation, and the ability to manage concurrent audits from a single interface.
RAAPID’s RADV Audit Solution functions as a command center for the entire audit lifecycle, from initial notification through evidence submission and response. Every chart reviewed through RAAPID’s retrospective workflow already carries the MEAT-validated evidence trail needed for audit defense. The audit module doesn’t create defensibility after the fact; it surfaces documentation that was validated during the coding process itself.
Retrospective review (looking back at medical records after encounters) is necessary, but it’s no longer sufficient on its own. CMS has signaled a clear preference for encounter-driven documentation, diagnoses confirmed during actual patient visits rather than discovered months later through chart mining.
The safest diagnosis is one documented at the point of care by the treating clinician.
Prospective risk adjustment software supports providers before and during visits by surfacing conditions that need recapture, flagging care gaps, and providing clinical decision support in the EHR workflow. This isn’t about telling doctors what to code. It’s about making sure relevant clinical information is available when the provider is actually with the patient.
RAAPID offers both prospective and retrospective solutions under a single Clinical AI Platform. The prospective solution analyzes two years of patient data from charts, claims, and lab reports to create pre-visit summaries. The retrospective solution handles chart review, code validation, and audit preparation. Used together, they cover the full risk adjustment lifecycle: prospective captures diagnoses at the safest point (the encounter), and retrospective cleans up what was missed and removes what shouldn’t be there.
Risk adjustment software processes protected health information (PHI) at massive scale. Security certifications aren’t optional.
Look for HITRUST certification and SOC 2 Type II compliance at minimum. Not all vendors in the market carry both. And ask about deployment options: can the platform run in your cloud environment, or does it require you to send PHI to a vendor-controlled infrastructure?
RAAPID holds both HITRUST and SOC 2 Type II certifications and deploys on Azure, AWS, GCP, or within a customer’s own cloud environment. That kind of flexibility matters for enterprise buyers who need to align with existing IT governance and security policies.
Here are a few red flags to keep in mind during vendor evaluations:
Accuracy claims on small sample sizes. Nearly every vendor will show you 98% accuracy on a pilot of 200 charts. That number is meaningless at production scale. Ask for performance data on 2,000 or more charts processed without human intervention. If the vendor can’t provide that, the “AI” is really human QA with a software wrapper.
Add-only retrospective programs. If the vendor’s retrospective solution identifies codes to add but never flags codes to remove, you’re building the exact risk profile the DOJ just penalized Aetna for. Ask specifically: “Does your platform identify overclaimed codes?” If the answer involves hedging, walk away.
Opaque AI. If the vendor can’t explain the technical architecture behind their AI, that’s a problem. “We use AI” tells you nothing. Ask whether they use NLP, machine learning, generative AI, or something else. Ask how the system explains its code recommendations. Ask for a sample evidence trail. If the demo only shows the recommendation without the reasoning, assume there isn’t any.
No RADV module. Some platforms treat RADV audit preparation as a consulting add-on rather than a built-in capability. Given the quarterly audit cadence, RADV readiness should be native to the platform, not bolted on after the fact.
The OIG’s updated Medicare Advantage Industry Compliance Program Guidance, released in February 2026, signals where enforcement is going. The guidance flags chart reviews, in-home health risk assessments, and EHR prompts as practices that warrant close oversight. It warns that failing to remove unsupported codes is a compliance failure. And it explicitly calls out the need for MAOs to review AI and software tools used in the coding process.
Meanwhile, MedPAC’s March 2026 report found that Medicare spends 14% more on MA enrollees than it would under traditional fee-for-service, totaling roughly $76 billion in excess payments, with coding intensity driving $22 billion of that gap. Bipartisan legislation (the No UPCODE Act, S.1105) would exclude chart review and health risk assessment diagnoses from risk adjustment entirely, with CBO estimating $124 billion in savings over 10 years.
The direction is unmistakable. Risk adjustment is shifting from a revenue function to a compliance function. Software built to maximize code capture will become a liability. Software built to prove clinical accuracy and documentation integrity will become essential.
RAAPID was built for this shift. Not because we predicted it, but because defensible coding was the founding principle from day one. Two-way retrospective review, MEAT-validated evidence trails, prospective support at the point of care, and purpose-built RADV audit management aren’t features we added in response to enforcement pressure. They’re why the platform exists.
If you’re evaluating risk adjustment software in 2026, start with this question: will this tool help me prove every code I submit, or will it just help me find more codes to submit? The answer determines whether you’re building a program that survives the next five years, or one that becomes a case study in what not to do.
:::tip This story was distributed as a release by Sanya Kapoor under HackerNoon’s Business Blogging Program.
:::
2026-04-16 02:58:38
Kingstown, St. Vincent and the Grenadines, April 15th, 2026/Chainwire/--Bitunix, a cryptocurrency derivatives exchange, announced that it has obtained ISO/IEC 27001:2022 certification, a widely recognized international standard for information security management published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
The certification confirms that Bitunix exchange has established formal systems to manage and protect sensitive data, including user information and their assets. It follows an external audit process that evaluates how organizations identify risks, control access, and respond to potential security incidents.
With ISO/IEC 27001:2022 now achieved, the impact for Bitunix users is practical: stronger protection of personal information and funds, better alignment with international data protection rules, and more transparency around how the platform operates.
The certification also builds greater trust among users while pushing the company to keep improving how it operates, from internal processes to overall platform stability. For users, that translates into a more reliable experience on a platform that is consistently working to perform better.
ISO 27001:2022 sets out clear requirements for how companies should organize their security practices, from internal procedures to technical safeguards. For exchanges, which handle large volumes of funds and personal data, such standards are increasingly seen as essential rather than optional, which is why Bitunix pursued the certification.
Known for high standards of security and transparency, Bitunix continues to build on its existing security setup alongside the certification, taking several practical measures that reflect ongoing efforts to safeguard its platform and users.
The platform maintains proof of reserves showing more than 100% backing for BTC, ETH, and USDT, supported by real-time Merkle tree verification. It also applies a strict 1:1 asset backing model, ensuring that all user funds are fully matched. In addition, users are given access to open-source tools and a verification portal to independently check their balances.
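The user-side check described above can be sketched in a few lines of Python. This is a generic illustration of how Merkle-tree balance verification works, not Bitunix's actual implementation; the SHA-256 hashing scheme and the `account:balance` leaf format are assumptions for the example.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 hash used for every tree node."""
    return hashlib.sha256(data).digest()

def leaf(account_id: str, balance: int) -> bytes:
    """Hash an (account, balance) record into a leaf node."""
    return h(f"{account_id}:{balance}".encode())

def build_tree(leaves: list) -> list:
    """Build the tree bottom-up; returns the list of levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        level = levels[-1]
        if len(level) % 2:                # duplicate the last node on odd levels
            level = level + [level[-1]]
        levels.append([h(level[i] + level[i + 1]) for i in range(0, len(level), 2)])
    return levels

def proof_for(levels: list, index: int) -> list:
    """Collect the sibling hashes needed to verify one leaf."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1               # the sibling is the paired node
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(leaf_hash: bytes, proof: list, root: bytes) -> bool:
    """Recompute the published root from a leaf and its proof path."""
    node = leaf_hash
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# A user confirms their balance is included under the published root.
records = [("alice", 10), ("bob", 25), ("carol", 7)]
leaves = [leaf(a, b) for a, b in records]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(leaves[1], proof_for(levels, 1), root)
```

The key property is that a user only needs their own record plus a logarithmic number of sibling hashes to confirm inclusion, without seeing any other user's balance.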
To cover unexpected situations, Bitunix has also set aside a dedicated $30 million USDC care fund. The ISO 27001:2022 certification adds to these efforts and reflects a broader push to keep improving how the exchange protects users.
The company said it will keep updating its systems as it grows, with a focus on keeping things safe and transparent for users.
“Achieving ISO/IEC 27001:2022 certification reflects our deep commitment to security and transparency,” said Steven Gu, Bitunix’s Chief Strategy Officer. “At Bitunix, we believe trust is earned through action. This certification, alongside our Proof of Reserve system, ensures our users can trade with confidence.”
Bitunix is a global cryptocurrency derivatives exchange trusted by over 5 million users across more than 150 countries. Guided by its core principle of better liquidity, better trading, the platform is built for traders who expect more, committed to providing Ultra Trust, Ultra Products, and Ultra Experience.
Bitunix offers a fast registration process and a user-friendly verification system supported by mandatory KYC to ensure safety and compliance. With global standards of protection through Proof of Reserves (POR) and the Bitunix Care Fund, the exchange prioritizes user trust and fund security.
Industry-first innovations like Fixed Risk and a TradingView-powered chart suite, along with indicator alerts and cloud-synced templates, provide both beginners and advanced traders with a seamless experience, making Bitunix one of the most dynamic platforms on the market.
Bitunix Global Accounts
X | Telegram Announcements | Telegram Global | CoinMarketCap | Instagram | Facebook | LinkedIn | Reddit | Medium
Contact
COO
Kx Wu
:::tip This story was published as a press release by Chainwire under HackerNoon’s Business Blogging Program
:::
Disclaimer:
This article is for informational purposes only and does not constitute investment advice. Cryptocurrencies are speculative, complex, and involve high risks. This can mean high price volatility and potential loss of your initial investment. You should consider your financial situation, investment purposes, and consult with a financial advisor before making any investment decisions. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not endorse or guarantee the accuracy, reliability, or completeness of the information stated in this article. #DYOR
2026-04-16 02:48:02
New York, New York, April 15th, 2026/Chainwire/--A community-first NFT ecosystem built on Solana signals a new era for dynamic digital ownership, with a historic NFT launch on the horizon.
Kokopi Koalas ($KOKOP) officially launched on March 9, 2026, as a 100% fair-launch token on pump.fun. Within hours, it graduated and secured verified listings on Jupiter, Birdeye, CoinGecko, and GeckoTerminal. In just 30 days, the project has achieved a stabilized market cap of approximately $800,000, attracted over 800 holders, and built a rapidly growing, highly engaged community across X and Discord.
What sets Kokopi Koalas apart is not just its technology - it is its leadership and conviction. Founded and led by Mandi, a woman with decades of experience in corporate marketing, technology, and entrepreneurship, the project stands as a rare identity-forward force in a space still dominated by anonymous teams and short-lived meme coins.
On March 19, just ten days after launch, the team made significant on-chain commitments rarely seen in early-stage projects:
Vested 25,000,000 $KOKOP (2.5% of total supply) on Streamflow for a full year with zero yield
On March 30, the team launched three community staking pools by allocating 16,486,842 $KOKOP (1.65% of total supply) with aggressive yields for long-term holders:
100-day pool at 13% APY
200-day pool at 25% APY
365-day pool at 40% APY
All pools are live on Streamflow. The link is available on the official website: kokopikoalas.com
These moves lock 4.15% of the total supply with no financial benefit to the team - a powerful statement of long-term alignment in an industry often plagued by rugs and short-term thinking. The creator wallet now holds approximately 2.02% of the supply. Mint authority has been permanently disabled, ensuring no additional $KOKOP tokens will ever be created. All actions are fully on-chain and publicly verifiable.
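The arithmetic behind these figures is easy to check. The sketch below assumes the 1,000,000,000 total supply implied by the stated percentages, and uses a plain simple-interest formula as an illustration of the pool payouts, not the pools' documented mechanics:

```python
TOTAL_SUPPLY = 1_000_000_000  # assumed: total $KOKOP supply implied by the stated percentages

vested = 25_000_000           # 2.5% vested on Streamflow for a year
staking = 16_486_842          # 1.65% allocated across the three staking pools

# Combined locked share of supply
locked_pct = (vested + staking) / TOTAL_SUPPLY * 100
print(f"Locked: {locked_pct:.2f}% of supply")  # → Locked: 4.15% of supply

# Illustrative simple-interest payout: stake * APY * (days / 365)
pools = [(100, 0.13), (200, 0.25), (365, 0.40)]
stake = 100_000
for days, apy in pools:
    reward = stake * apy * days / 365
    print(f"{days}-day pool at {apy:.0%} APY -> {reward:,.0f} $KOKOP on a {stake:,} stake")
```

Running the numbers confirms the stated 4.15% lock (2.5% + 1.65%).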

Kokopi Koalas will be releasing its unique Modular NFT collection in April. Unlike static NFT projects, every Koala NFT will be endlessly customizable through its proprietary modular Trait Store, allowing holders to upgrade, evolve, and personalize their NFTs over time.
Brand partnerships are a foundational element of Kokopi Koalas' long-term vision. Leading brands will sponsor exclusive, limited-edition trait drops inside the modular Trait Store, giving Koala holders access to real-world perks such as exclusive discount codes, affiliate opportunities, limited-edition merchandise, and VIP experiences.
A meaningful portion of every partnership revenue will be used to purchase $KOKOP directly on the open market - generating sustained buying pressure and delivering compounding long-term value to holders.
Community First: Always Was, Always Will Be
The Kokopi Koalas community existed long before the token launch. When $KOKOP went live, the community moved first, ensuring genuine supporters led the way. Today, the project has over 1,500 active members and a circulating supply of approximately 974,999,448 $KOKOP.
“We didn’t build a community after launch - we launched because we already had one. Every decision, from tokenomics to NFT design, is made with our holders in mind. This project was built differently, and the market is starting to see that.”
- Mandi, Founder & CEO, Kokopi Koalas
What’s Next: The official Kokopi Koalas NFT collection and modular Trait Store launch is scheduled for April. Details will be announced first on Discord and via press release on official channels. For more detailed information and official links, users can visit: kokopikoalas.com
Kokopi Koalas is a woman-founded and women-led, community-driven Modular NFT project on Solana, launched March 9th, 2026. $KOKOP is the verified and listed native utility token, live on Jupiter, Birdeye, CoinGecko, and GeckoTerminal. It powers staking rewards and the modular Trait Store, letting holders customize and evolve their Koalas in real time. Backed by a passionate community and strategic brand partnerships built for sustained growth, Kokopi Koalas delivers real ownership, utility, and long-term value in the Solana ecosystem.

Media & Press Contact
Torin | Director of Public Relations, Kokopi Koalas
Email: [email protected]
Website: kokopikoalas.com
Coin Address: ENcwYGVhRsEqKpH4SzRH4mcYSGc9Cb6s4WJGS9ojpump
Official Media & Press Kit - featuring high-resolution assets, project visuals, and exclusive Q&As with $KOKOP founder Mandi - at: kokopikoalas.com/media/
#KokopiKoalas #$KOKOP #Solana #SolanaNFT #NFT #NFTCommunity #ModularNFT #WomenInCrypto
Woman-Founded and Led Solana Project Kokopi Koalas Launches $KOKOP Token and NFT Project.
Director of Public Relations
Torin
Kokopi Koalas
:::tip This story was published as a press release by Chainwire under HackerNoon’s Business Blogging Program
:::