
RSS preview of the HackerNoon blog

The HackerNoon Newsletter: 5 Risks You Have To Take as a Leader (1/22/2026)

2026-01-23 00:02:54

How are you, hacker?


🪐 What’s happening in tech today, January 22, 2026?


The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day in history: Apple aired its iconic "1984" commercial during the Super Bowl in 1984, the historic Roe v. Wade decision made abortion a federally protected right in 1973, and Kmart Corporation filed for Chapter 11 bankruptcy in 2002. Now we present you with these top-quality stories. From 5 Risks You Have To Take as a Leader to What Happens When Novelists Write Like Developers, let’s dive right in.

What Happens When Novelists Write Like Developers


By @burvestorylab [ 7 Min read ] Swap Word for an IDE: use VS Code, Markdown, Git version control/branches, and built-in AI to draft, revise, and safeguard your novel like a developer. Read More.

What School Violence Reveals About Education – and How AI Could Help


By @giovannicoletta [ 12 Min read ] A school killing in Italy exposes deeper failures in education and politics. Data, AI, and debate-based learning may offer a long-term solution. Read More.

5 Risks You Have To Take as a Leader


By @vinitabansal [ 12 Min read ] Here are the 5 risks every leader must take daily because it’s impossible to get better at anything without consistent practice. Read More.


🧑‍💻 What happened in your world this week?

It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️


ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME


We hope you enjoy this wealth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️


Why Expected Value Is Not Enough in Production Trading Systems

2026-01-22 23:34:22

We had a problem. Our automated trading system had a positive expected value: the math checked out, the backtests looked great, and initially, it made money. But over time, it was bleeding. Small losses that accumulated faster than the occasional wins could compensate.

This wasn't a bug in the code. It was a fundamental misunderstanding of what matters in production.

The Expected Value Trap

Most trading tutorials, academic papers, and online courses teach you to maximize expected value. The logic seems bulletproof:

E[profit] = Σ(probability_i × outcome_i)

If this number is positive, you should take the trade. If you can make this number bigger, you should optimize for it. Simple, right?
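As a toy illustration of that formula (hypothetical numbers, not from our system), here it is applied to a trade with three discrete scenarios:

import numpy as np

# Hypothetical trade: three scenarios with their probabilities and P&L outcomes
probabilities = np.array([0.55, 0.30, 0.15])
outcomes = np.array([500.0, -200.0, -900.0])   # dollars

expected_value = np.sum(probabilities * outcomes)
print(f"E[profit] = {expected_value:+.2f}")    # 0.55*500 - 0.30*200 - 0.15*900 = +80.00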

Except in production, this optimization strategy has a fatal flaw: it doesn't account for the path you take to reach that expected value.

Let me show you what I mean with a real scenario from our system.

The Bleeding System

Our strategy was designed to capture price spikes in volatile markets. The model would:

  1. Analyze possible price directions for each trading window
  2. Optimize position sizing using quadratic programming
  3. Execute trades to capture spread opportunities

On paper, the expected value was solidly positive. In practice:

  • Day 1-3: Caught a major spike, made $15,000
  • Day 4-12: Small losses every day, total -$8,000
  • Day 13-14: Another spike, made $12,000
  • Day 15-28: Gradual bleed, total -$11,000

The problem? Our optimizer had developed a structural bias. It was systematically taking positions that won big occasionally but lost small amounts frequently. The expected value calculation said this was fine: the big wins would eventually compensate. But "eventually" requires infinite capital and infinite time horizon.

We had neither.

Seeing The Difference: A Simulation

To illustrate why these risk controls matter, let's compare two strategies trading the same market over one year:

Strategy A (EV Maximization): Aggressive position sizing based purely on expected value, using 150% leverage when opportunities look good.

Strategy B (Risk-Controlled): Same market signals, but with fractional Kelly sizing (40% of aggressive) and CVaR-based position reduction during high tail risk periods.

Simulation results with and without risk controls

The results tell a crucial story. Look at the left chart closely - most EV-maximization paths aren't catastrophically failing. They're just… not compounding. You can see the sawtooth pattern: occasional spikes up, followed by slow erosion. This is the insidious bleeding that positive expected value misses.

Notice how a few paths reached $500k? Those outliers pull the mean up to $146k. But the median is only $136k, and 29 out of 100 paths end below starting capital. In a backtest, you might have gotten lucky and seen one of those winner paths. In production, you get one random draw.

The right chart is "boring", and that's exactly the point. No moonshots to $500k, but also no catastrophic drawdowns. The risk-controlled strategy clusters tightly around modest growth. It survives to compound returns over multiple years.

This is the production reality: the strategy that survives gets to compound. The strategy that bleeds out makes nothing, regardless of what the expected value calculation promised.
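Our production simulation isn't reproduced here, but a minimal sketch of the same comparison (hypothetical return parameters, 100 paths, aggressive sizing versus the 40%-of-aggressive sizing described above) looks roughly like this:

import numpy as np

rng = np.random.default_rng(42)
n_paths, n_days = 100, 252
start_capital = 100_000.0

# Hypothetical daily strategy returns with a small positive edge
daily_returns = rng.normal(loc=0.001, scale=0.02, size=(n_paths, n_days))

def final_equity(leverage):
    """Compound each path's daily returns at a fixed leverage."""
    equity = start_capital * np.cumprod(1.0 + leverage * daily_returns, axis=1)
    return equity[:, -1]

ev_max = final_equity(leverage=1.5)           # "Strategy A": aggressive sizing
risk_controlled = final_equity(leverage=0.6)  # "Strategy B": 40% of aggressive

for name, final in [("EV-max", ev_max), ("Risk-controlled", risk_controlled)]:
    print(f"{name:16s} mean={final.mean():>10,.0f}  "
          f"median={np.median(final):>10,.0f}  "
          f"below start={np.mean(final < start_capital):.0%}")

The exact numbers won't match the charts above, but the qualitative pattern (a mean pulled up by a few outliers, a lower median, and a meaningful fraction of losing paths) shows up for almost any parameters that pair a small edge with oversized positions.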

What Expected Value Doesn't Capture

1. Risk of Ruin

This is the classic gambler's problem, formalized by the Kelly Criterion. Even with positive expected value, if your position sizing is wrong, you will go broke.

Consider: You have $100,000 capital and a trade with 60% win probability that either doubles your bet or loses it. Expected value is positive (+20%). But if you bet everything, you have a 40% chance of losing it all on the first trade.

Kelly tells you the optimal bet size is:

kelly_fraction = (p * b - q) / b
# where p = win probability, q = loss probability, b = odds
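Plugging the double-or-nothing example above into that formula (even odds, so b = 1) is a one-liner:

def kelly_fraction(p, b):
    """Kelly bet fraction: p = win probability, b = net odds received on a win."""
    q = 1.0 - p
    return (p * b - q) / b

capital = 100_000
f_star = kelly_fraction(p=0.60, b=1.0)  # (0.6*1 - 0.4)/1 = 0.20
print(f_star, capital * f_star)         # bet 20% of capital, i.e. $20,000

Even on this toy trade, full Kelly already means risking $20,000 on a single bet.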

But here's what we learned in production: even Kelly is too aggressive.

Why? Because:

  • Your probability estimates are wrong (always)
  • Markets change (your 60% edge becomes 52%)
  • Correlations break down during stress (when you need them most)
  • You can't rebalance instantly (slippage, latency, market impact)

We ended up using fractional Kelly (25-50% of the theoretical Kelly bet) because the real-world costs of overestimating your edge are catastrophic.

2. Numerical Instability in Extreme Events

One morning, our system crashed during an extreme weather event. Not a software crash, but a mathematical one.

Our covariance matrix became singular. The optimizer couldn't find a solution. We were frozen, unable to trade, during the exact conditions where our strategy should have made the most money.

The problem: we had optimized for expected scenarios. But extreme events have different correlation structures. Assets that normally move independently suddenly become perfectly correlated. Your carefully estimated covariance matrix, built from thousands of normal days, becomes useless.

The fix wasn't better expected value calculations. It was regularization:

import numpy as np
from sklearn.covariance import LedoitWolf

# returns: array of shape (n_days, n_assets)
# Instead of the raw sample covariance...
cov_matrix = np.cov(returns.T)

# ...use shrinkage towards a structured estimator
lw = LedoitWolf()
cov_matrix_robust = lw.fit(returns).covariance_

This trades off some accuracy in normal times for stability in extremes. Your expected value calculations will be slightly worse. Your system will survive black swans.
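A quick way to see what the shrinkage buys you is to compare condition numbers when there are fewer observations than assets (synthetic data, purely illustrative):

import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
returns = rng.normal(size=(40, 50))  # 40 days of history for 50 assets

sample_cov = np.cov(returns.T)                    # rank-deficient, effectively singular
lw_cov = LedoitWolf().fit(returns).covariance_    # shrunk towards a scaled-identity target

print(f"sample covariance condition number:     {np.linalg.cond(sample_cov):.2e}")
print(f"Ledoit-Wolf covariance condition number: {np.linalg.cond(lw_cov):.2e}")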

Time Horizon Mismatch

Here's a problem that doesn't show up in backtests: your expected value calculation assumes you can wait long enough for the law of large numbers to work.

In production, you can't.

We discovered this when our system showed strong positive expected value over 90-day windows but consistently lost money over 30-day windows. The problem wasn't the math. It was the business reality.

Our capital providers reviewed performance monthly. Our risk limits were adjusted quarterly based on recent results. If we had three bad months, our position limits got cut, regardless of what the long-term expected value said.

The theoretical strategy needed 6-12 months to reliably show profit. The operational reality gave us 3 months before consequences kicked in.

We had to add explicit time-horizon constraints to our optimization:

import numpy as np
import pandas as pd

def optimize_with_horizon_constraint(scenarios, max_horizon_days=30):
    """
    Optimize not just for long-term EV, but for the probability of
    positive returns within the operational time horizon.

    scenarios: 1-D array of simulated daily returns for the strategy.
    """
    # Standard expected value
    ev = np.mean(scenarios)

    # But also: what's the probability we're profitable
    # within our actual time horizon?
    rolling_returns = pd.Series(scenarios).rolling(max_horizon_days).sum().dropna()
    prob_profitable_in_horizon = (rolling_returns > 0).mean()

    # Penalize strategies with a low short-term win probability,
    # even if the long-term EV is great
    if prob_profitable_in_horizon < 0.6:
        return ev * 0.5  # heavily discount

    return ev

This meant accepting strategies with slightly lower theoretical expected value but higher probability of showing profit within our operational constraints. It's not mathematically optimal, but it's practically necessary.
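A quick sanity check of the function on synthetic scenarios (hypothetical numbers, reusing the function defined above):

import numpy as np

rng = np.random.default_rng(1)

# 500 simulated daily returns: small positive mean, high variance
noisy_scenarios = rng.normal(loc=0.0005, scale=0.02, size=500)

score = optimize_with_horizon_constraint(noisy_scenarios, max_horizon_days=30)
print(f"daily EV: {noisy_scenarios.mean():+.5f}, horizon-adjusted score: {score:+.5f}")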

What to Optimize Instead

After painful lessons, here's what we learned to optimize for:

1. Risk-Adjusted Returns with CVaR

Instead of maximizing E[profit] alone, we optimize returns with an explicit penalty on CVaR (Conditional Value at Risk): the expected loss in the worst 5% of scenarios.

import cvxpy as cp

# Scenario returns, e.g. from Monte Carlo or historical resampling
scenario_returns = get_price_scenarios()  # shape: (n_scenarios, n_assets)
n_scenarios, n_assets = scenario_returns.shape
lambda_risk = 1.0   # risk-aversion weight on tail losses (tune to taste)

# Decision variable: position sizes
positions = cp.Variable(n_assets)
portfolio_returns = scenario_returns @ positions

# CVaR via the Rockafellar-Uryasev formulation
alpha = 0.05                  # 5% tail
var = cp.Variable()           # VaR threshold, chosen by the optimizer
u = cp.Variable(n_scenarios)  # per-scenario loss beyond the VaR threshold

constraints = [
    u >= 0,
    u >= -portfolio_returns - var,   # loss_i - var, where loss_i = -return_i
    cp.norm(positions, 1) <= 1.0,    # illustrative gross-exposure cap
]

cvar = var + cp.sum(u) / (n_scenarios * alpha)

# Optimize for average return while penalizing tail risk
objective = cp.Maximize(cp.sum(portfolio_returns) / n_scenarios - lambda_risk * cvar)
problem = cp.Problem(objective, constraints)
problem.solve()

This explicitly penalizes strategies that have good average returns but catastrophic tail risk.

2. Robustness to Model Error

We assume our model is wrong and optimize for the worst-case within a reasonable uncertainty bound:

# Instead of single expected return estimate
mu_estimated = historical_returns.mean()

# Assume uncertainty 
mu_lower_bound = mu_estimated - 2 * historical_returns.std() / np.sqrt(len(historical_returns))

# Optimize for worst-case in uncertainty range
# (Robust optimization / minmax approach)

This protects against overconfident parameter estimates.
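The full robust optimizer isn't shown in the snippet above; as a minimal sketch, assuming historical_returns is a (n_days, n_assets) DataFrame so the estimates are per-asset vectors, a long-only worst-case version could look like this:

import numpy as np
import cvxpy as cp

mu_lo = np.asarray(mu_lower_bound)   # per-asset lower-bound return estimates from above
n_assets = len(mu_lo)

positions = cp.Variable(n_assets, nonneg=True)   # long-only keeps the worst case simple

# With long-only positions and a box uncertainty set around each asset's mean,
# the worst case is attained at the lower-bound estimate
worst_case_return = mu_lo @ positions

constraints = [cp.sum(positions) <= 1.0]         # illustrative gross-exposure cap
cp.Problem(cp.Maximize(worst_case_return), constraints).solve()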

3. Kelly-Constrained Position Sizing

We explicitly limit position sizes based on Kelly criterion, even when the optimizer wants more:

def kelly_position_limit(edge, volatility, capital, max_kelly_fraction=0.25):
    """
    edge: expected return of the strategy per period
    volatility: standard deviation of returns over the same period
    max_kelly_fraction: fraction of the theoretical Kelly bet to actually use
    """
    kelly_full = edge / (volatility ** 2)
    kelly_position = capital * kelly_full * max_kelly_fraction

    return kelly_position

We use 25% Kelly as a hard constraint. Yes, this reduces expected value. It also ensures we'll still be trading next month.
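For a sense of scale, with hypothetical numbers (a 1% expected monthly return at 5% monthly volatility on $100,000 of capital):

position = kelly_position_limit(edge=0.01, volatility=0.05, capital=100_000)
print(position)  # full Kelly would be 4x capital; at 25% Kelly the cap is 1x, i.e. $100,000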

The Production Mindset

The shift from expected value thinking to production thinking is philosophical:

Research mindset: "What strategy has the highest expected return?"

Production mindset: "What strategy will survive being wrong about my assumptions?"

Here are the practical shifts we made:

  1. Backtests: Added worst-month analysis, not just average returns
  2. Position sizing: Conservative by default, with kill-switches for anomalies (a minimal sketch of such a check follows this list)
  3. Risk metrics: Track CVaR daily, not just P&L
  4. Model validation: Assume 30% parameter uncertainty on all estimates
  5. Disaster planning: Explicit code paths for "model is completely wrong" scenarios
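The kill-switch logic in point 2 isn't spelled out above; as a minimal sketch with illustrative thresholds, the kind of check we mean is a hard stop on recent drawdown:

import numpy as np

def should_halt_trading(equity_curve, max_drawdown=0.10, lookback_days=30):
    """Return True if the recent peak-to-trough drawdown breaches the hard limit."""
    recent = np.asarray(equity_curve, dtype=float)[-lookback_days:]
    running_peak = np.maximum.accumulate(recent)
    drawdown = 1.0 - recent / running_peak
    return bool(drawdown.max() > max_drawdown)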

The Lesson

Expected value is a beautiful mathematical concept. It's clean, intuitive, and theoretically optimal.

It's also not enough.

In production, you're not trading against a probability distribution. You're trading against:

  • Your own imperfect risk models
  • Markets that change
  • Operational constraints that aren't in your backtest
  • The psychological reality of watching your capital decline day after day even though the "expected value is positive"

The systems that survive aren't the ones with the highest expected value. They're the ones that remain robust when the model is wrong, markets shift, and black swans appear.

Optimize for survival first. Profitability second. Expected value is a component of that calculation, but it's not the objective function.

How to Advertise Your Startup the Right Way

2026-01-22 23:30:03

We’ve all been there, scrolling through the internet late at night when you know you’re supposed to be sleeping. But something’s off. You’re getting hit with so many ads, and the worst part of it all is that you’re not interested in their products at all. For businesses, especially startups, this is really not a great first impression.

There are, however, ways to not fall into this trap. You can advertise to millions of people without annoying anyone. Here’s how.

3 Ways to do Ads Correctly

1. Choose a platform that fits your target audience

An easy way to not annoy potential customers is by targeting them correctly. For example, if your startup primarily sells hair gel, you don’t want to advertise and try to sell hair gel on a website called baldguys.com. Would it be hilarious if you did? Yes, absolutely. Does it make sense business-wise? No.

That’s why it’s important to know the audience of the platform where you’re trying to advertise. Baldguys.com can have millions of visitors, and your ad can get millions of impressions, but if nobody wants to buy your product, then it doesn’t really matter. Plus, I’m not sure if bald men want to be reminded of the time they used to have hair.

2. Subtle is Perfect

Another trap startups fall into is believing that flashy, in-your-face ads work better. But nobody likes loud, obnoxious ads; people prefer ads that don’t clutter up the whole screen. A smaller, less in-your-face ad might seem like a counterintuitive way to get more eyes on your startup and your product, but people remember ads that respect them.

3. Be clear about what you’re selling

The final thing you should do is ensure your ad is clear and concise. Potential customers need to know exactly who you are and what you’re selling. Don’t waste their time with cryptic ads that say something like, “By harnessing AI, we’re empowering enterprises to grow their ecosystem.” That doesn’t explain anything, and people get annoyed when you try to sell them something while remaining cryptic and mysterious. Just tell them outright why they should consider your product or service.

Crafting the perfect advertisement campaign can be tedious and nerve-wracking, but it doesn’t have to be. Especially when you can join HackerNoon’s Targeted Ad Program. Here’s everything that it has to offer.


  • HackerNoon has curated 100,000+ technology tags to date.
  • These tags are grouped into relevant parent categories like AI, Web3, Programming, Startups, Cybersecurity, Finance, Business, and more!
  • Every story organically gets at least eight relevant tech tags and a parent category.
  • Sponsors buy multimodal placements on relevant categories with all the tags and stories.
  • These Ad placements include Banners, Logos, Newsletter Ads, and Audio Ads (what we call truly AIO - Activities, Interests, and Opinions).
  • Optimized for: Brand Recall and Clickability (Get 3× more clicks for the same impressions compared to elsewhere).
  • Get quality leads at unbeatable prices, with CPM ~ $7 and CPC ~ $5.

Tip: Publish your first story with HackerNoon today!


Now, here are 3 startups that will annoy you with how amazing they are.

Meet Scoutlabs, DataRock Labs, and Hackrate: HackerNoon Startups of the Week

Scoutlabs

Farmers have a lot of problem-solving to do, and one big, time-consuming problem is dealing with insects destroying their crops. That’s where Scoutlabs comes in. They specialize in delta traps that help farmers get rid of insects quickly and efficiently. These traps are made by farmers, for farmers.

It’s this desire to help farmers that helped Scoutlabs earn second place in HackerNoon’s Startups of the Year in Budapest, Hungary. They also have the honor of being in the top 10 Startups of The Year in the ClimateTech and Manufacturing categories. But the biggest honor came when they won 1st place in the Renewable Energy category.


DataRock Labs

Everyone knows that gathering data to better understand the needs of your customers and employees is extremely important. However, just having all that raw data will get you nowhere if you don’t know how to analyze it and turn it into an actionable plan. Well, with DataRock Labs’ services, that can all change. They offer Data Engineering, AI, and Business Intelligence services to help you better understand your data.

DataRock Labs made a name for itself in the SaaS, DevOps, and Developer Tools categories, and also scored 1st place as Startup of The Year in Budapest, Hungary.


Hackrate

It seems like almost every month, you hear about a company getting its data stolen, including all of its customers’ credit card information. They say all press is good press, but you really don’t want to be in the news because of that. That’s why Hackrate is well-trusted, because its services actively help companies improve their defenses and better understand their weaknesses. Attack surface management, a bug bounty board, and penetration testing are just a few of the services that Hackrate offers companies to level up their cybersecurity.

Hackrate came in 3rd place for Startups of The Year in Budapest, Hungary, and was also a recognized startup in the IT Services, Web Development, and Business Intelligence categories.


Tip: Want to be featured? Share Your Startup's Story Today!


That’s all we have for you today.

Until next time, Hackers!

Why “It Works” Is Often the Most Dangerous Phrase in Product Design

2026-01-22 23:10:33

At a previous company, I sat in a meeting where we were debating whether to redesign a feature that had been in the product for three years. The PM thought we should leave it alone. It works, users know it, so why mess with success?

Then someone pulled up the original design spec from 2021. It turns out the feature was built to solve a problem that no longer existed. The vendor API it was working around? Fixed 18 months ago. The technical constraint that shaped the entire interaction was also gone.

But users still used it. Not because it was the best solution. They'd just learned how to use what we gave them, and nobody ever went back to ask whether we should rebuild it properly now that we could.

That meeting stuck with me. My husband wrote something years ago that kept coming back to me: "If need is the mother of invention, usability should be the father." It's a phrase I've been thinking about a lot lately—how usability keeps transforming products long after the original need is solved, sometimes long after that need has completely changed.


Products keep changing even when they look the same

Think about bicycles. The ones we ride today look nothing like the first versions. Those early designs were awkward, uncomfortable, and kind of dangerous, actually. But the core need stayed the same: human-powered transportation.

What evolved was our understanding of usability. Better materials, yes, but also refined ergonomics and improved balance. Each generation learned from watching people actually ride these things. Not inventing something new, just making the existing invention more human.

I keep seeing this pattern. The first version solves the need. Everything after that? It's about making the solution livable.


Why "it works" is a trap

Teams treat "it works" like a finish line. Feature shipped, problem solved, next thing please.

But "it works" usually just means users figured out how to accomplish their goal despite the friction. Doesn't mean the friction isn't there. Doesn't mean they wouldn't jump at something better. They've just adapted.

I was reviewing a flow a while back in which users had to click through three confirmation screens to complete a single action. Why three confirmations? Nobody on the current team could tell me. We dug through old tickets and found that two years ago, there was a data loss bug. The triple confirmation was a temporary patch while engineering fixed the root cause.

They fixed the bug six months later. The three confirmation screens stayed. Because it "worked." Users got through it, drop-off wasn't terrible, and other priorities came up.

That's not usability evolving. That's usability getting stuck.


What actual evolution looks like

Real usability evolution isn't about adding features. It's refinement based on how people actually use what you built.

ATMs are probably the clearest example I can think of. First-generation ATMs gave you cash first, then returned your card. Simple, functional, solved the need. But people would grab their cash and walk away, leaving their card in the machine.

Later designs reversed the order: card first, then cash. Not a new invention. Not solving a different problem. Just adapting to how humans actually behave instead of how the machine logic suggested they should behave.

Same core function, shaped around reality instead of theory.


Why do products stop evolving?

From what I've seen, products stop improving their usability for a few predictable reasons.

The original designer left, and nobody knows why things work the way they do. Teams get nervous about changing anything.

Metrics look fine, so there's no business case. Even though users have just learned to work around problems.

The system got so interconnected that changing one thing breaks three others. Small improvements feel too risky.

Or the team moved on to new features. Improving what exists feels less exciting than building what's next.

All reasonable. All of them slowly erode your product's usability.


What I've noticed about features that improve vs features that stagnate

The features that keep getting better usually have someone who still cares about them. An owner is watching how people use it, noticing small frustrations, pushing for refinements.

Features that stagnate are orphans. Nobody owns them anymore. They work "well enough." Which means they never get better.

One team I worked with had this practice they called "adoption reviews." Every quarter, they'd look at features that launched more than six months ago and ask: how are people actually using this now? Not just "are they using it" but how. What workarounds did they develop? What's harder than it needs to be? What did we learn since we built it?

Then they'd prioritize improvements based on real usage patterns, not just new feature requests.

Not exciting work. But it's the difference between products that age well and products that just get old.


When evolution goes wrong

Sometimes teams overcorrect. They see users struggling and immediately redesign without understanding why it works the way it does.

I watched a team simplify what seemed like an unnecessarily complex flow. Three steps became one. Much cleaner. Everyone on the team was excited.

Drop-off rates tripled.

Those three steps weren't random complexity. They were scaffolds that helped users make better decisions. Collapsing everything into one screen removed the structure people needed to think through their choices.

Good evolution requires understanding not just what people do, but why the current design works, even when it seems like it shouldn't.


The products that never stop improving

The products I actually love using don't necessarily have the best features. They have the best evolution.

Google Search hasn't changed its fundamental purpose in 20 years. But the usability keeps evolving. Autocomplete, instant results, knowledge panels, and featured snippets. Each one is based on learning how people actually search, not how Google thought they would search.

Slack didn't invent team chat. They just kept refining how it works. Threading, reactions, better notifications, smarter search. Small improvements that add up over time.

These aren't dramatic reinventions. They're careful, continuous evolution based on watching real people use the product.


What this means for how I think about design now

Need might create the first version of your product. But usability evolution determines whether there's a tenth version.

The questions I ask myself now are different from those I used to ask. What are we keeping just because it exists, not because it's still right? What constraints shaped our design that don't apply anymore? What have users learned to work around that we could actually fix?

And probably the most important question: who on the team is responsible for making existing features better, not just building new ones?


Something worth trying

Every few months, pick one established feature and pretend you're designing it fresh today. With current technology, current understanding of users, and current business context.

Would you design it the same way?

Usually, the answer is no. And when it is, the next question is: what's stopping you from making it better?

Sometimes it's real constraints. But a lot of times it's just inertia. And inertia is expensive when you're competing with teams that actually evolve their products.


Where I've landed on this

Need invents products. Usability determines whether those products stay relevant.

The invention gets you to market. The evolution keeps you there.

Teams that get this don't just ship and move on. They ship, watch what actually happens, learn from it, and refine. Continuously. Even when things seem to be working.

Especially when things are working, actually.

Because "working" is just the starting point for what a product can become.


The Problem With Perfect AI Is That Mathematics Won’t Allow It

2026-01-22 23:02:20

Despite rapid advances, artificial intelligence is constrained by mathematical, physical, and logical limits that make perfect or infinite intelligence fundamentally impossible.

Optimization Log: How I Pushed a 2,750-Word Guide to 97/100 PageSpeed for AI Search

2026-01-22 22:51:14

Speed is the New Authority: How I Hit 97/100 PageSpeed for the AI-Search Era

The "SEO is dead" crowd is half-right. The old way of SEO—slow, bloated pages stuffed with keywords—is indeed dying. In its place, a new beast has emerged: Generative Engine Optimization (GEO).

Last week, I decided to put my own infrastructure to the ultimate test. I didn't just want a "fast" site; I wanted a site that could be parsed by an LLM (Large Language Model) in a heartbeat. I took a massive, 2,750-word guide on GEO—the kind of page that usually chokes under its own weight—and optimized it until Google’s PageSpeed Insights handed me a 97/100 Mobile Performance score.

Why go to such extremes? Because in 2026, if your Largest Contentful Paint (LCP) isn't under 1 second, you don't just lose users—you lose the AI.

Generative engines like Gemini and SearchGPT are speed-hungry. They prioritize "Instant Answers" and favor sources that provide a seamless technical connection. If your site is slow, you are invisible to the AI crawlers that now control the majority of search traffic.

In this log, I’m breaking down the six technical pillars I used to clear the path for AI. From AVIF conversion to the "Death of Bloat" in script pruning, here is how I built a high-performance foundation for the next era of search.

Pillar #1: Advanced Image Delivery and Modern Media Compression

One of the biggest killers of Technical SEO for GEO is unoptimized media. In my 2,750-word guide, images are essential for context, but they also add weight.

The Strategy:

  • AVIF over WebP: While WebP was the gold standard in 2024, by 2026, AVIF has taken over. It offers 30% better compression than WebP without losing quality.
  • Dynamic Aspect Ratio Padding: To ensure a CLS of 0, I pre-define the space for every image in the CSS. This prevents the "jumpy" feeling when a page loads.
  • The GEO Benefit: AI engines often "scrape" images to display in the Generative UI sidebar. If your images are well-lit and properly tagged with Schema, you are five times more likely to be featured as the visual.

Pillar #2: Script Pruning and the "Death of Bloat"

Most WordPress sites are weighed down by "Ghost Scripts"—plugins you deleted months ago that still leave traces of JavaScript behind.

In our Technical SEO for GEO framework, we use a "Load on Interaction" model.

  • Example: Your "Contact Form" script shouldn't load the moment the page opens. It should load only when the user scrolls to the bottom or clicks "Contact."
  • The Result: My site achieved a 0ms Total Blocking Time. This means the browser's main thread is always free to process the actual content, which is what the AI needs to read.

Pillar #3: Server-Side Excellence and Edge Computing

If your server is in New York and your visitor is in London, physics dictates a delay.

  • Implementation: I moved TheAbbasRizvi.com to a "Global Edge Network."
  • Why it matters for GEO: AI bots crawl from multiple global locations simultaneously. If your Time to First Byte (TTFB) is inconsistent, the AI may flag your site as "unstable." High-tier Technical SEO for GEO requires a server response time under 200ms globally.

Pillar #4: The Language of AI – Advanced Schema Markup for Generative Answers

Achieving a 97/100 PageSpeed score is monumental, but its power for Technical SEO for GEO is amplified tenfold when paired with precise Schema Markup. Schema is the structured data that tells AI exactly "what" your content means, not just "what" it says. In 2026, Schema is no longer optional; it's the dictionary for AI.

Beyond Basic Schema: The GEO Advantage

For TheAbbasRizvi.com, I went beyond the generic Article or BlogPosting Schema. We deployed highly specific markup, including:

  • HowTo Schema: For actionable guides, AI can pull steps directly into generative answers.
  • FAQPage Schema: For direct Q&A, allowing AI to immediately answer user queries without needing them to click through.
  • AboutPage and Person Schema: To establish clear E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) for Abbas Rizvi as the author and expert. This tells AI: "This is a credible source."
  • ImageObject Schema: Every critical image now tells AI its purpose, subject, and context. This significantly increases the chance of your visuals being used in AI-generated responses.

The GEO Benefit: By clearly defining content components with Schema, you are pre-digesting your information for AI models. This drastically improves the likelihood of your content being chosen as the "source" for a generative answer, even if the user never directly visits your site in the traditional sense. It's about being present in the AI's response, not just in the SERP.
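The markup itself isn't reproduced in this log; as a hypothetical illustration (shown in Python only because that's the language used elsewhere on this page; the emitted JSON-LD is what the crawler actually reads), an FAQPage block like the one described could be generated server-side like this:

import json

def faq_page_jsonld(qa_pairs):
    """Build FAQPage structured data from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

markup = faq_page_jsonld([
    ("What is Generative Engine Optimization?",
     "GEO structures fast, machine-readable content so AI search engines can cite it directly."),
])
print(f'<script type="application/ld+json">{json.dumps(markup, indent=2)}</script>')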

Pillar #5: The Psychology of Speed – User Experience (UX) and AI Signals

While we've focused heavily on the technical aspects of AI, we must never forget the human element. Ultimately, AI learns from human behavior. If users bounce immediately from your site because it's slow or visually unstable, that negative signal feeds back into the AI's ranking algorithms. This is why Technical SEO for GEO isn't just about bots; it's about delighting the user.

Converting Speed into Engagement

  • Reduced Bounce Rate: A site that loads in under 1 second ensures users don't hit the back button out of frustration. This tells AI: "This content is engaging."
  • Increased Time on Page: With faster loading, users spend more time consuming your valuable content, signaling to AI: "This is comprehensive and relevant."
  • Higher Conversion Rates: Whether it's signing up for a newsletter or requesting a consultation, a seamless user experience (driven by speed) directly impacts your business goals.

Visual Proof of Stability:

Here, we can see the impact of perfect CLS and a low LCP. The page loads smoothly, without any jarring shifts that disrupt the user's reading experience. The subtle yet powerful impact of a perfectly stable and fast-loading page cannot be overstated in the AI era. These user signals indirectly tell AI which content is truly "high quality" and deserving of top generative rankings.

Pillar #6: A 12-Month Technical Maintenance Roadmap for Sustainable GEO Dominance

Achieving a 97 PageSpeed score is not a one-time event; it's an ongoing commitment. The algorithms evolve, and so should your website. A robust Technical SEO for GEO strategy includes a proactive maintenance plan.

Quarterly Audit Checklist:

  1. Q1: Code Dependency Audit: Review all third-party scripts. Are they still necessary? Can they be loaded conditionally?
  2. Q2: Media Compression Review: Re-scan all new and existing images. Have new, more efficient formats (like JPEG XL) emerged?
  3. Q3: Server Log Analysis: Monitor crawl budget and bot behavior. Are AI crawlers hitting critical pages efficiently?
  4. Q4: Schema Validation: Use Google's Rich Results Test to ensure all Schema markup is still valid and being interpreted correctly.
  5. Monthly: Check Core Web Vitals in Search Console. Any dips need immediate investigation.

By integrating this proactive maintenance, you ensure that your Technical SEO for GEO remains ahead of the curve, providing a stable, fast, and AI-friendly platform for years to come.

\ **"A version of this technical audit was originally documented on my personal blog at TheAbbasRizvi.com."
