We are an open and international community of 45,000+ contributing writers publishing stories and expertise for 4+ million curious and insightful monthly readers.
RSS preview of Blog of HackerNoon

Building Aether: An Architectural Breakdown of a "Local-First" P2P Messaging App

2026-04-06 21:23:27

Most "secure" messengers today still rely on centralized infrastructure. Whether it’s for signaling, metadata storage, or push notifications, there is almost always a server sitting between you and your recipient.

With Aether, I wanted to take a different route. The goal was to build a strictly local-first software architecture. If two devices are on the same network, they should be able to discover each other and communicate directly—no cloud, no central databases, and no intermediary nodes.

Here is a technical breakdown of how I built an architectural MVP of a decentralized P2P messenger using Electron, React, and libp2p, and the engineering bottlenecks I had to solve along the way.


The Core Philosophy: Zero-Server by Design

Building a system without a backend forces a complete paradigm shift in how you handle state and routing. You can't rely on a REST API to authenticate users or fetch message history. Every client must act as an independent, self-sufficient network node capable of discovering peers, negotiating protocols, and encrypting streams locally.

Phase 1: Cryptographic Identity (Secp256k1)

We abandoned the traditional "login/password" concept entirely. In Aether, your identity is pure mathematics.

  • The Tech Stack: I utilized ethers.js to generate a wallet based on the Secp256k1 elliptic curve.
  • The Implementation: Upon the first launch, the Electron Main process generates a 32-byte private key.
  • The User ID: We extract the wallet.address (the Ethereum address format) to serve as the public identifier. It is shorter than a raw public key and more familiar to users interacting with Web3 paradigms.
  • Current Security Debt: Currently, the key is stored as plaintext in an identity.json file within app.getPath('userData'). Acknowledging this vulnerability is the first step; securing it is the immediate next milestone (detailed in the roadmap below). However, by design, this key never leaves the Main process.

Phase 2: Isolating the Core (Strict IPC)

A common vulnerability in Electron applications is frontend Cross-Site Scripting (XSS) leading to local data theft. To mitigate this, we implemented a strict "Isolating the Core" pattern:

  • The Renderer (UI): Built with React, this is a completely "dumb" presentation layer. It has zero knowledge of where the cryptographic keys are stored and no direct access to the networking stack.

  • The Preload Bridge: Using contextBridge, we exposed a strictly typed API. The frontend can only issue high-level commands like "send this string to this PeerID". It cannot inspect the encryption process or alter node configurations.

  • The Main Process: This is the brain of the application. It safely houses the libp2p networking stack, manages the private keys, and handles all heavy cryptographic lifting.

    Fig 1: Secure Data Flow through Strict IPC and Isolated Subsystems.


Phase 3: Networking and the ESM Dependency Hell

Setting up a P2P node inside an Electron environment introduces a massive headache: the Pure ESM dependency hell. Modern Web3 libraries (libp2p included) are heavily reliant on ES Modules, which historically clash with Electron's CommonJS ecosystem. We solved this by creating a custom Vite configuration that bundles the @libp2p dependencies directly into the Main process.

Once the environment was stable, the networking logic was structured as follows:

  • Node Discovery (mDNS): We implemented Multicast DNS. The moment you open Aether, your node broadcasts its presence to the local network. Other local nodes catch this signal and automatically execute a dial().

  • Transports: We spun up two transports simultaneously: TCP (for raw speed) and WebSockets (to ensure future compatibility with browser-based nodes).

  • Muxing & Encryption: We use Yamux for stream multiplexing and the Noise protocol framework for channel encryption. This guarantees that any intercepted traffic between nodes appears as pure cryptographic noise.
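For readers who want to see what this wiring looks like, here is a minimal node-setup sketch using current js-libp2p packages. Treat it as illustrative rather than Aether's exact configuration; option names vary across libp2p versions (for example, newer releases use connectionEncrypters where older ones used connectionEncryption).

```javascript
import { createLibp2p } from 'libp2p';
import { tcp } from '@libp2p/tcp';
import { webSockets } from '@libp2p/websockets';
import { mdns } from '@libp2p/mdns';
import { noise } from '@chainsafe/libp2p-noise';
import { yamux } from '@chainsafe/libp2p-yamux';

const node = await createLibp2p({
  addresses: { listen: ['/ip4/0.0.0.0/tcp/0', '/ip4/0.0.0.0/tcp/0/ws'] },
  transports: [tcp(), webSockets()],   // raw speed + future browser nodes
  streamMuxers: [yamux()],             // stream multiplexing
  connectionEncrypters: [noise()],     // Noise channel encryption
  peerDiscovery: [mdns()],             // LAN broadcast discovery
});

// Dial any peer that mDNS discovers on the local network.
node.addEventListener('peer:discovery', (evt) => {
  node.dial(evt.detail.id).catch(console.error);
});
```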

    Fig 2: Resolving the Pure ESM compatibility issue by forcing Vite/Rollup to bundle dependencies directly into the Electron Main process (externalizeDeps: false).


Phase 4: Direct Stream Protocol (/aether/chat/1.0.0)

Instead of sending standard HTTP requests, Aether nodes communicate via raw data streams.

  • Protocol Negotiation: When you initiate a chat with a discovered peer, the nodes negotiate the use of our custom protocol /aether/chat/1.0.0.
  • Stream Handling: We utilize it-pipe to pipe data through the connection. A text message is encoded into a Uint8Array, fired across the Noise-encrypted channel, and decoded back into a string on the receiving end. It is as close to the "metal" of the network as possible.
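Before a message hits the stream, it has to become bytes. A self-contained sketch of one way to frame a chat message (illustrative only; in practice libp2p's length-prefixed codecs handle this for you):

```javascript
// Frame a message for a raw byte stream: a 4-byte big-endian length prefix
// followed by the UTF-8 payload.
function frameMessage(text) {
  const payload = new TextEncoder().encode(text);
  const frame = new Uint8Array(4 + payload.length);
  new DataView(frame.buffer).setUint32(0, payload.length);
  frame.set(payload, 4);
  return frame;
}

// Reverse the framing on the receiving end: read the length, slice the
// payload, decode it back into a string.
function unframeMessage(frame) {
  const len = new DataView(frame.buffer, frame.byteOffset).getUint32(0);
  return new TextDecoder().decode(frame.subarray(4, 4 + len));
}
```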

What's Next? (The Architectural Roadmap)

Fig 3: High-level architectural roadmap showing technical debt migration path from current MVP to production node.

Pushing this MVP to GitHub is just the baseline. Here are the next technical milestones required to turn this prototype into a production-ready autonomous node:

  1. Encryption at Rest (The Vault): To fix the plaintext identity.json issue, we will implement AES-256-GCM encryption for the key file. Users will input a master password, which will be passed through Scrypt (a Key Derivation Function) to safely decrypt the private key locally.
  2. Global Peer Discovery (Kademlia DHT): Currently, mDNS only works over LAN/Wi-Fi. To allow internet-wide communication without signaling servers, we will integrate a Distributed Hash Table (DHT). Your node will use bootstrap nodes to find peers globally, turning Aether into a true mesh network.
  3. Local Persistence (SQLite/LevelDB): Without a server, there is no cloud history. We plan to embed SQLite or LevelDB directly into the Main process. All messages will be stored locally, paving the way for a future "sync protocol" that allows nodes to exchange missed messages upon reconnection.
  4. End-to-End Encryption (Double Ratchet): While the Noise channel is secure, we need a second layer of defense. Integrating the Double Ratchet Algorithm (similar to Signal) will provide Perfect Forward Secrecy—ensuring that even if a session key is compromised, past communications remain locked.
  5. Rich Data Streams: Because the architecture is already stream-based, sharing files simply requires negotiating a new protocol (/aether/files/1.0.0) to handle large data chunks (Buffer) and reassemble them on the receiver's end.

Aether isn't just a messenger; it's an exploration into autonomous network units.

The full code for this architectural MVP is open-sourced on my GitHub.

Why Microservices Struggle With AI Systems

2026-04-06 21:15:01

Adding AI to microservices breaks the assumption that same input produces same output, causing unpredictability, debugging headaches, and unreliable systems. To safely integrate AI, validate outputs, version prompts, use a control layer, and implement rule-based fallbacks. Never let AI decide alone—treat it as advisory, not authoritative.
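A minimal sketch of that "advisory, not authoritative" pattern; the function, field names, and thresholds are hypothetical, purely to illustrate the shape:

```javascript
// Treat the model as advisory: validate its suggestion against hard business
// rules, and fall back to a deterministic rule when validation fails.
function decideDiscount(order, aiSuggestion) {
  const valid =
    typeof aiSuggestion === 'number' &&
    Number.isFinite(aiSuggestion) &&
    aiSuggestion >= 0 &&
    aiSuggestion <= 0.3; // hard cap: never above 30%, whatever the model says
  if (valid) return { discount: aiSuggestion, source: 'ai' };
  // Rule-based fallback: deterministic and auditable, same input -> same output.
  return { discount: order.total > 100 ? 0.1 : 0, source: 'rules' };
}
```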

Your Microfrontend Bundles More Icons Than It Actually Uses: Here's How I Solved It

2026-04-06 21:11:10

Microfrontend adoption is becoming popular. Teams aim to make applications faster, so they move away from a monolithic architecture. The expectation is simple: smaller bundles, independent releases, and faster delivery. And the transition usually delivers that.

But the transition to microfrontends involves real work: you have to identify what will be split out, untangle dependencies, and decide how communication with the monolith will be implemented. It's a substantial volume of tasks, and small issues, such as the use of SVG sprites, often go unnoticed.

New applications are often built with future scalability and microfrontend architecture in mind, but older applications designed as monoliths often have a system where icons are stored in one location or package and assembled into a single sprite. When you migrate, you may find that your microfrontend still pulls in the whole sprite — hundreds of SVG symbols the page never actually needs.

My Production Case

In the production scenario that pushed me to build a solution, our shared icon set contained 319 SVG icons. But one microfrontend actually used only 38.

The remaining 281 icons were still shipped, bundled into every microfrontend that would never render them.

Nothing was broken. But from an optimization perspective it was wasteful. It's the kind of thing you don't notice until you look at what actually gets shipped.

We were loading more than we actually used, so it felt like something we should fix.

I looked for solutions among existing packages, but nothing appealed to me.

The real issue is that there is no automatic connection between what the code imports and what the final sprite contains.

The Core Idea

I decided to follow a simple principle:

If an icon is not referenced in the codebase, it should not exist in the generated sprite.

Instead of maintaining configuration by hand and regenerating it on every change, and without breaking what already worked in the monolith, the build should derive the result directly from the source code.

That's the idea behind @mf-toolkit/sprite-plugin — a build-time tool that scans the application, detects which icons are actually imported, matches them to SVG files, optimizes them, and generates a minimal sprite containing only the required symbols. Tree-shaking, but for SVG sprites. If your icons are already React/Vue components, tree-shaking handles this automatically; but if you have a single shared sprite, a tool like this helps.


Architecture

The flow is intentionally simple — four stages, each doing one thing well:

How @mf-toolkit/sprite-plugin works — source code and icons folder feed into the build step, which scans imports, matches SVG files, and generates a minimal sprite

The goal was never to build a clever compiler experiment. It was to build something a real team could add to an existing microfrontend in a few lines of config — without changing how anyone writes application code.

The minimal setup proves the point:

new MfSpriteWebpackPlugin({
  iconsDir: './src/assets/icons',
  sourceDirs: ['./src'],
  importPattern: /@my-ui\/icons\/(.+)/,
  output: './src/generated/sprite.ts',
});

Four lines. The complexity belongs inside the tool, not in the consumer's build config. I've also made adapters for Rollup and Vite. Because it's open source, below I describe how it works step by step; it may help you build your own tool.

Stage 1: Analyze the Imports, Not the UI

The system starts from the only reliable signal: imports. If a codebase imports an icon, that's evidence of usage. If it never imports it, that icon should not be shipped.

This sounds straightforward until you look at real projects. Icon usage doesn't appear in just one neat form:

// Static imports
import { CartIcon } from '@ui/icons/cart';
import CartIcon from '@ui/icons/cart';

// Re-exports
export { CartIcon } from '@ui/icons/cart';

// Dynamic imports
const CartIcon = await import('@ui/icons/cart');
import('@ui/icons/cart').then(({ CartIcon }) => /* ... */);

// React.lazy
const CartIcon = React.lazy(() =>
  import('@ui/Icon/ui').then(m => ({ default: m.CartIcon }))
);

// CommonJS
const CartIcon = require('@ui/icons/cart');

All of these patterns need to be detected reliably. Missing even one means a missing icon in production.

The Parser Trade-off That Shaped Everything

The engineering decision in this project was how to parse these imports.

I had a choice: use a ready-made AST parser, which gives full accuracy for all these cases, but also adds a dependency (@babel/parser alone adds ~5 MB) and install friction for every microfrontend that adopts the tool. Package weight and its dependency tree matter a great deal to teams.

Because of that, the decision was to use regex-based analysis by default and make AST parsing optional.

The default regex analyzer has zero extra dependencies. It keeps the entire package small, yet it still covers the real-world patterns that matter.

The implementation checks the source code character-by-character to skip comments while preserving string content — handling escape sequences, block comments, and line comments correctly before any pattern matching begins. Then it normalizes multiline imports to single lines, and runs targeted patterns against each line.

For dynamic imports, the analyzer handles three distinct patterns. Take this common React.lazy pattern:

import('@ui/Icon/ui').then(m => ({ default: m.ChevronRight }))

The parser first identifies the .then() callback parameter name (m), then dynamically constructs a regex to find member accesses like m.ChevronRight — only capturing identifiers starting with uppercase to avoid false positives on utility calls.

The proof: tested against the same production codebase with 319 available icons, the regex analyzer matched exactly the same 38 icons as the Babel-based AST parser.
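A stripped-down sketch of the regex approach (illustrative, not the plugin's actual analyzer; it omits the comment-skipping and multiline normalization described above):

```javascript
// Collect icon module paths from static imports, re-exports, dynamic
// imports, and CommonJS requires, given a module-path pattern string.
function findIconImports(source, modulePattern) {
  const found = new Set();
  const patterns = [
    new RegExp(`import\\s+[^'"]*?from\\s+['"](${modulePattern})['"]`, 'g'),          // static
    new RegExp(`export\\s+\\{[^}]*\\}\\s+from\\s+['"](${modulePattern})['"]`, 'g'),  // re-export
    new RegExp(`import\\(\\s*['"](${modulePattern})['"]\\s*\\)`, 'g'),               // dynamic
    new RegExp(`require\\(\\s*['"](${modulePattern})['"]\\s*\\)`, 'g'),              // CommonJS
  ];
  for (const re of patterns) {
    for (const match of source.matchAll(re)) found.add(match[1]);
  }
  return [...found];
}
```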

But I also wanted to give teams freedom. In most real projects, typescript or @babel/parser is already installed, so I made the AST-based parsers optional and dynamically loaded: if the dependency exists, it is used; if not, you still have the regex analyzer by default. The parser layer is designed as a pluggable interface — adding support for a new parser (say, swc or oxc as they mature) means implementing one function, not rewriting the pipeline.

Stage 2: Map Code-Level Usage to Physical SVG Files

Once imports are collected, the next problem is naming.

Real design systems are rarely uniform. An import path says CartIcon, but the file on disk is cart-icon.svg. Another import says Coupon2, but the asset is coupon-2.svg.

The plugin resolves this through a multi-strategy name matching pipeline:

"ChevronRight" → ["chevronright", "chevron-right"] → matches chevron-right.svg
"Coupon2"      → ["coupon2", "coupon-2"]           → matches coupon-2.svg
"HTTPServer"   → ["httpserver", "http-server"]      → matches http-server.svg

Under the hood, the PascalCase-to-kebab-case conversion applies three regex passes to handle different boundary types: lowercase-to-uppercase (cartIcon → cart-Icon), consecutive capitals (HTTPServer → HTTP-Server), and letter-to-digit (icon2 → icon-2).
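A sketch of those three passes (illustrative, not the plugin's exact source):

```javascript
// Three regex passes, one per boundary type, then lowercase.
function pascalToKebab(name) {
  return name
    .replace(/([a-z])([A-Z])/g, '$1-$2')        // cartIcon   -> cart-Icon
    .replace(/([A-Z]+)([A-Z][a-z])/g, '$1-$2')  // HTTPServer -> HTTP-Server
    .replace(/([A-Za-z])(\d)/g, '$1-$2')        // icon2      -> icon-2
    .toLowerCase();
}

// The matcher tries both the plain lowercase form and the kebab form.
function nameVariants(name) {
  return [name.toLowerCase(), pascalToKebab(name)];
}
```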

For projects where icons live in subdirectories (e.g., ui/arrow.svg vs payment/arrow.svg), the plugin supports category-prefixed matching. When extractNamedImports is enabled:

import { Arrow } from '@ui/Icon/ui';       // → matches ui/arrow.svg
import { Arrow } from '@ui/Icon/payment';  // → matches payment/arrow.svg

Same icon name, different directories — no ambiguity. And if an icon is imported but no matching SVG file exists, the plugin logs a warning and continues. It won't break your build.

Stage 3: Optimize Before Combining

Selecting fewer icons is only half the optimization. The SVGs themselves still carry unnecessary weight.

Files exported from Figma, Sketch, or Illustrator often contain editor metadata, redundant width/height attributes, XML namespaces, empty groups, and hardcoded colors. The plugin runs each SVG through SVGO with a tuned configuration (which you can override for custom needs) — multipass optimization, dimension removal (using viewBox instead), and namespace cleanup.

One more thing: the plugin replaces black color values with currentColor before running SVGO, not after. Why? Because SVGO's default preset removes redundant black fills as an optimization — if it ran first, we would lose the chance to convert them to currentColor. The order matters.

The result: icons automatically inherit the text color of their parent element. Set color: red on the container — every icon turns red. No hardcoded values, no extra CSS.
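The pre-pass itself can be sketched as a simple substitution (illustrative):

```javascript
// Replace hardcoded black fills/strokes with currentColor. This must run
// BEFORE SVGO, which would otherwise strip "redundant" black fills entirely.
function blackToCurrentColor(svg) {
  return svg.replace(
    /(fill|stroke)="(?:#000000|#000|black)"/gi,
    '$1="currentColor"'
  );
}
```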

The Edge Case That Breaks Naive Sprite Generators

Here's a problem that bites people in production. When multiple SVGs are merged into a single sprite, their internal IDs can collide.

Design tools love generating IDs like gradient1, clip0, mask1. Merge two icons that both define id="gradient1", and one of them silently references the other's gradient. The result? Corrupted icons that only appear broken when certain combinations load on the same page.

The plugin solves this by automatically prefixing all internal IDs per icon:

<!-- Before: both icons use id="grad1" — collision -->
<symbol id="cart"><circle fill="url(#grad1)" />...</symbol>
<symbol id="star"><rect fill="url(#grad1)" />...</symbol>

<!-- After: each icon gets a unique prefix -->
<symbol id="cart"><circle fill="url(#cart--grad1)" />...</symbol>
<symbol id="star"><rect fill="url(#star--grad1)" />...</symbol>

This covers id attributes, url(#...) references, href="#...", and xlink:href="#..." — all four reference types that SVG uses for internal linking. It's a small implementation detail, but it's the kind of thing that separates "works in demo" from "works in production."
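A simplified sketch of the prefixing step, applied to an icon's inner markup before it is wrapped in its `<symbol>` (illustrative):

```javascript
// Prefix every internal ID inside one icon's markup so merged symbols
// can't collide. Covers the same four reference forms described above:
// id=, url(#...), href="#...", and xlink:href="#...".
function prefixIds(innerSvg, prefix) {
  return innerSvg
    .replace(/\bid="([^"]+)"/g, `id="${prefix}--$1"`)
    .replace(/url\(#([^)]+)\)/g, `url(#${prefix}--$1)`)
    .replace(/\b(xlink:href|href)="#([^"]+)"/g, `$1="#${prefix}--$2"`);
}
```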

Stage 4: Generate Code That's Easy to Consume

The output isn't just a file — it's a generated TypeScript module with a runtime injection function:

import { injectSprite } from './generated/sprite';

injectSprite();

One call at app startup. Then the application uses standard SVG <use> references as before. No API changes for product teams. No migration of existing icon components.

The generated injectSprite() is idempotent (safe to call multiple times — it injects only once) and SSR-compatible (checks for document before touching the DOM). The sprite is inserted as a hidden container at the top of <body> with a data-mf-sprite attribute for debugging.
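The shape of that generated function, reduced to its guards (the sprite string here is a placeholder, not real output):

```javascript
// Sketch of the generated injector: idempotent (a module-level flag guards
// re-entry) and SSR-safe (bails out when no DOM is present).
const SPRITE = '<svg xmlns="http://www.w3.org/2000/svg"><symbol id="cart"></symbol></svg>';
let injected = false;

function injectSprite() {
  if (injected) return;                        // safe to call multiple times
  if (typeof document === 'undefined') return; // SSR: no DOM, no-op
  const container = document.createElement('div');
  container.setAttribute('data-mf-sprite', ''); // debugging hook
  container.style.display = 'none';
  container.innerHTML = SPRITE;
  document.body.insertBefore(container, document.body.firstChild);
  injected = true;
}
```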

This matters because adoption depends on integration cost. If the output is awkward to wire in or behaves differently across environments, teams postpone integration. The goal was to make it trivial.

Being Honest About Limits

One of the most underrated qualities in developer tooling is transparency about what it can't do.

Some import patterns are not statically analyzable. Namespace imports and wildcard re-exports fall into this category:

import * as Icons from '@ui/Icon/ui';  // Which icons are used? Can't know statically.
export * from '@ui/Icon/ui';           // Same problem.

Instead of silently ignoring these or guessing, the plugin emits clear warnings with file location and a suggested refactor:

[sprite] Namespace import is not statically analyzable: import * as Icons from '@ui/Icon/ui'
  at src/components/App.tsx:3
  Refactor to named imports: import { Icon1, Icon2 } from '@ui/Icon/ui'

Similarly, some design system libraries export lazy-loading wrappers (like PacmanLazy) that don't correspond to actual SVG files. These appear in the manifest's missing list — expected and safe to ignore.

The optional build manifest (sprite-manifest.json) makes this machine-readable:

{
  "generatedAt": "2025-03-25T12:00:00.000Z",
  "iconsCount": 34,
  "missingCount": 3,
  "icons": [
    { "name": "cart", "sources": ["src/components/Cart.tsx:5"] },
    { "name": "ui/arrow", "sources": ["src/components/Nav.tsx:2"] }
  ],
  "missing": ["PacmanLazy", "SplitLazy"]
}

Plug it into CI to fail builds when too many icons are missing. Use it in dashboards. Debug why a particular icon is or isn't in the sprite. The data is there.
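For example, a minimal CI gate over the manifest might look like this (the threshold option is hypothetical):

```javascript
// Fail the build when too many imported icons have no matching SVG file.
function checkManifest(manifest, { maxMissing = 0 } = {}) {
  const problems = [];
  if (manifest.missingCount > maxMissing) {
    problems.push(
      `${manifest.missingCount} icons missing (allowed: ${maxMissing}): ` +
      manifest.missing.join(', ')
    );
  }
  return { ok: problems.length === 0, problems };
}
```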

The Result

On the production case used to validate the approach, the numbers speak for themselves:

| Metric | Before | After |
|----|----|----|
| Icons in design system | 319 | 319 |
| Icons shipped to the browser | 319 | 38 |
| Icons excluded | 0 | 281 |
| Icon payload reduction | — | 88% |

But the icon count is only part of the story.

Each SVG also goes through SVGO optimization — metadata stripped, paths minified, dimensions removed, redundant attributes cleaned up. So the remaining 38 icons are individually smaller than their source files too. The total reduction in actual bytes transferred is even larger than the 88% symbol count suggests.

No manual curation. No configuration drift. Just a build step that looks at what the code actually uses and generates exactly that.

Where This Matters Most: App Shell Updates

There's one scenario where this optimization becomes especially critical — app shell updates.

In microfrontend architectures, the app shell is the outermost layer: the layout, navigation, shared components, and — yes — the icon sprite. When a team updates the app shell, every user who visits the site has to download the new version.

If your sprite contains 500 icons and you add one new icon to the design system, every user re-downloads all 500 — even though only 12 are used. The cache is invalidated for a file that's 97% waste.

With a usage-derived sprite, the app shell update only ships what's actually referenced. The payload is smaller, cache invalidation hurts less, and the update cycle stays lean. For products with millions of users, this difference compounds quickly.

The same logic applies to any scenario where shared assets are re-delivered — CI/CD deployments, CDN cache busts, A/B testing shells, or rolling out a new version across regions. The less unnecessary weight in the critical path, the faster the rollout lands.

More Than an Icon Problem

At the surface level, this project is about SVG sprites. I didn't start with some big theory here — I just ran into the problem while working on a microfrontend setup: one app was shipping far more icons than it actually used, and that felt wrong enough to automate.

Microfrontends are supposed to make systems more modular. But if every app still drags around the full weight of shared assets, part of that modularity is organizational, not technical. Icons are just one example. The same pattern appears in translations, style layers, shared media, and other "global" resources that are available everywhere but needed only in fragments.

A lot of frontend optimization work is reactive. We notice something heavy in production, then try to patch around it. A better approach is to ask a structural question early:

What are we shipping by default that should really be derived from usage?

In this case, the answer was icons. Tomorrow, it could be something else. But the principle holds: if your frontend architecture is modular, your asset delivery should be modular too.


The package is open source on npm: @mf-toolkit/sprite-plugin

If you're dealing with a similar problem in your microfrontend setup — give it a try and open an issue if something doesn't work for your case.

How to Scale Facebook Ad Spend From $50 to $10,000 a Day (A Field-Tested Three-Phase Plan)

2026-04-06 21:04:29

Scaling from $50 to $10,000 a day shouldn't feel like pressing a magic button. It should come from building a system that keeps running without breaking.

After reviewing over 100 ad accounts, I have learned that most media buyers don't fail because their ads are bad. They fail because they don't have a clear map of the tricky points in scaling.

So, if you feel stuck, here’s the exact step-by-step plan I use to reach $10k/day safely.

The Deadliest Mistakes (Why 80% of Accounts Fail)

Before trying to grow, you need to know what can ruin your results. If you make these mistakes, no amount of money will fix it. Review these first to save time and money.

Phase 1: Creative Darwinism ($50 → $500/Day)

In this first phase, your job is not to find "the perfect audience." It is to find the creative that survives.

I use a 20/80 Budget Split. I will test 20 different creatives and kill the bottom 16 the moment they hit 50 impressions without significant engagement or a click. Those survivors get the bulk of the spending.

The winner trigger: if a creative maintains a 7-day ROAS > 5x, it is time for Phase 2.

Phase 2: The Audience Overlap Hack ($500 → $2k/Day)

Most people just increase the budget on a single ad set, but that rarely works. Don't do it. Instead, take your winning creative from Phase 1 and make 5 identical ad sets.

The twist: give each one a different Lookalike audience (1%, 5%, 10%, 30%, and 60%).

Whichever ad set performs best becomes the star and gets 70% of your Phase 2 budget.

Phase 3: The Machine ($2k → $10k/Day)

At this level, you stop manual tweaking and let the CBO (Campaign Budget Optimization) do the heavy lifting for you. I structure "The Machine" into three distinct containers:

  • TOFU (Top of Funnel),
  • MOFU (Middle), and
  • Retargeting.


Inside these, I use Broad Match. Why? Because the algorithm needs 50+ conversions per week per creative to optimize. Broad match gives the AI the room it needs to find those buyers.

The ROAS Killswitch Checklist

If your ROAS drops below 2.5x, don't panic. Execute this checklist immediately:

  • Pause the bottom 30% of performing ad sets.
  • Duplicate your top creative into a fresh ad set.
  • Reduce bids by 20% across the board.
  • Kill any creative older than 14 days.

If 7-day ROAS is > 3.2x, you have permission to push.
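The checklist reduces to a simple decision rule. The thresholds are the article's; the function shape is a hypothetical sketch:

```javascript
// Map 7-day ROAS to one of three actions: execute the killswitch, hold, or scale.
function roasDecision(sevenDayRoas) {
  if (sevenDayRoas < 2.5) {
    return {
      action: 'killswitch',
      steps: [
        'pause bottom 30% of ad sets',
        'duplicate top creative into a fresh ad set',
        'reduce bids by 20% across the board',
        'kill creatives older than 14 days',
      ],
    };
  }
  if (sevenDayRoas > 3.2) return { action: 'scale' };
  return { action: 'hold' }; // between thresholds: don't panic, don't push
}
```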

How Fast Can You Scale?

Don't guess how much to increase your budget. Use this multiplier table to see your potential:

Frequently Asked Questions Answered

  1. What if I don't have $50/day to test?

    Start with $10/day for 5 days. Same idea as before. Test 10 creatives and kill 8 after 25 impressions. For this Phase 1 setup, the trigger to move on is a 7-day ROAS over 6x. It’s a higher target because we’re testing with a smaller sample.

  2. My niche is saturated. Can this still work?

    Yes. Testing quickly kills weak creatives. This Darwinism helps you find the 1 out of 20 ads that actually work. For example: a gaming headset campaign in a crowded market scaled from $50 → $9k/day using this exact blueprint.

  3. What pixel events should I optimize for?

    Purchase (not AddToCart). Scaling to $10k/day needs end-to-end funnel data. If your checkout converts <2%, fix that FIRST before ads.

  4. How many ad accounts have you tested this on?

    127 Facebook accounts across gaming, fashion, and supplements (2024-2026). 73 scaled past $2k/day. The 54 failures all ignored the Phase 2 audience overlap trick.

  5. What if Facebook changes the algorithm again?

    The basics never change. Test a lot of creatives and keep the winners (Creative Darwinism), get at least 50 conversions per creative per week, and stick to your ROAS targets.


How PRED Is Building the "Bloomberg Terminal" of Sports Prediction Markets

2026-04-06 20:28:26

For decades, sportsbooks have operated on a flawed model: being the counterparty to the customer's prediction, which creates the frustration that the house always wins. PRED is building something structurally different. As a peer-to-peer sports prediction exchange on the Base blockchain, it takes no position against users, bans no winners, and generates revenue purely through trading fees.

We sat down with Amit Mahensaria, CEO and co-founder of PRED, a former investment banker and co-founder of Impartus (acquired by upGrad for ~$20M), to talk about why he left edtech for decentralized prediction markets, how PRED stacks up against Polymarket and Kalshi, and what the next phase of the platform looks like.

Ishan Pandey: Hi Amit, great to have you on our "Behind the Startup" series. You've built and exited a company in edtech, worked in investment banking and private equity, and now you're running a decentralized sports prediction exchange. That's a wide arc. What's the common thread, and what pulled you into the prediction market space?

Amit Mahensaria: The common thread is building tech products that fundamentally change how a market operates, not just competing within the existing rules.

At Impartus, we built a video learning app, something that was personalised for learners and catered to their short attention span. That company was acquired by upGrad and scaled to over two million users. In investment banking and private equity before that, I spent years learning how liquidity forms, how price discovery works, and how the capital markets actually function.

But running underneath all of that, I've been trading sports markets for over 22 years. Not as a gambler, but analytically.


And in those two decades, I've experienced every structural flaw the traditional sportsbook model has: opaque pricing, winner discrimination, the house sitting on the other side of every position you take.

It's an industry measured in hundreds of billions of dollars, and it is built on a flawed model where company revenue equals user losses. Frustrated, I started my own sports trading community around 7 years ago, and the idea for this sports prediction exchange emanated from there.

The prediction market space gave me the opening.


Polymarket and Kalshi proved the on-chain model works and have since moved heavily into sports. But their infrastructure was originally designed for elections and macro events, and that origin shapes the product.

Sports trading has a different tempo. I wanted to build exchange infrastructure specifically for that rhythm, purpose-built for sports, not adapted from a general-purpose tool.

So the pull was personal. I've lived with the problem as a trader for two decades. I understand how exchanges work from finance. I've built and scaled a venture before. PRED is where all of that converges.

Ishan Pandey: Polymarket and Kalshi have significant mindshare in the prediction market space. PRED is specifically targeting sports. Where does PRED actually beat them — technically, structurally, or in terms of user experience — and where are the gaps you're still closing?

Amit Mahensaria: Let's be honest about the landscape. Polymarket and Kalshi have both moved aggressively into sports. Kalshi generates over 70% of its volume from sports. Polymarket has just signed an official partnership with MLS. So this isn't a situation where we're entering a category they've ignored. They're here. The question is how we're built differently.


Polymarket and Kalshi were designed as general-purpose prediction platforms. They started with elections, macro events, cultural outcomes, and expanded into sports as a growth lever.

Their infrastructure reflects that origin. Binary contracts, yes/no resolution, structures optimized for events that settle once and disappear.


PRED is designed as a high-speed, sports-native exchange. That distinction shows up in the product. When a red card is shown in a Premier League match, the market reality changes in an instant.

The difference between 200ms and two seconds is the difference between capturing that price movement and watching it pass. Our matching engine was purpose-built for that tempo. Similarly, sports resolutions on PRED happen within a couple of minutes, not hours or days, enabling users to celebrate their winnings and trade the next match; ask any user how important this is.

Second, spreads. We're consistently targeting under 2% on active markets. For a trader taking multiple positions across a matchweek, tight spreads compound into real edge over time. In sports, retail users don't market-make or place limit orders in a live match, because prices can swing wildly within milliseconds when a goal happens. We're innovating to protect users' limit orders when high-impact events occur, an industry first.

Third, more and more markets. Polymarket has 25 sub-markets per match, whereas Stake has 500! Sports users love a variety of markets around an event. Rather than just who will win or what the score will be, sports fans want to predict and benefit from their deep insight across many angles of a match. We think about sports day in, day out, and will innovate on markets better than general-purpose prediction platforms.

Ishan Pandey: The core architecture of PRED is peer-to-peer order matching, not a traditional sportsbook model. Walk us through how the exchange mechanics actually work, how orders get matched, how prices are discovered, and why you believe this model produces better outcomes for users than a market-maker or AMM-based approach.

Amit Mahensaria: Think of Pred as a limit order book, similar to what you'd see on any financial exchange. If you want to take a position that Arsenal will beat Chelsea this weekend, you place an order at the price you're willing to pay. Someone who disagrees, or who wants to trade the other side, places a counter-order. When the prices cross, the trade executes.

There's no AMM setting the price algorithmically and no market maker sitting in the middle with an information advantage. Prices emerge from real supply and demand between traders who have done their own analysis. This is how financial exchanges work. Other prediction platforms are moving in this direction, but most of them bolted sports onto infrastructure that was built for something else. The gap shows up in execution speed and spread quality.
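The crossing mechanic described here can be sketched as a tiny price-time-priority order book. This is an illustrative model only, not PRED's matching engine; the `OrderBook` class and its `submit` method are hypothetical names, and prices are expressed as implied probabilities in [0, 1]:

```python
import heapq
from itertools import count

_seq = count()  # arrival counter, gives time priority among equal prices

class OrderBook:
    """Toy price-time-priority book for one binary market.
    Prices are implied probabilities in [0, 1], e.g. 0.62 = 62 cents."""

    def __init__(self):
        self.bids = []  # max-heap via negated price: (-price, seq, qty)
        self.asks = []  # min-heap: (price, seq, qty)

    def submit(self, side, price, qty):
        """Cross against resting orders while prices overlap; rest the remainder."""
        fills = []
        if side == "buy":
            while qty and self.asks and self.asks[0][0] <= price:
                ask_price, s, ask_qty = heapq.heappop(self.asks)
                traded = min(qty, ask_qty)
                fills.append((ask_price, traded))  # taker trades at the resting price
                qty -= traded
                if ask_qty > traded:
                    heapq.heappush(self.asks, (ask_price, s, ask_qty - traded))
            if qty:
                heapq.heappush(self.bids, (-price, next(_seq), qty))
        else:  # sell
            while qty and self.bids and -self.bids[0][0] >= price:
                neg_price, s, bid_qty = heapq.heappop(self.bids)
                traded = min(qty, bid_qty)
                fills.append((-neg_price, traded))
                qty -= traded
                if bid_qty > traded:
                    heapq.heappush(self.bids, (neg_price, s, bid_qty - traded))
            if qty:
                heapq.heappush(self.asks, (price, next(_seq), qty))
        return fills

book = OrderBook()
book.submit("sell", 0.62, 100)       # offer "Arsenal win" at 62 cents
print(book.submit("buy", 0.65, 60))  # crosses the resting ask → [(0.62, 60)]
```

A production engine adds order IDs, cancellation, self-trade prevention, and persistence, but the core loop, popping the best resting order while prices overlap, is the same idea.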

In an AMM model, liquidity providers take directional risk, and the pricing function is algorithmic rather than market-driven. AMMs work well for fungible token swaps where there's continuous two-way flow. They work poorly for event-based markets where information is asymmetric and time-sensitive. You end up with impermanent loss for LPs and slippage for traders.

Settlement is on-chain on Base. Every matched position, every payout, fully auditable. You don't need to trust us to honor results because the settlement logic is transparent.

Ishan Pandey: You claim PRED is currently the fastest exchange for sports predictions. Speed on a blockchain-based platform is a real engineering challenge. What does "fastest" mean concretely in your context, and what architectural decisions on Base enabled that?

Amit Mahensaria: Fastest means sub-200 millisecond execution from order submission to confirmation. For context, most prediction market platforms operate in the range of seconds to tens of seconds. In live sports, that latency is disqualifying. A goal can be scored, a red card can be given, and the market reality changes in an instant. If your platform can't keep pace, your traders are trading on stale information, and that erodes trust.

The architectural decisions were deliberate. We chose Base for a reason. It gives us low gas costs and fast block times, which means we can process high-frequency order flow. But Base alone doesn't get you to sub-200ms. We built a custom matching engine that processes the order book off-chain for speed and settles on-chain for transparency and finality.

This is the same approach that centralized crypto exchanges like Binance and Coinbase use: match fast, settle securely. The difference is that our settlement layer is public and verifiable. You can audit every trade.

We also implemented gasless onboarding, so new users aren't blocked by the need to acquire ETH before they can start trading. That's a UX decision more than a speed decision, but it removes a friction point that kills conversion for any blockchain-native product.

At PRED, speed is not only in trade execution, but in its DNA. We will be high speed in creation of new markets, high speed in settlement of markets, and high speed in user experience.

Ishan Pandey: PRED offers 5–6% native yield on user deposits. In the current Web3 landscape that's a meaningful differentiator, but it also raises questions around sustainability and risk. Where does that yield come from, and how do you make sure it doesn't become a liability as the platform scales?

Amit Mahensaria: Fair to be skeptical. After the yield farming disasters of 2021 and 2022, anyone offering yield should have to explain exactly where it comes from.

On PRED, your deposited capital earns yield while it's in your account. We've partnered with global institutions to generate yield via US Govt T-Bills on the underlying stablecoin deposits. Also, since we don't have huge marketing costs like deposit bonuses, we are able to pass on a portion of our trading fees to users in the form of yield. Your capital is working for you on all trades and even when you're not actively trading.

Why did we build this into the core product? Because we think about the experience from a trader's perspective. Serious traders think about capital efficiency all the time. On a traditional sportsbook, your deposited funds sit there earning nothing until you place a trade. On most prediction platforms, it's the same story. That's an opportunity cost that adds up fast, especially for traders who maintain larger balances and are selective about their entries.
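The opportunity-cost point can be made concrete with simple-interest arithmetic. The figures below are illustrative, not PRED's actual yield mechanics, and `idle_opportunity_cost` is a hypothetical helper:

```python
def idle_opportunity_cost(balance, apy, idle_fraction, days):
    """Simple interest foregone on the idle share of a deposit balance.
    Illustrative only; real yield accrual schedules and rates will differ."""
    return balance * idle_fraction * apy * (days / 365)

# A trader holding $10,000, 80% idle between entries, at a 5.5% yield:
print(round(idle_opportunity_cost(10_000, 0.055, 0.80, 365), 2))  # → 440.0
```

Over a year, that idle balance forgoes roughly $440, which is the gap a native deposit yield is aimed at closing.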

Ishan Pandey: "We never ban winners" is a strong brand promise. Traditional sportsbooks restrict or close accounts of consistently profitable bettors. What does that policy cost you operationally, and how does the P2P exchange model make it viable to actually keep that promise?

Amit Mahensaria: It costs us nothing. And that's the entire point.

On a sportsbook, banning winners is a necessity. The house takes the other side of every position. If a trader is consistently right, the house is consistently losing money to that person. The rational business response is to limit or close their account. Every major sportsbook in the world does this. It's not malice. It's structural inevitability.

On an exchange, it works the other way. We don't take the other side of your trade. When a skilled trader wins, they're winning from another trader, not from us. Our revenue comes from commissions on completed trades, regardless of who profits on the position. Skilled traders generate the most volume. They make the markets tighter. They attract other traders. Banning them would be like a stock exchange kicking out its most active participants.

So "Winners Welcome" isn't a marketing slogan we had to design around some operational cost. It's a structural consequence of being an exchange. We literally could not ban winners and have it make business sense.

The real question is why anyone would trade on a platform that punishes skill. That's the norm right now. We think it shouldn't be.

Ishan Pandey: Accel is on your cap table. What does that partnership look like in practice beyond the cheque, and what did you learn from building and fundraising for Impartus that changed how you approached raising for PRED?

Amit Mahensaria: Accel led our $2.5 million round, with participation from Coinbase Ventures, Reverie, and prominent sports professionals. Accel has backed enough exchange and marketplace businesses that they understand the cold start problem, how liquidity networks form, and what kills them early. That's more useful than capital alone when you're building something where the product only works once there's enough activity on both sides.

What I learned from Impartus about fundraising is that building a consumer product requires user obsession and patience. The investors we brought onto our cap table understand sports, understand consumer product building, and are willing to back us for the long term, not only with this round but also subsequent growth rounds.

The biggest lesson from the first time around: pick investors who can argue with you, not just write cheques. Accel pushed back on our liquidity assumptions and our go-to-market sequencing. That's what you want from a lead investor.

Ishan Pandey: Three to four years from now, what does PRED look like at scale? Are you building toward a dominant sports prediction vertical, or is the infrastructure you're building the foundation for something broader?

Amit Mahensaria: Three to four years from now, Pred is where serious traders go to trade sports. Not because we told them to, but because the liquidity is deep, the execution is fast, and the spreads are tight enough that professionals can operate here full-time. We want to be across every major sport and league globally, with sub-100ms execution and market data that rivals what you'd see on a Bloomberg terminal.

We're building toward sports first, and we're building it deep. Soccer is our starting point, leading all the way to the FIFA World Cup. The NBA and other sports like tennis and F1 will shortly follow, because the global audiences and trading demand are already there. From there, it's additional leagues, additional sports, player props, in-play markets, season futures. Sports alone is a massive surface area. We haven't come close to exhausting it.

Could the infrastructure serve other event-based markets? Sure. The matching engine, settlement layer, and order book mechanics are general enough. But I'm not interested in becoming a general-purpose prediction platform. I've watched that playbook before. You spread thin, you lose depth, and somebody more focused eats your lunch in every vertical.

We're staying in sports. The more focused we are, the harder we become to replicate. That's the strategy, and we're not going to get bored of it halfway through.


Neobank Card Fraud Case Study: What the Novo Case Teaches About Card Risk Measurement

2026-04-06 19:40:53

For teams that issue cards inside digital banking products, fraud rarely appears as one isolated event. It moves through the product in stages. It can begin at onboarding, surface during funding, intensify at login or account takeover, and only become visible to the business once a suspicious transaction, dispute, or chargeback appears.

That is what makes the Novo example worth studying.

This neobank card fraud case study is useful not because it presents card fraud as a simple transaction problem, but because it shows how debit card risk has to be measured across the customer journey. The published example connects unauthorized disputes, blocked transactions, false positives, customer verification flows, and broader user behavior signals rather than treating each as a separate workstream.

That distinction matters. In many organizations, disputes sit with one team, fraud rules sit with another, support manages customer fallout, and product owns the experience layer that ties everything together. But customers do not experience those functions separately. They experience a single system. If a valid transaction is blocked, if a suspicious purchase is challenged, or if support is required to recover a purchase, the customer sees one thing: whether the bank made the right decision quickly and with minimal friction.

The Novo case study highlights that balance clearly. It describes a business working to reduce dispute and chargeback-related fraud losses while also increasing two-factor authentication challenges and transaction friction to protect customers. Those changes may help prevent unauthorized activity, but they also create a new question: how much legitimate usage gets caught in the same net?

That is the deeper lesson here. Stronger card fraud programs are not only about reducing unauthorized spend. They are about reducing unauthorized activity without making legitimate card use harder than it needs to be.

Why card fraud in neobanks is not just a transaction-stage issue

A debit card transaction may be the moment when fraud becomes obvious, but it is rarely the moment when fraud actually begins.

In a digital banking product, many of the conditions that shape fraud outcomes are set earlier. Weak onboarding controls can allow stolen or synthetic identities into the platform. Gaps in authentication can create account takeover exposure. Poor funding controls can make suspicious accounts look normal until later activity reveals the risk. By the time a card dispute is filed, the original weakness may have happened long before the purchase itself.

That is why neobank fraud prevention works best when teams stop treating card misuse as a single downstream problem. Card fraud sits inside a broader system of onboarding, login, funding, issuing, transaction monitoring, support, and compliance.

Why lifecycle risk matters more in digital banking

Digital banking products are designed around speed and convenience. Customers expect low-friction onboarding, immediate access, and smooth card usage. That convenience is part of the appeal. But it also creates pressure to simplify controls in ways that can expand risk exposure across the customer journey.

In a branch-based environment, more manual review can happen earlier. In a digital-only environment, the system has to do more of the decision-making. That means onboarding, behavioral signals, device context, and transaction monitoring carry more weight.

The Novo example reflects that reality. It does not frame card fraud as something that exists only at the point of purchase. It points to account opening, account funding, and transaction monitoring as part of the larger risk picture. That is what makes it useful for digital banking teams. It shows that the path to a debit card dispute often starts long before the card is actually used.

Why separate teams can miss the real problem

One reason card fraud is often misunderstood is that different parts of the problem are owned by different teams.

Fraud operations may care most about unauthorized spend and losses. Card operations may focus on disputes and chargebacks. Support may care about false positives and recovery time. Product may be concerned with friction and approval rates. Compliance may watch for broader monitoring and review obligations.

All of those teams are looking at the same risk environment, but from different angles.

That creates a measurement problem. If blocked transactions and disputes are tracked separately, leadership can miss how one control influences the other. A stricter fraud rule may reduce some unauthorized activity while increasing valid declines. A looser one may improve card usage while allowing more risk through. Without a connected measurement model, the business can improve one metric while quietly damaging another.

What the Novo example measures beyond chargebacks

The most visible metric in the published case study is the reported reduction in unauthorized disputes, along with a debit card chargeback rate presented as best in class. That is the kind of result that usually draws attention first because it speaks directly to downstream fraud loss.

But the more revealing lesson may be found in how the case study frames blocked transactions.

It reports that 84 percent of blocked transactions were recovered by prompting 2FA or OTP rather than declining the purchase outright. That is important because it shows the fraud program was not only designed to stop bad activity. It was also designed to recover legitimate activity that might otherwise have been lost.

Why blocked valid transactions matter so much

Blocked legitimate transactions are often treated as a side effect of fraud prevention. In reality, they are one of the clearest indicators of how usable a fraud program actually is.

For a small business banking customer, a blocked card purchase is not just an inconvenience. It can interrupt routine operations, delay a payment, create uncertainty, and force the customer into support channels at exactly the wrong moment. Even when the fraud system is technically working as designed, the experience can still feel like failure.

The Novo example highlights that operational cost. The case study notes that customers who encountered false positive events had previously needed to contact support, which created time and cost burdens for both the company and the customer. That makes the issue much broader than card fraud alone. It becomes a support efficiency issue, a customer experience issue, and a product trust issue.

What OTP and 2FA change at the transaction moment

The use of SMS approval or OTP verification is significant because it changes the decision structure around a suspicious transaction.

Instead of treating the choice as binary, approve or decline, the system introduces a third path. The transaction can be challenged. That allows the business to intervene when a purchase looks risky without automatically converting suspicion into a hard stop.

This matters because many suspicious transactions are not actually fraudulent. They may look unusual based on spending amount, location, merchant type, device pattern, or timing, but still be valid. A challenge flow gives the legitimate user a chance to prove that quickly.

That is where better fraud prevention and better customer experience begin to align. The goal is not simply to make fraud rules stricter. The goal is to make decisions more adaptive.
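That three-path structure can be expressed as a small decision function. The thresholds and the scalar risk score here are hypothetical; production systems tune them continuously against fraud-loss and false-positive targets:

```python
def card_decision(risk_score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Three-outcome transaction decision: approve, challenge, or decline.
    The middle band routes suspicious-but-plausible purchases to a 2FA/OTP
    step-up instead of a hard stop, so legitimate users can recover them."""
    if risk_score < low:
        return "approve"
    if risk_score < high:
        return "challenge"
    return "decline"

print(card_decision(0.10))  # → approve
print(card_decision(0.55))  # → challenge
print(card_decision(0.91))  # → decline
```

Widening or narrowing the middle band is exactly the adaptivity trade-off described above: more challenges mean fewer hard declines, at the cost of more step-up friction.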

How this case study reframes false positives and customer friction

One of the strongest elements in this neobank card fraud case study is that it does not treat false positives as an unavoidable side note. It places them near the center of the performance discussion.

That is the right approach.

Too many fraud programs are measured mainly by how much loss they prevent. That is important, but it is incomplete. If a system reduces unauthorized disputes while increasing false declines, support demand, and customer frustration, the business may be solving one problem by making another one worse.

Why fraud prevention user experience balance matters

In digital banking, speed and simplicity are part of the product promise. Customers expect quick onboarding, smooth access, and reliable card usage. When fraud controls become too aggressive, they do not just affect risk outcomes. They directly affect the product experience.

This is especially important for small business banking customers. A blocked purchase can affect routine operations, payroll-related needs, subscriptions, supply purchases, travel, or vendor payments. The impact is often practical and immediate.

That is why fraud prevention user experience balance should be treated as a core operating question, not a design preference. A better fraud program is one that protects the business while preserving legitimate customer behavior.

What better measurement looks like

A stronger card risk program measures more than post-transaction fraud loss.

It should measure unauthorized disputes and chargebacks, because those remain important downstream indicators of misuse.

It should measure blocked valid transactions, because that shows how often legitimate customers are being interrupted.

It should measure recovery rates for challenged activity, because that reveals whether the system can distinguish recoverable good behavior from true fraud risk.

It should measure support burden tied to false positives, because every manual recovery path creates cost.

It should also measure customer experience effects, even when they are less visible in fraud dashboards. Repeated friction can quietly undermine trust and long-term retention.

When those metrics are evaluated together, teams get a much more realistic view of whether their fraud controls are actually working.
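One way to treat those measures as a connected model is to compute them from the same period's counts. The function and input figures below are illustrative, except the 84% challenge-recovery rate, which mirrors the figure published in the case study:

```python
def fraud_program_metrics(disputes, settled_txns, blocked, recovered,
                          support_contacts):
    """Joint view of the card-risk metrics discussed above.
    All arguments are event counts over the same reporting period."""
    return {
        "dispute_rate": disputes / settled_txns,            # downstream misuse
        "challenge_recovery_rate": recovered / blocked,     # good activity saved
        "hard_decline_rate": (blocked - recovered) / settled_txns,
        "support_contacts_per_block": support_contacts / blocked,  # ops burden
    }

m = fraud_program_metrics(disputes=120, settled_txns=1_000_000,
                          blocked=5_000, recovered=4_200,
                          support_contacts=900)
print(f"{m['challenge_recovery_rate']:.0%}")  # → 84%
```

Tracking these in one place makes the trade-offs visible: a rule change that lowers the dispute rate while raising the hard-decline rate is not an unambiguous win.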

What the case study shows about signals, monitoring, and lifecycle context

Another important takeaway from the Novo example is its use of behavior and monitoring across multiple touchpoints.

The case study states that user behavior was monitored across every touchpoint and points to device intelligence and behavior biometrics at account opening, funding, and transaction monitoring. That is important because it suggests fraud decisions were not being made from a single transaction event alone.

Why transaction-only fraud review falls short

A card transaction, by itself, only tells part of the story.

A purchase may seem risky because it is high value, comes from a new merchant type, or appears at an unusual time. But the meaning of that event changes when it is evaluated alongside other signals. Did the login come from a familiar device? Was there recent account activity that changed the risk posture? Was the account newly opened or newly funded? Has the user shown unusual behavior earlier in the session?

Without that wider context, fraud systems are forced to make decisions with incomplete information. That usually leads to one of two outcomes: too much friction or too much missed risk.

Why lifecycle monitoring improves card decisions

Lifecycle monitoring matters because it creates continuity between stages that are too often evaluated separately.

Onboarding signals help teams understand whether an account looked legitimate from the beginning. Login and device signals help assess account takeover risk. Funding patterns can provide context for later spending behavior. Transaction monitoring then becomes more informed because it is built on what the system already knows.

That is a stronger model for digital banking teams. It does not assume there is one universal fraud path. It recognizes that risk can emerge at different points and that each stage can strengthen or weaken the next decision.

For teams evaluating fraud analytics, transaction monitoring, or broader fintech fraud infrastructure, that is one of the clearest lessons in the case study. Better debit card fraud prevention does not come only from smarter transaction filters. It comes from connecting signals across the lifecycle.
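A lifecycle-aware decision can be sketched as a score that conditions the transaction on signals from earlier stages. The features, weights, and field names below are hypothetical, not a tuned model; the point is only that the same purchase scores differently depending on what the system already knows about the account:

```python
def lifecycle_risk(txn: dict, account: dict) -> float:
    """Combine transaction-time signals with onboarding, device, and
    funding context. Weights are illustrative, not a production model."""
    score = 0.0
    if txn["amount"] > 3 * account["avg_txn_amount"]:
        score += 0.3   # unusually large for this account's history
    if txn["device_id"] not in account["known_devices"]:
        score += 0.3   # login/device context: possible account takeover
    if account["age_days"] < 7:
        score += 0.2   # onboarding recency: freshly opened account
    if account["funded_days_ago"] < 1:
        score += 0.2   # funding pattern: spend right after a new deposit
    return min(score, 1.0)

acct = {"avg_txn_amount": 80.0, "known_devices": {"d1"},
        "age_days": 400, "funded_days_ago": 30}
print(lifecycle_risk({"amount": 90.0, "device_id": "d1"}, acct))   # → 0.0
print(lifecycle_risk({"amount": 500.0, "device_id": "d9"}, acct))  # → 0.6
```

A transaction-only system sees just the first signal; the lifecycle view is what turns an ambiguous purchase into a confident approve, challenge, or decline.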

What digital banking teams should take from this example

The real value of this neobank card fraud case study is not just the headline improvement in disputes. It is the way the example connects several operating metrics that are often treated as separate.

Unauthorized disputes matter. Blocked valid transactions matter. OTP or 2FA recovery rates matter. Support burden matters. User behavior signals matter. The interaction between those measures is where card risk becomes visible in a meaningful way.

The most useful lesson for fraud and product teams

For fraud teams, the case study is a reminder that strong controls should be judged by more than what they block.

For product teams, it shows that user friction is not simply a customer experience issue. It is also a fraud performance issue.

For support teams, it reinforces that false positives create measurable operational cost.

For leadership, it shows why card fraud should be treated as a cross-functional system, not a downstream loss metric.

That framing is what makes the example broadly useful. It shows how digital banking teams can evaluate debit card fraud prevention, chargeback reduction, transaction recovery, and customer experience as connected parts of the same operating environment.

Where stronger programs usually improve next

Teams that learn from this kind of example usually move in a few predictable directions.

They connect onboarding risk more clearly to transaction risk.

They invest in challenge and recovery flows rather than relying too heavily on outright declines.

They measure false positives with the same seriousness they apply to fraud losses.

They reduce the distance between fraud operations, card operations, support, and product decision-making.

And they treat authentication design as part of the overall card experience, not just a security layer.

Those shifts do not eliminate fraud. But they do make fraud programs more durable, more measurable, and more aligned with how digital banking products actually work.

Final takeaway

The strongest lesson from the Novo example is simple: card fraud should not be measured only where it becomes visible. It should be measured across the journey that made it possible.

That means looking beyond disputes and chargebacks alone. It means understanding how onboarding, funding, authentication, transaction review, blocked purchases, recovery flows, and customer behavior signals work together. It means treating friction as a real operating cost, not just collateral damage. And it means judging card fraud controls by whether they protect the business without making legitimate activity unnecessarily difficult.

For digital banking teams, that is the more mature way to think about card risk measurement.

A strong fraud program is not the one that blocks the most. It is the one that makes better decisions across the full lifecycle of the customer journey.


:::tip This story was distributed as a release by Sanya Kapoor under HackerNoon’s Business Blogging Program.

:::
