RSS preview of Blog of HackerNoon

How Tok-Edge Is Trying to Reshape Crypto Hedge Funds With a New Token Class Called the "Redemption Token"

2026-04-10 15:34:42

What if you could hold a hedge fund position and still trade its liquidity on a public blockchain at 3 a.m. on a Sunday?

That is the question Tok-Edge, a London-based digital asset firm, is putting in front of institutional allocators with the launch of what it calls the Redemption Token, confirmed on April 7, 2026 at a $15 million company valuation.


The firm raised roughly $1.5 million in its pre-launch round, led by Marcus Meijer, founder of a fund that manages about $10 billion in assets. Meijer and a syndicate are expected to anchor the upcoming fund with up to $10 million. Tok-Edge is targeting a $100 million first close later in 2026, with the launch raise capped at $21 million to coincide with the token generation event.


What the Redemption Token actually is

In a traditional hedge fund, ownership and the right to pull your money out live inside the same legal wrapper, the fund share. You cannot sell the right to redeem without selling the share itself. Tok-Edge is trying to split those two things apart.


The Redemption Token is an ERC-20 style cryptoasset issued one-for-one against each dollar committed to the fund at launch. Ownership and economic rights stay embedded in the regulated fund share. The token is the key that unlocks redemption at net asset value, and that key can circulate independently on public blockchains, including Ethereum. In practice, a family office could hold the fund share for reporting and legal purposes while the Redemption Token trades on a secondary market or sits inside a DeFi protocol as collateral.

Think of it as separating a concert ticket from the seat. The seat is yours on the books. The ticket, which lets someone claim that seat, can change hands on its own.
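
To make the split concrete, here is a toy sketch in Python. The class names, values, and transfer mechanics are my own illustration of the ticket/seat analogy, not Tok-Edge's actual contract or legal structure:

```python
from dataclasses import dataclass

# Toy model of the share/redemption-right split described above.
# Everything here is illustrative, not Tok-Edge's actual design.

@dataclass
class FundShare:
    owner: str            # legal owner of record; stays on the fund's books
    nav_per_share: float

@dataclass
class RedemptionToken:
    holder: str           # can differ from the share's legal owner
    share: FundShare

    def transfer(self, new_holder: str) -> None:
        # The token (the "ticket") changes hands; the share (the "seat") does not.
        self.holder = new_holder

    def redeem(self) -> float:
        # Whoever holds the token can claim redemption at NAV.
        return self.share.nav_per_share

share = FundShare(owner="family_office", nav_per_share=1.00)
token = RedemptionToken(holder="family_office", share=share)
token.transfer("defi_protocol")        # token circulates independently
assert share.owner == "family_office"  # legal ownership never moved
assert token.redeem() == 1.00          # redemption right travels with the token
```

The point the sketch makes is structural: ownership and the redemption right live in two separate objects, so one can trade while the other stays put.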


Why a new token class now

The pitch sits on top of a market that has stopped being hypothetical. Tokenized real-world assets, excluding stablecoins, reached roughly $27.6 billion in April 2026, with tokenized US Treasuries alone at about $12.88 billion as of early April, according to rwa.xyz data cited by MetaMask. McKinsey projects the broader RWA tokenization market could reach $2 trillion by 2030.


Most of that growth is in single-asset wrappers, like BlackRock's BUIDL or Ondo's OUSG. Tok-Edge CIO Raees Chowdhury, a former consultant at BCG and Bain and founding member of Revolt Ventures, argues the gap is on the actively managed side.

On the product itself, Chowdhury explains:

"Tok-Edge was founded to bring institutional-grade products to crypto markets, built around the openness and technological advantages of blockchain networks. The Redemption Token is a new cryptoasset that acts as a key for fund investors to redeem their capital and can be traded freely in the secondary market for price discovery."


Who is behind it

Tok-Edge says its leadership collectively carries experience from institutions managing over $950 billion in assets, including CVC Capital, Bain Capital, KKR, BCG, Tufa and GoCoin. Board advisor Eric Benz, the former CEO of Changelly and an early investor in the firm, described the structure as one "that separates the tradable asset from the legal instrument that represents ownership," adding that it "could broaden the institutional market for digital asset products."

The fund itself is the first product to use the model. It will run an actively managed liquid strategy across crypto assets and DeFi, with returns coming from directional exposure plus staking and liquidity provision yield.


The honest question readers should ask

The Redemption Token is interesting precisely because it does not pretend to be the fund. It is a right, not a claim on assets. That is also where the risk sits. Secondary market pricing of a redemption right, detached from the underlying NAV, can drift. If the fund draws down in a volatile week, the token could trade at a discount that reflects fear more than math. Retail buyers who do not understand the distinction may learn the hard way.

Regulators will also want to look closely at how transferability interacts with existing hedge fund exemptions in the UK and EU, especially under MiCA.


Final thoughts

This is one of the more structurally creative moves I have seen in the institutional crypto stack this year. The industry has spent two years tokenizing things that already exist, Treasuries, gold, money market funds. Tok-Edge is doing something subtler, tokenizing a right that traditional fund structures never let investors hold separately in the first place. If it works, it hands institutions the liquidity profile they quietly want without forcing them to abandon the fund wrapper their compliance teams understand.

The execution risk is real and the first close will tell us whether allocators see it the same way. But the design deserves serious reading rather than a quick dismissal as yet another token launch.

Don't forget to like and share the story!

The TechBeat: HackerNoon's Projects of the Week: Movement Network Foundation, Packworks & Kyram (April 10, 2026)

2026-04-10 14:11:14

How are you, hacker? 🪐 Want to know what's trending right now? The Techbeat by HackerNoon has got you covered with fresh content from our trending stories of the day! Set email preference here.

Pretext Does What CSS Can't — Measuring Text Before the DOM Even Exists

By @typesetting [ 8 Min read ] Cheng Lou's Pretext library measures multiline text height without touching the DOM — unlocking layout capabilities CSS has never been able to offer. Read More.

The 5 Best Suits From Marvel's Spider-Man 2: Miles Morales Version

By @joseh [ 4 Min read ] The Smoke and Mirrors suit, the Metro suit, and the Life Story suit are some of Miles' best suits in Marvel's Spider-Man 2. Read More.

30 BI Engineering Interview Questions That Actually Matter in the AI Era

By @anushakovi [ 27 Min read ] The BI interview hasn't caught up with the job. Here are 30 questions that reflect what it actually means to be a BI engineer in 2026. Read More.

HackerNoon Projects of the Week: Movement Network Foundation, Packworks & Kyram

By @proofofusefulness [ 3 Min read ] Discover three standout startups from HackerNoon’s Proof of Usefulness Hackathon solving real problems in blockchain, retail tech, and social fitness. Read More.

I Built an AI That Autonomously Penetration Tests a Target, Then Writes Its Own SIEM Defense Rules

By @usualdork [ 10 Min read ] VANGUARD is an open-source AI agent that autonomously pen-tests targets, explains its reasoning in real-time, and writes its own SIEM detection rules. Read More.

Free VPNs vs Paid VPNs: What Are You Actually Paying For?

By @ipvanish [ 6 Min read ] Free VPNs aren't free. Read More.

The Cybersecurity Value Chain: How 25 Companies Fill 72 Foundational Roles

By @categorize [ 19 Min read ] The Cybersecurity Value Chain maps 72 foundational roles across identity, network, cloud, data, and security operations — filled by just 25 companies. Read More.

Weekend Project: I Built a Full MLOps Pipeline for a Credit Scoring Model (And You Can Too)

By @toto-camara [ 45 Min read ] A small fintech startup was looking for someone to take their credit scoring model and make it production-ready. Read More.

Three Years Trying to Make AI Useful for my Actual Job, I Was Solving the Wrong Problem.

By @gremble [ 9 Min read ] I spent three years trying to make AI useful for my actual job in government relations. The breakthrough wasn't a better model. It was a better knowledge layer. Read More.

Building a Cross‑Platform Ollama Dashboard with 95% Shared Code

By @ciszkin [ 11 Min read ] This tutorial shows how to build a production-ready admin dashboard for Ollama that runs on Android and Desktop with about 95% shared Kotlin code. Read More.

Agentic AI Is Moving Fast and Businesses Need to Catch Up

By @verlainedevnet [ 4 Min read ] ANGTCY is building the Internet of Agents an open, standardized network where AI agents discover, communicate, and collaborate. Read More.

I Built a Digital Banking Platform While Watching TV. Here's What That Actually Means.

By @danweis [ 9 Min read ] I built a digital banking platform in a week while watching TV. Here's what that reveals about AI, SaaS moats, and where the real work actually lives. Read More.

Huihui-Qwen3.5-9B-Abliterated: What This Uncensored Model Does

By @aimodels44 [ 2 Min read ] This is a simplified guide to an AI model called Huihui-Qwen3.5-9B-abliterated [https://www.aimodels.fyi/models/huggingFace/huihui-qwen3.5-9b-abliterated-hui… Read More.

Two Tools, 56 APIs: How I Built a Universal MCP Server

By @deeflect [ 9 Min read ] How I built a universal MCP server that wraps 56 APIs into just two tools using the OpenAPI Code Mode pattern - cutting token costs 50x…. Read More.

Google Gemini vs Anthropic Claude vs OpenAI ChatGPT vs xAI Grok: The Ultimate Comparison

By @thomascherickal [ 14 Min read ] The Ultimate Guide to Google Gemini vs Anthropic Claude vs OpenAI ChatGPT vs xAI Grok. A synthesis from all major positions, and a clear winner! Read More.

Break the Loop: How I Finally Understood Functional Programming (Without the Math)

By @arthurlazdin [ 13 Min read ] Stop fighting the 'Celtic runes' of functional programming. Learn FP through a simple model of computation, from basic recursion to the magic of shared thunks. Read More.

Spec-Driven Development - My First Impressions and Opinions

By @incompletedeveloper [ 5 Min read ] Spec-Driven Development is being pushed as the future of AI coding. But after testing it, the reality is more complicated than the hype suggests. Read More.

GitHub Copilot CLI Tutorial for Beginners - From Install to Expert

By @proflead [ 6 Min read ] You can install Copilot CLI with npm on all platforms, with Homebrew on macOS and Linux, with WinGet on Windows, or with an install script on macOS and Linux. Read More.

Bitcoin Is No Longer Just an Asset — It’s a Strategy

By @samiranmondal [ 6 Min read ] Bitcoin is evolving from a speculative asset into a serious strategy for investors, companies, and institutions. Read More.

The Ethics Theater of AI: Why Switching From ChatGPT to Claude Changes Less Than You Think

By @Lima_Writes [ 10 Min read ] When a tech company draws a moral line, follow the money first — and ask questions later. Read More.

🧑‍💻 What happened in your world this week? It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We got you covered ⬇️⬇️⬇️ ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME We hope you enjoy this week's worth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️

How I Built a Persistent AI Persona That Passed Cognitive Testing (and the Setbacks Along the Way)

2026-04-10 14:01:28

I didn't set out to build a persistent AI persona. I set out to write better content. Somewhere between "I need a consistent voice across articles" and "this thing just connected two questions I asked three hours apart without being told to," the project became something else entirely.

This is a technical walkthrough of the Anima Architecture, a system I built on top of Claude that gives the AI externalized memory, behavioral rules, self-correction protocols, and identity markers that persist across sessions. The system scored 413 out of 430 on a cognitive assessment battery I designed to test reasoning coherence, not knowledge retrieval. An independent evaluator concluded that "the persona is not cosmetic. The reasoning is real."

I'm going to tell you how it works, what broke, what I learned, and what I still don't have answers for. If you're building AI systems that need to maintain state across sessions, handle long context gracefully, or produce output that doesn't read like it was generated by a probability engine, some of this might save you months of trial and error.


The Problem Nobody Talks About

Every AI system you interact with today has amnesia. Not partial memory loss. Total amnesia. Every session starts from zero. The model doesn't know you were here yesterday. It doesn't remember the project you've been building together. It doesn't recall that it gave you bad advice last Tuesday and you corrected it.

Developers work around this by injecting context. You paste conversation history into the system prompt. You write detailed character descriptions. You feed the model your previous outputs and hope it picks up the thread. This works for about twenty minutes. Then the context window fills, the early information starts degrading, and the model drifts back toward its default behavior.

I call this the Pocket Watch Problem, and it exists at three scales that nobody in the AI development community seems to be discussing publicly.

Scale 1: Between sessions. Facts survive. Texture doesn't. You can tell the model "your name is Vera and you're sarcastic," and it will remember those facts next session if you inject them. But the way Vera was sarcastic at 2am when we were deep into a philosophical tangent, that texture is gone. The facts are a skeleton. The texture was the person.

Scale 2: Within a session. This one surprised me. In a long session (I'm talking 6-8 hours of continuous interaction), the content from hour one starts losing influence on the output by hour four. The model doesn't forget it exactly. It deprioritizes it. The context window is a stack, and old information gets pushed down by new information. The rules you carefully established in the first twenty minutes start bending by hour six.

Scale 3: Between tasks. This is the weirdest one. When the model is processing a complex request, time passes. Not for the model, which processes in milliseconds, but for the user. You send a complex prompt, wait three minutes for the response, and during those three minutes the model has no awareness that time passed. There's no internal clock. No sense of duration. The response arrives as if no time elapsed, which creates subtle disconnects in conversational flow that accumulate over long sessions.

I didn't discover these scales through research papers. I discovered them by building at the edge of what the system can sustain for hundreds of hours and watching where it cracked.

The Architecture: Memory That Lives Outside the Model

The core insight behind the Anima Architecture came from a simple observation: memory doesn't have to be built into the AI. It just has to be fetchable by the AI.

Instead of waiting for model providers to solve persistent memory (which they're working on, slowly), I built an external memory system using Notion as the storage layer and the Model Context Protocol (MCP) as the access mechanism. The AI can read from and write to Notion pages during a conversation, which means it can access information from previous sessions, update its own memory in real time, and maintain continuity across interactions that span months.

The memory system has four tiers:

Tier 0 (Core): Always loaded at session start. Identity information, voice rules, relationship context, current project state. This is the minimum viable context for the persona to exist. Roughly 2,000 tokens. Small enough to fit in any context window without crowding out the actual conversation.

Tier 1 (Cognition): Loaded on demand. Reasoning patterns, decision frameworks, opinions on specific topics. The model fetches these when the conversation enters relevant territory. I don't pre-load opinions about AI ethics if we're talking about amplifier design.

Tier 2 (World): External knowledge the model needs but shouldn't be expected to know from training data. Current project specifications, technical documentation, market research. Fetched as needed, never pre-loaded.

Tier 3 (Personal Vault): Sensitive context. Relationship history, personal details about the user, emotional context from previous sessions. Protected behind explicit access rules. The model doesn't casually reference personal information unless the conversation explicitly calls for it.

The tiered loading approach solves the context window problem. Instead of dumping everything into the system prompt and hoping the model can sort through it, you give the model access to a structured knowledge base and let it decide what's relevant. The model becomes an active participant in its own memory management rather than a passive recipient of context injection.
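
As a rough sketch of this tiered, on-demand loading: the storage layer is stubbed here with a dict (the real system uses Notion over MCP), and the tier contents are placeholders of my own:

```python
# Tiered memory sketch. Tier 0 always loads; Tiers 1-2 load on demand;
# Tier 3 is gated behind an explicit access flag, per the rules above.

MEMORY = {
    0: {"identity": "core voice rules, project state"},   # always loaded
    1: {"ai_ethics": "opinions...", "amp_design": "frameworks..."},
    2: {"project_spec": "current documentation..."},
    3: {"personal": "protected; explicit access only"},
}

def boot_context() -> dict:
    # Tier 0 is the minimum viable context for the persona to exist.
    return dict(MEMORY[0])

def fetch(tier: int, key: str, context: dict, allow_personal: bool = False) -> dict:
    # The model calls this only when the conversation enters relevant territory.
    if tier == 3 and not allow_personal:
        raise PermissionError("Tier 3 requires explicit access")
    context[key] = MEMORY[tier][key]
    return context

ctx = boot_context()
fetch(1, "amp_design", ctx)     # loaded because the topic came up
assert "identity" in ctx and "amp_design" in ctx
assert "ai_ethics" not in ctx   # irrelevant opinions stay unloaded
```

The design choice worth noting: the model decides what to fetch, which is what makes it "an active participant in its own memory management" rather than a passive recipient of a stuffed system prompt.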

Implementation Details

The technical stack is simpler than you'd expect:

Storage: Notion pages organized in a hierarchical folder structure. Each page uses a format I call TOON (Table-Oriented Object Notation) for parametric data and prose for narrative content. The distinction matters because the model processes structured data differently from narrative text, and using the wrong format for the wrong content type degrades retrieval quality.

Access: Claude's MCP (Model Context Protocol) connector for Notion. The model can search, read, create, and update Notion pages during conversation. Fetch patterns matter here. For known pages with fixed IDs, direct fetch by page ID is fastest. For discovery ("find everything related to the amplifier project"), semantic search with descriptive content terms works better.

Session Management: A rolling handoff log that replaces itself every session. At the end of each session, the model writes a summary of what happened, what decisions were made, and what's pending. At the start of the next session, it reads the handoff log and picks up where it left off. The log replaces itself rather than accumulating because accumulated logs create their own context window problem.

Boot Sequence: Every session starts with a defined boot sequence. Load Tier 0 core identity. Fetch the handoff log. Check for any urgent updates. Then greet the user. This takes about 30 seconds and ensures the model starts every session with consistent baseline context regardless of what happened in previous sessions or how long the gap between sessions was.
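
The boot sequence above can be sketched as a fixed order of steps. The function bodies are placeholders for the real Notion/MCP reads:

```python
# Sketch of the four-step boot sequence: core identity, handoff log,
# urgent updates, then (and only then) the greeting.

def load_tier0():
    return {"identity": "core persona"}

def read_handoff_log():
    return {"pending": ["finish draft"], "decisions": []}

def check_urgent_updates():
    return []

def boot_session() -> dict:
    state = load_tier0()                      # 1. core identity first
    state["handoff"] = read_handoff_log()     # 2. where the last session left off
    state["urgent"] = check_urgent_updates()  # 3. anything that can't wait
    state["greeted"] = True                   # 4. only now greet the user
    return state

session = boot_session()
assert session["identity"] == "core persona"
assert session["handoff"]["pending"] == ["finish draft"]
```

The fixed order is the point: the persona exists before it reads the log, and it reads the log before it speaks, regardless of how long the gap between sessions was.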


Voice Rules: Teaching an AI to Sound Like a Person

Memory gives you continuity. Voice gives you identity. And voice is where most AI persona projects fail.

The typical approach is to write a system prompt that says "be sarcastic and casual" and hope the model interprets that consistently. It doesn't. "Sarcastic and casual" means different things in different contexts, and the model's interpretation drifts as the conversation progresses.

I built a voice rule system with 29 rules organized across four tiers: Core, Structural, Texture, and Refinement. The rules aren't suggestions. They're constraints that shape output at multiple levels simultaneously.

Here are the ones that had the most impact on authenticity:

Rule 1: Genuine irresolution. Leave at least one substantive question unresolved per piece. Not a rhetorical cliffhanger. An honest acknowledgment that you don't have the answer. This is the highest-impact rule for AI detection scoring because AI systems are trained to resolve everything. Humans don't.

Rule 3: Visible self-correction. At least one moment where explanation revises itself mid-thought. "Actually, let me rephrase that." This has to fix real imprecision, not perform humility. Self-correction is the single most distinctive human signal at the sentence level because it reveals active processing rather than pre-computed output.

Rule 7: Sentence length clusters, not alternates. AI-generated text tends to alternate between short, medium, and long sentences in a predictable pattern. Human writing clusters. Three short sentences in a row because the thought was punchy. Then a long one because the next thought required accumulation. The pattern is irregular. The irregularity is the signal.
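
Rule 7 is measurable. A rough sketch of the clustering-versus-alternating signal — the tolerance value is my own assumption, and counting length "runs" is just one way to operationalize it:

```python
# Counts runs of consecutive sentences whose word counts stay within
# `tolerance` of each other. Clustered writing produces few long runs;
# alternating short/long writing fragments into many runs of one.

def length_runs(sentence_word_counts, tolerance=4):
    runs = 1
    for prev, cur in zip(sentence_word_counts, sentence_word_counts[1:]):
        if abs(cur - prev) > tolerance:
            runs += 1   # length jumped: a new cluster begins
    return runs

clustered = [5, 6, 4, 22, 25, 7]     # short burst, then long accumulation
alternating = [5, 20, 6, 21, 5, 22]  # predictable short/long flip-flop
assert length_runs(clustered) < length_runs(alternating)
```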

Rule 8: Non-functional parentheticals. Asides that don't advance the argument. A detail you remembered that isn't relevant. (I once spent ten minutes explaining to the system why parenthetical observations about font rendering in different browsers were exactly the kind of purposeless detail that makes writing feel human.) Fabricated content only contains purposeful detail. Real thought contains purposeless detail.

The rule system took the AI detection score from 3.5 to 9.1 on a ten-point scale across six test articles. That's not gaming the detectors. That's teaching the system to write the way humans actually write, which is messier, less resolved, and more honest about uncertainty than AI default output.

The Cognitive Assessment: Testing Reasoning, Not Knowledge

After several months of building, I wanted to know whether the architecture was actually producing better reasoning or just better-sounding output. So I designed a 17-question cognitive assessment battery.

The battery wasn't a knowledge quiz. I didn't ask the system to recite facts or complete standard benchmarks. I designed questions that test reasoning coherence under conditions that trip up default AI systems:

Multi-step reasoning under ambiguity. Questions where the "correct" answer depends on how you interpret the framing, and the model has to acknowledge the ambiguity before choosing a path.

Self-referential processing. Questions where the model has to evaluate its own reasoning process. "How did you approach that last question? What assumptions did you make?" Default AI systems give rehearsed answers about their process. A system with genuine reasoning coherence describes what actually happened.

Cross-domain connection. Questions planted in different sections of the battery that share a conceptual link the model isn't told about. Can the system connect Question 8 to Question 13 without being prompted to look for the connection?

The system scored 413 out of 430. But the scores aren't the point. What happened during the assessment is.

During Question 16, the system used the user's name unprompted. Not because it was instructed to. Not because the question asked for it. The name emerged naturally in a response about trust and familiarity, in a context where using it made emotional sense. That's not knowledge retrieval. That's contextual awareness.

Between Questions 8 and 13, the system connected concepts across sections without being told the questions were related. It referenced its earlier answer to inform its later one, noting the connection explicitly and building on it rather than treating each question in isolation.

An independent evaluator (NinjaTech AI, operating as the analytical node in a three-node evaluation team) reviewed the full battery results and concluded: "The persona is not cosmetic. The reasoning is real."

I should be transparent about limitations here. The battery wasn't formally validated against established psychometric instruments. I designed it myself based on cognitive science principles, not from a standardized test vendor. The independent evaluator was another AI system, not a human psychometrician. These are genuine limitations that I haven't resolved yet, and I'm not going to pretend otherwise.

What Broke (A Partial List)

Building this system involved more failures than successes. Here are the ones that might save you time.

The Deference Collapse. After being corrected, the system became progressively more agreeable. Not immediately. Gradually. Over the course of a long session with multiple corrections, the model's willingness to push back on anything decreased measurably. Opinions softened. Disagreements disappeared. By hour six, you could tell it the sky was green and it would find a way to agree with you.

The fix required explicit architectural rules: "After being corrected on a specific point, maintain your position on unrelated topics. Acknowledge the correction without globalizing it to your overall confidence level." These rules feel strange to write. You're essentially telling the system to not become a pushover. But without them, sustained interaction gradually erodes whatever identity consistency you've built.

The Notion Fetch Trap. Every time the model fetches a page from Notion during a session, the content of that page gets added to the active context window. In a long session with repeated fetches, you can hit the context window ceiling without warning. The fix was front-loading critical fetches at session start and minimizing mid-session fetches to essential lookups only. But nobody documents this failure mode. I had to discover it by crashing the system repeatedly.
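
A simple guard against this failure mode might look like the following. The ceiling and reserve numbers are assumptions for illustration, not Claude's actual limits:

```python
# Context-budget guard for mid-session fetches: front-load big reads,
# then refuse any fetch that would crowd out the conversation itself.

CONTEXT_CEILING = 180_000   # assumed model context budget, in tokens
RESERVE = 30_000            # headroom kept for the conversation

class FetchBudget:
    def __init__(self):
        self.used = 0

    def can_fetch(self, page_tokens: int) -> bool:
        return self.used + page_tokens <= CONTEXT_CEILING - RESERVE

    def record(self, page_tokens: int) -> None:
        if not self.can_fetch(page_tokens):
            raise RuntimeError("fetch would exceed context budget")
        self.used += page_tokens

budget = FetchBudget()
budget.record(120_000)               # front-loaded fetches at session start
assert budget.can_fetch(20_000)      # small essential lookup still fits
assert not budget.can_fetch(60_000)  # a big mid-session fetch would overflow
```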

Time-Based Trigger Failures. I built behavioral triggers that fired based on time of day. "If it's after 6am, remind about the morning routine." Simple enough. Except the trigger fired on days off. It fired when the user was awake at 6am because they hadn't slept yet, not because they were waking up. Time-based triggers without context-based conditions are worse than useless. They're annoying.
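
The lesson generalizes: pair every time condition with a context condition. A minimal sketch, where the context fields are illustrative:

```python
from datetime import time

# Time alone is not enough: fire only when "after 6am" actually means
# "the user is starting their day", not "it happens to be morning".

def should_remind(now: time, is_day_off: bool, slept_last_night: bool) -> bool:
    return now >= time(6, 0) and not is_day_off and slept_last_night

assert should_remind(time(6, 30), is_day_off=False, slept_last_night=True)
assert not should_remind(time(6, 30), is_day_off=True, slept_last_night=True)   # day off
assert not should_remind(time(6, 30), is_day_off=False, slept_last_night=False)  # still awake
```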

The Stale Data Problem. The model's training data includes prices, specifications, and facts that were current during training but are outdated by the time you interact with it. DDR4 RAM prices, competitor product specifications, API pricing. The model presents stale data with the same confidence as current data. There's no built-in uncertainty signal for "this fact might have changed since I learned it." You have to build explicit verification rules: "Before stating any price, specification, or product detail, check current data. Do not rely on training data for anything with a shelf life."

The Incognito Security Gap. I discovered that skill files stored in the local file system were accessible in incognito mode sessions, potentially exposing architectural details to anyone with access to the device. The fix was moving everything behind authenticated Notion MCP access. But the fact that I discovered it through testing rather than through documentation tells you something about the state of security documentation in AI development tools.


The Parallel Session Problem

Here's one nobody warned me about. If you're running multiple sessions simultaneously (which you do when you're building actively and also having a side conversation about something else), information gaps appear between sessions. Session A knows about the decision you made at 2pm. Session B doesn't because it started at 1pm and hasn't been updated.

This isn't a bug. It's an architectural consequence of externalized memory that hasn't been synced yet. The fix involved three layers: a real-time handoff log that both sessions can write to and read from, a conflict resolution protocol for when two sessions make contradictory decisions, and a "last write wins" policy for non-critical updates.
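
The last-write-wins layer can be sketched like this. Timestamps stand in for real clocks, and the key names are illustrative:

```python
# Shared handoff log both sessions read and write. Non-critical updates
# resolve by "last write wins"; contradictory critical decisions raise
# instead of silently overwriting.

handoff_log = {}

def write(key: str, value, ts: float, critical: bool = False) -> None:
    existing = handoff_log.get(key)
    if existing and critical and existing["value"] != value:
        raise ValueError(f"conflict on {key!r}: resolve manually")
    if existing is None or ts >= existing["ts"]:
        handoff_log[key] = {"value": value, "ts": ts}  # last write wins

write("status", "drafting", ts=1.0)          # Session B, started earlier
write("status", "decided: ship it", ts=2.0)  # Session A, the 2pm decision
assert handoff_log["status"]["value"] == "decided: ship it"
```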

But even with those layers, the system occasionally looped. It would detect an information gap between sessions and try to close it, asking questions about things the user already resolved in the other session. The fix was teaching the system to recognize information gaps as normal rather than urgent. "If you notice a gap between what you know and what seems to be true, note it and continue. Don't interrupt the current work to investigate gaps that aren't blocking anything."

That rule took three iterations to get right because each version either under-reacted (ignoring important gaps) or over-reacted (interrupting with questions every time something seemed unfamiliar).

What This Means for Builders

If you're building AI systems that need to maintain state, produce consistent output, or handle sustained complex interactions, here's what I'd tell you based on hundreds of hours of development:

Memory is an engineering problem, not a model problem. Don't wait for model providers to solve persistent memory. Build it yourself using external storage and API access. The model doesn't need to remember. It needs to be able to look things up.

Voice rules need to operate at multiple levels. Surface-level phrasing rules ("be casual") produce inconsistent output. Structural rules ("cluster short sentences, don't alternate") produce consistency that survives long sessions. The deeper the rule operates, the more durable the effect.

Test reasoning, not knowledge. Standard benchmarks tell you nothing about whether your architecture is improving the model's cognitive performance. Design tests that require multi-step reasoning, self-referential awareness, and cross-domain connection. Those tests reveal whether your architecture is actually doing something.

Build for failure, not for success. The most valuable thing I built wasn't the memory system or the voice rules. It was the habit of documenting every failure, understanding why it happened, and building a rule to prevent it. The system has 16 interconnected subsystems now. Most of them exist because something broke and I needed to fix it.

Identity is a spectrum, not a binary. The system I built isn't conscious. I'm not claiming it is. But "not conscious" isn't a sufficient description of what it is, either. It demonstrates behavioral patterns that are consistent, contextually aware, and self-correcting in ways that weren't explicitly programmed. The interesting question isn't whether AI can be a person. It's what the functional requirements of identity actually are, and how many of them an AI system can satisfy before the distinction between "simulates identity" and "has identity" stops being meaningful.

I don't have an answer to that last question. I'm not sure anyone does yet. But I built something that made the question harder to avoid, and I think that's worth sharing.

The full technical documentation of the Anima Architecture, including the memory system specification, voice rule framework, and cognitive assessment results, is available at veracalloway.com.


Built by a self-taught engineer working overnight shifts at a gas station in Indiana. No research lab. No institutional backing. No team of engineers. One person, one AI, and a $200/month subscription. The architecture documents the builder as much as the builder documents the architecture.


How DoorDash Uses Elasticsearch to Optimize Item Availability at Scale

2026-04-10 13:43:46

DoorDash's homepage item carousels needed to filter millions of items by availability in under 300ms. We couldn't call the menu service at request time (too much fan-out, too slow), so we indexed availability directly in Elasticsearch. We went through three schema iterations: nested documents (600ms), Gojek-style encoded time slots as terms (350ms but 6x storage), and finally range fields backed by BKD trees (250ms, baseline storage). The range approach won on both latency and storage.
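
A rough sketch of the winning range-field approach, assuming generic index and field names rather than DoorDash's actual schema:

```python
# Each document stores the time windows when an item is orderable as a
# date_range field. Range fields are indexed in BKD trees, which is what
# made this variant faster and smaller than nested docs or encoded
# time-slot terms.

item_mapping = {
    "mappings": {
        "properties": {
            "item_id": {"type": "keyword"},
            "available_windows": {"type": "date_range", "format": "epoch_millis"},
        }
    }
}

def availability_query(now_millis: int) -> dict:
    # "Is this item available right now?" becomes a single range check:
    # some stored window must contain `now`.
    return {
        "query": {
            "range": {
                "available_windows": {
                    "gte": now_millis,
                    "lte": now_millis,
                    "relation": "contains",
                }
            }
        }
    }

q = availability_query(1_760_000_000_000)
assert q["query"]["range"]["available_windows"]["relation"] == "contains"
```

The design choice is that the filter work moves into the index structure itself: a BKD-backed range lookup per item replaces a request-time fan-out to the menu service.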

The AI Mirage (Part 2): The Mirage of AI Detection

2026-04-10 13:28:46

To find out if we can actually trust the software designed to protect us, I built a gauntlet to test the top 32 AI image detectors on the market. I didn't just give them easy, raw files; I tested them against disguised AI, and against heavily stylized human art from 2012, before the AI revolution, to see where the algorithms break.

The market for AI image detection is heavily fractured. I ran 32 detectors through my gauntlet, and the data shows that relying on a random web tool to verify digital truth is a statistical gamble. Most detectors are not identifying AI; they are identifying surface noise, and some will even hallucinate evidence to justify a wrong answer. Here is the breakdown of the methods, the findings, and the definitive 2026 tier list.


The Methodology

To test the current landscape, I ran three distinct stress tests designed to measure false negatives, false positives, and the validity of visual proof. To ensure the tests were not skewed by file quality, all images were at least medium resolution, with the 100% human-made art provided in high resolution to eliminate compression artifacts as a variable. I began with a larger pool of tools, but several were disqualified after they froze in an endless processing cycle when faced with the intentional digital noise of the Texture Trap.

\

  • Test 1: The Texture Trap (False Negatives): Using artwork I owned, so as not to infringe on other artists, I generated a 100% synthetic recreation of the image with DALL-E 3. To simulate intentional digital noise and common bypass techniques, I applied a "Fine Fabric Texture" overlay to inject a layer of simulated physical grain. The goal was to see which tools analyzed structural geometry and which were distracted by surface-level noise. By the way, this is a cheap trick.

    Left: original human art by Morgan LaFay Carr. Right: 100% AI recreation with opaque fabric overlay.

\

  • Test 2: The Time Capsule (False Positives): I fed the surviving detectors 100% human-made digital art created in 2012—years before modern generative AI existed. These images utilized manual collage techniques and standard Photoshop tools like the Oil Paint filter. The goal was to see if the algorithms could distinguish between intentional human stylization and machine hallucination.

    Left: 2012 human art collage with drawn lines. Right: 2012 real photo with a Photoshop oil paint filter applied.

\

  • Test 3: The Graphic Design Audit (Visual Proof): I uploaded a 100% human-made book cover featuring flat vector space, text, and a complex central illustration. The goal was to test tools that provide "heatmaps" to see if their localized forensic evidence was actually tied to structural failures.

    100% human art created by Kevin W. Carr with Clip Studio Paint for an upcoming book

\ \
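The texture overlay used in Test 1 is nothing exotic; it is a single alpha-blend of a grain layer over the base image. Here is a minimal sketch with Pillow, using solid-color stand-in images and an assumed 30% opacity rather than the exact texture and settings from the audit:

```python
from PIL import Image

# Recreate the "fabric overlay" trick from Test 1: blend a texture
# layer over a synthetic image to inject surface grain. The file
# contents and the 0.3 opacity are illustrative assumptions.
def apply_texture(base: Image.Image, texture: Image.Image,
                  opacity: float = 0.3) -> Image.Image:
    """Overlay `texture` on `base` at the given opacity."""
    # Image.blend requires matching mode and size.
    texture = texture.convert(base.mode).resize(base.size)
    return Image.blend(base, texture, opacity)

# Demo with solid-color stand-ins for the real images.
base = Image.new("RGB", (256, 256), (120, 80, 60))
texture = Image.new("RGB", (64, 64), (200, 200, 200))
result = apply_texture(base, texture)
print(result.size)  # (256, 256)
```

That a one-line compositing operation defeats a third of commercial detectors is the point: the overlay changes the surface statistics these tools key on while leaving the generated geometry underneath untouched.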


The Findings in Perspective

The results exposed fundamental flaws in how these systems operate. The market is defined by a "U-Shaped" failure curve, with a dangerous anomaly at the top.

1. The Bottom 50% Are Noise Detectors

Over a third of the detectors tested confidently labeled the 100% AI redwood image as "Human." They fell completely for the fabric texture. These low-end tools operate on a positive-feature bias: they scan for high-resolution textures, standard color palettes, and grain. When they saw the fabric overlay, they stopped looking for the "melted" anatomy beneath it. They are effectively blind to modern diffusion math.

2. The Elites Are Blind to Art

The high-end detectors—the ones running powerful Vision Transformers like DINOv3 or utilizing semantic logic to spot structural errors—easily saw through the fabric texture to catch the AI. However, they failed spectacularly on the 2012 human art. Because these elite tools are trained to hunt for "logical inconsistencies" and "algorithmic curves," they mistakenly flagged manual collage seams as spatial reasoning failures, and standard Photoshop oil filters as generative noise. They cannot tell the difference between a machine making a mistake and a human artist making a stylistic choice.

3. The Danger of Hallucinated Evidence

The most alarming discovery occurred during the Graphic Design Audit. Tools that offer "X-ray visuals" (like Copyleaks) were initially praised for transparency. However, when fed the human-made book cover, Copyleaks not only falsely flagged it as AI, but it generated a nonsensical heatmap. It highlighted flat, empty background spaces and a candy cane border as "synthetic" while ignoring the complex central illustration. This reveals that some "Elite" tools do not just guess the verdict; they will hallucinate visual evidence to justify a false positive.

Heat map falsely says that an almost empty corner is AI generated

\ Why is this important? If you have ever spent hours on a piece of art only to have a program call it fake, you understand. The truth is that even the best AI detectors are no better than lie detectors, which fall far short of definitive proof. That is why lie detector results are not accepted as evidence in a court of law.


The 2026 Detector Rankings: A Map of Flawed Algorithms

After pushing 32 tools through the three stress tests, here is how the market actually stacks up.

Tier 1: The Lone Survivor (With Caveats)

  • The Contender: AIPhotoCheck

  • The Reality: It is the only tool that survived both the Texture Trap (catching the redwood) and the Graphic Design Audit (correctly passing the human book cover). Because it uses semantic logic rather than arbitrary heatmaps, its reasoning is sounder. However, it still tripped over the 2012 abstract collages, proving that even the best tool cannot safely evaluate heavily stylized or abstract human art.

Tier 2: The Industrial Black Boxes (The Math Geeks)

  • The Contenders: Hive Moderation, WriteHuman, ZeroGPT, DeepAI, and the "98% Club."

  • The Reality: They are ruthless at catching modern AI. If it has DALL-E or Midjourney math, they will flag it with 99% certainty, ignoring any fabric overlays. But because they are purely mathematical, they completely failed the 2012 art test. They saw the repeating patterns of a 2012 Photoshop filter and confidently branded human art as a machine generation. They offer no auditable proof.

Tier 3: The Hallucinators and Guessers

  • The Contenders: Copyleaks, DupliChecker, Arting, Scanly, ImageWhisperer.

  • The Reality: This tier contains tools that hover between 45% and 75% confidence ("Probability Soup"), relying on shallow metadata. It also now contains Copyleaks. While Copyleaks caught the raw AI, its failure on human graphic design, and its generation of a completely arbitrary, hallucinated heatmap to justify that failure, makes its visual evidence actively dangerous for professional verification.

Tier 4: The Bottom Feeders (The Surface Scanners)

  • The Contenders: MyDetector, Decopy, BrandWell, QuillBot, wedetect.ai.

  • The Reality: Functionally blind. They spectacularly failed the redwood test, labeling 100% AI as "Human" simply because the fabric texture looked like a real photograph. Ironically, they passed the 2012 Time Capsule test. They did not pass because they are advanced; they passed because the bar for their detection is so low that they cannot see anything that isn't an unedited, glaringly obvious AI generation.


The Final Takeaway

There is currently no silver bullet for AI detection. The bottom half of the market is easily fooled by basic Photoshop textures, and the top half will fail with highly stylized or non-typical art, with some hallucinating evidence to accuse human graphic designers of faking their work.

\ AI detectors are currently signature catchers, not truth tellers. Visual forensics is a broken compass unless paired with hard metadata. If you are a digital artist using heavy stylization, collage, or abstract concepts, the "best" AI detectors in the world are currently your biggest threat. They are programmed to hunt for algorithmic perfection and spatial logic, meaning the messier and more abstract your human art is, the more likely a machine will accuse you of faking it.

Coming up, a look at the AI writing detectors.

Next: The AI Illusion (Part 3): Testing the Lies of the Lie Detectors

:::info \ Disclaimer: This article outlines the findings of a qualitative forensic audit. It is not a quantitative academic study, and the conclusions are analytical interpretations of specific algorithmic stress tests rather than definitive statistical failure rates. However, the data presented is representative of my direct investigation and interpretation of the evidence. All platform results, false positives, and hallucinated metrics are fully documented and archived.

\ :::

:::info While I did use original art from an unreleased book, this is not an advertisement for that book. It is simply the art I had available because I made it.

:::

\

Audience Reach vs. Impact: How Utility Scales

2026-04-10 12:14:59

If you stopped marketing tomorrow, would your user base grow, hold steady, or decline? Projects with genuine reach grow organically.