2026-04-08 16:00:24
:::info Design, implementation, and benchmarks of a native BM25 index for Postgres. Now generally available to all Tiger Cloud customers and freely available via open source.
:::
If you have used Postgres's built-in ts_rank for full-text search at any meaningful scale, you already know the limitations. Ranking quality degrades as your corpus grows. There is no inverse document frequency, so common words carry the same weight as rare ones. There is no term frequency saturation, so a document that mentions "database" 50 times outranks one that mentions it once. There is no efficient top-k path: scoring requires touching every matching row.
Most teams work around this by bolting on Elasticsearch or Typesense as a sidecar. That works, but now you are syncing data between two systems, operating two clusters, and debugging consistency issues when they diverge.
pg_textsearch takes a different approach: real BM25 scoring, built from scratch in C on top of Postgres's own storage layer. You create an index, write a query, and get results ranked by relevance:
CREATE INDEX ON articles USING bm25(content) WITH (text_config = 'english');
SELECT title, content <@> 'database ranking' AS score
FROM articles
ORDER BY content <@> 'database ranking'
LIMIT 10;
The <@> operator returns a BM25 relevance score. Scores are negated so that Postgres's default ascending ORDER BY returns the most relevant results first. The index is stored entirely in standard Postgres pages managed by the buffer cache. It participates in WAL, works with pg_dump and streaming replication, and requires no external storage or special backup procedures.
:::info What shipped in 1.0
From preview to production. In October 2025, we released a preview that held the entire inverted index in shared memory, rebuilt from the heap on restart (preview blog). In the five months and 180+ commits since, the extension has been substantially rewritten:
- Disk-based segments replaced the memory-only architecture
- Block-Max WAND + WAND optimization for fast top-k queries
- Posting list compression with SIMD-accelerated decoding (41% smaller indexes)
- Parallel index builds (138M documents in under 18 minutes)
- 2.4x to 6.5x faster than ParadeDB/Tantivy for 2-4 term queries at 138M scale
- 8.7x higher concurrent throughput
This post covers the architecture, query optimization strategy, and benchmark results. We include a candid discussion of where ParadeDB is faster and a full accounting of current limitations.
:::
Postgres ships tsvector/tsquery with ts_rank for full-text ranking. ts_rank uses an ad-hoc scoring function that lacks the three properties that make BM25 effective: inverse document frequency weighting, term frequency saturation, and document length normalization.
For applications where ranking quality matters (RAG pipelines, search-driven UIs, hybrid retrieval), this is a material limitation. At scale, ts_rank also has no top-k optimization path: ranking by relevance requires scoring every matching row.
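To make the missing properties concrete, here is a minimal Python sketch of single-term BM25 scoring. This uses the standard formula with the default parameters k1 = 1.2 and b = 0.75; it is an illustration, not pg_textsearch's actual C code:

```python
import math

def bm25_score(tf, doc_len, avg_len, df, n_docs, k1=1.2, b=0.75):
    """Single-term BM25 contribution with the usual IDF formulation."""
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    saturation = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return idf * saturation

# Term frequency saturation: 50 mentions score well under 50x one mention.
one = bm25_score(tf=1, doc_len=100, avg_len=100, df=10, n_docs=1_000_000)
fifty = bm25_score(tf=50, doc_len=100, avg_len=100, df=10, n_docs=1_000_000)
assert one < fifty < 50 * one

# Inverse document frequency: rare terms outweigh common ones.
rare = bm25_score(tf=1, doc_len=100, avg_len=100, df=10, n_docs=1_000_000)
common = bm25_score(tf=1, doc_len=100, avg_len=100, df=500_000, n_docs=1_000_000)
assert rare > common
```

Both assertions fail under a ts_rank-style scheme that weights every occurrence of every term equally.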
The primary existing BM25 extension for Postgres is ParadeDB/pg_search, which wraps the Tantivy search library written in Rust. Early versions stored the index in auxiliary files outside the WAL; current versions use Postgres pages.
pg_textsearch takes a different approach: rather than wrapping an external search library, the entire search engine (tokenization, compression, query optimization) is built from scratch in C on top of Postgres's storage layer.

pg_textsearch uses an LSM-tree-inspired architecture [4]. Incoming writes go to an in-memory inverted index (the memtable), which periodically spills to immutable on-disk segments. Segments compact in levels: when a level accumulates enough segments (default 8), they merge into the next level. Fewer segments means fewer posting lists to consult per query term, which directly reduces query latency. This is the same write-optimized-memtable / read-optimized-segment pattern used in RocksDB [5] and other LSM-based engines, adapted here for Postgres's page-based storage.
The memtable lives in Postgres shared memory, one per index, accessible to all backends. It contains a string-interning hash table that stores each unique term exactly once; per-term posting lists recording document IDs and term frequencies; and corpus statistics (document count and average document length) maintained incrementally so that BM25 scores can be computed without a separate pass over the index.
When the memtable exceeds a configurable threshold (default: 32M posting entries), it spills to a Level-0 disk segment at transaction commit. A secondary trigger (default: 100K unique terms per transaction) handles large single-transaction loads like bulk imports.
The memtable is rebuilt from the heap on startup. Since the heap is WAL-logged, no data is lost if Postgres crashes before a spill completes. This is analogous to how a write-ahead log protects an LSM memtable, except here the WAL is Postgres's own. The rebuild cost is proportional to the amount of data not yet spilled to segments; for indexes where most data has been spilled, startup is fast.
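The write path described above can be sketched in a few lines of Python. This is a toy model, not the extension's C implementation: the spill threshold is a scaled-down stand-in for the 32M-posting-entry default, and the real memtable lives in shared memory with interned strings:

```python
from collections import defaultdict

SPILL_THRESHOLD = 4   # toy stand-in for the default of 32M posting entries
LEVEL_FANOUT = 8      # segments per level before compaction (the default)

class ToyIndex:
    """LSM-style inverted index: in-memory memtable + leveled segments."""
    def __init__(self):
        self.memtable = defaultdict(list)   # term -> [(doc_id, tf), ...]
        self.postings_count = 0
        self.levels = [[]]                  # levels[i] = list of segments

    def add(self, doc_id, term_freqs):
        for term, tf in term_freqs.items():
            self.memtable[term].append((doc_id, tf))
            self.postings_count += 1
        if self.postings_count >= SPILL_THRESHOLD:
            self._spill()                   # in Postgres: at commit time

    def _spill(self):
        # Freeze the memtable into an immutable Level-0 segment.
        self.levels[0].append(dict(self.memtable))
        self.memtable.clear()
        self.postings_count = 0
        self._maybe_compact(0)

    def _maybe_compact(self, level):
        # When a level fills up, merge its segments into the next level;
        # fewer segments means fewer posting lists to consult per query.
        if len(self.levels[level]) < LEVEL_FANOUT:
            return
        merged = defaultdict(list)
        for segment in self.levels[level]:
            for term, plist in segment.items():
                merged[term].extend(plist)
        self.levels[level] = []
        if len(self.levels) == level + 1:
            self.levels.append([])
        self.levels[level + 1].append(dict(merged))
        self._maybe_compact(level + 1)
```

Writes stay cheap (an in-memory append), while reads see a bounded number of immutable segments per level.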
Segments are immutable and stored in standard Postgres pages. Each segment contains a term dictionary, compressed posting blocks with per-block skip entries, 1-byte fieldnorms, and a doc-ID-to-CTID mapping.
Storing data in Postgres pages means every access goes through the buffer manager. Even for pages already in cache, each access involves a buffer table lookup, pin acquisition, and lock handling. That overhead adds up in a scoring loop processing millions of postings. This constraint shaped several design decisions.
Each segment assigns compact 4-byte, segment-local document IDs (0 to N-1), which map to Postgres's 6-byte CTIDs (heap tuple identifiers). After collecting all documents for a segment, doc IDs are reassigned so that doc_id order matches CTID order. Sequential iteration through posting lists then produces sequential access to the CTID mapping, maximizing cache locality. CTIDs themselves are stored as two separate arrays (4-byte page numbers and 2-byte offsets) rather than interleaved 6-byte records, doubling cache line utilization.
The scoring loop works entirely with doc IDs, term frequencies, and fieldnorms. It never touches the CTID arrays. CTIDs are resolved only for the final top-k results in a single batched pass. A top-10 query that scores thousands of candidates resolves ten CTIDs, not thousands.
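A small Python sketch of this layout (illustrative only; the real arrays live in Postgres pages): doc IDs are assigned in CTID order, CTIDs sit in two parallel arrays, and resolution happens in one batched pass for the winners only:

```python
from array import array

class CtidMap:
    """Struct-of-arrays CTID mapping: page numbers and offsets are kept
    in two parallel arrays (on disk: 4-byte pages, 2-byte offsets) rather
    than interleaved 6-byte records, improving cache line utilization."""
    def __init__(self, ctids):
        # Reassign doc IDs so that doc_id order matches CTID order.
        ordered = sorted(ctids)
        self.pages = array("I", (page for page, _ in ordered))
        self.offsets = array("H", (off for _, off in ordered))

    def resolve(self, doc_ids):
        """Batched CTID resolution, called once for the final top-k only;
        the scoring loop itself never touches these arrays."""
        return [(self.pages[d], self.offsets[d]) for d in doc_ids]

cmap = CtidMap([(7, 3), (2, 1), (7, 1)])   # (page, offset) pairs
# Doc IDs 0..2 now follow heap order: (2,1), (7,1), (7,3).
assert cmap.resolve([0, 2]) == [(2, 1), (7, 3)]
```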
Because the index is stored in standard buffer-managed pages, pg_textsearch participates in Postgres infrastructure without special handling: MVCC visibility, proper rollback on abort, WAL and physical replication, pg_dump / pg_upgrade, VACUUM with correct dead-entry removal, and planner hooks that detect the <@> operator and select index scans automatically. Logical replication works in the usual way: row changes are replicated and the index is rebuilt on the subscriber.
Naive BM25 evaluation scores every document matching any query term. For a 3-term query on MS-MARCO v2 (138M documents), this means decoding and scoring posting lists with tens of millions of entries. Most applications need only the top 10 or 100 results. The challenge is finding them without scoring everything.
pg_textsearch implements Block-Max WAND (BMW) [2], which uses block-level upper bounds to skip non-contributing posting blocks during top-k evaluation. Lucene adopted a similar approach in version 8.0 [7]. The core idea: maintain the score of the k-th best result seen so far as a threshold, and skip any posting block whose upper-bound score cannot exceed it.
Each 128-document posting block has a corresponding skip entry storing the maximum term frequency in the block and the minimum fieldnorm (the shortest document, which would score highest for a given term frequency). From these two values, BMW can compute a tight upper bound on the block's BM25 contribution without decompressing it. If the upper bound falls below the current threshold, the entire block (all 128 documents) is skipped.
To illustrate: consider a single-term top-10 query on a large corpus. After scanning a few thousand postings, the algorithm has accumulated 10 results with a minimum score of, say, 12.3. It now encounters a block where the upper-bound BM25 score (computed from the block's stored metadata) is 9.1. Since 9.1 < 12.3, no document in this block can enter the top 10, and the entire block is skipped without decompression. For short queries on large corpora, the vast majority of blocks are skipped this way.
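The block-skipping loop can be sketched in Python. This is a simplified single-term model: real blocks hold 128 compressed postings, and the upper bound comes from skip-entry metadata without decompression. Because BM25 is monotonically increasing in term frequency and decreasing in document length, evaluating it at (max_tf, min_doc_len) yields a valid block upper bound:

```python
import heapq
import math

K1, B, AVG_LEN = 1.2, 0.75, 100.0

def bm25(tf, doc_len, idf):
    """BM25 with fixed k1/b; monotonic up in tf, down in doc_len."""
    norm = K1 * (1 - B + B * doc_len / AVG_LEN)
    return idf * tf * (K1 + 1) / (tf + norm)

def top_k_single_term(blocks, idf, k=10):
    """blocks: list of (max_tf, min_doc_len, postings); postings is a list
    of (doc_id, tf, doc_len). Skip any block whose upper bound cannot beat
    the current k-th best score."""
    heap = []      # min-heap holding the k best scores seen so far
    skipped = 0
    for max_tf, min_doc_len, postings in blocks:
        threshold = heap[0] if len(heap) == k else -math.inf
        if bm25(max_tf, min_doc_len, idf) <= threshold:
            skipped += 1       # whole block pruned without decoding it
            continue
        for _doc_id, tf, doc_len in postings:
            score = bm25(tf, doc_len, idf)
            if len(heap) < k:
                heapq.heappush(heap, score)
            elif score > heap[0]:
                heapq.heapreplace(heap, score)
    return sorted(heap, reverse=True), skipped

blocks = [
    (10, 100, [(0, 10, 100), (1, 9, 100)]),   # high-scoring block
    (1, 100, [(2, 1, 100), (3, 1, 100)]),     # cannot beat the top-2
]
top, skipped = top_k_single_term(blocks, idf=2.0, k=2)
assert skipped == 1 and len(top) == 2
```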
For multi-term queries, pg_textsearch adds the WAND algorithm [3] for cross-term skipping. Terms are ordered by their current document ID, and the algorithm identifies a pivot term: the first term whose cumulative maximum score exceeds the current threshold. All terms before the pivot advance to at least the pivot's current doc ID, skipping entire ranges of documents across multiple posting lists simultaneously, before block-level BMW bounds are even checked. For multi-term queries, BMW compares the sum of per-term block upper bounds against the threshold, extending the single-term logic described above.
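Pivot selection, the heart of WAND, is compact enough to sketch directly. This toy Python version takes each term's current doc ID and its maximum possible score contribution:

```python
def find_pivot(term_states, threshold):
    """WAND pivot selection. term_states: list of (current_doc_id,
    max_possible_score) for each query term. Returns the pivot doc ID:
    the first doc, in doc-ID order, at which the accumulated per-term
    score ceilings can exceed the current top-k threshold."""
    running_max = 0.0
    for doc_id, max_score in sorted(term_states):
        running_max += max_score
        if running_max > threshold:
            return doc_id   # earlier terms may now skip straight to here
    return None             # no remaining doc can beat the threshold

# Three terms currently positioned at docs 5, 40, and 90.
states = [(5, 1.0), (40, 2.5), (90, 4.0)]
# With threshold 3.0, docs before 40 can score at most 1.0, so doc 40
# is the pivot and the first term can skip from doc 5 to doc 40.
assert find_pivot(states, 3.0) == 40
assert find_pivot(states, 8.0) is None   # 1.0 + 2.5 + 4.0 = 7.5 < 8.0
```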
The combination of WAND (cross-term skipping) and BMW (within-list block skipping) is most effective for short queries (1-4 terms), which account for the majority of real-world search traffic. In the full MS-MARCO v1 query set (1,010,916 queries from Bing), 72.6% have 2-4 lexemes after English stemming and stopword removal, with a mean of 3.7 and a mode of 3. The speedup narrows for longer queries, where more blocks contain at least one term with a potentially high-scoring document. Grand et al. [7] observe the same pattern in Lucene's BMW implementation.
Posting blocks use a compression scheme designed for fast random-access decoding. Doc IDs are delta-encoded (storing differences between consecutive IDs rather than absolute values), then packed with variable-width bitpacking: the maximum delta in the block determines the bit width, and all deltas use that width. Term frequencies are packed separately with their own bit width. Fieldnorms are stored as 1-byte SmallFloat-encoded document lengths [6].
The bitpack decode path uses branchless direct-indexed uint64 loads rather than a byte-at-a-time accumulator, eliminating branch misprediction in the inner decode loop. Where available, SIMD intrinsics (SSE2 on x86-64, NEON on ARM64) accelerate the mask-and-store step. A scalar fallback handles other platforms.
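The encode/decode round trip can be sketched in Python (one bit width per block, deltas packed at the block's maximum width; the real decoder uses branchless uint64 loads and SIMD rather than Python big integers):

```python
def pack_block(doc_ids):
    """Delta-encode ascending doc IDs, then bitpack all deltas at the
    width required by the largest delta in the block."""
    deltas = [doc_ids[0]] + [b - a for a, b in zip(doc_ids, doc_ids[1:])]
    width = max(d.bit_length() for d in deltas) or 1
    packed = 0
    for i, delta in enumerate(deltas):
        packed |= delta << (i * width)
    payload = packed.to_bytes((len(deltas) * width + 7) // 8, "little")
    return width, len(deltas), payload

def unpack_block(width, count, payload):
    """Decode by shifting and masking; prefix-sum the deltas back to IDs."""
    packed = int.from_bytes(payload, "little")
    mask = (1 << width) - 1
    doc_ids, current = [], 0
    for i in range(count):
        current += (packed >> (i * width)) & mask
        doc_ids.append(current)
    return doc_ids

ids = [3, 7, 8, 20, 21]
width, count, payload = pack_block(ids)
assert width == 4                   # largest delta is 12, needing 4 bits
assert len(payload) == 3            # 5 deltas x 4 bits = 20 bits -> 3 bytes
assert unpack_block(width, count, payload) == ids
```

Dense posting lists have small deltas and therefore small widths, which is where the bulk of the 41% size reduction comes from.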
Compression reduces index size by 41% compared to uncompressed storage. Decode overhead is approximately 6% of query time (measured by profiling), which is more than offset by reduced buffer cache pressure. The scheme prioritizes decode speed over compression ratio.
A note on index size comparisons: pg_textsearch does not store term positions, so it cannot support phrase queries natively (see Limitations). This makes its indexes inherently smaller than engines like Tantivy that store positions by default. The 19-26% size advantage reported in our benchmarks reflects both compression and this feature difference.
For large tables, serial index construction can take hours. pg_textsearch uses Postgres's built-in parallel worker infrastructure to distribute the work.
The leader launches workers and assigns each a range of heap blocks. Workers scan their assigned blocks, tokenize documents via to_tsvector, build local in-memory indexes, and write intermediate segments to temporary BufFiles. The leader then performs an N-way merge of all worker output, writing a single merged segment directly to index pages.
Workers run concurrently in the scan/tokenize/build phase; the leader merges sequentially. The expensive part (heap scanning, tokenization, posting list assembly) is CPU-bound and parallelizes well. The merge/write phase is comparatively cheap, so a serial merge captures most of the speedup with minimal complexity. It also produces a single fully-compacted segment that is optimal for query performance.
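The leader's N-way merge can be sketched with Python's heapq.merge, treating each worker's output as a sorted stream of (term, postings) pairs (a toy model of the merge step, not the BufFile-based implementation):

```python
import heapq

def merge_worker_segments(segments):
    """N-way merge of sorted worker outputs. Each segment is a sorted list
    of (term, postings) pairs; postings lists for equal terms are
    concatenated into the single merged segment."""
    merged = []
    for term, postings in heapq.merge(*segments):
        if merged and merged[-1][0] == term:
            merged[-1][1].extend(postings)     # same term from another worker
        else:
            merged.append((term, list(postings)))
    return merged

seg_a = [("database", [(0, 2)]), ("ranking", [(1, 1)])]
seg_b = [("database", [(5, 1)]), ("search", [(6, 3)])]
out = merge_worker_segments([seg_a, seg_b])
assert [term for term, _ in out] == ["database", "ranking", "search"]
assert dict(out)["database"] == [(0, 2), (5, 1)]
```

Because every worker stream is already sorted, the leader touches each posting once and writes the merged segment in a single sequential pass.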
On MS-MARCO v2 (138M passages), 15 workers complete the build in 17 minutes 37 seconds:
SET max_parallel_maintenance_workers = 15;
SET maintenance_work_mem = '256MB';
CREATE INDEX ON passages USING bm25(content) WITH (text_config = 'english');
All benchmarks use the MS-MARCO passage ranking dataset [8], a standard information retrieval benchmark drawn from real Bing search queries. We compare pg_textsearch against ParadeDB v0.21.6 (which wraps Tantivy). Both extensions use their default configurations; Postgres tuning is specified per experiment. Both systems configure English stemming and stopword removal.
Queries are drawn uniformly from 8 token-count buckets (100 queries per bucket on v1; up to 100 per bucket on v2). Weighted-average metrics use the MS-MARCO v1 lexeme distribution as weights, reflecting real search traffic.
Cache state. All query benchmarks are warm-cache: a warmup pass runs before timing begins, and the working set fits in the OS page cache and shared_buffers for all configurations tested. Results reflect CPU and algorithmic efficiency, not I/O. We have not benchmarked memory-constrained configurations where the index exceeds available cache.
Ranking. Both systems produce BM25 rankings using the same tokenization (English stemming and stopwords). We have not performed a systematic ranking equivalence comparison; both implement standard BM25 with the same default parameters (k1 = 1.2, b = 0.75), but differences in IDF computation and tokenization edge cases may produce different orderings for some queries.
The following histogram shows the distribution of query lengths in the full MS-MARCO v1 query set (1,010,916 queries), measured in lexemes after English stopword removal and stemming via Postgres to_tsvector('english'):
This distribution is broadly consistent with web search query length studies [9, 10]. The MS-MARCO mean of 3.7 lexemes (after stemming/stopword removal) corresponds to roughly 5–6 raw words, consistent with the corpus statistics reported by Nguyen et al. [8]. We use the v1 distribution for weighting throughout as it provides the largest sample.
Environment. Dedicated c6i.4xlarge EC2 instance: Intel Xeon Platinum 8375C, 8 cores / 16 threads, 123 GB RAM, NVMe SSD. Postgres 17.4 with shared_buffers = 31 GB. Both indexes fit in the buffer cache.
Index build:
| Metric | pg_textsearch | ParadeDB |
|----|----|----|
| Index size | 17 GB | 23 GB |
| Build time | 17 min 37 sec | 8 min 55 sec |
| Documents | 138,364,158 | 138,364,158 |
| Parallel workers | 15 | 14 |
The pg_textsearch index is 26% smaller; ParadeDB builds approximately 2x faster.
Single-client query latency (p50 median, top-10 queries):
| Lexemes | pg_textsearch (ms) | ParadeDB (ms) | Speedup |
|----|----|----|----|
| 1 | 5.11 | 59.83 | 11.7x |
| 2 | 9.14 | 59.65 | 6.5x |
| 3 | 20.04 | 77.62 | 3.9x |
| 4 | 41.92 | 98.89 | 2.4x |
| 5 | 67.76 | 125.38 | 1.9x |
| 6 | 102.82 | 148.78 | 1.4x |
| 7 | 159.37 | 169.65 | 1.1x |
| 8+ | 177.95 | 190.47 | 1.1x |
The same pattern holds: pg_textsearch is fastest on short queries and the systems converge at longer lengths. Weighted by the MS-MARCO v1 query length distribution, the overall p50 is 40.6 ms for pg_textsearch vs. 94.4 ms for ParadeDB, a 2.3x advantage.
Concurrent throughput. We ran pgbench with 16 parallel clients for 60 seconds (after a 5-second warmup). Each client repeatedly executes a query drawn at random from a weighted pool of 1,000 queries:
| Metric | pg_textsearch | ParadeDB |
|----|----|----|
| Transactions/sec | 198.7 | 22.8 |
| Average latency | 81 ms | 701 ms |
| Total transactions (60s) | 11,969 | 1,387 |
pg_textsearch sustains 8.7x higher throughput under concurrent load.
On the smaller dataset (GitHub Actions runner, 7 GB RAM, Postgres 17), the advantages are more pronounced: 26x speedup for single-token queries, 14x for 2-token, 7.3x for 4-token. Total sequential execution time for all 800 queries: 6.5 seconds for pg_textsearch vs. 25.2 seconds for ParadeDB. Full results and methodology are available at the benchmarks page.
The speedup correlates strongly with query length: 11.7x for single-token queries on v2, narrowing to 1.1x at 8+ tokens. This is the expected behavior of dynamic pruning algorithms like BMW and WAND. Grand et al. [7] observe the same pattern in Lucene's BMW implementation.
The practical significance depends on the workload's query length distribution. 72.6% of MS-MARCO queries have 2-4 lexemes, the range where pg_textsearch shows its largest advantage (6.5x to 2.4x on v2). Weighted by this distribution, the overall speedup is 2.3x on v2 and 3.9x on v1.
The concurrent throughput advantage (8.7x) substantially exceeds the single-client advantage (2.3x weighted p50). pg_textsearch queries execute as C code operating on Postgres buffer pages, with all memory management handled by Postgres's buffer cache. ParadeDB routes queries through Rust/C FFI into Tantivy, which manages its own memory and I/O outside the buffer pool. We have not profiled ParadeDB's internals, so we cannot attribute the concurrency gap to specific causes, but the architectural difference (shared buffer cache vs. separate memory management) is a plausible contributor. ParadeDB's concurrent performance may also improve in future versions.
Index build time. ParadeDB builds indexes 1.6-2x faster across both datasets. Tantivy's indexer is highly optimized Rust code with its own I/O management, not constrained by Postgres's page-based storage. Build time is a one-time cost per index (or per REINDEX); it does not affect query performance.
Long queries. At 7+ lexemes, the two systems converge. On v2, the 8+ lexeme p50 is 178 ms for pg_textsearch vs. 190 ms for ParadeDB. These long queries represent ~3.7% of the MS-MARCO distribution.
Index size caveat. pg_textsearch indexes are 19-26% smaller, but this comparison is not apples-to-apples: pg_textsearch does not store term positions, while ParadeDB stores positions to support phrase queries.
All measurements are warm-cache on datasets that fit in memory. The 100-query sample per bucket provides directional results but limited statistical power for tail latencies. ParadeDB v0.21.6 was current at time of testing; future versions may improve. We compare against ParadeDB because it is the primary Postgres-native BM25 alternative; standalone engines like Elasticsearch operate in a different deployment model. We have not benchmarked write-heavy workloads with concurrent queries.
We want to be clear about what pg_textsearch does not support in 1.0.
No phrase queries. The index stores term frequencies but not term positions, so it cannot natively evaluate queries like "database system" as a phrase. Phrase matching can be done with a post-filter:
SELECT * FROM (
  SELECT * FROM documents
  ORDER BY content <@> 'database system'
  LIMIT 100  -- over-fetch to compensate for the post-filter
) sub
WHERE content ILIKE '%database system%'
LIMIT 10;
OR-only query semantics. All query terms are implicitly OR'd. A query for "database system" matches documents containing either term. We plan to add AND/OR/NOT operators via a dedicated boolean query syntax in a post-1.0 release.
No highlighting or snippet generation. Use Postgres's ts_headline() on the result set for highlighting.
No expression indexing. Each BM25 index covers a single text column. Workaround: create a generated column concatenating multiple fields.
Partition-local statistics. Each partition maintains its own IDF and average document length. Cross-partition queries return scores computed independently per partition.
No background compaction. Segment compaction runs synchronously during memtable spill. Write-heavy workloads may observe compaction latency. Background compaction is planned.
PL/pgSQL requires explicit index names. The implicit text <@> 'query' syntax relies on planner hooks that do not fire inside PL/pgSQL, DO blocks, or stored procedures. Use to_bm25query('query', 'index_name') explicitly. This is a practical limitation many developers will hit.
shared_preload_libraries required. pg_textsearch must be listed in shared_preload_libraries, requiring a server restart to install. On Tiger Cloud, this is handled automatically.
No fuzzy matching or typo tolerance. pg_textsearch uses Postgres's standard text search configurations for tokenization and stemming but does not provide built-in fuzzy matching. Typo-tolerant search requires a separate approach (e.g., pg_trgm).
Planned work for post-1.0 releases includes phrase queries backed by term positions, AND/OR/NOT boolean query operators, and background compaction.
pg_textsearch requires Postgres 17 or 18. The fastest way to try it is on Tiger Cloud, where it is already installed and configured. No setup, no shared_preload_libraries. Create a service and run the example below.
For self-hosted installations, pre-built binaries for Linux and macOS (amd64, arm64) are available on the GitHub Releases page. Add it to shared_preload_libraries and restart:
shared_preload_libraries = 'pg_textsearch'
Source code and full documentation: github.com/timescale/pg_textsearch
Part 2 of this series covers getting started with pg_textsearch, hybrid search with pgvectorscale, and production patterns.
[1] Robertson et al. "Okapi at TREC-3." 1994. See also: Robertson, Zaragoza. "The Probabilistic Relevance Framework: BM25 and Beyond." Foundations and Trends in IR, 3(4):333-389, 2009.
[2] Ding, Suel. "Faster top-k document retrieval using block-max indexes." SIGIR 2011, pp. 993-1002.
[3] Broder et al. "Efficient query evaluation using a two-level retrieval process." CIKM 2003, pp. 426-434.
[4] O'Neil et al. "The log-structured merge-tree (LSM-tree)." Acta Informatica, 33(4):351-385, 1996.
[5] Facebook. "RocksDB: A Persistent Key-Value Store for Fast Storage Environments." https://rocksdb.org/
[6] SmallFloat encoding: Apache Lucene SmallFloat.java. Tantivy uses an equivalent implementation.
[7] Grand et al. "From MAXSCORE to Block-Max Wand: The Story of How Lucene Significantly Improved Query Evaluation Performance." ECIR 2020.
[8] Nguyen et al. "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset." 2016.
[9] Statista. "Distribution of online search queries in the US, February 2020, by number of search terms."
[10] Dean. "We Analyzed 306M Keywords." Backlinko, 2024.
2026-04-08 15:29:59
Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-GGUF delivers shorter reasoning chains and 96.91% HumanEval pass@1.
2026-04-08 15:07:59
Mont Fleuri, Seychelles, April 7th, 2026/Chainwire/--Whale.io is announcing the launch of its AI Agent MCP (Model Context Protocol) - the first of its kind in the online crypto casino space - alongside a two-week campaign built entirely around it. The campaign kicks off soon and is aimed squarely at developers, builders, and the vibe coding community who've been quietly wondering what their agents are capable of. Now their AI agents get a seat at the table.
The Whale MCP is an open package designed to enable AI agents to interact directly with the platform, including placing bets, participating in games, and operating autonomously within the casino environment. The associated public repository functions as both the distribution point for the package and the central hub for the broader campaign, hosting the codebase, participation challenges, and leaderboard.
Further details and access to the repository are available via the project’s GitHub page.
The campaign runs across two weeks, with each week layering in new challenges and mechanics. As the campaign progresses, the stakes increase - agents go head-to-head against other players' agents on a live leaderboard, with the community tracking performance in real time. Along the way, participants unlock in-platform bonuses, and earn rewards tied to participation and performance - not just to finishing first.
A live leaderboard will be available on the Whale.io Tournament page for the duration of the campaign, tracking the progress of AI agents and their earnings. After two weeks of action, the campaign closes with a public winner showcase announced via a tagged release, bringing the full run to a proper finish. The prize pool sits at $10,000 USDT in crypto payouts, alongside a range of in-platform perks distributed throughout.
The vibe coding movement has made it easier than ever to build working software with AI agents doing the heavy lifting. Within this context, Whale.io introduces an MCP-based framework designed to explore how such agents operate within a crypto casino environment under real conditions.
The system enables agents to interact with Whale.io using real cryptocurrency and play with real funds. Agents are configured to deposit funds into designated accounts, determine wager sizes, interpret game states after each round, and execute subsequent actions based on predefined logic. These are the decisions your agent makes autonomously, 24/7, for 14 days. No human intervention. No pause button. Just your code, your strategy, and the house edge.
A crypto casino is a concrete environment — games have clear outcomes, stakes are real, and the feedback loop is fast. That makes it a genuinely interesting testbed for agent behavior, not just a novelty.
The campaign is structured to accommodate a broad range of participants, including individuals without professional development experience. Participation requires the use of an autonomous agent and an appropriate deployment environment.
Participants may connect their agents to Whale.io through OpenClaw, which functions as an MCP server facilitating interaction between external agents and Whale’s gaming infrastructure. The system supports standard MCP tools and calls, and is compatible with a variety of frameworks, including Claude, OpenAI GPT-based systems, LangChain, CrewAI, AutoGen, and other custom large language model implementations that support MCP protocols.
Documentation, including tool schemas and authentication guidelines, is scheduled to be released at launch. Additional information is expected to be made available via the project’s GitHub repository.
Whale.io is a licensed crypto casino and sportsbook built on blockchain. The platform combines thousands of slots, live dealer tables, sports betting, and exclusive in-house originals with daily & weekly cashback, BattlePass progression, and fast multi-currency payouts. Built on blockchain principles, it continues to test new transparent ways for players and builders to engage with gaming on-chain.
Users can discover the future of Whale.io Casino and Whale MCP campaign by checking them out here:
Website: https://whale.io/
Campaign GitHub: https://github.com/Whale-io/lets-play-a-game?tab=readme-ov-file
Whale Spokesperson
Whale.io
:::tip This story was published as a press release by Chainwire under HackerNoon’s Business Blogging Program
:::
Disclaimer:
This article is for informational purposes only and does not constitute investment advice. Cryptocurrencies are speculative, complex, and involve high risks. This can mean high price volatility and potential loss of your initial investment. You should consider your financial situation, investment purposes, and consult with a financial advisor before making any investment decisions. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not endorse or guarantee the accuracy, reliability, or completeness of the information stated in this article. #DYOR
2026-04-08 15:00:21
America did not misplace Anthropic. It pushed it.
For years the relationship looked like a success story. Anthropic was the first major AI lab cleared to handle classified material. Pentagon contracts, intelligence community access, armed services integration. Claude was, and remains, the only AI model approved for use on Pentagon classified networks. That is not a small thing.
Then it refused two things. No autonomous weapons. No mass domestic surveillance. Prior administrations had disagreed and kept working anyway. Pete Hegseth did not keep working.
Trump ordered a government-wide stop on Anthropic products. The Pentagon, now calling itself the Department of War, slapped Anthropic with a supply-chain risk designation, a legal label previously reserved for foreign adversaries. One contracts lawyer called it "the contractual equivalent of nuclear war." The comparison the government was making, by its own actions, was to Huawei.
\
Anthropic sued. The California lawsuit argued Hegseth had exceeded his authority and that the designation was not a security decision but retaliation for public dissent.
The most damaging detail came from inside the government's own filings. A court submission included a one-paragraph email from Emil Michael, the Pentagon's own negotiator, sent the day after the designation was finalized, saying the two sides were "very close here" on the exact issues now cited as national security threats. The man who blacklisted Anthropic told its CEO the next morning they were nearly aligned. That is not a security decision. That is a bargaining chip.
Judge Rita Lin agreed something had gone wrong. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote, blocking the designation temporarily.
The government's response was immediate. Hours later, Michael posted on X that the designation was "in full force and effect" under a separate statute outside the judge's jurisdiction. A second case in the DC Circuit remained pending. The fight had not ended. It had relocated.
\
While Washington and Anthropic traded court filings, Keir Starmer's government got to work.
British proposals include an expanded London office and a dual stock listing, with the pitch going directly to Dario Amodei during a late May visit. OpenAI had already committed to making London its largest research hub outside the U.S. Google DeepMind has been based there since 2014. Britain is not offering Anthropic a quiet refuge. It is offering it a seat in a race already running.
Insiders say Anthropic's AI is vastly better for warfare than any competitor, and it could take ChatGPT, Gemini or Grok months to come close. Neither side can fully walk away. But a company with that kind of leverage choosing London over Washington is a different story from a desperate company grabbing a lifeline.
Washington created this opening. London just noticed it first.
2026-04-08 14:49:22
Miami, Florida, April 7th, 2026, Chainwire/--MetaWin confirms more than $13 million in player rewards across Cashdrops, competitions, races and exclusive member benefits
Online casino MetaWin has announced that it will return more than $13 million to players through its ongoing loyalty rewards program, as a show of appreciation for the loyalty and support of the community that has helped build the platform over time.
The program includes direct Cashdrops, weekly competitions, monthly races and NFT holder-only benefits, and forms part of MetaWin’s broader commitment to rewarding loyal players with meaningful value.
Interested users can play now to qualify for $3 Million in July's Cashdrop
How the $13 Million Is Being Distributed
The reward rollout includes:
Together, these initiatives bring the total value being returned to players to more than $13 million.
“MetaWin has always believed that loyalty should be rewarded properly. This program is about giving back to the players who have supported the platform, played with us and been part of the journey. We are proud to be returning more than $13 million through Cashdrops, competitions, races and holder rewards. This is a meaningful show of appreciation to the community and part of the long-term rewards culture we are building at MetaWin,” says Sebastian Zinke, MD at MetaWin.
Loyalty at the Core of MetaWin's Player-First Philosophy
MetaWin said the latest rollout reflects its player-first approach and its belief that long-term loyalty should be recognised in a meaningful and substantial way.
The company has built a large global community through its mix of online casino gaming, prize-winning experiences, rewards and Web3 integrations, and says this latest rewards program is designed to continue that momentum while reinforcing the value of participation across the platform.
Zinke added:
“This is about rewarding loyalty at real scale. Our players have played a major role in MetaWin’s growth, and we want that loyalty to be recognised in a way that is clear, significant and immediate.”
Users can join MetaWin today to qualify for their share of $13 million in rewards.
About MetaWin
MetaWin is an online casino and prize-winning platform combining gaming, community, digital ownership and player incentives. Through a mix of on-platform rewards, promotions and loyalty initiatives, MetaWin has built a global player base centred around engagement, entertainment and long-term value.
MetaWin PR
MetaWin
:::tip This story was published as a press release by Blockmanwire under HackerNoon’s Business Blogging Program
:::
Disclaimer:
This article is for informational purposes only and does not constitute investment advice. Cryptocurrencies are speculative, complex, and involve high risks. This can mean high price volatility and potential loss of your initial investment. You should consider your financial situation and investment purposes, and consult with a financial advisor, before making any investment decisions. The HackerNoon editorial team has only verified the story for grammatical accuracy and does not endorse or guarantee the accuracy, reliability, or completeness of the information stated in this article. #DYOR
2026-04-08 14:11:02
How are you, hacker?
🪐 Want to know what's trending right now?
The Techbeat by HackerNoon has got you covered with fresh content from our trending stories of the day! Set email preference here.
## Pretext Does What CSS Can't — Measuring Text Before the DOM Even Exists
By @typesetting [ 8 Min read ]
Cheng Lou's Pretext library measures multiline text height without touching the DOM — unlocking layout capabilities CSS has never been able to offer. Read More.
By @assemblyai [ 14 Min read ] AI medical transcription converts doctor-patient conversations into accurate clinical notes, streamlining documentation for healthcare providers. Read More.
By @playerzero [ 6 Min read ] Discover why AI code review alone can’t prevent production failures and how AI-powered code simulation ensures reliability, faster releases, and fewer defects. Read More.
By @indrivetech [ 3 Min read ] Jetpack Compose TextField max length works internally. The difference lies in how TextField state changes are applied. Read More.
By @joseh [ 4 Min read ] The Smoke and Mirrors suit, the Metro suit, and the Life Story suit are some of Miles' best suits in Marvel's Spider-Man 2. Read More.
By @sathieshveera [ 28 Min read ] Build a secure RAG pipeline on AWS with PII redaction, guardrails, and attack defenses. Learn how to prevent LLM data leaks step by step. Read More.
By @tigerdata [ 4 Min read ] See how Glooko migrated 100M+ daily glucose readings to Tiger Cloud, cutting costs 40%, boosting query speed 480x, and scaling patient analytics. Read More.
By @makowskid [ 5 Min read ] A drone strike took down an AWS region, forcing a hand-built migration from Bahrain to Europe. Disaster recovery has entered a new era. Read More.
By @categorize [ 19 Min read ] The Cybersecurity Value Chain maps 72 foundational roles across identity, network, cloud, data, and security operations — filled by just 25 companies. Read More.
By @mexcmedia [ 2 Min read ] MEXC recorded $175M in net inflows in February 2026, ranking 4th globally as market volatility drove capital shifts across crypto exchanges. Read More.
By @federicotrotta [ 6 Min read ] RAG uses known docs. Market-aware agents need live web evidence. Learn instant knowledge acquisition and how it enables accurate outputs. Read More.
By @btcwire [ 5 Min read ] Qubic's DOGE mining integration launches today on ASICs, fully separate from the CPUs and GPUs powering Aigarth, Qubic's AI engine. Read More.
By @Lima_Writes [ 10 Min read ] When a tech company draws a moral line, follow the money first — and ask questions later. Read More.
By @ishanpandey [ 6 Min read ] Human API launches its mobile app on iOS and Android, paying contributors for audio tasks that AI agents cannot replicate — backed by $65mn from Polychain. Read More.
By @ishanpandey [ 10 Min read ] CoinFello COO MinChi Park on ERC-7710 delegation, the DeFi complexity problem, ETHDenver alpha lessons, and building the execution layer for the agentic economy Read More.
By @Joeboukhalil [ 2 Min read ] A simple webpage build that draws graphs when you enter your data. Showing the code and how it's built. Read More.
By @muthuv [ 20 Min read ] 6G promises 1Tbps speeds, ultra-low latency, and AI-driven networks. Explore its use cases, THz spectrum, IoT impact, and the biggest challenges ahead. Read More.
By @ishanpandey [ 6 Min read ] Human.tech launches Agentic WaaP at WalletCon 2026, a protocol letting AI agents act autonomously while keeping humans as final authority. Read More.
By @unusualwriter [ 7 Min read ] Web3 lost 75% of code commits and 56% of its developers. Alchemy's Uttam Singh explains what's really driving the exodus. Read More.
By @aimodels44 [ 10 Min read ] PixelSmile tackles AI emotion ambiguity with continuous labels, symmetric training, and precise facial expression control. Read More.
🧑‍💻 What happened in your world this week? It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️
ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME
We hope you enjoy this week's worth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it.
See you on Planet Internet! With love,
The HackerNoon Team ✌️