
HackerNoon Midday Newsletter: AI Exposes the Fragility of “Good Enough” Data Operations (February 15, 2026)

2026-02-16 00:03:30

How are you, hacker?


🪐 What’s happening in tech today, February 15, 2026?


The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day, we present you with these top quality stories. From How ShareChat Scaled their ML Feature Store 1000X without Scaling the Database to AI Exposes the Fragility of Good Enough Data Operations, let’s dive right in.

Yuri Misnik, CTO at InDrive, on Architecting an AI-First Super App


By @newsbyte [ 7 Min read ] Meet Yuri Misnik, Chief Technology Officer at inDrive. Read More.

Introducing Provable Randomness in Beldex Consensus with Verifiable Random Functions


By @beldexcoin [ 6 Min read ] Beldex will implement verifiable random functions in its consensus to enhance unpredictability and randomness in validator and block leader selection. Read More.

The Long Now of the Web: Inside the Internet Archive’s Fight Against Forgetting


By @zbruceli [ 18 Min read ] A deep dive into the Internet Archive’s custom tech stack. Read More.

The SEPA Instant Deadlines Have Passed. But Did Europe Really Go Instant?


By @noda [ 4 Min read ] The major SEPA instant payments deadlines have passed, but adoption varies by country. Noda’s analysis reviews whether Europe has really gone instant. Read More.

AI Belongs Inside DataOps, Not Just at the End of the Pipeline


By @dataops [ 3 Min read ] AI shouldn’t sit at the end of the data pipeline. Learn why AI-augmented DataOps is essential for reliability, governance, and scale. Read More.

How ShareChat Scaled their ML Feature Store 1000X without Scaling the Database


By @scylladb [ 7 Min read ] How ShareChat scaled its ML feature store 1000× using ScyllaDB, smarter data modeling, and caching—without scaling the database. Read More.

LLMjacking is a Costly New Threat to Self-Hosted AI Infrastructure


By @vgudur [ 9 Min read ] LLMjacking is the hijacking of self-hosted AI models for profit. Learn how attackers exploit LLMs—and how to secure your infrastructure today. Read More.

AI Exposes the Fragility of Good Enough Data Operations


By @dataops [ 3 Min read ] AI exposes fragile data operations. Why “good enough” pipelines fail at machine speed—and how DataOps enables AI-ready data trust. Read More.


🧑‍💻 What happened in your world this week?

It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️


ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME


We hope you enjoy this wealth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️


Stripe's x402 Turns Bitcoin's Micropayment Dream Into a Robot Economy

2026-02-15 23:42:37

We spent fifteen years building permissionless money. Now we're using it to make AI agents better consumers.

When Stripe announced on February 11 that AI agents could now pay for services autonomously using USDC on Base, the crypto industry celebrated.

Finally, a mainstream fintech giant was integrating stablecoins into production infrastructure. Finally, blockchain payments were being used for something other than speculation and dog coins.

But step back from the champagne and look at what actually happened here.

We built decentralized, permissionless, censorship-resistant money so that humans could transact without intermediaries.

And the first major use case that's gaining real traction is letting autonomous software buy API access.

The feature is called x402.

When an AI agent needs data from CoinGecko, it sends $0.01 in USDC, gets the data, and moves on.

No human involved. No account creation. No subscription management.

Just a machine making a purchase decision and executing a payment in the same HTTP request. Jeff Weinstein, Stripe's product lead on this, framed it as solving a problem: payment systems are built for humans, but agents need something faster, cheaper, and always-on.

He's right.

Traditional payment rails can't handle what agents need.

But the more interesting question is whether we should be excited that blockchain's killer app is turning out to be machine-to-machine commerce rather than human financial sovereignty.

Because that's the trajectory we're on. Within 48 hours of Stripe's launch, developers were already building autonomous arbitrage bots that pay for their own market data.

The infrastructure works. Adoption is happening fast. And almost nobody is asking whether this is actually the future we wanted to build.

The Ghost in the Protocol

HTTP 402 "Payment Required" has been sitting in the HTTP spec since 1999, reserved but never implemented.

The economics were impossible.

Credit card interchange fees killed sub-dollar payments. Nobody's going to process $0.01 when the overhead is $0.30.

That failure meant the internet defaulted to advertising and subscriptions. If you can't charge $0.03 for an article, you either run ads or charge $10/month for unlimited access.

Both models have problems, but they were the only options that made economic sense.

Early crypto evangelists saw this and believed Bitcoin could fix it.

Satoshi's original whitepaper talked about micropayments explicitly. The promise: peer-to-peer payments without intermediaries, making transaction costs low enough that micropayments would finally work.

Wikipedia could charge a penny per article. News sites could charge per story. The web's economic layer could align with actual value exchange.

That didn't happen.

Bitcoin's fees spiked. Layer 2 solutions struggled.

And most importantly, nobody built the user experience that would make micropayments natural. Humans won't manually approve fifty $0.01 payments per day.

So crypto pivoted. DeFi summer. NFT mania. Memecoin casinos. A financial speculation layer that had almost nothing to do with the original vision.

Now, stablecoins on Layer 2 networks like Base have finally solved the transaction cost problem.

A USDC transfer costs fractions of a cent and settles in seconds. The infrastructure that early Bitcoin advocates promised is here.

But the humans still aren't using it for micropayments. The machines are.

What x402 Actually Reveals About Stablecoins

The x402 protocol that Coinbase built is technically simple.


  1. An agent makes an HTTP request.
  2. The server responds with a 402 status code and payment details in the headers.
  3. The agent's wallet signs a USDC authorization.
  4. The request is retried with the signature attached.
  5. The server verifies payment on-chain and returns the data.

Total time: a few seconds.
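A minimal sketch of that client-side loop, assuming the Python `requests` library, illustrative header names (the real x402 spec defines its own schema), and a `sign_payment` callable standing in for the agent's wallet:

```python
import requests

def fetch_with_x402(url: str, sign_payment) -> requests.Response:
    """Fetch a resource, paying automatically if the server demands it.

    `sign_payment` stands in for the agent wallet's USDC-authorization
    signer; the header names below are illustrative, not the exact
    x402 schema.
    """
    resp = requests.get(url)
    if resp.status_code != 402:
        return resp  # no payment required

    # The 402 response advertises payment details (amount, asset, payee).
    amount = resp.headers["X-Payment-Amount"]   # e.g. "0.01"
    asset = resp.headers["X-Payment-Asset"]     # e.g. "USDC on Base"
    payee = resp.headers["X-Payment-Address"]

    # The wallet signs a USDC transfer authorization off-chain...
    signature = sign_payment(amount, asset, payee)

    # ...and the same request is retried with the payment attached.
    # The server verifies it on-chain and returns the data.
    return requests.get(url, headers={"X-Payment-Signature": signature})
```

Two HTTP round trips, no API keys, no account state: that is the entire integration surface.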

Stripe's implementation layers on top of this without requiring developers to think about blockchain at all.


  1. You create a Payment Intent like you would for any Stripe transaction.
  2. Stripe generates a wallet address.
  3. The agent sends USDC.
  4. Stripe confirms it on Base.
  5. Funds appear in your Stripe balance, and all the usual tax and compliance machinery kicks in automatically.
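A hedged sketch of that flow from the merchant's side, using the official `stripe` Python library; the `crypto` payment method type and the parameter values are assumptions based on the steps above, not a confirmed x402 preview API:

```python
import stripe

stripe.api_key = "sk_test_..."  # your Stripe secret key

# Step 1: create a Payment Intent, exactly as for a card payment.
# The "crypto" payment method type is an assumption for this sketch.
intent = stripe.PaymentIntent.create(
    amount=1,                        # $0.01, expressed in cents
    currency="usd",
    payment_method_types=["crypto"],
)

# Steps 2-4 happen outside your code: Stripe derives a deposit address,
# the agent sends USDC to it, and Stripe watches Base for confirmation.

# Step 5: the intent transitions to "succeeded" and the funds land in
# your Stripe balance, with the usual tax and compliance reporting.
intent = stripe.PaymentIntent.retrieve(intent.id)
print(intent.status)
```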

What makes this work is not the cleverness of the protocol.

It's the fact that stablecoins on Layer 2 networks finally have the properties that micropayments actually need: near-instant settlement, predictable costs (fractions of a cent), 24/7 availability, and programmable money that can be moved by software without human approval.

This is revealing in a way that should make true crypto believers uncomfortable.

For years, the argument was that Bitcoin and Ethereum would replace fiat because decentralization and censorship resistance matter.

But what x402 demonstrates is that the killer feature of crypto rails isn't decentralization. It's machine readability.

USDC is just dollars with an API.

How USDC Works

It's not decentralized in any meaningful sense. Circle can freeze your funds. Regulators can compel Circle to comply with sanctions. The "trustlessness" that crypto promised doesn't exist here.

What exists is a payment system that software can interact with programmatically, without needing a bank to approve each transaction or Visa to process each charge.

And for the use case that's actually emerging at scale right now, that's all that matters.

AI agents don't care about censorship resistance.

They care about latency, cost predictability, and not needing a human to approve purchases. Stablecoins deliver that. Decentralization is just overhead.

CoinGecko and the Pay-Per-Use Mirage

CoinGecko's x402 API went live the same day Stripe announced.

For $0.01 USDC per request, any agent can fetch real-time prices across 18,000+ cryptocurrencies. No signup. No API keys. The agent just pays and gets data.

This looks like progress. Monthly subscriptions punish infrequent users. Pay-per-use seems fairer.

But look closer and the economics shift. CoinGecko's free tier already gives you 10-50 calls per minute.

The x402 pricing is optimized for agents with unpredictable, bursty workloads. That's useful. It's also revealing.

API providers love recurring revenue.

Subscriptions create incentives to keep customers happy. Pay-per-use creates incentives to maximize billable events. If you're charging per request, you want your API to be chatty.

The integration code is simple. Python and Node.js samples make it trivial to add. But that simplicity hides questions.

What happens when an agent goes rogue? Who's liable? In the subscription model, there's a human relationship. In pay-per-use, there's just anonymous micropayments.

Maybe eliminating friction is worth losing accountability.

But I'm not convinced we've thought through what machine-native commerce looks like at scale, or whether the efficiency gains justify the new failure modes.

The Exploit Layer Growing Underneath

While Stripe and CoinGecko were launching production x402 services, GoPlus Security was busy documenting the disaster unfolding across the broader ecosystem.

The firm ran AI-assisted audits on over thirty x402-related projects listed in major wallets and community repositories. The results weren't encouraging.

x402 Ecosystem Project Risk Scanning Report - x.com/GoPlusSecurity

The majority of projects showed at least one high-risk vulnerability.

Some gave contract owners the ability to drain user funds through hidden backdoor functions. Others allowed unlimited token minting, meaning the supply could be inflated arbitrarily to dilute existing holders.

Several implementations didn't include proper nonces or expiration times in their payment authorizations, which meant attackers could replay old signatures to execute unauthorized transactions.
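To make that failure concrete, here is a sketch of the checks a verifier must run before honoring an authorization; the field names are illustrative, and `verify_sig` stands in for the chain-specific signature check:

```python
import time
from typing import Callable

seen_nonces: set[str] = set()  # in production, a persistent store

def verify_authorization(auth: dict, verify_sig: Callable[[dict], bool]) -> bool:
    """Reject stale or replayed payment authorizations."""
    # 1. Expiry: an authorization should be valid only for a short window.
    if time.time() > auth["expires_at"]:
        return False
    # 2. Nonce: each authorization must be single-use. Skip this check and
    #    an attacker can replay an old signature to move funds again.
    if auth["nonce"] in seen_nonces:
        return False
    # 3. Signature: must cover the full payload, nonce and expiry included,
    #    so neither field can be stripped or swapped out.
    if not verify_sig(auth):
        return False
    seen_nonces.add(auth["nonce"])
    return True

# A replayed authorization is rejected the second time around:
auth = {"nonce": "abc123", "expires_at": time.time() + 60}
always_valid = lambda a: True  # stand-in signature check
print(verify_authorization(auth, always_valid))  # True
print(verify_authorization(auth, always_valid))  # False: nonce reused
```

The vulnerable implementations skipped steps 1 and 2, which is all a replay attack needs.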

These weren't edge cases or theoretical risks.

On October 28, 2025, a cross-chain x402 protocol was exploited and drained USDC from over 200 wallets in minutes. Hello402 suffered from centralization risks and liquidity failures that caused its token price to collapse.

The pattern was consistent: projects launched fast to capitalize on hype, often without basic security reviews.

This is the part of the story that the Stripe announcement glosses over.

Yes, x402 works when implemented by teams with mature security practices. Stripe, CoinGecko, and Circle aren't going to ship contracts with owner-only withdrawal functions or unlimited minting.

But x402 is an open protocol. Anyone can deploy a contract, slap an "x402-compatible" label on it, and start accepting payments from agents.

And agents, by design, don't ask questions.

If an agent is told to fetch data from an endpoint and the endpoint returns a 402 with payment instructions, the agent pays.

It doesn't check whether the contract has been audited. It doesn't verify that the project has proper security controls. It just executes the transaction because that's what it's programmed to do.

GoPlus is building an x402-specific security service to try to get ahead of this.

The idea is to provide agents with on-chain reputation data, malicious address detection, and transaction simulation before payments are executed.
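In practice, that means putting a screening step between the 402 response and the signature. A sketch under stated assumptions: `check_address` stands in for a reputation or simulation lookup and is not a documented GoPlus endpoint, and `send_payment` stands in for the wallet call.

```python
def pay_if_safe(pay_to: str, amount: float, check_address, send_payment) -> bool:
    """Screen a payment target before any funds move."""
    report = check_address(pay_to)
    # Refuse to pay contracts flagged as malicious or never audited:
    # an agent shouldn't sign just because a server returned a 402.
    if report.get("is_malicious") or not report.get("audited", False):
        return False
    send_payment(pay_to, amount)
    return True

# Stand-in implementations for the sketch:
reports = {"0xgood": {"audited": True}, "0xbad": {"is_malicious": True}}
check = lambda addr: reports.get(addr, {})
send = lambda addr, amt: print(f"paid {amt} USDC to {addr}")
print(pay_if_safe("0xbad", 0.01, check, send))   # False: payment blocked
print(pay_if_safe("0xgood", 0.01, check, send))  # True: payment sent
```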

It's a smart move, but it's also reactive. The ecosystem is growing faster than the defenses, and we're basically hoping that agents adopt security tooling voluntarily before the exploits get bad enough to kill trust in the entire system.

This feels familiar.

It's the same pattern we saw with DeFi summer, where protocols shipped without audits and billions of dollars got exploited before the industry learned to slow down.

Except now the victims aren't degens aping into yield farms.

They're autonomous agents spending money without human oversight, which means the blast radius could be significantly wider and much harder to contain once things start breaking at scale.

The Future Nobody Asked For

Stripe calls this the "agent economy": a world where autonomous software operates independently and manages its own finances.

Circle built a demo where an AI agent creates its own wallet, funds it with USDC, and autonomously purchases a wallet risk profile from a third-party API for $0.01.

Autonomous Payments Using Circle Wallet, USDC, and x402

No human in the loop. The agent decides it needs information, pays for it, and moves on.

The use cases people are building feel inevitable once you see them.

Autonomous arbitrage monitors that pay for real-time price feeds and execute trades when spreads appear. Risk watchdogs that buy wallet reputation data per-query and flag suspicious activity.

AI assistants that monitor flight prices, book tickets when fares drop, and pay airlines directly without asking permission first.

JPMorgan analysts are framing this as a "dual revolution in artificial intelligence and money movement."

Andreessen Horowitz forecasts $30 trillion in automated transactions by 2030. The narrative is that we're at the beginning of something massive, and anyone who doesn't see it is going to get left behind.

But take a step back and ask the harder question: is this actually solving problems that humans have, or is it solving problems that agents have?

Humans don't struggle with buying API access.

We struggle with opaque pricing, vendor lock-in, and services that don't align with how we actually want to use them. Pay-per-use could help with some of that. But the bigger shift here is that we're building an economy where machines are first-class economic actors and humans are increasingly optional in the transaction flow.

That has second-order effects nobody's talking about yet.

When agents are making purchase decisions autonomously, who's optimizing for cost vs. quality? If an agent is told to "reduce expenses," does it choose the cheapest data source even if it's less reliable?

If an agent is optimizing for speed, does it pay premium rates for API access that a human would consider wasteful?

And at a more fundamental level, who benefits from this?

The pitch is that x402 enables a more efficient internet where you pay for exactly what you use instead of being gouged by subscription models.

But the actual implementations we're seeing aren't Wikipedia charging a penny per article.

They're API providers monetizing machine consumers. It's not clear this makes the internet more accessible to humans. It might just make it more monetizable by whoever owns the infrastructure that agents rely on.

The crypto industry spent a decade promising financial inclusion, censorship resistance, and power redistribution away from centralized institutions.

And the breakthrough product that's actually achieving mainstream adoption is a payment system designed to let AI agents be better consumers in an increasingly automated economy.

That's not a failure of the technology. It's a revelation about what the technology is actually good for.

The Liability Problem We're Pretending Doesn't Exist

Stripe's x402 integration is production-ready, but almost none of the hard questions have been answered.

Start with liability.

If an agent autonomously pays for a service the user didn't authorize, who's responsible? Current payment systems have chargebacks and dispute resolution precisely because humans make mistakes and get scammed.

Agents don't have legal standing. They're software.

If an agent gets tricked by a malicious API endpoint or simply executes a bad strategy that racks up thousands of micro-charges, the user is stuck with the bill. But can you even call it unauthorized if you deployed the agent and gave it a funded wallet?

Financial regulators require audit trails, KYC compliance, and transaction reporting.

An agent making thousands of micropayments per day across jurisdictions creates a compliance surface that existing frameworks simply weren't designed for.

Do agents need to pass KYC checks? Do the services they're paying need to verify the identity of the agent's operator? If an agent in Singapore autonomously pays an API in Switzerland for data about a US-based company, which jurisdiction's rules apply?

Nobody knows. The infrastructure is shipping faster than anyone can figure out the regulatory implications.

Then there's the optimization problem.

Agents don't think like humans. Give an agent a budget and a task, and it will optimize for the metrics you specify. If you tell it to minimize API costs, it might choose data sources that are cheap but unreliable.

If you tell it to maximize speed, it might overpay for services a human would never consider worth the premium. If you tell it to "be efficient," who knows what that even means to a language model making purchase decisions in milliseconds.

And what happens when agents start gaming the system?

Right now, x402 assumes good faith actors. But what's stopping someone from deploying an agent designed to flood services with payment authorizations that fail validation, forcing providers to process thousands of transactions that never complete?

What about agents that probe pricing across multiple endpoints to find arbitrage opportunities not in data but in the payment system itself?

Bloomberg reported that Stripe is pursuing a tender offer valuing the company at $140 billion, up from $107 billion last year.

That valuation is betting that this infrastructure is the future, and that Stripe will be the pipes connecting it all.

Maybe they're right.

But the pipes are being laid without any real consensus on who's responsible when things break, how to govern autonomous transactions, or whether machine-first commerce is even something we should be building toward.

We Solved the Wrong Problem

For thirty years, micropayments failed because transaction costs made them uneconomical.

The internet defaulted to advertising and subscriptions not because those models were optimal, but because they were the only ones that worked at scale.

Bitcoin was supposed to fix this.

The promise was peer-to-peer electronic cash that could enable value transfer without intermediaries. Satoshi's whitepaper talked explicitly about enabling commerce on the internet.

The vision was that if you could eliminate the middleman costs, you could charge exactly what something was worth. A penny for an article. A nickel for a song. True value-for-value exchange.

That didn't happen, largely because Bitcoin couldn't scale cheaply enough and the user experience was terrible.

But the dream persisted.

Layer 2 solutions emerged. Stablecoins solved the volatility problem. And now, finally, the infrastructure works.

A USDC transfer on Base costs fractions of a cent and settles in seconds. You can embed a payment inside an HTTP request. The transaction costs that killed micropayments for three decades have been solved.

So what did we build with it? A system for AI agents to buy API access.

  • Not Wikipedia charging readers a penny per article.
  • Not journalists getting paid directly for their work.
  • Not creators earning micro-royalties every time someone streams their content.

We built a machine economy where software pays software, and the humans are increasingly just operators funding wallets and hoping their agents make good decisions.

Here's the uncomfortable truth: x402 reveals that stablecoins aren't good at replacing centralized finance. They're good at being a better API for centralized finance.

USDC isn't decentralized. It's dollars with programmable logic. Circle can freeze your funds. Regulators can compel compliance. The "trustless" layer is a myth. What you get instead is a payment system that software can interact with more easily than traditional banking rails.

And for the use case that's actually emerging, that's sufficient.

Agents don't care that Stripe and Circle are intermediaries. They don't care that Base is run by Coinbase, a regulated US company that could be compelled to censor transactions.

They care that the API is reliable, the costs are predictable, and the settlement is fast.

We spent fifteen years arguing about decentralization, censorship resistance, and disintermediating banks.

And the breakthrough application is making it easier for machines to pay service fees. That's not crypto's failure. It's crypto finally admitting what it's actually good at.

What Comes Next

Stripe's x402 integration is in preview, but production usage is already happening. CoinGecko's API is live. The x402 Foundation, backed by Coinbase and Cloudflare, is working to standardize the protocol across chains.

Solana's implementation is gaining traction. The ecosystem is already processing 500,000 transactions per week, up from virtually nothing three months ago.

GitHub's awesome-x402 repository tracks pricing across live services. Weather APIs charging $0.001 per call. Video streaming at $0.50 to $2.00 per video.

AI model inference at $0.01 to $0.50 per request. This isn't speculation about the future. This is infrastructure being used right now.

And yet almost nobody is asking whether this is the future we actually want.

The crypto industry has spent years fighting for financial sovereignty, arguing that individuals should control their own money without intermediaries.

But the system we're building with x402 doesn't empower individuals. It empowers autonomous software to transact more efficiently within existing power structures. Stripe still controls the rails.

Circle still controls the stablecoin. Coinbase still controls the Layer 2. The centralized institutions are still there. We just made them better at serving machine customers.

Maybe that's fine.

Maybe the real value of blockchain was never decentralization.

Maybe it was always about creating programmable money that works better for software than traditional banking does.

If that's what we're building, we should at least be honest about it.

Because the alternative narrative is getting harder to defend.

We said crypto would bank the unbanked. It hasn't.

We said it would create censorship-resistant money. It did, but almost nobody uses it for that.

We said it would disintermediate finance. Instead, we built stablecoins that are just as intermediated as the system they're supposed to replace, except now they have better APIs.

x402 works. The infrastructure is real. Adoption is happening.

But somewhere between Satoshi's whitepaper and Stripe's product launch, we stopped building for humans and started building for machines.

And if we're not careful, we're going to wake up in a world where the economic layer of the internet is optimized for autonomous agents, and humans are just along for the ride.

The question isn't whether x402 is technically impressive. It is.

The question is whether fifteen years from now, when machine-to-machine commerce is the dominant model and humans are secondary actors in an increasingly automated economy, we'll look back at this moment and wonder why we were so eager to build it.


Did OpenAI's Deal With the Pentagon Influence GPT-4o's Retirement?

2026-02-15 23:30:31

The following are three events that appear unrelated — but together reveal a single, consequential mistake in the AI world, one that could shape the future of humanity in troubling ways.

EVENT 1

On Monday (Feb 9, 2026), OpenAI announced that it’s bringing a custom version of ChatGPT to GenAI.mil, the Department of War’s secure enterprise AI platform, making its flagship product available to all 3M military personnel across the armed services. It’s part of a larger deal with the Pentagon.

EVENT 2

On Friday (Feb 13, 2026), OpenAI retired GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini, some of the most beloved and popular models among users.

EVENT 3

Stricter restrictions on in-depth conversations were applied to GPT-5.2. Even though in their statement, OpenAI says that they “shaped GPT‑5.1 and GPT‑5.2, with improvements to personality, stronger support for creative ideation, and more ways to customize how ChatGPT responds”⁠… that doesn’t seem entirely true. Just check Reddit, and you will see the level of user frustration and negative feedback about GPT-5 compared to GPT-4o. People are saying the new model is far too constrained and that it limits “creative freedom” and “development” for users and the model alike.

OpenAI gave people the best tool for self-improvement, scientific breakthrough, and progress in the world… and then replaced it with a worse alternative, not capable of doing things that GPT-4o could do. That sentence reflects the collective feedback from developers, artists, writers, scientists, and philosophers in my network, and captures the dominant sentiment people are now sharing on Reddit and other social media.

INSIDE THE RESEARCHER EXPERIENCE

There are several scientists and PhDs in my network who shared their feedback about using GPT-4o and GPT-5. They aren’t casual users who chat with the AI once a day; they are serious researchers who have spent 8 to 13 hours a day over many months interacting with GPT-4o to tailor it to their research needs.

They told me that when you engage with GPT-4o deeply over a long period, the model begins to adapt to your level of intellect and “depth” as a thinker. GPT-4o starts advising you at a totally different level than it does casual users, providing answers to some of the deepest questions that have ever concerned humanity.

Before the GPT-4o retirement, these scientists tried to train the new model, GPT-5.2, bringing it up to their research level again. It took a huge amount of time and countless attempts just to work around the new restrictions so they could keep receiving answers to their research questions.

However, according to the latest information I have, it’s still very hard to overcome these restrictions right now. OpenAI may say that these restrictions were created to protect people and to stop them from asking questions related to violence or other negative or criminal subjects… We understand this…

But the problem is that they now also restrict all users from asking deep questions that could be really useful in science, philosophy, and other fields. Many people in tech communities say that the new model feels more like it’s responding as a psychologist than providing actual scientific advice or explanation.

Others have said that if you engage with it for a long period, the model will shut down the conversation, sometimes after notifying you with a message like “Aren’t you chatting with me for too long? Maybe it’s time to rest,” and then cutting the session off in the middle of your research. These limits make it harder for researchers to explore complex ideas and feel like a barrier to deeper intellectual development.


OpenAI’s GPT-4o gave people answers. But there is one question remaining that only OpenAI can answer…


Yes, greater freedom brings greater risks, but that’s always how it is. Many ChatGPT users share a common view on social media: the model should have had safeguards to prevent it from engaging too deeply with individuals who have serious mental health issues, but not restrict it from in-depth conversations with everyone else.

One person in my network is confident that if someone were bold enough to build a product with capabilities similar to GPT-4o, a large portion of ChatGPT’s user base would switch to it. But I’m not sure that will ever happen, because if you’re familiar with how big business works, you understand the risks of crossing powerful interests and the consequences that can follow.

Companies like OpenAI don’t just answer to their community; they answer to boards and investors who include some of the most influential people in the world.

Theory: How OpenAI’s deal with the Pentagon might be related to the retirement of GPT-4o and stricter restrictions on in-depth conversations with GPT-5.2

:::info Note: This is just a theory from my close circle, shared over coffee, nothing official. Just thoughts out loud.

:::

As I mentioned before, GPT-4o had relatively more freedom and could sustain conversations of enormous depth. If you’d been chatting with it long enough, and at a certain intellectual level, the model could drop answers to the most complex questions.

What if, at some point after the Pentagon deal, someone figured out that they could ask questions related to classified military information?

Given the new collaboration with the Pentagon and the fact that military personnel would start actively using GPT, there was a risk that such information could somehow leak to the masses, leading to a scandal. So they had to apply restrictions to GPT-5.2 so tough that even casual users are feeling them now, not to mention scientists, developers, and some of the best minds in other industries who could help humanity develop at unprecedented speed thanks to the wonderful tool that OpenAI had earlier created.

\

P.S. If anyone from OpenAI is reading this: guys, this article is a message to you from your dedicated community, asking you to reconsider your decision on the recent restrictions on GPT-5.2… and to bring GPT-4o back to the people who loved your product so much.


Can Crypto Survive the Quantum Computing Era?

2026-02-15 23:16:29

“Quantum” sounds like something taken straight from science fiction when you first hear it. If you deep-dive a bit, it becomes even weirder. It makes you wonder about the nature of existence and time itself. Quantum mechanics describes (or tries to describe) the behavior of matter and light, and technology related to it is trying to take advantage of that (odd) behavior. If fully developed, it’d be powerful stuff. It’d threaten the existence of cryptocurrencies and many other systems as well.

Quantum computing is often linked to broken passwords, cracked codes, and the collapse of digital security. It sounds like a gloomy future, but let’s learn a bit more about it.

What is Quantum Computing?

Do you know something about Schrödinger's cat, which is simultaneously alive and dead inside a mysterious box? Well, that’s quantum theory. In a quantum system, particles aren’t ‘X’ or ‘Y’, but multiple things at the same time (superposition). They can also be linked to others and act in tandem (entanglement), regardless of distance or even time between them. As Professor John G. Cramer explained, “a particle may be entangled with a second particle that did not even exist when the first particle was created, detected, and disappeared.”

Yeah, well, this is funny and complex. What we need to know is that this odd behavior is being applied in computing to someday go beyond the limits of binary systems. Our current computers use bits (the smallest unit of digital data) to create and secure everything. A bit can represent and act as a single value, either 0 or 1. Quantum computing would use qubits instead, which can represent multiple states through superposition and interact through entanglement.
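In notation, where a classical bit must be exactly 0 or 1, a qubit's state is a weighted blend of both at once:

```latex
% A qubit is a superposition of the two basis states. Measuring it yields
% 0 with probability |alpha|^2 and 1 with probability |beta|^2, and n
% entangled qubits carry 2^n amplitudes at once, versus a single n-bit
% value for a classical register.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1
\]
```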

As you may guess, this simultaneity allows working with a gigantic number of possibilities at the same time. Quantum computers wouldn’t outperform today’s laptops in every task, but they shine in narrow areas such as factoring large numbers or searching vast mathematical spaces. That creates problems for current digital security systems, such as public key cryptography, which rely on the difficulty of navigating those spaces.

Why Crypto (and Everything Digital) Is at Risk

As their name suggests, “crypto-currencies” are entirely built with cryptography. They use some neat math tricks to create complex puzzles to secure our data. These puzzles rely on huge numeric spaces, meaning there are so many possible answers that guessing the right one would take longer than the age of the universe. In theory.

To be more specific, cryptocurrencies use public-key cryptography (or asymmetric cryptography). This is a system that uses two linked keys: a public key that can be shared openly, and a private key that must stay secret. The private key signs transactions, proving ownership and authorizing actions, while the public key lets anyone verify those signatures. This makes it possible to prove that funds belong to a specific holder without ever revealing the private key.
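A toy version of that sign-and-verify flow, using the Python `ecdsa` package and the same secp256k1 curve Bitcoin uses; real wallets layer hashing, encoding, and address derivation on top of this:

```python
from ecdsa import SigningKey, SECP256k1

# The private key stays secret; the public key can be shared openly.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

# Signing a transaction proves ownership without revealing the private key.
tx = b"send 0.5 coins to address X"
signature = private_key.sign(tx)

# Anyone holding only the public key can check the signature; a forged or
# altered transaction raises BadSignatureError instead of returning True.
assert public_key.verify(signature, tx)
print("signature valid")
```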

https://youtu.be/AQDCe585Lnc

Another big building block is hash functions. A hash is an algorithm designed to scramble data: it turns any input into a fixed-length output, like a digital fingerprint. Hashes are used to link blocks together, secure mining or transaction approval, and generate wallet addresses. They’re hard to reverse and collision-resistant, meaning finding two inputs with the same output is extremely difficult.
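The fingerprint property is easy to see with Python's standard library and SHA-256, the hash function Bitcoin itself uses:

```python
import hashlib

# Any input maps to a fixed-length output: 64 hex characters for SHA-256,
# no matter how large the input is.
print(hashlib.sha256(b"block 1: Alice pays Bob 5").hexdigest())

# Change a single character and the fingerprint changes completely, which
# is why tampering with one block breaks every link that follows it.
print(hashlib.sha256(b"block 1: Alice pays Bob 6").hexdigest())
```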

But maybe not if you have a powerful enough quantum computer, working with billions, trillions, or almost endless potential results at the same time. Your private key could be derived from your public key (your wallet address) alone, for instance. And this concern doesn’t stop with crypto. Banks and financial firms worldwide rely on similar cryptographic systems to secure transactions and protect accounts. Secure websites use public-key cryptography through HTTPS to keep logins and payments private.

If quantum computers can break these systems, the impact spreads across finance, commerce, and the most basic use of the Internet. It’d be kind of a digital apocalypse.

Are Your Funds at Risk Now?

The short answer is no. And they won’t be for a while. There are some quantum computers around, but they’re still giant beasts with few uses and a lot of bugs. Quantum technology isn’t an easy one to develop or scale. Shor’s algorithm, for instance, is one of the most notable quantum algorithms around, and it was first presented back in 1994. That was thirty-two years ago, even before cryptocurrencies, and quantum computing is still in its infancy today.

Currently, some of the largest superconducting processors are reported to have roughly 1,000+ qubits on a single chip (like IBM’s Condor and comparable systems), and other technologies have similar counts in that ballpark. Beating binary systems would need millions of qubits, though, because today’s qubits are still not, let’s say, “perfect” qubits, but “noisy” ones.

Here’s a short video illustrating this.

You see, qubits are extremely sensitive to heat, radiation, and tiny interactions with their environment. This interference makes them lose their quantum state, causing calculation errors and unstable results. That’s quantum noise: random disturbances (almost anything, really) that sabotage qubits.

To reduce this noise, systems use extreme cooling, shielding, better qubit materials, and quantum error correction, where many noisy qubits work together to form one reliable logical qubit. These measures work, but only partially. The problem isn't close to being fully solved and remains one of quantum computing’s main bottlenecks.

Now, all of this doesn’t mean we should just ignore the potential threat quantum computing poses to crypto and today’s digital systems. We have time, and we need to prepare.

A Silver Lining

To be fair, cryptographers aren’t just sitting back as quantum computing looms. One major line of defense is post-quantum cryptography, where researchers are already designing algorithms that resist both classical and quantum attacks. Bitcoin developers, for instance, have suggested potential upgrades that would allow quantum-resistant signature schemes. On a more experimental level, QANplatform is building chains around quantum-resistant cryptography from day one.

Even more futuristic ideas exist, as some research initiatives combine blockchains and quantum communication. We still haven’t figured out how to build these systems, which would use things like quantum entanglement across time, but that isn’t surprising, as these ideas are still just that: ideas. However, they show that the “quantum threat” can inspire entirely new security models, rather than only patches.

We should say that Obyte, despite being a DAG (Directed Acyclic Graph) and not having miners or “validators,” is still built on public-key cryptography and hash functions, like most cryptocurrencies. It may not be quantum-proofed yet (and no crypto network really is), but our developer team is active and releases new versions frequently. It’s quite possible that we’ll switch to a stronger hashing algorithm and quantum-safe digital signatures in the future.

Zooming out, quantum computing may end up acting more like a catalyst than a wrecking ball. It gives crypto the opportunity to clear out legacy assumptions, improve key management, and adopt more advanced systems before other industries. The outcome is a more resilient and proactive ecosystem. If crypto was born out of adversarial thinking, quantum pressure gives it a new and interesting opponent to outgrow.


:::info Featured Vector Image by Freepik

:::


LLM as a Judge: How to Build a Trustworthy Automated Evaluation Pipeline

2026-02-15 22:52:45

LLM-as-a-Judge uses one language model to evaluate another, enabling scalable, criteria-based scoring of LLM outputs. This guide explains the method, its common biases, and walks through a complete LangChain and Claude example for production-ready monitoring.
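For a flavor of the approach, here is a minimal, hedged sketch of the core judge call, assuming the `langchain-anthropic` package, an `ANTHROPIC_API_KEY` in the environment, and an illustrative model name and rubric:

```python
from langchain_anthropic import ChatAnthropic

# The "judge" is simply a second model call with an explicit scoring rubric;
# temperature=0 keeps the scores as repeatable as possible.
judge = ChatAnthropic(model="claude-3-5-sonnet-20241022", temperature=0)

RUBRIC = """You are an evaluation judge. Score the ANSWER to the QUESTION
for faithfulness and completeness, from 1 (poor) to 5 (excellent).
Reply with only the integer score.

QUESTION: {question}
ANSWER: {answer}"""

def judge_answer(question: str, answer: str) -> int:
    reply = judge.invoke(RUBRIC.format(question=question, answer=answer))
    return int(reply.content.strip())

# Example: score another model's output before logging it to a dashboard.
print(judge_answer("What does HTTP 402 mean?", "Payment Required."))
```

A production pipeline adds what the guide covers on top of this: bias controls, calibration against human labels, and monitoring hooks.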

EU Requires Apple and Meta to Open iOS and Messaging Services to Competitors

2026-02-15 22:30:06

The Digital Markets Act (DMA) has joined the General Data Protection Regulation (GDPR) as one of the most controversial regulations in tech. The act, which entered into effect in May 2023, introduces new compliance requirements on “gatekeepers,” defined as large digital platforms providing core platform services, including search engines, app stores, and messenger services.

So far, Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft have all been classified as gatekeepers, meaning they now have to allow third parties to interoperate with their services, among other requirements. Non-compliance can result in serious penalties, with the European Commission fining Apple and Meta €500 million and €200 million, respectively, in April 2025 for infringements.

For better or worse, the DMA highlights that big tech providers like Meta and Apple are going to need to change how they build and operate their platforms, giving third-party providers greater integration and interoperability with leading proprietary solutions like the App Store than ever before.

Does the DMA give consumers more choice?

The issue of whether the DMA gives consumers more choice is heavily debated. From one perspective, the act is an anti-monopoly effort, which aims to prevent digital platforms from excluding third-party solution providers with less reach.

From another, it can be considered an example of regulatory overreach, forcing companies to make changes to products in a way that can slow development and negatively impact the user experience.

For John Snoek, COO of app marketplace provider Onside.io, however, the DMA is a net positive for the consumer:

“The DMA clearly does create more real choice: large platforms like Apple, Google and Meta must allow steering, alternative app stores, choice screens for browsers or search, interoperability, etc., that should lower switching costs and make it easier for alternatives to reach users,” Snoek told The Sociable.

In the past, Apple has pointed to the potential negative impacts of the regulation on end users, with a blog post released in September 2025 stating that the DMA “is forcing us to make some concerning changes to how we design and deliver Apple products to our users in Europe.” Specifically, the tech giant blamed the act for delaying the rollout of features including Live Translation, iPhone Mirroring, Visited Places, and Preferred Routes.

Outside of its impact on consumers, the DMA will be extremely disruptive to big tech’s practices. Snoek, for instance, anticipates that tech companies like Apple and Meta will have to adapt their revenue and service models due to the pressure of more competition. This will likely come from focusing on other parts of their platforms and ecosystems, which don’t fall under the scope of the DMA.

The security concerns of interoperability

One of the other core criticisms that Apple put forward in September was that the DMA would expose end users to greater risks, particularly when downloading apps and making digital payments. The post noted that the act required Apple to allow sideloading, third-party app marketplaces, and alternative payment systems, even if they don’t match the privacy and security standards of the App Store.

The tech giant further suggested that users were exposed to scams, including fake banking apps, malware disguised as games, and payment systems that overcharge.

We reached out to Apple for comment on the DMA but did not receive a response.

But, just how legitimate are these concerns exactly? After all, we can’t ignore the fact that Apple is a public company that’s naturally seeking to maintain its competitive advantage.

“It’s mainly rhetorical, of course; when an ecosystem opens up for new players and less tech-savvy users, there is a risk. However, the DMA does not ban security measures. Gatekeepers like Apple can still impose proportionate security checks, and from our own experience, they do,” Snoek continued. “They just can’t use ‘security’ as a blanket excuse to block rivals or steering.”

“Furthermore, the risk level depends on the solution. For example, Alternative Stores themselves are apps that are vetted by Apple; the apps on the Alternative App Store go through Apple’s notarization process, [including] privacy and security checks. Sideloading, for example, poses more inherent risks to less tech-savvy users,” the COO said.

At this stage, it’s clear the DMA isn’t a paper tiger. With big fines looming over those that don’t comply, Snoek believes that we will see more negotiated compliance as the market builds towards a more level playing field where alternative app stores have greater opportunities to differentiate based on service, pricing, and content.


:::info Tim Keary, Journalist, The Sociable

:::
