2025-12-16 18:52:49
The narrative that “PHP is dead” has been wrong for a decade. The narrative that “PHP can’t do Web3” is just as incorrect.
While Node.js dominates the frontend dApp ecosystem, PHP and Symfony are quietly powering the heavy lifting of the decentralized web: indexing off-chain data, managing private key orchestration for enterprise wallets, and bridging the gap between Web2 business logic and Web3 protocols.
In this guide, we will build a production-ready Web3 integration using Symfony 7.4 and PHP 8.3+. We won’t use obscure, unmaintained wrappers. We will use the industry-standard libraries to read the blockchain, interact with smart contracts, and implement a Sign-In with Ethereum (SIWE) authentication system using Symfony’s security core.
We are simulating a real-world environment. We will assume you are running Symfony 7.4 (the LTS release as of late 2025).
We will use the following strictly typed, verified libraries: web3p/web3.php for JSON-RPC communication, kornrunner/keccak for Keccak-256 hashing, and simplito/elliptic-php for secp256k1 signature recovery.
Create your project and install the dependencies. Note that web3p/web3.php can interface with modern Guzzle versions if needed.
composer create-project symfony/website-skeleton my_web3_app
cd my_web3_app
# Install the Web3 standard library
composer require web3p/web3.php:^0.3
# Install crypto utilities for signature verification
composer require kornrunner/keccak:^1.1 simplito/elliptic-php:^1.0
# Install the Maker bundle for rapid prototyping
composer require --dev symfony/maker-bundle
Directly instantiating libraries in controllers is an anti-pattern. We will wrap the Web3 connection in a robust Symfony Service using Dependency Injection.
First, configure your node URL in .env:
# .env
ETHEREUM_NODE_URL="https://mainnet.infura.io/v3/YOUR_INFURA_ID"
Now, create the service. We use readonly classes (PHP 8.2) and constructor property promotion for a clean architecture.
<?php
// src/Service/Web3Client.php
namespace App\Service;

use Web3\Web3;
use Web3\Eth;
use Web3\Contract;
use Web3\Providers\HttpProvider;
use Web3\RequestManagers\HttpRequestManager;
use Symfony\Component\DependencyInjection\Attribute\Autowire;

readonly class Web3Client
{
    private Web3 $web3;

    public function __construct(
        #[Autowire(env: 'ETHEREUM_NODE_URL')]
        private string $nodeUrl
    ) {
        // We utilize a timeout of 10 seconds for RPC calls
        $provider = new HttpProvider(new HttpRequestManager($this->nodeUrl, 10));
        $this->web3 = new Web3($provider);
    }

    public function getEth(): Eth
    {
        return $this->web3->eth;
    }

    public function getContract(string $abi, string $address): Contract
    {
        // Bind the contract address here so callers don't have to call at()
        return (new Contract($this->web3->provider, $abi))->at($address);
    }
}
Let’s verify our connection by reading the native ETH balance of an address.
Note on Asynchrony: web3p/web3.php uses callbacks by default. To make this compatible with Symfony’s synchronous request/response lifecycle, we either wrap the callback in a simple latch or use the returned promise if available. For simplicity and reliability in this version, we capture the result in a by-reference variable, which is the standard pattern for this library in PHP 8.
<?php
// src/Controller/WalletController.php
namespace App\Controller;

use App\Service\Web3Client;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\Routing\Attribute\Route;
use Web3\Utils;

#[Route('/api/wallet')]
class WalletController extends AbstractController
{
    public function __construct(private Web3Client $web3Client) {}

    #[Route('/balance/{address}', name: 'app_wallet_balance', methods: ['GET'])]
    public function balance(string $address): JsonResponse
    {
        $balance = null;
        $error = null;

        // Fetch balance via JSON-RPC; the HTTP request manager is synchronous,
        // so the callback runs before getBalance() returns
        $this->web3Client->getEth()->getBalance($address, function ($err, $data) use (&$balance, &$error) {
            if ($err !== null) {
                $error = $err;
                return;
            }
            $balance = $data;
        });

        if ($error !== null) {
            return $this->json(['error' => $error->getMessage()], 500);
        }

        // Convert the BigInteger wei value to an Ether string;
        // Utils::fromWei() returns [$whole, $fraction]
        [$whole, $decimals] = Utils::fromWei($balance, 'ether');

        return $this->json([
            'address' => $address,
            'balance_wei' => (string) $balance,
            'balance_eth' => $whole . '.' . $decimals,
        ]);
    }
}
Start your server (symfony server:start) and visit https://localhost:8000/api/wallet/balance/0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045 (Vitalik’s address). You should see a JSON response with his current balance.
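A successful response will look roughly like this (values elided here, since the live balance changes constantly):

```json
{
  "address": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045",
  "balance_wei": "...",
  "balance_eth": "..."
}
```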
Reading ETH is easy. Reading a token balance (like USDC) requires the ABI (Application Binary Interface).
We will create a Service method to read any ERC-20 balance.
<?php
// src/Service/TokenService.php
namespace App\Service;

use Web3\Utils;

class TokenService
{
    // Minimal ERC-20 ABI for 'balanceOf'
    private const ERC20_ABI = '[{"constant":true,"inputs":[{"name":"_owner","type":"address"}],"name":"balanceOf","outputs":[{"name":"balance","type":"uint256"}],"payable":false,"type":"function"}]';

    public function __construct(private Web3Client $web3Client) {}

    public function getBalance(string $tokenAddress, string $walletAddress): string
    {
        // Web3Client::getContract() already binds the contract address via at()
        $contract = $this->web3Client->getContract(self::ERC20_ABI, $tokenAddress);

        $resultBalance = null;
        $contract->call('balanceOf', $walletAddress, function ($err, $result) use (&$resultBalance) {
            if ($err !== null) {
                throw new \RuntimeException($err->getMessage());
            }
            // Result keys follow the output names declared in the ABI
            $resultBalance = $result['balance'];
        });

        // Assuming 18 decimals for a standard ERC-20.
        // In production, fetch the token's 'decimals' value first (see below).
        $formatted = Utils::fromWei($resultBalance, 'ether');

        return $formatted[0] . '.' . $formatted[1];
    }
}
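As a sketch of that production refinement, here is one way to fetch the token's decimals through the same client. The helper and ABI fragment below are our illustration, not the article's code:

```php
// Hypothetical addition to TokenService
private const DECIMALS_ABI = '[{"constant":true,"inputs":[],"name":"decimals","outputs":[{"name":"","type":"uint8"}],"payable":false,"type":"function"}]';

public function getDecimals(string $tokenAddress): int
{
    $contract = $this->web3Client->getContract(self::DECIMALS_ABI, $tokenAddress);

    $decimals = 18; // fall back to the common ERC-20 default
    $contract->call('decimals', function ($err, $result) use (&$decimals) {
        if ($err === null && isset($result[0])) {
            $decimals = (int) $result[0]->toString();
        }
    });

    return $decimals;
}
```

With the real decimals in hand, you can scale the raw balance yourself instead of relying on fromWei()'s fixed 18-decimal 'ether' unit.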
This is the most critical part of Web3 UX. We do not want users to create passwords. We want them to sign a message with their wallet (Metamask, Rabby, etc.) to prove ownership.
The logic: the server issues a random nonce, the user signs a message containing that nonce with their wallet, and the server recovers the signing address from the signature (ecrecover) and compares it to the claimed address.
We need a helper to perform ecrecover. PHP does not have this built in, so we use simplito/elliptic-php and kornrunner/keccak.
<?php
// src/Security/Web3/SignatureVerifier.php
namespace App\Security\Web3;

use Elliptic\EC;
use kornrunner\Keccak;

class SignatureVerifier
{
    public function verifySignature(string $message, string $signature, string $address): bool
    {
        // Expect a 65-byte signature encoded as 0x + 130 hex characters
        if (!str_starts_with($signature, '0x') || strlen($signature) !== 132) {
            return false;
        }

        // 1. Hash the message according to the Ethereum standard (EIP-191)
        $prefix = sprintf("\x19Ethereum Signed Message:\n%d", strlen($message));
        $hash = Keccak::hash($prefix . $message, 256);

        // 2. Parse the signature (remove 0x, split into r, s, v)
        $signature = substr($signature, 2);
        $r = substr($signature, 0, 64);
        $s = substr($signature, 64, 64);
        $v = hexdec(substr($signature, 128, 2));

        // Adjust v for recovery (Ethereum uses 27/28, the library expects 0/1)
        $recId = $v - 27;
        if ($recId < 0 || $recId > 1) {
            return false;
        }

        // 3. Recover the public key
        $ec = new EC('secp256k1');
        try {
            $pubKey = $ec->recoverPubKey($hash, ['r' => $r, 's' => $s], $recId);
        } catch (\Exception $e) {
            return false;
        }

        // 4. Derive the address from the public key:
        // drop the first byte (the 04 prefix), hash the rest, take the last 20 bytes
        $pubKeyHex = substr($pubKey->encode('hex'), 2);
        $addressHash = Keccak::hash(hex2bin($pubKeyHex), 256);
        $recoveredAddress = '0x' . substr($addressHash, -40);

        // 5. Compare (case insensitive)
        return strtolower($address) === strtolower($recoveredAddress);
    }
}
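The signed message should embed a server-issued nonce, which the authenticator below checks against the session. The article does not show the nonce endpoint, so here is a minimal sketch, assuming session storage; the route and class names are our own:

```php
<?php
// src/Controller/AuthController.php (hypothetical nonce endpoint)
namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\Routing\Attribute\Route;

class AuthController extends AbstractController
{
    #[Route('/api/login_nonce', name: 'app_login_nonce', methods: ['GET'])]
    public function nonce(Request $request): JsonResponse
    {
        // Random nonce, stored server-side so the authenticator can verify
        // that the signed message actually contains it
        $nonce = bin2hex(random_bytes(16));
        $request->getSession()->set('login_nonce', $nonce);

        return $this->json(['nonce' => $nonce]);
    }
}
```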
Now we implement the Symfony 7 AbstractAuthenticator.
<?php
// src/Security/Web3Authenticator.php
namespace App\Security;

use App\Repository\UserRepository;
use App\Security\Web3\SignatureVerifier;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Exception\AuthenticationException;
use Symfony\Component\Security\Http\Authenticator\AbstractAuthenticator;
use Symfony\Component\Security\Http\Authenticator\Passport\Badge\UserBadge;
use Symfony\Component\Security\Http\Authenticator\Passport\Passport;
use Symfony\Component\Security\Http\Authenticator\Passport\SelfValidatingPassport;

class Web3Authenticator extends AbstractAuthenticator
{
    public function __construct(
        private SignatureVerifier $verifier,
        private UserRepository $userRepository
    ) {}

    public function supports(Request $request): ?bool
    {
        return $request->isMethod('POST') && $request->getPathInfo() === '/api/login_web3';
    }

    public function authenticate(Request $request): Passport
    {
        $data = json_decode($request->getContent(), true);

        $address = $data['address'] ?? '';
        $message = $data['message'] ?? ''; // Contains the nonce
        $signature = $data['signature'] ?? '';

        if (!$address || !$message || !$signature) {
            throw new AuthenticationException('Missing Web3 credentials.');
        }

        // Verify the signature matches the address
        if (!$this->verifier->verifySignature($message, $signature, $address)) {
            throw new AuthenticationException('Invalid signature.');
        }

        // Check nonce (optional but recommended: verify the nonce exists in session/cache)
        // $storedNonce = $request->getSession()->get('login_nonce');
        // if (!str_contains($message, $storedNonce)) throw ...

        return new SelfValidatingPassport(
            new UserBadge($address, function ($userIdentifier) {
                // Find user by wallet address or create a new one
                return $this->userRepository->findOrCreateByWallet($userIdentifier);
            })
        );
    }

    public function onAuthenticationSuccess(Request $request, TokenInterface $token, string $firewallName): ?Response
    {
        return new JsonResponse(['message' => 'Welcome to Web3', 'user' => $token->getUser()->getUserIdentifier()]);
    }

    public function onAuthenticationFailure(Request $request, AuthenticationException $exception): ?Response
    {
        return new JsonResponse(['error' => $exception->getMessage()], 401);
    }
}
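One piece the article leaves implicit: the authenticator must be registered on a firewall. A minimal sketch of that wiring, assuming the default `app_user_provider` that MakerBundle generates (adjust to your own user provider):

```yaml
# config/packages/security.yaml (sketch)
security:
    firewalls:
        main:
            lazy: true
            provider: app_user_provider
            custom_authenticators:
                - App\Security\Web3Authenticator
```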
Web3 is often about reacting to things happening on-chain. You shouldn’t make your user wait while you query the blockchain. Instead, use a worker.
We will create a command that polls for “Transfer” events and dispatches them to the Messenger bus.
<?php
// src/Command/BlockchainListenerCommand.php
namespace App\Command;

use App\Service\Web3Client;
use Symfony\Component\Console\Attribute\AsCommand;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

#[AsCommand(name: 'app:blockchain:listen', description: 'Polls for ERC20 Transfer events')]
class BlockchainListenerCommand extends Command
{
    public function __construct(private Web3Client $web3Client)
    {
        parent::__construct();
    }

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $contractAddress = '0x...'; // USDC or your token
        $transferTopic = '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'; // Keccak('Transfer(address,address,uint256)')

        $output->writeln("Listening for events on $contractAddress...");

        // In a real app, you would load 'last_scanned_block' from a DB instead
        $fromBlock = '0x' . dechex(20000000); // Hex block number

        // Uses eth_getLogs
        $this->web3Client->getEth()->getLogs([
            'address' => $contractAddress,
            'topics' => [$transferTopic],
            'fromBlock' => $fromBlock,
        ], function ($err, $logs) use ($output) {
            if ($err !== null) {
                $output->writeln('Error: ' . $err->getMessage());
                return;
            }
            foreach ($logs as $log) {
                // Dispatch to Symfony Messenger here
                $output->writeln('Transfer detected in transaction: ' . $log->transactionHash);
            }
        });

        return Command::SUCCESS;
    }
}
Note: In production, you would run this command inside a supervisord loop or cron, maintaining the last scanned block's state to ensure no events are missed. The dispatch itself can be a plain Messenger message, as sketched below.
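As a sketch of that Messenger wiring, here is a minimal message/handler pair; the class names are ours, not the article's:

```php
<?php
// src/Message/TransferDetected.php (hypothetical)
namespace App\Message;

final readonly class TransferDetected
{
    public function __construct(public string $transactionHash) {}
}
```

```php
<?php
// src/MessageHandler/TransferDetectedHandler.php (hypothetical)
namespace App\MessageHandler;

use App\Message\TransferDetected;
use Symfony\Component\Messenger\Attribute\AsMessageHandler;

#[AsMessageHandler]
final class TransferDetectedHandler
{
    public function __invoke(TransferDetected $message): void
    {
        // Persist the transfer, update balances, notify the user, etc.
    }
}
```

In the command's callback, you would inject Symfony's MessageBusInterface and call $this->bus->dispatch(new TransferDetected($log->transactionHash)) for each log; a messenger:consume worker then processes events off the polling loop.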
We have successfully bridged the gap. You now have a Symfony 7.4 application that can read native ETH balances over JSON-RPC, query ERC-20 contracts through their ABI, authenticate users with Sign-In with Ethereum, and listen for on-chain events in a background worker.
Web3 is not about rewriting your entire stack in Solidity or Rust. It’s about orchestration. Symfony is the perfect orchestrator — stable, secure and typed.
If you are looking to integrate high-value assets onto the blockchain or need a secure audit of your current Web3-PHP architecture, I can help.
Contact me to discuss your Web3 Strategy https://www.linkedin.com/in/matthew-mochalkin/
2025-12-16 16:37:14
Most “99%+ accurate” IP geolocation claims are misleading because there’s no shared dataset, no standard methodology, and no way to validate global accuracy across billions of constantly changing IPs. IPinfo rejects the industry’s accuracy theater and instead uses continuous measurement, transparency, and real-world validation to deliver trustworthy, evidence-backed IP data accuracy.
2025-12-16 15:10:57
How are you, hacker?
🪐 Want to know what's trending right now?
The Techbeat by HackerNoon has got you covered with fresh content from our trending stories of the day! Set email preference here.
## The Architecture of Collaboration: A Practical Framework for Human-AI Interaction
By @theakashjindal [ 7 Min read ]
AI focus shifts from automation to augmentation ("Collaborative Intelligence"), pairing AI speed with human judgment to boost productivity. Read More.
By @stevebeyatte [ 7 Min read ] From no-code tools to enterprise AI systems, discover the top AI workflow automation platforms to use in 2026, and learn which solution fits your business needs Read More.
By @cv-domain [ 5 Min read ] The .cv domain is shaping a new global identity layer in the AI era, as Cape Verde and Ola.cv build an open, DNS-anchored alternative to LinkedIn. Read More.
By @stevebeyatte [ 3 Min read ] Read the story of a Romanian engineer-musician blending creativity and ML to build human-centric AI cameras while keeping his passion for music alive. Read More.
By @melissaindia [ 5 Min read ] Partner with Melissa to empower VARs and SIs with accurate data, seamless integrations, and scalable verification tools for smarter, faster client solutions. Read More.
By @minio [ 4 Min read ] As DataOps becomes central to modern data work, learn what defines great DataOps engineering—and why fast, high-performance object storage is essential. Read More.
By @josecrespophd [ 11 Min read ] Three overlooked eigenvalue diagnostics can predict whether your AI will succeed, fail, or silently collapse. Here’s the 1950s math the industry keeps ignoring. Read More.
By @beldexcoin [ 3 Min read ] The obscura hardfork enabled Bulletproofs++ on the Beldex mainnet at block height 4939549. Learn what this upgrade means for you. Read More.
By @minio [ 11 Min read ] Learn how Apache Iceberg paired with AIStor forms a high-performance, scalable lakehouse architecture with SQL features, snapshots, & multi-engine support. Read More.
By @ainativedev [ 4 Min read ] OpenAI, Anthropic, Block, and other major tech players have united to launch the Agentic AI Foundation. Read More.
By @hackernoon-courses [ 3 Min read ] Meet Ignatius Sani - a HackerNoon Blogging Course Facilitator and hear his journey from software engineering to technical writing. Read More.
By @carlwatts [ 11 Min read ] A CFO-friendly deep dive into cloud repatriation: real math on 10 PB in AWS/GCP/Azure vs building your own tape-backed object storage tier. Read More.
By @hackernoon-courses [ 3 Min read ] Learn how consistent blogging builds authority, opportunity, and income. Join the HackerNoon Blogging Fellowship to grow your skills and career. Read More.
By @capk [ 8 Min read ] Tools like Copilot, Cursor, and Claude already save me hours every week by reading code, exploring messy open-source projects, and filling gaps where necessary. Read More.
By @pressreleases [ 2 Min read ] HackerNoon announces its AI-detection partnership with GPTZero. This AI detector will now analyse 5000+ monthly blog post submissions reviewed by the editors. Read More.
By @riedriftlens [ 3 Min read ] Buddhist cognitive science deals with your "structure of meaning." Read More.
By @ainativedev [ 3 Min read ] Warp is changing how it charges users, making it the latest in a string of coding-tool companies to revise their pricing models. Read More.
By @chris127 [ 7 Min read ] A blockchain-based UBI pegged to water prices eliminates economic desperation driving migration. No walls, laws, or taxes! Read More.
By @ishanpandey [ 7 Min read ] Discover how Collectibles.com is revolutionizing the $500B collectibles market by bridging traditional collecting with blockchain technology. Read More.
By @oxylabs [ 11 Min read ]
Discover the 12 best web scraping APIs of 2025, comparing performance, pricing, features, & success rates to help teams scale reliable data extraction. Read More.
🧑💻 What happened in your world this week? It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We got you covered ⬇️⬇️⬇️
ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME
We hope you enjoy this wealth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it.
See you on Planet Internet! With love,
The HackerNoon Team ✌️
2025-12-16 14:46:50
In our previous blog, "AI Can Write Code Fast. It Still Cannot Build Software.", we documented why AI coding assistants hit a wall: "3 days to MVP" became "full rearchitect required" after just 2.5 weeks for a moderately complex platform. Through analysis of over 2,500 hours of AI coding usage, we revealed consistent failure patterns (focus dilution, architectural drift, and confidence miscalibration) that occur without governance infrastructure.
But here's the question that article didn't answer: Is this experience universal, or did we just get unlucky?
The answer comes from rigorous research across thousands of developers, production codebases, and controlled experiments. The findings are both shocking and consistent:
When objectively measured, AI-assisted development was slower. When surveyed, 69% claim productivity gains, yet 45% say debugging AI-generated code is time-consuming.
We've analyzed four major research studies that reveal this productivity paradox. The implications are profound for any team betting on even the most advanced AI-assisted development.

If you're going to challenge the narrative that AI makes developers faster, you need solid methodology. A 2025 preprint from METR (Model Evaluation and Threat Research) offers exactly that: a randomized controlled trial with experienced developers working on codebases they knew intimately. (Note: this study has not yet been peer-reviewed.)
Unlike many AI productivity studies that use unfamiliar codebases or synthetic problems, this study focused on experienced contributors working on projects they knew well. The methodology offers an informative perspective, though it represents one specific context among many.
Methodology:
The study used a randomized controlled trial with 16 experienced open-source developers who had contributed to their repositories for multiple years. They completed 246 real tasks (bug fixes, features, refactors averaging ~2 hours each) on established codebases with 1M+ lines of code and 22K+ GitHub stars.
Important Caveats (from METR): the researchers were careful to note the limitations of their study. The results come from one specific setting (a small group of experienced developers working on large, mature codebases they knew well) and may not generalize to other contexts.
Key Findings:
The study measured both actual task completion time and developer perception. Before starting each task, developers predicted how much faster (or slower) they expected to be with AI assistance. After completing the task, they reported how much faster they felt they had been.

Critical Insight: Developers using AI tools took 19% longer to complete tasks, yet both before and after, they believed they were approximately 20% faster.
This isn't a small measurement error. This is a fundamental perception-reality inversion. A 39-point gap between what developers believe is happening and what's actually happening.
Time analysis revealed where the hours actually went. Developers spent less time actively coding and more time on AI-related overhead:
Across these studies and experiments, the common contributing factor is the same: AI generates code quickly, but developers spend additional time validating, debugging, and re-prompting.
Net result: More total time, but it feels faster because you're typing less.
You've probably seen informal summaries framed as "GitHub Copilot makes developers 55% faster!" It appears in pitch decks, blog posts, and executive presentations everywhere. GitHub's 2024 research clearly limited this finding to isolated coding tasks, though that nuance is often lost in broader discussions.
The methodology matters as much as the numbers. When you dig into what was actually measured, the picture gets more nuanced.
Read the methodology carefully, because what you measure determines what you find.
GitHub's study focused on a narrow slice of the development process: completion time for isolated, well-defined coding tasks in controlled benchmark scenarios. Essentially, they measured initial code generation speed.
What the study did not measure tells a different story: downstream debugging, code review, integration, and maintenance of the generated code.
The implication is significant: AI tools accelerate initial code generation but may not reduce overall development cycle time when accounting for complete software lifecycle activities.
Analogy: Measuring a writer's productivity by how fast they type sentences, then being surprised when the larger work still requires substantial editing.
According to McKinsey's 2023 analysis of generative AI in software development, productivity gains vary significantly by task type. Their methodology: 40+ developers completing bounded tasks over several weeks.
Findings by Task Type (summarized below):

The critical finding often lost in headlines:
"Time savings shrank to less than 10 percent on tasks that developers deemed high in complexity due to, for example, their lack of familiarity with a necessary programming framework." (McKinsey, 2023)
Similarly, for junior developers: "in some cases, tasks took junior developers 7 to 10 percent longer with the tools than without them" (McKinsey, 2023).
The study also noted developers had to "actively iterate" with the tools to achieve quality output, with one participant reporting he had to "spoon-feed" the tool to debug correctly. Tools "provided incorrect coding recommendations and even introduced errors" (McKinsey, 2023).
The pattern: AI accelerates simple, well-defined tasks. Gains diminish sharply with complexity (<10%). For junior developers, AI assistance can be net negative.
The previous three studies used controlled experiments. The Stack Overflow 2025 Developer Survey reveals what nearly 50,000 developers actually experience in the field.
The productivity claim: 69% of developers report productivity gains from AI tools.
Source: Stack Overflow 2025 Press Release
Sounds like success. But here's the counterweight:
The debugging tax:
"45% of developers identified debugging AI-generated code as time-consuming, contradicting claims that AI can handle coding tasks entirely." (Stack Overflow, 2025)
This is the "time shifting" pattern from our analysis made explicit: nearly half of developers report that debugging AI output consumes significant time.
The math doesn't add up: If 69% claim productivity gains but 45% say debugging AI code is time-consuming, where's the net gain? The answer: developers perceive the fast code generation as productivity, while discounting the debugging time that follows.
Study 1 explains this. Developers who are objectively 19% slower report feeling 20% faster. The 69% claiming productivity gains are self-reporting from the same population with a 39-point perception gap. The 45% reporting debugging overhead is closer to objective reality: they're measuring actual time spent, not how fast it felt.
The research reveals a consistent pattern across all four studies: time shifting rather than time saving.
Compare the two workflows below: traditional development without AI versus AI-assisted development. The steps are kept at a high level, focused on the main points of design, development, reviews, testing, integration, and deployment.
Traditional Development: design, manual development, review, testing, integration, deployment.
AI-Assisted Development: replaces manual development with prompting, AI code generation, and debugging of AI code, with repeated reviews and tests to check whether the solution works (red).

AI generation is fast, but the review-debug-test cycle (red) consumes more time than was saved. Every "No" loops back to Prompt, work shifts from creating code to correcting code.
If developers are objectively slower, why do they report feeling faster? The answer lies in how our brains perceive work: several cognitive biases compound the perception gap, not least that typing less feels like doing less.
The trap: the subjective feeling of productivity becomes divorced from objective delivery metrics.
Challenge: ROI calculations based on developer perception systematically overestimate actual value.
If your team perceives they're 20% faster with AI but they're actually 19% slower, your business case for AI tooling licenses may be significantly misestimated.
Recommendation: Measure What Matters
Track the metrics that reflect actual business value, not developer sentiment: for example, cycle time to production, deployment frequency, and defect and rework rates.
If these metrics don't improve compared to pre-AI tools, then AI tools are creating busy-work, not value.
Warning sign: Teams report feeling productive while delivery metrics stagnate or decline.
Pattern Recognition:
The AI productivity trap looks like this: teams feel faster, tooling spend grows, and delivery metrics stagnate while debugging quietly absorbs the "saved" time.
Strategic Approach:
Based on the research, here's where AI actually helps versus where it hurts, at least currently:
Use AI for: simple, well-defined, bounded tasks, boilerplate, and code you can review and verify quickly.
Don't use AI for: high-complexity tasks, unfamiliar frameworks, or architecture-critical work; be especially cautious about leaning on it for junior developers.
Golden Rule: Treat AI as advanced autocomplete, not an autonomous developer. Validate all outputs with more rigor than a junior developer's code.
Organizations face a strategic choice:
Approach A: Work within current limitations by being selective about when and how you use AI.
Approach B: Build infrastructure that compensates for AI's limitations.
This series explores a combined strategy: Strategic task decomposition (Approach A) paired with AI-governance infrastructure (Approach B) to work at scale. Effective strategy requires tooling to enforce it.
Episode 2 examines the research evidence for why model improvements alone won't solve these systematic limitations.

Here are the five key takeaways from the research:
1. Objectively measured, experienced developers were 19% slower with AI tools, while believing they were about 20% faster (METR).
2. Headline numbers like GitHub's 55% apply to isolated code-generation tasks, not the full development lifecycle.
3. Gains shrink to under 10% on complex tasks, and junior developers can be net slower (McKinsey).
4. 45% of developers report that debugging AI-generated code is time-consuming (Stack Overflow, 2025).
5. The overall pattern is time shifting, not time saving: work moves from creating code to correcting it.
Bottom Line: Without governance infrastructure, AI tools create busy-work, not business value.
Episode 2: Why Scaling Won't Fix It
Many assume that more powerful models, better prompts, or larger context windows will solve AI's limitations. "GPT-5 will solve everything!" "Just improve your prompts!" "1 million tokens changes everything!"
The research tells a different story.
We'll examine three scaling promises that fail: more powerful models, better prompts, and larger context windows.
The fundamental issue: Semantic understanding degrades with complexity regardless of model size, prompt quality, or context capacity. This is an architectural problem requiring governance infrastructure, not a resource problem requiring more scale.
Part of the "AI Coding Assistants: The Infrastructure Problem" research series, documenting systematic research on AI-assisted development effectiveness, with a focus on governance infrastructure as a solution to measured limitations. Based on 20+ years of AI/ML infrastructure experience across commercial and defense domains.
Up Next: Episode 2, Why Scaling Won't Fix It: why bigger models, better prompts, and larger context windows don't solve semantic understanding degradation.
2025-12-16 14:45:05
Connected TV advertising will reach $26.6 billion in the U.S. in 2025 according to IAB, making it the fastest-growing segment in digital advertising. It's also becoming the first channel to fully confront what IPv6 adoption means for measurement at scale.
The challenges CTV advertisers are facing, like address rotation breaking frequency caps, privacy extensions disrupting tracking, geo-targeting struggling with vast IPv6 ranges, aren't unique to streaming. They're a preview of what's coming for all digital advertising as IPv6 becomes the dominant protocol.
With IPv6 now being the primary means that users connect to the internet (86% in France, 75% in Germany, 52% in the U.S.), the measurement infrastructure built for IPv4's stability is producing increasingly unreliable results across programmatic display, mobile advertising, video campaigns, and CTV.
Current IPv6 adoption by country:

IPv6 will affect your measurement, and you'll want to be ready.
When IPv4 finally ran out of space in 2011, IPv6 answered with scale: 340 undecillion (3.4 × 10³⁸) possible addresses, enough to assign billions of unique identifiers to every person on earth.
Yet that scale also changed the internet’s structure: with so many addresses, only a minute fraction are actually allocated or active, making it impossible to “scan” the IPv6 internet the way we did with IPv4.
More importantly, IPv6 didn’t just expand capacity; it changed behavior. Its privacy-first architecture makes the network far more dynamic, creating new challenges for anyone trying to measure, target, or attribute digital traffic accurately.
Read our comparison of IPv4 and IPv6.
Devices don’t keep the same IPv6 address for long. According to RFC 4941 and its 2021 update RFC 8981, the operating systems of modern devices (from smartphones to smart TVs) use privacy extensions, a mechanism that creates temporary, randomly generated IPv6 addresses and replaces them every 24–72 hours (sometimes even more frequently).
The goal is privacy: early versions of IPv6 embedded a device’s hardware ID (the MAC address) directly in the IP, making long-term tracking trivial. Privacy extensions fixed that by introducing randomization, but in doing so, they also broke the persistence advertisers and analytics rely on.
The same device can now appear under several different addresses within a single week, making it nearly impossible to link impressions, sessions, or behaviors over time.
ISPs take this privacy principle a step further. Many residential networks periodically rotate the customer’s entire prefix (/56 or /48), effectively reassigning each household a new IPv6 block every day. Studies including Follow the Scent, One Bad Apple Can Spoil Your IPv6 Privacy, and the APNIC Blog confirm that daily prefix cycling is now common among large broadband providers.
Where IPv4 offered relative stability, IPv6 is fluid by design. For digital advertisers across all channels, this means one fundamental shift: you can't assume. You have to measure.
CTV advertising is the first channel where the challenges with IPv6 measurement became impossible to ignore.
Under IPv6, the same household can appear as dozens of different “users” within a week. An advertiser may serve ten impressions to what looks like ten different homes when it’s actually one family whose addresses change every 48 hours.
Frequency capping fails. Reach calculations inflate. Attribution models break. And because IPv6 ranges are large and frequently reassigned by ISPs, geo-targeting also loses precision: the same network block can represent different households over time.
While individual IPv6 addresses can be extremely specific and often tied to a single device, that precision doesn’t last. Privacy extensions and dynamic prefix delegation mean those identifiers rotate constantly, turning what should be a strength into a measurement challenge.
These same problems are coming for every advertising channel. Programmatic display, mobile advertising, and video platforms across the web will see the same phantom audience inflation and attribution drift.
CTV advertisers are just the first to feel it at scale.
As IPv6 becomes the dominant protocol, advertising platforms across all channels need IPv6-aware measurement:

Accurately mapping the IPv6 internet for advertising takes more than legacy methodologies. It requires active measurement, continuous validation, and research-grade expertise applied to the challenges adtech platforms face every day: attribution, targeting, and fraud prevention.
IPv6 networks behave differently from IPv4 in nearly every way, from how addresses are allocated to how ISPs design privacy and naming systems. Legacy IPv4 heuristics simply don’t hold up.
That’s why IPinfo built a new measurement model, one designed specifically for IPv6’s fluid and privacy-centric architecture, grounded in research and active validation rather than static lookups. Most IP data providers simply “bolt on” IPv6 to IPv4-based systems, resulting in patchy coverage and low accuracy.
IPinfo took a different path: we built IPv6 measurement from the ground up, guided by empirical validation and peer-reviewed research.
Even identifying which IPv6 addresses exist is a challenge. Out of 340 undecillion (3.4 × 10³⁸) possible addresses, only a tiny fraction are actually allocated and fewer still are active.
Traditional mapping methods don’t scale to that size; blind scanning hits mostly empty space. That’s why accurate IPv6 mapping requires blending multiple data sources; no single feed can reveal the full picture:

IPv6 coverage means identifying which prefixes are allocated, announced, and active, and keeping that intelligence current as networks evolve.
Before joining IPinfo, our Head of Research, Oliver Gasser, led IPv6 measurement research at TU Munich (Technical University of Munich) and the Max Planck Institute for Informatics.
He co-authored several of the foundational academic papers that built modern IPv6 measurement:
These studies directly led to the public IPv6 Hitlist Service at https://ipv6hitlist.github.io, now the global benchmark dataset for understanding IPv6 topology and responsiveness.
Today, that same methodology powers IPinfo’s datasets.
By combining the scientific rigor of the IPv6 Hitlist with ProbeNet’s global scale, we continuously validate, classify, and geolocate IPv6 addresses using peer-reviewed techniques trusted by the research community.
Counting IPs isn’t the right measure of IPv6 coverage: every provider can list all allocated addresses, since that data comes from public WHOIS registries. What matters is evidence: how many of those addresses have been actively measured, verified, or observed in use.
At IPinfo, we combine registry completeness with large-scale, evidence-based validation through ProbeNet and the IPv6 Hitlist, continuously probing, validating, and classifying active IPv6 space every week.
For comparison, IPv4 coverage includes 3.1B measured addresses, ~484M with RTT data, and 5.4M routers. IPv6 router counts appear much higher than IPv4 (224M vs 5.4M) because IPv6 devices are directly addressable and visible in traceroute measurements; IPv6 exposes much more of the network’s inner structure. Because IPv6 depends on ICMPv6 for basic operation and lacks the NAT layers that hide IPv4 devices, traceroutes reach deeper and reveal many more router interfaces. It’s not that the internet suddenly has more hardware; IPv6 simply makes it visible.
Unlike providers who simply catalog IPv6 allocations from public registries, IPinfo continuously measures and validates active IPv6 networks, combining registry coverage with large-scale measurement evidence. This approach makes IPinfo one of the few companies providing IPv6 data grounded in real, observable internet behavior, not just static records.
IPinfo’s datasets cover all allocated IPv4 and IPv6 space, verified through registry data and active measurement. When new allocations occur or ranges transfer, updates propagate across our datasets within 24 hours.
ProbeNet, our internet measurement platform, performs continuous latency sampling, traceroutes, and network validation from over 1,200 points of presence in 140+ countries.
"We cover 100% of the allocated IPv4 and IPv6 space. In practice this means we include every prefix visible on the public internet, since an address that isn’t allocated can’t be used."
— Maxime Mouchet, Data Engineer at IPinfo
Whether you're buying CTV, mobile, display, or video, newly allocated residential IPv6 blocks are classified correctly from day one, not months later when legacy databases finally catch up.
These measurements ground our data in empirical reality. For advertising across all channels, this means:
IPv6 is already the majority protocol in many markets, and that percentage grows daily. For businesses that depend on IP intelligence, the choice is clear: Adapt to IPv6's fluid, privacy-conscious architecture with measurement-based data or watch accuracy degrade as IPv4 heuristics become increasingly irrelevant.
IPinfo’s investment in IPv6 measurement, led by Oliver Gasser, a pioneer in global IPv6 research, ensures our data stays aligned with the most advanced methodologies in the field. We've built our infrastructure around a simple principle: the only trustworthy IPv6 data comes from active measurement combined with research-grade methodology. That means:
Is IPv6 affecting your measurement accuracy across channels? Are you confident in your network classification and geolocation for IPv6 traffic?
Want to see how IPv6 impacts your data? Reach out to learn how IPinfo's measurement-based approach can bring clarity to your IPv6 traffic.
2025-12-16 14:36:30
I have an open-source-always philosophy. Today, that philosophy led to a 71% fidelity breakthrough on IBM’s 127-qubit Eagle processor.
I’ve been building the Perceptual Grid Engine (PGE) for a while. Originally, it was an architecture for Artificial Intelligence—designed to help models maintain memory consistency and "object permanence" over long context windows.
It is based on navigational techniques used by blind individuals like myself: I do everything in a systematic grid-like pattern, scanning from left to right, down, then right to left until I have completed my task. I saw no reason why AI, and now quantum computing, shouldn't do the same.
This morning, I stared at my architecture diagram and realized something strange. The logic I used to "hand off" memory states in AI looked suspiciously like Quantum Teleportation.
So I asked the dangerous question: Can I map my software-defined AI grid onto physical quantum hardware to solve the decoherence problem?
An hour later, I had my answer. I didn't just run a simulation. I ran a Round-Trip Stress Test on ibm_fez (one of IBM’s utility-scale quantum processors). The results shocked me.
In AI, if you lose context, the model hallucinates. In Quantum Computing, if you lose coherence, the qubit becomes noise.
Current quantum processors (QPUs) have a "T1 time" (coherence lifetime) of roughly 100-300 microseconds. If your calculation takes longer than that, your data dies. This is the biggest bottleneck in the industry.
Most researchers try to solve this with better hardware (shielding, colder fridges). I decided to solve it with better software logic.
My hypothesis was simple: Don't let the qubit die.
Instead of leaving data in a single qubit (Q0) until it rots, I adapted my PGE logic to "teleport" the state to a fresh qubit (Q2) the moment coherence starts to dip. Then, crucially, I wipe the original qubit clean and send the data back.
It’s effectively a Quantum RAM Refresh Cycle, derived entirely from AI consistency logic.
To make this work on real atoms, I had to use IBM’s latest Qiskit SDK v1.0 and the SamplerV2 primitive. The key was using Dynamic Circuits—specifically the if_test logic.
This isn't a pre-recorded script. The hardware measures the "dying" qubit mid-flight, makes a classical logic decision (0 or 1), and fires a correction gate (X or Z) to the fresh qubit before the data decays.
Here is the "Round Trip" engine that ran on the metal:
# The PGE "Ping-Pong" Protocol
# 1. Hop 1: Teleport q0 -> q2
# 2. Reset q0 (Mid-circuit wipe)
# 3. Hop 2: Teleport q2 -> q0 (Return home)
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

qr = QuantumRegister(3, "q")
crz = ClassicalRegister(1, "crz")        # holds the Z-correction bit
crx = ClassicalRegister(1, "crx")        # holds the X-correction bit
result = ClassicalRegister(1, "result")  # final readout

def create_round_trip_circuit():
    qc = QuantumCircuit(qr, crz, crx, result)

    # --- HOP 1: OUTBOUND ---
    # Entangle & Teleport to Grid (q2)
    qc.h(1)
    qc.cx(1, 2)
    qc.cx(0, 1)
    qc.h(0)
    qc.measure(0, crz)
    qc.measure(1, crx)

    # Real-time Hardware Correction (dynamic circuits)
    with qc.if_test((crx, 1)):
        qc.x(2)
    with qc.if_test((crz, 1)):
        qc.z(2)

    # --- THE CRITICAL STEP: RESET ---
    qc.reset(0)
    qc.reset(1)

    # --- HOP 2: INBOUND ---
    # Teleport back to Home (q0)
    # ... (Repeat logic in reverse) ...

    return qc
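For readers who want to reproduce this, here is a rough sketch of how a dynamic circuit like this can be submitted with the SamplerV2 primitive mentioned above. This is our reconstruction, not the author's exact job script; parameter names such as `mode=` vary across qiskit-ibm-runtime versions, and saved IBM Quantum credentials are assumed:

```python
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

service = QiskitRuntimeService()        # assumes saved credentials
backend = service.backend("ibm_fez")

# Transpile to the backend's native gate set (ISA circuit)
pm = generate_preset_pass_manager(optimization_level=1, backend=backend)
isa_circuit = pm.run(create_round_trip_circuit())

sampler = Sampler(mode=backend)
job = sampler.run([isa_circuit], shots=4096)
print("Job ID:", job.job_id())

# Counts from the 'result' classical register
print(job.result()[0].data.result.get_counts())
```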
ibm_fez
I didn't want to rely on a simulator. Simulators are perfect; reality is noisy. I queued the job on the IBM Quantum Platform.
The Test: run the full round trip (teleport out, mid-circuit reset, teleport back) on real hardware and compare the recovered signal against the single-hop baseline.
The Results:
ibm_fez (127-qubit Eagle Processor), Job ID: d4usjvcgk3fc73auamsg (publicly verifiable on IBM Quantum)
I expected the signal to drop to 50% (random noise) after the second hop. Instead, it stayed rock solid. We lost less than 3% fidelity after doubling the circuit depth and adding a hardware reset.
I didn't invent quantum teleportation (physics did that in 1993). But I believe this is one of the first demonstrations of a Software-Defined Quantum Memory derived from AI principles.
By treating physical qubits not as "registers" but as "disposable containers" that can be refreshed via teleportation loops, the Perceptual Grid Engine effectively extends the lifespan of quantum data. We are moving from "Static Quantum Computing" to "Dynamic, Self-Healing Quantum Computing."
This proved that the PGE is substrate-independent. It works on neural networks, and it works on superconducting transmon qubits.
I’m open-sourcing the entire codebase today. If you have an IBM Quantum account (even the free tier), you can clone the repo and run the Grid Handoff yourself.
https://github.com/damianwgriggs/Perceptual-Grid-Engine-Quantum-Experiment
AI PGE Memory Article:
https://medium.com/@dgriggsde/the-engine-of-creation-unleashed-why-im-giving-away-the-ai-that-writes-marathon-stories-1b4b13107213