2026-03-04 19:56:21
tl;dr: the “project” of open-source age verification will inevitably implode, probably messily, and waste everyone’s time whilst also reifying a narrative of “support” for an approach to user safety that will not deliver its purported benefits.
Here I explain why it will fail from the perspective of ~40 years of free software and open-source coding.
And it’s not “because the user will switch it off”.
If you throw a metaphorical rope in front of a bunch of geeks, they will rapidly group together, split into two or more factions, and engage in tugs-of-war with each other whilst self-importantly arguing over the architectural and strategic errors that the other team is making.
You can go browse the sorry husk of Stack Overflow for evidence, but this has also always been the case; for any given software niche there are mutually-hostile solutions: vi versus Emacs, GNOME versus KDE, sendmail versus Postfix versus qmail, and so on.
Software development in general, and open source in particular, institutionalises “exit” and “competition”. It is in the nature of the open-source community for people to become sufficiently angry, or otherwise motivated, to rage-quit an existing project and attempt to set up “differently”, for any number of reasons: project governance, solution architecture, implementation language, personal or corporate conflict, or outright ignorance or hatred of existing approaches.
This does not always happen, but long-term consistency of a project is usually the result of a combination of two or more of: strong governance, a shared vision, and a healthy surrounding ecosystem.
Basically: AV is not a governed, visionary ecosystem; it’s a tickbox compliance requirement.
It’s a free-for-all.
Subsequent to the announcement that the State of California will demand AV, any number of junior devs will now want to make names for themselves by being “first to ship this important feature”, and so they will come up with half-assed solutions that fit within their preferred ecosystem (e.g.: DBus/Ubuntu) and nowhere else.
This is fine. Think of it as your five-year-old kid at the beach making a sandcastle. That’s what they do. They will demand applause, but it’s still an imaginary thing. And there will be dozens of sandcastles on the beach in short order, and they will all prosecute war amongst themselves.
The thing is: Age Verification is literally a gatekeeping solution. If it is to be effective at all, it must be deployed in situations where gatekeeping makes sense, and general-purpose operating systems are not those places.
This is a point we have already learned from the likes of Digital Rights Management and the various copy-prevention schemes for floppy disks, CDs, and DVDs. To be effective, the scope of the gatekeeping needs to be beyond user control, which is not the case in general-purpose operating systems. Various workarounds such as Trusted Platform Modules have been proposed in the past, and (surprise!) they work poorly, often not at all, in open-source operating systems where the intent is to exclude the user.
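To make that concrete, here is a minimal sketch in Python of a hypothetical client-side age gate; the function and its surrounding comments are inventions for illustration, not drawn from any real AV implementation:

    # Hypothetical client-side age gate in an open-source component.
    # On a user-administered system the user controls this source,
    # so the check is advisory at best.
    def age_gate(verified_age: int | None) -> bool:
        # Return True if the bearer may proceed past the gate.
        return verified_age is not None and verified_age >= 18

    # A user who controls the source simply patches the body to read
    #     return True
    # and reinstalls; without hardware attestation that excludes the
    # user, nothing forces the patched build to be rejected.

The point is not merely that users will bypass it, but that any enforcement living inside user-modifiable code cannot, by construction, be beyond user control.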
If you want to understand the background some more, go read Cory Doctorow’s The Coming War On General Purpose Computing; we have seen this coming for more than a decade.
So: to wrap this up really briefly:
Privacy wonks will hate it, but Mark Zuckerberg is correct that the proper place for prescriptive Age Verification is in the App Store of a mobile device; yes, that means Google and Apple will “find out more about you”, but that can be minimised if they choose to implement a privacy-preserving protocol à la what happened with COVID contact-tracing (the Google/Apple Exposure Notification framework).
The reason people are angry about this is that they don’t understand that the App-Store-and-Google/Apple-Account approach to AV is a degenerate form of what we should have been doing all along: age attestation, not age verification.
The user should be signed up with their own preferred provider of private age-attestation services, which they can then enmesh into whatever transactions require an age test; this puts the user in control of provider choice and information protection, and the relying parties (vendors, porn sites, forums, whatever) should be obliged to accept attestation tokens.
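As a concrete illustration of that model, here is a minimal sketch in Python, assuming an Ed25519-signing attestation provider and the pyca/cryptography package; the token fields and helper functions are hypothetical illustrations rather than any real protocol:

    # Sketch of privacy-preserving age attestation (hypothetical protocol).
    # Requires the pyca/cryptography package for Ed25519 signatures.
    import json
    import os
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
        Ed25519PublicKey,
    )

    # The attestation provider, chosen by the user.
    provider_key = Ed25519PrivateKey.generate()

    def issue_token(claim: str = "age>=18", ttl: int = 300) -> bytes:
        # Issue a short-lived signed token asserting one bare claim.
        # Deliberately absent: name, date of birth, account identifier.
        # The random nonce stops relying parties correlating tokens.
        payload = json.dumps({
            "claim": claim,
            "nonce": os.urandom(16).hex(),
            "expires": int(time.time()) + ttl,
        }).encode()
        return payload + b"." + provider_key.sign(payload).hex().encode()

    # The relying party: vendor, porn site, forum, whatever.
    def verify_token(token: bytes, provider_pub: Ed25519PublicKey) -> bool:
        # Accept iff the signature verifies against the provider's
        # public key, the claim matches, and the token is unexpired.
        payload, _, sig_hex = token.rpartition(b".")
        try:
            provider_pub.verify(bytes.fromhex(sig_hex.decode()), payload)
        except (InvalidSignature, ValueError):
            return False
        fields = json.loads(payload)
        return fields["claim"] == "age>=18" and fields["expires"] > time.time()

    token = issue_token()
    print(verify_token(token, provider_key.public_key()))  # True

Note what the relying party learns: only that some bearer holds a fresh “age>=18” token signed by a trusted provider, and nothing about who the bearer is. A real deployment would also need a trust list of provider keys and binding of tokens to individual transactions to prevent replay; the sketch shows only the shape of the privacy property.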
But we don’t do that, probably because (a) it makes less money for the industry, and (b) governments get more identity-tracking metadata out of the age-verification approach.
2026-03-04 17:14:39
Nothing to do with it being Chinese, then?
“TikTok told the BBC it believed end-to-end encryption prevented police and safety teams from being able to read direct messages if they needed to. It confirmed its approach to the BBC in a briefing about security at its London office – saying it wanted to protect users, especially young people, from harm. It described this stance as a deliberate decision to set itself apart from rivals.”
2026-03-03 19:12:38
https://csa-scientist-open-letter.org/ageverif-Feb2026
Article: https://www.politico.eu/article/age-check-social-media-scientist-warning
Archived at https://archive.ph/EADuL
Via:
2026-03-02 19:01:18
If you forcibly isolate an entire generation from influences that they are bound to encounter later in life, you are doing them harm by preventing them from learning early how to cope.
“If we are going to eliminate peanuts, and another child is allergic to hazelnuts, and another child is allergic to milk, and another child to [Instagram] — there’s no end to this,” he says.
2026-03-02 17:08:45
Age-verification systems require collecting sensitive data to support the biometric information. In no time, the internet will become a fully surveilled digital panopticon.
2026-03-01 02:14:12
https://www.wsj.com/opinion/if-the-kids-are-online-we-should-be-involved-9c6fd4da
Attention, discipline and judgment are learned at home.
Feb. 26, 2026 11:36 am ET
The global rush to ban teenagers from social media is emotionally understandable — and strategically shortsighted (“Social-Media Bans for Youth Gain Momentum Worldwide,” Page One, Feb. 19).
Bans feel decisive, but they avoid the harder truth: The digital environment isn’t temporary, and adolescence can’t be postponed until it becomes convenient for adults. We aren’t raising children for a world without algorithms. We are raising them for a world shaped by artificial intelligence, public visibility and constant comparison. Removing access doesn’t build resilience, judgment or self-regulation. It simply delays the moment those skills are required, often until parental influence has weakened. History shows that prohibition rarely produces maturity.
Teens need adults who are willing to set clear boundaries, enforce consequences, teach digital literacy and model disciplined use of technology themselves. They need schools that teach attention as a skill. They need policymakers who demand transparency and guardrails from platforms that monetize adolescent engagement.
The real issue isn’t whether teenagers have access to social media. The issue is whether we are willing to do the work of raising them within it. A ban transfers responsibility outward, to governments and corporations. But attention, discipline and judgment are learned at home. If we want contributing adults rather than digitally dependent ones, we should focus less on shielding teenagers from the modern world and more on forming them to navigate it.
Rod Wilson
Seal Beach, Calif.