2025-11-27 20:35:50
This is how you kill privacy & anonymity, and disenfranchise the weak, people; do you not ALSO care about that, JM?
2025-11-27 15:10:20
From the outset, Chat Control was a proposal that aimed to introduce mass surveillance. That ambition is clearly still present … among many of the member states in the Council. [It] failed to introduce mass surveillance but has succeeded in paving the way for new attempts
Long tweet. Full version:
The Council of Ministers in the EU has, after three years, now reached a common position on Chat Control. The requirement for mandatory scanning (including end-to-end encrypted messaging services) has been removed, which is a major victory. The EU Council failed to implement mandatory mass surveillance. However, in its proposal, they are laying the groundwork for mass surveillance in the future.
What happens now?
The Council will now enter negotiations with the European Parliament, led by the European Commission. We urge the Parliament to stand firm in the trilogue negotiations and not deviate an inch from its previous position, demanding: no mass surveillance whatsoever without suspicion and a court order, no ID-verification requirements, and no censorship of legal content.
The EU Council is preparing for mandatory mass surveillance and censorship
The Council’s version of Chat Control includes voluntary scanning, vaguely worded legislation that may entail requirements for age verification and mandatory ID checks (even for end-to-end encrypted services), and an article stating that the requirement for mandatory scanning shall be reconsidered every three years. They also introduce a new infrastructure for blocking material, where it is up to each member state to block what they consider illegal. At the same time, a massive EU center is being established to work exclusively on this. All in all, this indicates that the EU Council is aiming to build an infrastructure for mass surveillance, and the legislative proposal is written in a way that opens the door to it.
The EU Council’s Chat Control version
– The EU Council’s Chat Control version introduces a new type of scanning for so-called new material and grooming. This means that AI will scan people’s conversations, photos and videos in search of criminal content. This will produce enormous numbers of false positives, and people’s private material will go from being flagged by an AI to being examined by employees at a new EU center. This is mass surveillance: people’s private lives will be scanned without any suspicion and without a court order. This scanning is carried out in cooperation with American companies and can at any time be repurposed to scan for virtually anything; Europol has already requested broader scanning and wants access to material that is not illegal.
– Every three years, the European Commission will revisit the law and may attempt to force through mandatory scanning (even for end-to-end encrypted services). Messaging services (including end-to-end encrypted ones) must take “all reasonable measures” to reduce the risk of their services being misused, including implementing age verification. This means that the EU may require ID checks and ban anonymous use of messaging services and social media. This poses problems for people who criticize those in power in authoritarian countries, for whistleblowers who want to leak documents, and for sources who wish to speak anonymously with journalists.
– A new infrastructure for blocking material is introduced, where it is up to each member state to issue blocking orders for what it considers illegal. This means that content which is illegal in one country will also be blocked in countries where it is legal. Once this infrastructure is in place, it also creates a slippery slope toward broader censorship.
Stop Chat Control
From the outset, Chat Control was a proposal that aimed to introduce mass surveillance. That ambition is clearly still present within the Commission and among many of the member states in the Council. The Council failed to introduce mass surveillance but has succeeded in paving the way for new attempts. This applies not only to the prospect of fresh proposals for mandatory chat control scanning every three years; it is part of a broader development in which private and secure communication is being challenged by forces seeking to introduce mass surveillance. ProtectEU is a rebranded Chat Control, aimed at banning encryption. National laws are trying to do the same. We need to put a stop to these attempts here and now.
2025-11-26 19:23:44
It’s the response from the Australian Government that tells you what they really think of the young people they claim to be protecting:
Teens launch High Court challenge to Australia’s social media ban
However, 15-year-olds Noah Jones and Macy Neyland – backed by a rights group – will argue the ban completely disregards the rights of children.
“We shouldn’t be silenced. It’s like Orwell’s book 1984, and that scares me,” Macy Neyland said in a statement.
After news of the case broke, Communications Minister Anika Wells told parliament the government would not be swayed.
“We will not be intimidated by threats. We will not be intimidated by legal challenges. We will not be intimidated by big tech. On behalf of Australian parents, we will stand firm,” she said.
2025-11-26 19:10:26
Paid subscribers get a bypass, of course:
Why is Substack asking to verify my age?
The UK Online Safety Act requires restricted access to content that could be considered sensitive for younger audiences. If you see blurred or blocked content, this doesn’t necessarily mean that the content is harmful; it may just fall into a category that must be age-restricted per the requirements of the OSA.
Verification in the UK is optional; however, without verification, you may continue to come across blurred content or be blocked from accessing certain features (a Substack’s chat, DMs, livestreams) with a prompt to verify your age, thus limiting your Substack experience.
Yeah, really…
2025-11-26 18:55:06
This should be obvious, no?
It’s how we deal with football hooligans:
Football (Offences and Disorder) Act 1999 (Notes)
…The measures proposed would provide recourse to the law to prevent a range of offenders from attending matches in this country and travelling to and attending designated matches abroad.
One could try to argue “Yes but we have regulations to force football teams and stadia to stop hooligan violence…” — and yes we do, but that regulation applies only to the UK; the foreign teams & stadia have their own laws.
Maybe they’re looking for a way to avoid painting all Britons as potential Internet hooligans?
2025-11-26 18:38:31
I’m confident a few privacy activists across Europe are seeking GDPR (etc.) arguments with which to critique the mechanisms behind the location-based exposure of “Foreign, Fake MAGA Agents”; I disagree with them, but I think there will be ripples of positive & negative consequences until a new norm is established & understood. Of course I’m not the only one thinking this:
Another former employee, speaking on condition of anonymity because they are not authorized to speak about their work at X by their current employer, said the company had decided against deploying the idea in the past for two reasons: concern about creating a visible target for bad actors to manipulate and fear that the label could backfire. If a bad actor successfully spoofed a U.S. location, the platform would effectively be incorrectly verifying it as a trusted American voice.
It’s pretty simple:
The only way to break this loop is not to play the game, but we’re not in that universe at the moment.
However: there are some worthwhile zingers in the comments, here:
Reddit:
Nikita Bier, X’s head of product development, said they’re working to resolve the use of VPNs to alter account location. How?