Alec Muffett

Alec is a technologist, writer & security consultant who has worked in host and network security for more than 30 years, with 25 of those in industry.
RSS preview of Blog of Alec Muffett

Age Verification causing people to stop installing security updates on iPhone

2026-04-03 16:19:31

What could possibly go wrong?

Australian eSafety: “It is a ‘dark pattern’ to permit people to 1/ amend mistakes 2/ be in control of information about themselves 3/ be obliged to validate abuse reports”

2026-04-02 04:12:25

Clearly the perspective which the Australian eSafety Commissioner brings to the table is “users are untrustworthy scum and must be whipped into control” – basically like MPs, then.

Also: “false negatives” are not a thing and never occur; if an AI declares you to be likely “too young” it cannot possibly be a technological problem.

https://www.bbc.co.uk/news/articles/cy4181pkxl2o

The Timeless Fear of Corrupting the Youth | WSJ

2026-03-28 15:02:58

Excellent piece:

The lesson from these examples isn’t that protecting children online is misguided or an unworthy goal. It is that the means proposed to achieve this end pose significant risks to human rights, and that the tools created for that purpose can easily become instruments of broader control over speech once governments acquire them.

https://www.wsj.com/politics/social-media-freedom-speech-meta-youtube-ruling-32aaee3b archived at: https://archive.ph/PMCjL

The Big Tech verdicts you’re cheering for are actually terrible for free speech | The Foundation for Individual Rights and Expression

2026-03-28 03:23:28

Read this:

Declaring the target to be “design features” — such as infinite scroll or notifications — instead of speech doesn’t change things. The First Amendment isn’t fooled by synonyms, and what these lawsuits target is, inescapably, speech. Some allegations are aimed at content hosted by platforms that some perceive as harmful. And the ways platforms arrange, display, and choose how users consume content are editorial choices that are protected by the First Amendment. That those features might be designed to keep users’ attention is hardly a groundbreaking discovery. That is the point of all media. Imposing liability because speech is too appealing would be a breathtaking incursion on free speech.

https://www.fire.org/news/big-tech-verdicts-youre-cheering-are-actually-terrible-free-speech

‘…all of the evidence submitted to “prove” Meta knew their product was harmful was internal safety research they were conducting to improve their moderation and detection’

2026-03-27 22:56:16

There are a bunch of “must read” threads on Bluesky today which, to me, indicate that the people crowing about a “safer internet for children” because of the Meta lawsuits are actually making things worse for everybody, including kids. I can’t embed entire threads, so please click through and read up & down plus the links; you should follow the authors, too:

But all of the evidence submitted to "prove" Meta knew their product was harmful was internal safety research they were conducting to improve their moderation and detection, and that deeply concerns me and everyone I know in the T&S industry.

rahaeli (@rahaeli.bsky.social) 2026-03-26T05:30:31.339Z

…and:

Way back at the dawn of time, there was a LiveJournal community called "pro-anorexia", which was about what you'd imagine from the name. This was before I was in charge of T&S at LJ, and when I came on, I was very curious why nobody had removed it!

rahaeli (@rahaeli.bsky.social) 2026-03-26T03:41:30.712Z

…and:

I found this so mind blowing when I started getting into content moderation. There are literal experts on any content category you could think of where their entire job is to understand the social tradeoffs of removing that content. Trust & Safety alters the way you think about social problems.

Jess Miers ?? (@jmiers230.bsky.social) 2026-03-26T17:51:22.462Z

‘The Encryption Problem: Where “Design Liability” Leads’ | … IF YOU ARE CHEERING META LOSING 2X RECENT LAWSUITS YOU ARE SUPPORTING THE END OF ONLINE PRIVACY

2026-03-27 17:06:11

…under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications […] encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors — choices made by people, not by the platform’s design.

“Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For” | Techdirt

https://www.techdirt.com/2026/03/26/everyone-cheering-the-social-media-addiction-verdicts-against-meta-should-understand-what-theyre-actually-cheering-for/