Alec is a technologist, writer & security consultant who has worked in host and network security for more than 30 years, with 25 of those in industry.

Hosting a WebSite on a Disposable Vape | BogdanTheGeek’s Blog

2025-09-15 23:04:59

This is kind of the geek version of one of those “you can create an entire market garden in your backyard from newspaper and old soda cans” TikTok posts. Compute power is so cheap nowadays that it can never be taken out of public reach, and society needs to adapt to technology rather than vice-versa.

Enjoy!

https://bogdanthegeek.github.io/blog/projects/vapeserver/

HT Jim Finnis.

I strongly suspect that Bluesky “content moderation” and “safety” is following the same trajectory that Facebook did, and that “distribution” excuses it…

2025-09-15 19:36:24

In case you missed it: a bunch of people on Bluesky posted about the recent assassination / murder of a prominent right-wing American figure with a statement which in Latin* would be “requiescat in urina” — and then they had their posts blocked:


"rest in piss"-gate has turned into a moderation nightmare for bluesky, which nonetheless refuses to explain its underlying rationale even as (or perhaps because) it seems to be changing in real time. after my own suspension, I spoke to dozens of people who got in trouble for using the same phrase

Nathan Grayson (@nathangrayson.bsky.social) 2025-09-12T18:44:19.886Z

Truthfully: this sounds like the behaviour of a text classifier which — working from a small training set of postings made by challenging individuals — decided to go do a mass-takedown of offending content.

Colloquially: “a bot worked out that some words were ‘bad’ and took down everything containing them.”
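
To make that guess concrete, here is a toy sketch in Python of what such behaviour could look like; it is purely an invented illustration (the sample posts, threshold and function names are all made up), not anything known about Bluesky’s actual moderation tooling. “Trained” on a handful of flagged posts, the classifier learns little more than the phrase itself, after which every post containing that phrase gets taken down, regardless of who said it or why.

    # Toy sketch only: an invented illustration of keyword-style moderation,
    # not a description of Bluesky's actual systems.
    from collections import Counter

    # Hypothetical tiny "training set" of posts a human flagged as violating.
    FLAGGED_SAMPLES = [
        "rest in piss you ghoul",
        "rest in piss lol",
        "good riddance rest in piss",
    ]

    def learn_bad_tokens(samples, threshold=0.5):
        """Treat any word appearing in at least `threshold` of the flagged samples as 'bad'."""
        counts = Counter(word for s in samples for word in set(s.lower().split()))
        return {word for word, n in counts.items() if n / len(samples) >= threshold}

    BAD_TOKENS = learn_bad_tokens(FLAGGED_SAMPLES)  # here: {"rest", "in", "piss"}

    def should_take_down(post):
        # Mass-takedown rule: flag any post containing all of the learned tokens.
        return BAD_TOKENS.issubset(set(post.lower().split()))

    print(should_take_down("rest in piss"))                        # True: the phrase, any context
    print(should_take_down("he can rest in piss for all I care"))  # True: swept up too
    print(should_take_down("rest in peace, old friend"))           # False

The point of the toy: with only a handful of training examples, the “model” is indistinguishable from a keyword blocklist, which is exactly the blanket-takedown behaviour described above.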

Between this and the recent deployments of age verification in the USA and in the UK, I am wondering whether Bluesky is so desperate not to run foul of Government attention / regulation / fines that it has taken to proactive and deep compliance, in the knowledge that “the nerds will be okay, they can just run up another PDS or implement client haxx and thereby circumvent the controls.”

That’s not a healthy way to approach bad regulation.


[*] translated to confuse image text classifiers, just in case they’re still being zealous

2015: “AI Images Look Ridiculously Fake” | 2025: “AI Images Look Ridiculously Real” | The solution to risk is education

2025-09-15 18:35:46

Ten years ago we were mocking image generation algorithms which produced nothing but psychedelic dog faces; now everyone older than 25 is panicking about fake pictures of bad teen bedrooms being used to empathise real kids into being abducted.

WRONG-O. This could be a predator who has synthesized an entirely fabricated persona for the purposes of targeting and grooming children. These fake profiles can be used to lower your child’s guard in order to collect and exploit their personal info:

What if the kid responds to the scammer with a deepfake?

My take: Everybody under ~25 doesn’t really care about this stuff, and shouldn’t. It will come out in the wash.

What everybody over ~25 ought to be doing is investing time in educating kids into the kinds of critical thinking which they will need to survive in a world of plausible lies. This is a challenge we faced with Photoshop, with Television, with Radio, with Airbrushing and Magazines, with Telegrams, Newspapers, Pamphlets, and the Printing Press.

Somehow we’ve always adapted to change.

Just because you’re so old that you don’t have the “this might be fake” filters working for you, doesn’t mean that the next generation will necessarily fall for the deception. Instead: teach the kids. Help them become critical thinkers. Tell them to expect untruth. Show them how to detect falsehoods.

Especially when those falsehoods come from politicians, activists, and other politically vested interests.

References:

The LinkedIn post contains some pretty good general security advice, but I find the premise to be ill-founded.

Postscript

Yes, everyone already did the “…nobody knows you’re a dog” or “…thinks you’re a dog” jokes, 10+ years ago. It’s okay, we’re good for that, but thank you for reading this far anyway.

“Why does remy have such strongly held opinions on internet censorship?”

2025-09-14 08:00:38

This:

Center for the Alignment of AI Alignment Centers | …this is a wonderful and absolutely necessary tool for AI policy experts

2025-09-13 21:43:01

Perfect.

Every day, thousands of researchers race to solve the AI alignment problem. But they struggle to coordinate on the basics, like whether a misaligned superintelligence will seek to destroy humanity, or just enslave and torture us forever. Who, then, aligns the aligners?

We do.

https://alignmentalignment.ai/