Blog of Bryce Wray

Based in the Dallas/Fort Worth area in Texas, U.S.A., I’m a nerdy advocate for static websites and the tools that build them — particularly Eleventy and Hugo.

Hugo sites on Cloudflare Workers — or not

2025-07-12 02:46:00

Longer-term considerations about recently announced changes at Cloudflare.


On further reflection, I’ve decided Cloudflare’s quiet-ish announcement about the Cloudflare Pages platform, about which I first wrote a few weeks ago, bears some more discussion. That’s especially true for sites like this one, built on the Hugo static site generator (SSG).

In fact, the whole thing has led me to think about how one might want to make a Hugo site more portable, to minimize the potential impact of such changes on vendors’ parts both now and in the future. If you, too, have used Cloudflare Pages as a Hugo site’s home and are now pondering what to do, perhaps this post will help you understand your options more clearly.

Our story so far . . .

In case you missed it: Cloudflare essentially put Cloudflare Pages (CFP) on life support a few months back, and began advising potential CFP users to build sites on the newly enhanced Cloudflare Workers (CFW) platform instead.1 While the CFP platform will continue to exist at least for the time being, Cloudflare really wants folks to change over to CFW.

And, to be fair: this may not be that big a deal for sites built on JavaScript-based SSGs. Indeed, the CFW documentation includes a list of recommended site-building frameworks, each of which is a mass of JavaScript dependencies. As a result, for the most part, making CFW work with any of these frameworks can be as simple as npm install. That’s not the case with the Go-based Hugo, which is a binary.

When the CFP-to-CFW issue arose on the Hugo Discourse forum, Joe Mooring of the Hugo project took time to provide great guidance about putting a Hugo site on CFW. This made it easy enough to convert my own simple site from CFP to CFW the same day I found out about all this.

But, in the ensuing weeks, I’ve seen online comments from Hugo users with more complex CFP-hosted sites and, unfortunately, ongoing issues trying to transition to CFW from the much easier CFP. For example, those whose sites depend on Git submodules, such as for externally produced themes, have found CFW currently unsuitable if used with a private repository.2

These users’ frustrations are sufficient to make them reconsider whether it’s worth even bothering to make the transition work vs. just starting over with a competing and, presumably, Hugo-friendlier (or less Hugo-unfriendly) host. Thoughts of this type inevitably lead one to wonder how to make one’s Hugo project as portable as possible, for just such cases.3

After much ensuing head-scratching and research in this vein, including even revisiting a few of my own past posts about the where-to-put-your-static-site issue, I reached some conclusions about how, and where, a Hugo-based site should exist in the light of these new realities. As I walk you through some of my considerations, I hope they’ll help your own decision-making process if you’re entertaining similar contemplations.

Binaries are the biggie

For a Hugo site, the first and foremost issue involves the handling of binaries.

Building with Hugo requires a host whose build image either has the Hugo binary or, at the very least, lets you install it during the build. Additionally: if you’re styling your site with Sass, you must also be able to get the host to install the Dart Sass binary into the correct path. (Even if you presently have no interest in using Sass on your Hugo site, you still may want your host at least to make it possible, just in case you change your mind later.)

With the standard method of deploying to Cloudflare Pages — namely, pushing a commit to a site’s connected Git repository — a Hugo site owner could, with relative ease:

  • Specify the Hugo version (one was included in the CFP build image, but I personally prefer to pick the version myself).
  • Use the Dart Sass binary and specify the version.

On the other hand: the Cloudflare Workers build image offers a pre-selected Hugo binary, perhaps not the latest, and doesn’t allow you to pick a version.4 Moreover, the CFW build image doesn’t offer Dart Sass at all. Of course, the latter isn’t terribly surprising since, again, Cloudflare expects most SSG users to be running JS-based SSGs, and those usually work with Sass through some interaction with the Sass package5 rather than the Dart Sass binary.

What about the competition? Here’s how the only competing hosts I’ll mention6 fare in this regard:

  • Hugo — The build images for Netlify, Render, and Vercel provide Hugo and let you specify the version. Netlify and Vercel give you two ways to specify the HUGO_VERSION environment variable: through the GUI, or in a config file — netlify.toml or vercel.json, respectively. With Render, the only way to set the Hugo version is with a shell script; otherwise, as of this writing, you get a Hugo version from multiple years ago.
  • Dart Sass — With Netlify, you can get the Dart Sass binary and specify its version through scripting in netlify.toml, but not through the Netlify GUI. As for Render and Vercel, I know a shell script suggested by Hugo’s Bjørn Erik Pedersen worked at one time, but I haven’t tried it on either host recently.

The bottom line on these binaries and the three hosts’ native deployment environments: you can specify your chosen Hugo version on all three hosts (although not so easily with Render), but using and specifying the Dart Sass binary is safest with Netlify.

However, in my experience, it’s easier for a Hugo user to solve the whole problem with any of these hosts by using a separate CI/CD provider, either GitHub Actions or GitLab CI/CD. This host-agnostic approach gives you much more control over which binaries you download, which versions you get, and other factors that are important for Hugo users.7 Although explaining the process is beyond this post’s scope (if needed, refer to the Hugo “Host and deploy” docs), suffice it to say that each host I’ve discussed here allows building sites through both GitHub Actions and GitLab CI/CD.8
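For concreteness, a host-agnostic GitHub Actions workflow along these lines can pin both binaries before building. Everything here — versions, download paths, the omitted deploy step — is an illustrative assumption rather than a prescribed setup:

```yaml
# Hypothetical workflow sketch: install pinned Hugo and Dart Sass binaries,
# build, then hand off to whichever host's deploy step you use.
name: Build Hugo site
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      HUGO_VERSION: "0.148.0"      # placeholder; pin your own
      DART_SASS_VERSION: "1.89.2"  # placeholder; pin your own
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive  # brings along theme submodules, too
      - name: Install Hugo (extended)
        run: |
          curl -sSL "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.tar.gz" \
            | sudo tar -xz -C /usr/local/bin hugo
      - name: Install Dart Sass
        run: |
          curl -sSL "https://github.com/sass/dart-sass/releases/download/${DART_SASS_VERSION}/dart-sass-${DART_SASS_VERSION}-linux-x64.tar.gz" \
            | tar -xz -C "$HOME"
          echo "$HOME/dart-sass" >> "$GITHUB_PATH"
      - name: Build
        run: hugo --gc --minify
      # Deploy step omitted: use your host's CLI or deploy action of choice.
```

Because the workflow, not the host, controls the downloads, the same file carries over largely unchanged if you switch hosts later.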

Note: To be fair, I remind you of my 2022 findings concerning potential issues in using GitHub Actions with a Vercel-hosted Hugo site in which Hugo’s native image-processing functionality is in use. However, I haven’t tested sufficiently to know whether the problem still exists, and that was three whole years ago; so I suspect (hope?) that there have since been plenty of improvements to the infrastructure that even Vercel’s free tier uses.

One heretical afterthought to consider

Before I press on to the finish, I’ll dwell briefly on what may be the elephant in this discussion’s room: the choice of SSG itself.

As noted, Cloudflare’s recent changes are potentially much more of a hassle for Hugo users than for those using JavaScript-based SSGs. But, as you probably already knew, Cloudflare isn’t alone in this respect. Indeed, most hosting platforms clearly favor the JS-based tools which have long constituted the overwhelming majority of site-building products; and this favoritism likely will only grow over time.

So, is it time for you, a Hugo user, to throw in the towel and jump ship to a different, JS-based SSG? Will that make your site more future-proof?

Well, only you can make that call. If you do switch, I can tell you from my years of experience that the Hugo-to-whatever conversion process will be anywhere from fairly easy to excruciating, depending largely on two factors: (a.) how big your site is; and (b.) how much Hugo-specific customization your site has. Mine has several hundred pages and more than a little Hugo-ish code that would be a bear to translate, so this site isn’t a likely candidate for now.

That said, my long-time readers know I have strayed from the Hugo ranch numerous times in the site’s nearly six years of existence, so I can offer a little more specific advice on the subject of possibly switching from Hugo to something else.

Of the JS-based SSGs I’ve used over the years to build this site whenever it wasn’t a Hugo project, the only one on Cloudflare’s aforementioned list of recommended platforms is Astro; and, mind you, my time on Astro was minuscule compared to the many months I used Eleventy. (I also used the now largely moribund Gatsby, and even it gets a little love in the current Cloudflare Workers documentation — in fact, more than Eleventy, much less Hugo.) Even when just tinkering, I haven’t used either Astro or Eleventy extensively in a couple of years; but I feel either is a solid alternative as a site-building platform, one to which the typical JS-favoring host is at least less averse than it is to Hugo.9

So, where?

All right, let’s get to the bottom line. After I’d given all this thought to how I could make my own Hugo site more portable and thus less vulnerable to the whims of different hosts, what did I end up doing about the site’s hosting?

In fact, I did nothing. As of this post’s initial publication, the site is still on Cloudflare Workers. It all still works, after all. But, now, I know how to make a quick exit if I do choose. It’s my hope that what I’ve shared in this post will give you similar knowledge.

But where would I go if I don’t stay with CFW? It would be between Netlify and Vercel. (While I admire Render as a company, I’m not as comfortable with configuring for it, especially where Hugo-specific things are concerned, as I am with the other two.) If I had to pick a winner, it would come down to how wedded I’d be to using external CI/CD, as I now do with the CFW site and did with its CFP predecessor. That’s because, in my testing, I found external CI/CD somewhat easier with Vercel than with Netlify, while Netlify’s native GUI provides better support for Hugo than does Vercel’s. So it really would come down to whether I’d prefer external CI/CD. If yes, it would be Vercel. If no, it would be Netlify.10


  1. Yeah, I know: CFP is, and has always been, built atop CFW anyway; but you get the idea. ↩︎

  2. On June 23, a commenter on the Cloudflare Developers Discord said, “Did a little bit of checking, looks like ssh urls in submodules are not currently supported[.]” Seeing a reference to this, someone on the unofficial Hugo Discord observed, “so if u have a private repository, the URL alone wouldn’t allow CF to read the repository.” ↩︎

  3. To quote Foghorn Leghorn: “Fortunately, I keep my feathers numbered for just such an emergency.” ↩︎

  4. Update, 2025-07-18: I later learned, via Discord, from a fellow Hugo user that you actually can select the Hugo version with the Workers build image, in the same way as you would’ve with Pages — i.e., through use of a HUGO_VERSION environment variable. It’s just not clearly documented. I don’t know whether a similar capability exists for using a DART_SASS_VERSION environment variable to get the Dart Sass binary; the HUGO_VERSION trick likely works because there already is a Hugo binary in the Workers build image, but the same doesn’t appear to be true for a Dart Sass binary. ↩︎

  5. Incidentally, another exception involves someone using Sass with the Rust-based Zola SSG. Zola uses the Rust grass crate for a “more-or-less”-ish compatibility with Dart Sass. I say “‘more-or-less’-ish” because the latest release of grass, at least as of this post’s initial appearance, is lagging quite a bit behind that of the official Dart Sass binary. Whether that matters much is up to each Sass-using Zola site owner; but, were I that user, I wouldn’t like it very much, especially given the fairly active cadence of Dart Sass updates. Also on the subject of Zola: currently, its binary isn’t in the CFW build image. ↩︎

  6. A quick review of the free tier of Digital Ocean’s Apps Platform shows that DOAP remains as unsuitable as I found it in 2023, thus deserving no real mention in any comparisons herein. ↩︎

  7. One notable example is if you like to use Git Info variables. Most hosts’ “native” methods don’t make that very easy. ↩︎

  8. Be aware that, if you do the Hugo build process on the CI/CD provider, you’ll need to experiment with the correct location of the respective config file. For example, it may need to be in your Hugo directory’s /static directory rather than the usual location (the root directory), but my own tests showed me this isn’t always true and can vary according to the specific workflow code you’re using to deploy the site from within the CI/CD provider. Again, experiment. Failure to put the file in the correct location means that, when the CI/CD provider turns the process over to Netlify, Render, or Vercel, the latter won’t “see” the config file and the build likely will error out rather than proceeding. ↩︎

  9. A few days ago, long-time Hugo user Patrick Kollitsch converted his website to Astro. Please note that he is an extremely knowledgeable coder, as one look at his site repository will make clear, so his switch isn’t necessarily a guide for all; but his site is a large one with several years’ worth of content, so I salute the effort he undertook to make the change. ↩︎

  10. Still, with a site that regularly needs a lot of changes, one would be better off using external CI/CD with Netlify to circumvent the Netlify free tier’s monthly build limits. I wrote about this very thing five years ago and the situation hasn’t changed. ↩︎

Reply via email

Mixed nuts #15

2025-06-27 02:12:00

Thoughts on site hosting, AI-related angst, mangled past participles, and gaming on Fedora.


For those who’ve never read either the previous entry in this series or any of its like-named predecessors, each “Mixed nuts” post allows me to bloviate — er, opine — regarding multiple and often unrelated subjects, rather than sticking mainly to one topic. Today’s latest in the line includes a follow-up to my recent post about this site and Cloudflare, then proceeds to what for me is an increasingly sore point where AI and text are concerned. Whether it gets better from there will be yours to decide.


It’s been a few weeks since I issued that post about how Cloudflare, having put its Pages site-hosting product into maintenance mode, is urging Pages users to switch their sites to the Cloudflare Workers platform. At the time, I noted that I’d made the transition on this simple site without too much pain. However, since then, a number of online conversations have made me feel I unnecessarily minimized the effort such changes might require. That goes double for my fellow Hugo users, since sites built on other, JavaScript-based tools have it considerably easier. The bottom line is that some Hugo users should look into the free tiers of alternatives such as Netlify, Render, and Vercel. I already explained in 2023 how each such alternative has both upsides and downsides.

This is for those who insist they can easily spot AI-generated text. Many of us old farts were using bulleted lists and em dashes and en dashes long before artificial intelligence was no more than a (usually) reliable plot device for sci-fi, much less the fever dream of tech bros. So, for God’s sake, stop using those as “proofs” that some text is AI-generated. As for my own writing, I reiterate what I said over two years ago: “. . . although the stuff on this site . . . may not be any good, it always has been and will be written by a human, namely me.”

I wish I could cease noticing what seems to be the increasingly rampant mangling of past participles (e.g., “have ran” or “have went”). I see it and hear it online, multiple times, every day. What further irks me about it is that, more often than not, the people committing this linguistic butchery seem to be bright folks who should know better — especially when this happens in a scripted video or presentation, for which you’d think (hope?) that one or more people actually read through the text before its delivery. All that said, I’ve also had to accept that many “should-know-better” types, when writing online, apparently can’t be bothered with the difference between “you’re” and “your” or between “it’s” and “its,” so . . . unnggh.

The Fedora distribution of Linux may drop support for 32-bit packages next year, likely endangering the Steam-hosted gaming I’ve been enjoying on that distro for a while now. At least, this action will endanger it unless Flatpak-supplied Steam is immune to the problem, and I lack the knowledge to discern the accuracy of the various online opinions about this. (See also this GamingOnLinux link.) Of course, there are many other Linux distros, but I don’t know how soon they, too, may follow the same path. Eventually, they’ll all have to take similar actions to avoid the Year 2038 problem; but, even if I were to survive to that point, I’d be in my early eighties and, likely, well past caring. YMMV.


From Pages to Workers (again)

2025-05-28 05:59:00

After I learn of changes in Cloudflare’s priorities, this site’s deployment process goes backward down memory lane.


This site has lived on Cloudflare Pages (CFP) for most of the last four years, having been initially on Cloudflare Workers (CFW) as a “Workers site” after stays on several other web hosts. I’d gained the distinct impression that CFP was the path on which Cloudflare intended to stay where hosting static websites was concerned.

This morning, I learned not only that this was no longer the case but also that I’d “missed the memo” about it, and from a good while ago at that. A few hours of docs-reading and tinkering later, I had migrated the site back to running on a Cloudflare Worker. (Cloudflare doesn’t call them “Workers sites” anymore.)

A buried lede

Every morning, one of my usual practices is to look through the Hugo Discourse forum to see what’s going on in Hugo-ville. Today’s visit brought me up short with a discussion of recent Cloudflare changes and their effect on Hugo users’ hosting on it. Nearly two months earlier, Cloudflare had issued a blog post that was mostly about enhancements to CFW. I had seen the post — the Cloudflare Blog is among many I follow via RSS — but apparently hadn’t scrolled down far enough to catch what I now consider its buried lede, at least for CFP users such as I:

Now that Workers supports both serving static assets and server-side rendering, you should start with Workers. Cloudflare Pages will continue to be supported, but, going forward, all of our investment, optimizations, and feature work will be dedicated to improving Workers. We aim to make Workers the best platform for building full-stack apps, building upon your feedback of what went well with Pages and what we could improve. [Emphases are Cloudflare’s.]

In short: the CFP platform is now largely in maintenance mode, while its parent platform, CFW, is where Cloudflare will be investing its future dev efforts.

I was chagrined, but also got the message. Even though someone on the Cloudflare Discord later told me that I could probably keep things as they are for now, the same person also said that migrating the site to CFW still would be the wisest choice. As I would later mention elsewhere on Discord:

I know CF says that existing Pages projects are OK, but it hasn’t been that long since CF was urging people to transition from Workers projects to Pages projects, and now the opposite seems to be the case . . . Not crazy about having to [migrate], but would rather move with the CF tide than be on a maintenance-only platform.

From CFP back to CFW

This meant I’d have to make some changes. And, as the saying goes, there was bad news and good news.

The bad news: Hugo is not among the recommended frameworks. Indeed, all of the current list’s members are JavaScript-based, so one might pessimistically suppose Hugo will be excluded for a while. Also: while there definitely is Cloudflare documentation for migrating from CFP to CFW, following it is no walk in the park.

The good news: Hugo’s amazingly helpful Joe Mooring had created an example repository which showed how to do this, right down to a custom build script and the necessary configuration file. So I adapted those for my site’s purposes, created a new CFW project which would handle my site’s contents, and did the usual site-swapping DNS stuff to point my domain to that Worker rather than a CFP project.

One aspect that initially slowed the migration process was the site’s existing use of a Pages Function to manage my Content Security Policy and the caching of static assets. That was a problem because a Pages Function actually is a Worker, so you can’t just move it, unchanged, into another Worker and expect good results. Fortunately, Cloudflare’s wrangler utility, used for doing a ton of stuff with both CFW and CFP, can compile the Pages Function code into a single file that works within a properly configured Worker.
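As a rough sketch of that compilation step (the flags and output path here are my assumptions; check wrangler’s documentation for your version):

```shell
# Hypothetical sketch: bundle a Pages project's /functions directory
# into a single Worker script that a Workers project can use.
npx wrangler pages functions build --outdir=./worker-dist
# The generated bundle then serves as the Worker's entry point,
# referenced from the project's wrangler configuration.
```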

The only remaining tricky thing for me was that, since October, 2023, I’d been doing my Hugo builds locally and then deploying the results directly to CFP, which I’d found ’waaaaay faster than the usual method of pushing changes to a linked online repository and then waiting for a cloud infrastructure to build and deploy the site. In addition, my way had been letting me push changes to the online repo without having to rebuild online as well, which was a more comforting way to manage version control. Thus, I ended up doing even a little more local retooling but got it to work by (1.) disconnecting the online repo from the CFW project and (2.) changing my local script to deploy to the CFW project rather than, as before, the CFP project.
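A local script for that build-then-deploy flow might look something like this sketch (names and flags are my assumptions, not the author’s actual script):

```shell
#!/bin/sh
# Hypothetical local deploy sketch: build locally, deploy the output
# with wrangler, and push the source for version control only.
set -e
hugo --gc --minify      # build the site into ./public
npx wrangler deploy     # deploy per the project's wrangler config
git push origin main    # update the online repo; no remote build runs
```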

It still ain’t broke

During all of this rigamarole today, I did give some serious thought to whether I might be better off simply heading back to one of the previous hosts I’d used, rather than hoping Cloudflare doesn’t make it even more complicated down the line to host my humble little site (for the big zero dollars a month I pay for it, of course).

In the end, I stuck with Cloudflare, simply because it quickly became clear that, annoyances notwithstanding, none of the alternatives was truly any better. Besides, I’d still have to deal with various idiosyncrasies, regardless of which host I chose. It wasn’t quite a case of “If it ain’t broke…” — since, after all, I’d started the day assuming it wasn’t “broke” as a CFP site, only to end up deciding otherwise — but it was close enough.


Loading print CSS only when needed

2025-05-22 04:15:00

How to help a small percentage of visitors without inconveniencing the vast majority.


Since my site is a blog (rather than, e.g., a place for obtaining things like tickets to shows), you might think no visitor would need or want to print any of its pages. However, I occasionally hear from those who do, one of whom also requested that I provide print-specific CSS to make the results look better. I did, but knew it also meant I was making my other, non-printing visitors download CSS that they neither needed nor wanted.

As of yesterday, that is no longer a problem.

I’ve noted here before that I won’t let AI write my posts but I will make use of AI when I need help with code. This post is about the latter case.

From time to time, I think about how I might better handle the site’s delivery of CSS. For example, I practice what I call “sorta scoped styling,” wherein I split the CSS into files that get loaded only on certain types of pages. However, this wouldn’t help with the print CSS. While I did mark its link as media="print" — which, among other things, makes browsers treat it as a lower-priority download — I wanted to find a way to load it conditionally, only when that small number of users actually tried to print one of the site pages. So, yesterday, I asked ChatGPT:

Is there a way, through JavaScript or other coding, to have a browser download a website’s print-specific CSS file only if the user is actually printing a page? The obvious intent is to reduce how much CSS the website must deliver, especially since a relatively small percentage of users actually print web pages anymore.

That began a “discussion” which, although the AI’s response contained some of the hallucinatory behavior for which LLMs have become infamous, successfully gave me code which met my needs.

The code uses the matchMedia() method (and, for maximum compatibility, it also acts on beforeprint events) to detect an active print request from the browser. Only when such a request occurs will the code load the print CSS; so, now, only those users who are actually printing content from the site will download the additional styling to make their printouts look more “print-y” and less “web-y,” so to speak.

Armed with this AI-created JavaScript code submission, I added it to the appropriate partial templates for my Hugo site’s purposes.1 (For those who choose to disable JavaScript, the noscript section at the end delivers the print CSS anyway, just the way everyone else formerly got it.)

{{- /* for those who've requested CSS for printing */ -}}
{{- $printCSS := resources.Get "css/print.css" -}}
{{- if hugo.IsProduction -}}
  {{- $printCSS = $printCSS | resources.Copy "css/print.min.css" | postCSS | fingerprint -}}
{{- end -}}
{{- with $printCSS -}}
  {{ $safePrintLink := $printCSS.RelPermalink | safeURL }}
  <script>
    function loadPrintStylesheet() {
      if (document.getElementById('print-css')) return; // Prevent multiple loads

      const link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '{{ $safePrintLink }}';
      link.type = 'text/css';
      link.media = 'print';
      link.id = 'print-css';
      {{- if hugo.IsProduction }}
      link.integrity = '{{ $printCSS.Data.Integrity }}';
      {{- end -}}
      document.head.appendChild(link);
    }

    // Use media query listener
    const mediaQueryList = window.matchMedia('print');

    mediaQueryList.addEventListener('change', (mql) => {
      if (mql.matches) {
        loadPrintStylesheet();
      }
    });

    // Fallback for browsers that fire beforeprint/afterprint
    window.addEventListener('beforeprint', loadPrintStylesheet);
  </script>
  <noscript>
    <link rel="stylesheet" href="{{ $printCSS.RelPermalink }}" type="text/css" media="print"{{- if hugo.IsProduction }} integrity="{{ $printCSS.Data.Integrity }}"{{- end -}}>
  </noscript>
{{- end }}

This works fine on Chrome and Safari, as well as browsers based on their engines (Blink and WebKit, respectively), but I did find one oddity in Gecko-based browsers such as Firefox. While other browsers will load the print CSS when their respective Print Preview windows pop up, a Gecko-based browser will not load it if “Disable cache” is enabled — as often is the case when one is using the browser’s development tools. In that specific circumstance, you end up having to cancel out from the Print Preview window and then load it again to see the desired effect. By contrast, the other browsers will properly load the print CSS even with “Disable cache” enabled.

That said, now we’re talking about a glitch that affects an even tinier number of users than those who have any need for my site’s print CSS. Namely, they’re users who (a.) are using a Gecko-based browser and (b.) want to print from my site and (c.) are viewing my site with “Disable cache” enabled. And, even for them, closing and reloading Print Preview will fix the problem.


  1. My original also contains code that, in production, enables a serverless function to provide a nonce for Content Security Policy purposes. ↩︎


Thoughts on two topics

2025-04-29 01:26:00

Whither the web, plus my reaction to some Fediverse drama.


This originally was going to be about just one thing, namely the overarching subject of two recent and significant blog posts by Open Web Advocacy (OWA). However, I needed to add a second “thing” when some unexpected-to-me Fediverse drama occurred even before I could actually start writing this.

Topic 1: Whither the web

We’re a few months away from a court ruling in the antitrust case wherein Google has been found guilty of having a monopoly in the web search biz. If the U.S. Department of Justice (DOJ) were to get its way in the ruling, Google would have to divest itself of the Chrome browser and stop paying makers of other browsers for defaulting to the Google search engine. I’ve already written in two posts this year about unintended bad results from those actions, so I was pleased in the last few days to see two major OWA blog posts on the subject.

  • Break Google’s Search Monopoly without Breaking the Web (2025-04-23) — This is a truly massive post (its downloadable PDF version is nearly 130 pages long) but it has a “Key Takeaways (TL;DR)” section at the top that aptly summarizes it. I suspect its enormous size and depth result from a wish by OWA that it will be sufficiently substantial for use as a position paper suitable for reference in the appeals process that almost certainly will follow the ruling. That said, I suggest you read as much of the full thing as your time allows. Although it’s not without flaws and I don’t agree with every tiny part of it, I think that, overall, its points — especially where the fates of Mozilla Firefox and the dominated-by-Google Chromium project are concerned — are very much worth reading and considering, especially by those who have power, either now or in the future, over how it all shakes out.
  • Is It Worth Killing Mozilla to Shave Off Less Than 1% From Google’s Market Share? (2025-04-28) — Sticking to the specific subject of how the ruling could doom Mozilla if the DOJ gets what it wants, this post is a much more normal length than its predecessor. I’d hope that even those who hate Mozilla for some of its actions and positions (as mentioned in the post’s closing section) can see OWA’s compelling case for not throwing out the Mozilla baby with the Google-is-a-monopolist bathwater. As OWA notes, there are other options that swat Google without starving Mozilla and, with it, the Firefox browser and its Gecko browser engine.

Topic 2: Reacting to some Fediverse drama

In a normal day up until this past Friday, I may spend all of two or three minutes looking at posts on Mastodon, so Fediverse drama easily can happen without my knowing about it. Such was the case in the last few days, when I suddenly learned of a firestorm in certain Mastodon instances over the instance I’d been using for some time, Fosstodon.

The short story is that:

  • A certain Fosstodon moderator got called out on certain things he’d said on Mastodon and other social platforms.
  • Some users on other instances became angry at Fosstodon’s initial response to this (essentially, “regardless of that moderator’s personal views, he’s doing the job for us as requested”).
  • A movement began to either limit or ban Fosstodon’s interactions with those instances. Some did so, while others threatened to take similar actions within a few days.
  • The moderator in question did depart at some point during the height of the drama, but the damage had been done and limits/bans continued.

Thus, in order to be on an instance that wasn’t (to my knowledge) in the crosshairs of such actions of fedi-banishment, I migrated my account from Fosstodon to another tech-oriented instance, Hachyderm. There, I am @[email protected], to which this site’s appropriate links have pointed since yesterday.

I recommend Corey Snipes’s post, “Thoughts on Fosstodon,” as a good explanation of how it felt to us Fosstodon users who found ourselves unexpectedly involved in a war of angry words and accusations. Note that he, too, moved from Fosstodon to Hachyderm as a result of the goings-on. Also worth reading are these posts from the founders of Fosstodon:


Matters of trust

2025-03-25 03:55:00

Sometimes, gut feelings get punched in the gut.


It can be unsettling to recognize just how much trust software vendors expect us to place in their wares. The same can occur when, having given that level of trust, we find ourselves perhaps wishing we hadn’t. The last few weeks have provided two key cases in point for our examination: the Mozilla Firefox Terms of Use (TOU) SNAFU and the purchase of Strongbox.

The Firefox TOU controversy

Mozilla issued two significant blog posts near the end of February, 2025: “Introducing a terms of use and updated privacy notice for Firefox” and, two days later, “An update on our Terms of Use.” As the second post’s title suggests, the first post had provoked an avalanche of questions and concerns from the browser-watching community, but the second post helped little or not at all in stemming the flow of worries.

There was sufficient drama surrounding all this that I suspect you’ve already read and/or seen plenty about it, so I’ll avoid adding to it here. If you need additional context, just search for something like Firefox Terms of Use controversy and you’ll probably get more results — with, yes, a goodly mixture of opinions and sometimes well-founded comments — than you could hope to see. Suffice it to say these posts (and the resulting drama) shocked the increasingly small bloc of Firefox users, as well as those non-Firefox users who’d nonetheless wished the browser well in its quest to survive in a world trending toward domination by Chrome and Chromium.

Mozilla shed a ton of good will, not to mention much of the trust of Firefox’s previously most loyal users, with both these actions and Mozilla’s attempts at explaining them. Just exactly why Mozilla felt it necessary to make the changes is the key, I think, but at this writing we don’t know the answer to that. It comes down to which kind of legal exposure that the language in the TOU and the updated Privacy Notice was intended to prevent: i.e., was it to protect Mozilla regarding data-selling it already had been doing, or data-selling it was planning to do? Neither is a good look for a company and browser which have long proudly presented themselves as privacy-friendly.

Selling Strongbox to Applause — but not applause

Until a few days ago, security-minded Apple device users looking for KeePass-compatible password management software could safely select the Strongbox app for macOS and iOS. That changed with the announcement that Applause was buying Strongbox. Online communities of Strongbox users almost uniformly said they felt betrayed by this takeover of the app. (For example, check this Reddit thread.)

The particularly sensitive nature of the data one entrusts to a password management app only magnified these users’ outrage over the idea that Strongbox would now be in the hands of an entity which they hadn’t selected and didn’t want, especially one which has been known for (among other things) adding “phone-home” analytics to other apps it’s acquired. Moreover: unlike some other KeePass-compatible apps, Strongbox is not completely open-source, so “ah, just fork it and move on” isn’t an option even for those willing and technically able to go that route.

Ah, life

As I wrote a couple of months ago:

. . . I increasingly realize and reluctantly accept that most of my choices in life are from among products and services controlled by really nasty people and/or entities . . .

. . . but, that said, I also increasingly realize and reluctantly accept that, from time to time, it’ll turn out that previously trusted entities and tools either are no longer worthy of that trust or, in fact, may never have been so in the first place. That’s life, I guess.
