2025-06-27 02:12:00
Thoughts on site hosting, AI-related angst, mangled past participles, and gaming on Fedora.
For those who’ve never read either the previous entry in this series or any of its like-named predecessors, each “Mixed nuts” post allows me to bloviate — er, opine — regarding multiple and often unrelated subjects, rather than sticking mainly to one topic. Today’s latest in the line includes a follow-up to my recent post about this site and Cloudflare, then proceeds to what for me is an increasingly sore point where AI and text are concerned. Whether it gets better from there will be yours to decide.
It’s been a few weeks since I issued that post about how Cloudflare, having put its Pages site-hosting product into maintenance mode, is urging Pages users to switch their sites to the Cloudflare Workers platform. At the time, I noted that I’d made the transition on this simple site without too much pain. However, since then, a number of online conversations have made me feel I unnecessarily minimized the effort such changes might require. That goes double for my fellow Hugo users, since sites built on other, JavaScript-based tools have it considerably easier. The bottom line is that some site owners should look into the free tiers of alternatives such as Netlify, Render, and Vercel. I explained back in 2023 how each such alternative has both upsides and downsides.
This is for those who insist they can easily spot AI-generated text. Many of us old farts were using bulleted lists, em dashes, and en dashes back when artificial intelligence was no more than a (usually) reliable plot device for sci-fi, much less the fever dream of tech bros. So, for God’s sake, stop using those as “proof” that some text is AI-generated. As for my own writing, I reiterate what I said over two years ago: “. . . although the stuff on this site . . . may not be any good, it always has been and will be written by a human, namely me.”
I wish I could cease noticing what seems to be the increasingly rampant mangling of past participles (e.g., “have ran” or “have went”). I see and hear it online, multiple times, every day. What further irks me is that, more often than not, the people committing this linguistic butchery seem to be bright folks who should know better — especially when it happens in a scripted video or presentation, for which you’d think (hope?) that one or more people actually read through the text before its delivery. All that said, I’ve also had to accept that many “should-know-better” types, when writing online, apparently can’t be bothered with the difference between “you’re” and “your” or between “it’s” and “its,” so . . . unnggh.
The Fedora distribution of Linux may drop support for 32-bit packages next year, likely endangering the Steam-hosted gaming I’ve been enjoying on that distro for a while now. At least, it will unless Flatpak-supplied Steam is immune to the problem, and I lack the knowledge to judge the accuracy of the various online opinions about that. (See also this GamingOnLinux link.) Of course, there are many other Linux distros, but I don’t know how soon they, too, may follow the same path. Eventually, they’ll all have to take similar action to avoid the Year 2038 problem; but, even if I were to survive to that point, I’d be in my early eighties and, likely, well past caring. YMMV.
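For the curious, here’s a minimal JavaScript sketch of why systems that count time in signed 32-bit seconds run out of room in early 2038; nothing here comes from Fedora or Steam, just standard date arithmetic:

```javascript
// The Year 2038 problem in one line of arithmetic: a signed 32-bit count
// of seconds since the Unix epoch (1970-01-01T00:00:00Z) maxes out at
// 2^31 - 1 and then overflows.
const maxInt32 = 2 ** 31 - 1; // 2147483647 seconds
console.log(new Date(maxInt32 * 1000).toISOString());
// -> 2038-01-19T03:14:07.000Z — one second later, a 32-bit clock wraps
```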
2025-05-28 05:59:00
After I learn of changes in Cloudflare’s priorities, this site’s deployment process goes backward down memory lane.
This site has lived on Cloudflare Pages (CFP) for most of the last four years, having been initially on Cloudflare Workers (CFW) as a “Workers site” after stays on several other web hosts. I’d gained the distinct impression that CFP was the path on which Cloudflare intended to stay where hosting static websites was concerned.
This morning, I learned not only that this was no longer the case but also that I’d “missed the memo” about it, and from a good while ago at that. A few hours of docs-reading and tinkering later, I had migrated the site back to running on a Cloudflare Worker. (Cloudflare doesn’t call them “Workers sites” anymore.)
Every morning, one of my usual practices is to look through the Hugo Discourse forum to see what’s going on in Hugo-ville. Today’s visit brought me up short with a discussion of recent Cloudflare changes and their effect on Hugo users who host there. Nearly two months earlier, Cloudflare had issued a blog post that was mostly about enhancements to CFW. I had seen the post — the Cloudflare Blog is among the many I follow via RSS — but apparently hadn’t scrolled down far enough to catch what I now consider its buried lede, at least for CFP users such as I:
Now that Workers supports both serving static assets and server-side rendering, you should start with Workers. Cloudflare Pages will continue to be supported, but, going forward, all of our investment, optimizations, and feature work will be dedicated to improving Workers. We aim to make Workers the best platform for building full-stack apps, building upon your feedback of what went well with Pages and what we could improve. [Emphases are Cloudflare’s.]
In short: the CFP platform is now largely in maintenance mode, while its parent platform, CFW, is where Cloudflare will be investing its future dev efforts.
I was chagrined, but also got the message. Even though someone on the Cloudflare Discord later told me that I could probably keep things as they are for now, the same person also said that migrating the site to CFW still would be the wisest choice. As I would later mention elsewhere on Discord:
I know CF says that existing Pages projects are OK, but it hasn’t been that long since CF was urging people to transition from Workers projects to Pages projects, and now the opposite seems to be the case . . . Not crazy about having to [migrate], but would rather move with the CF tide than be on a maintenance-only platform.
This meant I’d have to make some changes. And, as the saying goes, there was bad news and good news.
The bad news: Hugo is not among the recommended frameworks. Indeed, all of the current list’s members are JavaScript-based, so one might pessimistically suppose Hugo will be excluded for a while. Also: while there definitely is Cloudflare documentation for migrating from CFP to CFW, following it is no walk in the park.
The good news: Hugo’s amazingly helpful Joe Mooring had created an example repository which showed how to do this, right down to a custom build script and the necessary configuration file. So I adapted those for my site’s purposes, created a new CFW project which would handle my site’s contents, and did the usual site-swapping DNS stuff to point my domain to that Worker rather than a CFP project.
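To give a feel for what’s involved, here’s a minimal, illustrative sketch — not the contents of Joe’s repository, and the project name is made up — of the idea at the heart of such a setup: a Worker configuration that points at Hugo’s build output as static assets.

```jsonc
// wrangler.jsonc — hypothetical example; assumes a wrangler version that
// accepts JSON-format config and supports Workers static assets.
{
  "name": "my-hugo-site",             // illustrative project name
  "compatibility_date": "2025-05-01", // illustrative date
  "assets": {
    "directory": "./public"          // Hugo's default build-output directory
  }
}
```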
One aspect that initially slowed the migration process was the site’s existing use of a Pages Function to manage my Content Security Policy and the caching of static assets. That was a problem because a Pages Function actually is a Worker, so you can’t just move it, unchanged, into another Worker and expect good results. Fortunately, Cloudflare’s `wrangler` utility, used for doing a ton of stuff with both CFW and CFP, can compile the Pages Function code into a single file that works within a properly configured Worker.
The only remaining tricky thing for me was that, since October 2023, I’d been doing my Hugo builds locally and then deploying the results directly to CFP, which I’d found ’waaaaay faster than the usual method of pushing changes to a linked online repository and then waiting for a cloud infrastructure to build and deploy the site. In addition, my way had let me push changes to the online repo without triggering a rebuild there as well, which I found a more comforting way to manage version control. Thus, I ended up doing a little more local retooling, but got it to work by (1.) disconnecting the online repo from the CFW project and (2.) changing my local script to deploy to the CFW project rather than, as before, the CFP project.
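In spirit, the revised local flow looks something like the sketch below. To be clear, this is a hypothetical reconstruction rather than my actual script; it assumes Node.js plus the wrangler CLI, and the exact commands in your setup may differ:

```javascript
// deploy.js — illustrative only: build the site locally with Hugo, then
// push the output straight to the Cloudflare Workers project.
const { execSync } = require("node:child_process");

execSync("hugo --minify", { stdio: "inherit" }); // local Hugo build

// Formerly, the deploy step targeted the Pages project, e.g.:
//   wrangler pages deploy public
// Now it targets the Worker described in the wrangler config:
execSync("npx wrangler deploy", { stdio: "inherit" });
```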
During all of this rigamarole today, I did give some serious thought to whether I might be better off simply heading back to one of the previous hosts I’d used, rather than hoping Cloudflare doesn’t make it even more complicated down the line to host my humble little site (for the big zero dollars a month I pay for it, of course).
In the end, I stuck with Cloudflare, simply because it quickly became clear that, annoyances notwithstanding, none of the alternatives was truly any better. Besides, I’d still have to deal with various idiosyncrasies, regardless of which host I chose. It wasn’t quite a case of “If it ain’t broke…” — since, after all, I’d started the day assuming it wasn’t “broke” as a CFP site, only to end up deciding otherwise — but it was close enough.
2025-05-22 04:15:00
How to help a small percentage of visitors without inconveniencing the vast majority.
Since my site is a blog (rather than, e.g., a place for obtaining things like tickets to shows), you might think no visitor would need or want to print any of its pages. However, I occasionally hear from those who do, one of whom also requested that I provide print-specific CSS to make the results look better. I did, but knew it also meant I was making my other, non-printing visitors download CSS that they neither needed nor wanted.
As of yesterday, that is no longer a problem.
I’ve noted here before that I won’t let AI write my posts but I will make use of AI when I need help with code. This post is about the latter case.
From time to time, I think about how I might better handle the site’s delivery of CSS. For example, I practice what I call “sorta scoped styling,” wherein I split the CSS into files that get loaded only on certain types of pages. However, this wouldn’t help with the print CSS. While I did mark its link as `media="print"` — which, among other things, makes browsers treat it as a lower-priority download — I wanted to find a way to load it conditionally, only when that small number of users actually tried to print one of the site’s pages. So, yesterday, I asked ChatGPT:
Is there a way, through JavaScript or other coding, to have a browser download a website’s print-specific CSS file only if the user is actually printing a page? The obvious intent is to reduce how much CSS the website must deliver, especially since a relatively small percentage of users actually print web pages anymore.
That began a “discussion” which, despite some of the hallucinatory behavior for which LLMs have become infamous, ultimately gave me code that met my needs.
The code uses the `matchMedia()` method (and, for maximum compatibility, also acts on `beforeprint` events) to detect an active print request from the browser. Only when such a request occurs will the code load the print CSS; so, now, only those users who are actually printing content from the site will download the additional styling that makes their printouts look more “print-y” and less “web-y,” so to speak.
Armed with this AI-created JavaScript code submission, I added it to the appropriate partial templates for my Hugo site’s purposes.1 (For those who choose to disable JavaScript, the `noscript` section at the end delivers the print CSS anyway, just the way everyone else formerly got it.)
```go-html-template
{{- /* for those who've requested CSS for printing */ -}}
{{- $printCSS := resources.Get "css/print.css" -}}
{{- if hugo.IsProduction -}}
  {{- $printCSS = $printCSS | resources.Copy "css/print.min.css" | postCSS | fingerprint -}}
{{- end -}}
{{- with $printCSS -}}
  {{ $safePrintLink := $printCSS.RelPermalink | safeURL }}
  <script>
    function loadPrintStylesheet() {
      if (document.getElementById('print-css')) return; // Prevent multiple loads
      const link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '{{ $safePrintLink }}';
      link.type = 'text/css';
      link.media = 'print';
      link.id = 'print-css';
      {{- if hugo.IsProduction }}
      link.integrity = '{{ $printCSS.Data.Integrity }}';
      {{- end }}
      document.head.appendChild(link);
    }

    // Use a media-query listener to catch an active print request
    const mediaQueryList = window.matchMedia('print');
    mediaQueryList.addEventListener('change', (mql) => {
      if (mql.matches) {
        loadPrintStylesheet();
      }
    });

    // Fallback for browsers that fire beforeprint/afterprint events
    window.addEventListener('beforeprint', loadPrintStylesheet);
  </script>
  <noscript>
    <link rel="stylesheet" href="{{ $printCSS.RelPermalink }}" type="text/css" media="print"{{ if hugo.IsProduction }} integrity="{{ $printCSS.Data.Integrity }}"{{ end }}>
  </noscript>
{{- end }}
```
This works fine in Chrome and Safari, as well as in browsers based on their engines (Blink and WebKit, respectively), but I did find one oddity in Gecko-based browsers such as Firefox. While other browsers will load the print CSS when their respective Print Preview windows pop up, a Gecko-based browser will not load it if “Disable cache” is enabled — as often is the case when one is using the browser’s development tools. In that specific circumstance, you end up having to cancel out of the Print Preview window and then open it again to see the desired effect. By contrast, the other browsers will properly load the print CSS even with “Disable cache” enabled.
That said, now we’re talking about a glitch that affects an even tinier number of users than those who have any need for my site’s print CSS. Namely, they’re users who (a.) are using a Gecko-based browser and (b.) want to print from my site and (c.) are viewing my site with “Disable cache” enabled. And, even for them, closing and reloading Print Preview will fix the problem.
My original also contains code that, in production, enables a serverless function to provide a nonce for Content Security Policy purposes. ↩︎
2025-04-29 01:26:00
Whither the web, plus my reaction to some Fediverse drama.
This originally was going to be about just one thing, namely the overarching subject of two recent and significant blog posts by Open Web Advocacy (OWA). However, I needed to add a second “thing” when some unexpected-to-me Fediverse drama occurred even before I could actually start writing this.
We’re a few months away from a court ruling in the antitrust case wherein Google has been found to hold an illegal monopoly in the web-search business. If the U.S. Department of Justice (DOJ) were to get its way in the ruling, Google would have to divest itself of the Chrome browser and stop paying makers of other browsers to make Google their default search engine. I’ve already written in two posts this year about unintended bad results of those actions, so I was pleased in the last few days to see two major OWA blog posts on the subject.
On a normal day up until this past Friday, I might spend all of two or three minutes looking at posts on Mastodon, so Fediverse drama can easily happen without my knowing about it. Such was the case in the last few days, when I suddenly learned of a firestorm in certain Mastodon instances over the instance I’d been using for some time, Fosstodon.
The short story is that Fosstodon had become the target of angry words and accusations from users and admins elsewhere in the Fediverse, with some instances moving to cut ties with it. Thus, in order to be on an instance that wasn’t (to my knowledge) in the crosshairs of such fedi-banishment, I migrated my account from Fosstodon to another tech-oriented instance, Hachyderm. There, I am @[email protected], to which this site’s appropriate links have pointed since yesterday.
I recommend Corey Snipes’s post, “Thoughts on Fosstodon,” as a good explanation of how it felt to us Fosstodon users who found ourselves unexpectedly involved in a war of angry words and accusations. Note that he, too, moved from Fosstodon to Hachyderm as a result of the goings-on. Also worth reading are the posts from the founders of Fosstodon themselves.
2025-03-25 03:55:00
Sometimes, gut feelings get punched in the gut.
It can be unsettling to recognize just how much trust software vendors expect us to place in their wares. It can be equally unsettling when, having granted that trust, we find ourselves wishing we hadn’t. The last few weeks have provided two key cases in point: the Mozilla Firefox Terms of Use (TOU) SNAFU and the acquisition of Strongbox.
Mozilla issued two significant blog posts near the end of February, 2025: “Introducing a terms of use and updated privacy notice for Firefox” and, two days later, “An update on our Terms of Use.” As the second post’s title suggests, the first had provoked an avalanche of questions and concerns from the browser-watching community, and the second did little to stem the flow of worries.
There was sufficient drama surrounding all this that I suspect you’ve already read and/or seen plenty about it, so I’ll avoid adding to it here. If you need additional context, just search for something like Firefox Terms of Use controversy and you’ll probably get more results — with, yes, a goodly mixture of opinions and sometimes well-founded comments — than you could hope to see. Suffice it to say these posts (and the resulting drama) shocked the increasingly small bloc of Firefox users, as well as those non-Firefox users who’d nonetheless wished the browser well in its quest to survive in a world trending toward domination by Chrome and Chromium.
Mozilla shed a ton of good will, not to mention much of the trust of Firefox’s previously most loyal users, with both these actions and its attempts at explaining them. Exactly why Mozilla felt the changes were necessary is the key question, I think, but at this writing we don’t know the answer. It comes down to which kind of legal exposure the language in the TOU and the updated Privacy Notice was intended to limit: was it to protect Mozilla regarding data-selling it had already been doing, or data-selling it was planning to do? Neither is a good look for a company and browser which have long proudly presented themselves as privacy-friendly.
Until a few days ago, security-minded Apple device users looking for KeePass-compatible password management software could safely select the Strongbox app for macOS and iOS. That changed with the announcement that Applause was buying Strongbox. Online communities of Strongbox users almost uniformly said they felt betrayed by this takeover of the app. (For example, check this Reddit thread.)
The particularly sensitive nature of the data one entrusts to a password management app only magnified these users’ outrage over the idea that Strongbox would now be in the hands of an entity which they hadn’t selected and didn’t want, especially one which has been known for (among other things) adding “phone-home” analytics to other apps it’s acquired. Moreover: unlike some other KeePass-compatible apps, Strongbox is not completely open-source, so “ah, just fork it and move on” isn’t an option even for those willing and technically able to go that route.
As I wrote a couple of months ago:
. . . I increasingly realize and reluctantly accept that most of my choices in life are from among products and services controlled by really nasty people and/or entities . . .
. . . but, that said, I also increasingly realize and reluctantly accept that, from time to time, it’ll turn out that previously trusted entities and tools either are no longer worthy of that trust or, in fact, may never have been so in the first place. That’s life, I guess.
2025-02-26 05:38:00
Some thoughts on where things are in the world of web browsers.
While I stopped doing any meaningful testing of web browsers over a year ago, I remain interested in the arena in which those apps engage daily. Here are some semi-idle thoughts on what’s been going on there of late, especially for those of us who actually care about which browsers we use, and what could follow in the not-too-distant future.
The latest Chromium-based browser to fall in line with whacking support for Manifest V2 extensions apparently will be Microsoft Edge. Although there probably never was much serious doubt about whether Microsoft would buck Google on the Manifest V3/V2 issue, there now should be none. It won’t.
In an ideal world — i.e., one in which the vast majority of the world’s web users weren’t so willfully ignorant about browsers in general and browser extensions in particular — Mozilla’s stance regarding Manifest V3/V2 would help keep the Firefox browser afloat. We Do Not Live in That World, nor shall we.
While perusing one of the seemingly endless “my-browser-is-great-and-all-others-suck” battles on Reddit, I found a link to an interesting Lobsters discussion about why and how the Brave browser blocks the Lobsters site, a situation of which I hadn’t been aware. If you’re using Brave, use this link from the Internet Archive; otherwise, use this link from Lobsters itself.
Whenever such discussions on Reddit and elsewhere get into the whole issue of whether Firefox will even continue to exist if ~~the U.S. Department of Justice~~1 an upcoming court ruling eventually drops the big one on Google, thus cutting off all that money Google pays Mozilla for getting the preferred search engine setting in a default Firefox installation, a frequent theme seems to be: “Well, it’s okay because Firefox is open-source.” Those taking that position note that there already are numerous Firefox forks so, even if Mozilla dropped the Firefox project altogether, those forks would continue and others would follow. Again, We Do Not Live in That World. Maybe, for the much simpler web of thirty years ago, small (and probably unpaid) teams of devs could keep a web browser going; but the web of today and the future is far more demanding. As is often noted, today’s browsers have gigantic code bases and essentially are “little” (but not too little) operating systems in and of themselves. So not only Firefox and its forks but, to be clear, Chromium and its forks — with the possible exception of Edge, thanks to Microsoft’s virtually unlimited resources — could easily face an existential crisis later this year. And not just in the U.S., either, so non-U.S. readers of this piece shouldn’t feel overly safe from the repercussions of said crisis.
Of course, there is one other major browser, namely the WebKit-based Safari, that also has a huge bankroll behind it. However, it runs only on the devices of its parent company (Apple, of course), so it’s irrelevant to many if not most of the world’s users. There was at least a Safari for Windows for a few years in the late 2000s and early 2010s, but that’s never coming back; nor do I expect there ever will be one for either Android or Linux. Thus, regardless of its apparent excellence on its targeted Apple platforms and its improving adherence to web standards, Safari is and will remain a non-factor in the browser landscape for the vast majority of users.2
This is up to a judge’s ruling, not (as I erroneously wrote) the U.S. DOJ. ↩︎
In the same vein, the much-hoped-for Ladybird browser likely won’t be relevant as a truly cross-platform browser for a very long time to come. ↩︎