2026-01-22 06:57:00
The Cloudflare acquihire of Astro, visited links of a not-different color, and a 1Password syntax highlighting bug.
Since I last wrote herein, three events have piqued my nerdy curiosity and seem worth a few musings. The first affects the web development business; the second has to do with a change in how visited links look in some web browsers; and the third concerns a browser-extension bug which, for a few weeks, fouled up code blocks on some websites (including this one).
Among the tools I’ve used over the years to build and maintain this site, one of the more interesting is Astro. This open-source framework has come a long way since its beta versions, with which I tinkered a few years ago; and, in the process, it’s gained a ton of support among the web dev community. However, that support apparently never translated into a revenue stream that could sustain either Astro itself or the Astro Technology Company, which employed a substantial number of the folks behind Astro. The resulting financial shortfall meant Astro risked becoming abandonware in the not-too-distant future, which would have been a sad ending to a greatly admired project.
As a result, according to jointly released blog posts, Cloudflare has acquihired the Astro Technology Company. Although not every acquihire works out well for the team being acquired, this one — at least for now — looks like a win-win. Cloudflare gets to own a solid and popular development framework (much as Vercel owns Next.js), and the Astro team gets to keep that framework growing and improving.
Turns out I missed a change from last April that affects the color of visited links in a Blink-based browser such as Chrome or Chromium. In the early days of the web, a link you hadn’t yet visited in a given browser would always be a bright blue color (#0000ff) while one you had visited therein would be purplish (#800080). More to the point, a visited link would be that purplish color regardless of which originating page had gotten you there.
For example: if you were on foo.com and clicked a link to bar.com, you would subsequently see that bar.com link in the purplish color on any other page, not just on foo.com, as long as the browser still kept that visit in its history. Of course, over time, websites have gotten a lot more creative about their various link colors but the behavior remained the same.
Or, at least, it did until last April’s release of Chromium 136. Now, with any browser based on that version or later, you’ll “see” the visited link as visited only if that visit came from the site you’re viewing. This change, attributed by Google to the need to firm up browser security, received attention at the time from Bleeping Computer and Tom’s Guide, among other sites.
At this writing, only Blink-based browsers like Chrome and Chromium work this way for visited links’ appearances; it remains to be seen when, or whether, Gecko- and WebKit-based browsers will follow the Chromium project’s lead.
Most password management apps have their own extensions for browsers, simplifying the act of entering one’s credentials when necessary. Each such extension injects some additional code into the content delivered to the browser. This usually works without interfering with the appearance of a web page but, for a few weeks last month, such was not the case following an update to the 1Password browser extension.
The problematic version, v.8.11.23.x, totally clobbered the syntax highlighting on some (but not all) web pages with code blocks like this one, which shows a little CSS:
.sitemap-div {
margin: 0 auto;
width: 90%;
}

With the buggy extension version enabled, the code block looked something like this, at least on my site:
As for why: here’s an example of what the 1Password bug did to markup produced by the combination of Hugo and Chroma. This is how a specific code block on one of my Hugo-generated pages is supposed to look in HTML:
<div>
<div class="highlight">
<pre tabindex="0" class="chroma">
<code class="language-plaintext" data-lang="plaintext">
<span class="line">
<span class="cl">
/Users/$USERNAME/Library/Caches/hvm/$HUGO_VERSION/hugo
</span>
</span>
</code>
</pre>
</div>
</div>

. . . and this is the HTML for the same code block after the then-buggy 1Password extension got through with it:
<div>
<div class="highlight">
<pre tabindex="0" class="chroma language-plaintext">
<code class="language-plaintext" data-lang="plaintext">
/Users/$USERNAME/Library/Caches/hvm/$HUGO_VERSION/hugo
</code>
</pre>
</div>
</div>

Those missing spans obviously made a lot of difference, as did the insertion of a spurious language-plaintext in the pre element’s class declaration.
As one might expect, the glitch soon became the subject of an active discussion in the 1Password Community Forum, where one complaint after another showed screen captures of how the bug had “massacred their boy,” so to speak. What had happened? Well, based on various comments I read there and elsewhere about the issue, it appears that the buggy version was injecting the JavaScript-based Prism syntax highlighter; that code apparently had been left behind, by mistake, from an inter-version development test. Although some sites somehow escaped unscathed, the bug clearly caused styling conflicts with numerous other pages’ own syntax-highlighting code.
Despite the 1Password team’s relatively quick acknowledgement of the SNAFU, it took several days1 before a fix arrived in the form of v.8.11.27.x, first in the Chrome Web Store and later in the corresponding “stores” for Firefox and Safari; and not until v.8.12.0 did the release notes mention the fix:
We’ve fixed an issue where the 1Password extension could break syntax highlighting for code blocks on some websites.
The delay may have been largely due to the holiday-season absence of certain 1Password devs. The Hanukkah/Christmas/New Year’s season is almost never a good time to get anything fixed, and browser extensions are no exception to that rule. ↩︎
2025-12-27 00:29:00
Switching to hvm and converting my scripts to work with Hugo’s packaging for macOS.
Until a few days ago, those who use the Hugo static site generator on macOS have had to deal with Apple’s quarantine feature each time they downloaded a new Hugo version. With the recent release of Hugo 0.153.0, that ceased to be the case. For most Hugo-on-macOS users, that’s a good thing. For nerds like me who’ve been managing their Hugo-on-macOS workflows through scripting, it was . . . complicated. However, with major help from one of Hugo’s key personnel, I was able to make this “new normal” a good thing for me, too.
Hugo 0.153.0 changed the macOS deliverable. Where the binary used to reside in a tar.gz archive, it now comes in a regular macOS .pkg that installs with a double-click. Still, it remains a terminal app and, thus, not as readily updatable as a typical macOS GUI app with a “Check for updates” menu item and associated functionality.1
In a Hugo Discourse entry — “0.153.0 for macOS: .pkg rather than .tar.gz” — I asked, “what is a Best Practices way for us macOS users to handle version updates going forward?” One user suggested I use the hvm (Hugo Version Manager) tool maintained by Hugo contributor Joe Mooring; but, when I did, I found that it wasn’t yet able to handle this new packaging. After seeing my report to this effect, Mooring suggested I open a related issue in the hvm repo, which I did.
Also within that same Hugo Discourse discussion, Hugo maintainer Bjørn Erik Pedersen explained the reason for the packaging change:
People have been asking for a signed and notarised [macOS version] … for a long time, and since Apple has tightened the security on this (you need to manually go into the security prefs and whitelist any non-signed/notarised app), I decided it was time to do it right, and that meant either pkg or dmg, and pkg is much nicer.
I promptly replied:
Oh, please don’t misunderstand… — I think it’s a great idea. I’m mainly just trying to figure out how I update it locally going forward. (My old method deleted the previous version and pulled the current one, using an xattr -dr com.apple.quarantine command as a workaround for just the issues you mentioned.)
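For context, the routine that quote describes amounted to something like the following sketch. It’s illustrative rather than copied from my actual script, and the release-asset name shown simply follows Hugo’s usual macOS naming, so verify it against the release you actually want:

# Old approach (pre-0.153.0), sketched: grab the .tar.gz, extract the binary,
# replace the previous one in a $PATH directory, then strip Apple's quarantine flag.
set -e
HUGO_VERSION="0.152.2"                              # example version
ASSET="hugo_extended_${HUGO_VERSION}_darwin-universal.tar.gz"
curl -sSLO "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/${ASSET}"
tar -xzf "${ASSET}" hugo
rm -f "${HOME}/bin/hugo"
mv hugo "${HOME}/bin/hugo"
xattr -dr com.apple.quarantine "${HOME}/bin/hugo"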
In the hvm issue I’d filed, Mooring and I conversed about the situation and how best to resolve it. Just four days later — which, apparently, included his actually purchasing a Mac of his own (!) — he updated hvm to a new version, 0.9.0, that was able to deal with the new packaging. Because hvm allows you to install and use a new version (as well as delete any older ones, if you choose) with just a couple of keystrokes, that solved my problem regarding updates going forward.
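If you haven’t used hvm, the day-to-day flow really is just a couple of commands. The subcommand names below reflect my reading of the hvm documentation rather than anything exhaustive, so check hvm --help in your own install:

cd ~/my-hugo-project   # run hvm from the project root, where it keeps its .hvm file
hvm use                # pick a Hugo release from the list; hvm downloads and caches it
hvm status             # confirm which version the current project is pinned to
hvm clean              # optionally clear older cached versions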
Now, my only remaining problem was my own scripting, through which I’d managed my local Hugo operations for the last couple of years.
Prior to Hugo 0.153.0, an install.sh script about which I once wrote would download a designated Hugo version’s .tar.gz file, extract from it the Hugo binary, and place that binary in a bin directory (after deleting any other Hugo binary that might already be there).2 I have now adapted install.sh so that, rather than downloading and extracting the former Hugo .tar.gz, it now gets the desired version of hvm’s .tar.gz, after which I use hvm as needed to manage the Hugo binary itself.
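I won’t reproduce the adapted install.sh here, but a minimal sketch of the idea looks like this. The hvm download URL and release-asset name are assumptions on my part (they vary by platform and version), so check the hvm releases page for the file that matches your machine:

# Sketch: fetch a given hvm release and drop its binary into a bin directory
# that's already in $PATH. The asset name and URL below are illustrative only.
set -e
HVM_VERSION="0.9.0"
ASSET="hvm_${HVM_VERSION}_darwin_arm64.tar.gz"   # adjust for your OS/CPU
curl -sSLO "https://github.com/jmooring/hvm/releases/download/v${HVM_VERSION}/${ASSET}"
tar -xzf "${ASSET}" hvm
rm -f "${HOME}/bin/hvm"
mv hvm "${HOME}/bin/hvm"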
That was easy enough to do, but my other Hugo-management scripts were another matter altogether. Because the bin directory is in my $PATH, those scripts had no trouble accessing the pre-0.153.0 Hugo binary and, thus, could run various hugo commands and their flags just fine. However, this no longer was the case with hvm, which accesses Hugo’s 0.153.0+ .pkg download and extracts the Hugo binary into:
/Users/$USERNAME/Library/Caches/hvm/$HUGO_VERSION/hugo

Here, $HUGO_VERSION is, e.g., 0.153.2, the latest Hugo version as of this writing.
With this arrangement, I still could manually run hugo (flagged or not) from the command line with no problem, but that wasn’t true for the scripts. Specifically, I used start.sh for purely local development, testbuild.sh for local development in a production environment, and build.sh when I just wanted to build the site, not serve it locally. For example, start.sh had this line3 for running the local Hugo server to my liking:
hugo server --port 3000 --bind=0.0.0.0 --baseURL=http://${MY_IP}:3000 --panicOnWarning --forceSyncStatic --gc

Now, with the hvm-installed Hugo binary in its new location, the line failed to “find” the hugo command — triggering that Command not found response about which a web search for “script command not found” will tell you volumes (this Red Hat article about resolving the issue in Linux is among the better sources, since macOS and Linux have a lot in common) — so this errored out the script. The usual solution for this sort of thing is to hard-code the path to the Hugo binary; but, since the path would now vary based on the Hugo version that hvm was using, I initially thought I’d have to make a minor edit to each of these scripts every time I changed my Hugo version.
Then, fortunately, I remembered that hvm itself eliminates the need for such tedium.
That’s because part of the hvm setup procedure involves source-controlling the .hvm text file that hvm will create in the top level of your Hugo project. .hvm is a one-line file listing the Hugo version you’re using. For example, the one I’m using as of this writing says only:
0.153.2

This simplified my fixes to a one-time process for each of the problematic scripts:

1. Adding a MY_PATH variable pointing to the contents of a mypath.txt file that includes the beginning of the path to the hvm-managed Hugo binary.
2. Adding a HUGO_VERSION variable pointing to the contents of the .hvm file.
3. Changing each hugo call to ${MY_PATH}/hvm/${HUGO_VERSION}/hugo — followed by whatever flags, if any, I want.

With those done, my scripts run as before, letting me go back to managing my Hugo setup as I prefer.
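For concreteness, here is a minimal sketch of what the top of one of those scripts looks like after the change, reusing the same server flags as the start.sh line shown earlier. The exact contents of mypath.txt are whatever prefix precedes hvm in the cache path on your machine:

# Sketch of the adapted script header. mypath.txt holds the path prefix
# (e.g., /Users/$USERNAME/Library/Caches) and .hvm holds the Hugo version;
# both are read from the project root, where the script runs.
MY_PATH=$(cat mypath.txt)
HUGO_VERSION=$(cat .hvm)
HUGO_BIN="${MY_PATH}/hvm/${HUGO_VERSION}/hugo"

# Each former "hugo ..." call then uses the full path instead; for example
# (MY_IP is set elsewhere in the script, per the footnote below):
"${HUGO_BIN}" server --port 3000 --bind=0.0.0.0 --baseURL=http://${MY_IP}:3000 --panicOnWarning --forceSyncStatic --gc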
So that’s how I’ve settled into this “new normal.” Perhaps I’m a Cult of One in doing it my way, as I suggested to Joe Mooring in that hvm issue; but I offer this information on the chance that other macOS-using Hugo aficionados may find it of use in their own projects.
I do know that some veteran Hugo users rarely or never update their sites’ Hugo versions, fearful of dealing with breaking changes (especially across multiple sites). As for my situation, this personal site is my only Hugo project, so I usually update whenever there’s a new version. ↩︎
It also allowed downloading and installing Dart Sass, which I continue to do on a “just-in-case” basis even though I’ve kept the site almost exclusively on vanilla CSS, albeit enhanced with PostCSS for multi-browser compatibility purposes, for quite some time. ↩︎
I use the baseURL flag because I like to test the site locally on multiple devices based on my LAN. The MY_IP variable provides the current local IP address of choice, which changes from time to time based on a variety of conditions. ↩︎
2025-11-25 04:44:00
Getting the straight story from one who definitely knows what’s what at Cloudflare.
Earlier this year, I had some things to say about a Cloudflare announcement concerning its Cloudflare Workers (CFW) platform and, more to the points I was making, the Cloudflare Pages (CFP) product on which this site had lived for a good while. It now turns out that I may have misunderstood things at the time, so this post is my attempt to fix things somewhat.
My concerns came from this statement, a few paragraphs down in the Cloudflare announcement:
Now that Workers supports both serving static assets and server-side rendering, you should start with Workers. Cloudflare Pages will continue to be supported, but, going forward, all of our investment, optimizations, and feature work will be dedicated to improving Workers. We aim to make Workers the best platform for building full-stack apps, building upon your feedback of what went well with Pages and what we could improve. [Emphases are Cloudflare’s.]
I considered this to be the announcement’s buried lede, and ended up summarizing it thusly:
In short: the CFP platform is now largely in maintenance mode, while its parent platform, CFW, is where Cloudflare will be investing its future dev efforts.
All that brings me to today, when I saw a Hacker News thread about a TechLife post concerning using Hugo with CFP. I added the following comment to the thread (the opening paragraph is a quote from one of my earlier posts on the subject):
In an announcement [0] earlier this year, Cloudflare essentially put Cloudflare Pages on life support and began advising potential CFP users to build sites on the newly enhanced Cloudflare Workers platform instead.
I later wrote about this, particularly as it related to Hugo users.[1][2]
[0]: https://blog.cloudflare.com/full-stack-development-on-cloudflare-workers/
[1]: https://www.brycewray.com/posts/2025/05/pages-workers-again/
[2]: https://www.brycewray.com/posts/2025/07/hugo-sites-cloudflare-workers-or-not/
Not long thereafter, I saw that my comment had received a reply from none other than Cloudflare’s Kenton Varda:
This is a bit of a misunderstanding.
We are not sunsetting Pages. We are taking all the Pages-specific features and turning them into general Workers features -- which we should have done in the first place. At some point -- when we can do it with zero chance of breakage -- we will auto-migrate all Pages projects to this new implementation, essentially merging the platforms. We’re not ready to auto-migrate yet, but if you’re willing to do a little work you can manually migrate most Pages projects to Workers today. If you’d rather not, that’s fine, you can keep using Pages and wait for the auto-migration later.
Given his key role in the CFW project (as of this writing, he is its Tech Lead and has been for quite some time), that’s pretty much coming from the proverbial horse’s mouth!
I responded to thank him for the clarification, while also noting that I wish Cloudflare had run that key paragraph past him before sending me, and others, into subsequent confusion about its meaning:
. . . which sounds (at least to me) more like an “either/or” situation, and a “Pages-is-going-into-maintenance mode” situation, than your answer suggests. But perhaps that’s just how I took it.
Finally: I will continue to keep those two earlier posts on this site, only with notes pointing to this post — not so much as a mea culpa or anything like that but, rather, just in the hope of providing a more complete look at the whole thing.
2025-10-31 01:28:00
There are two paths to the goal, although I can use only one.
After wondering for a good while about a Firefox-specific weirdness I was seeing on Cloudflare-hosted sites, I finally found that there are two solutions. Only thing is, I am able to use only one of those two. That said, I’ll describe the problem and how I found the answers — or, in my particular case, answer, singular.
Note: I originally planned to have more content in this post, but a death occurred in my family during the editing process so I decided to keep this briefer. I hope what does follow will be sufficient for your understanding.
The difficulty I encountered was that, on Firefox1, Cloudflare-hosted sites wouldn’t connect over HTTP/3 but, instead, fell back to HTTP/2. (You can test for yourself with the Cloudflare test page for this type of connectivity.) I saw this on both macOS and Fedora Linux. While the performance penalty was likely tiny or nonexistent, it still bugged me, and I wanted to find out what was causing it.
I filed a bug in Bugzilla but, if you do so without a Bugzilla account, that automatically generates an issue on the webcompat repo. Mine ended up here and, subsequently, resulted in this Bugzilla issue.
In time, I learned this is related to something called the “Happy Eyeballs” algorithm, Firefox’s handling of which has its own Bugzilla issue. It turned out that there are two remedies for the specific problem I’d seen. One is to use IPv6, and that’s a non-starter for me because my ISP doesn’t support IPv6. The other is to deactivate DNS over HTTPS (DoH) in Firefox; once I did so, Firefox was free to “see” HTTP/3 on Cloudflare-hosted sites.
Incidentally, Safari often exhibits the same behavior if you have iCloud Private Relay activated. This appears to be intentional. ↩︎
2025-09-27 01:12:00
The Google ruling, Netlify’s pricing changes, and other tales of interest.
Yet again, I’ll indulge myself in commenting on a variety of topics stemming from the nerdy stuff to which I pay attention. I’d originally intended for this latest post to focus on just one of them — I’ll leave it to you to guess which — but, the longer I procrastinated, the greater the number of happenings I wanted to discuss. It’s not a desirable habit, but it’s Moi, folks. Let’s have at it.
I’ve spent most of this calendar year waiting to see what happened in the Google antitrust case. My expectation had been that we’d get a ruling in August, but it ended up slipping into early September. The most important effect of the ruling (PDF), at least for us ordinary web-browsing folks out here, was that Alphabet’s Google arm gets to keep Chrome and can keep paying organizations like Mozilla to promote the Google search engine. One can honestly and fairly debate the merits of both Google and Mozilla; but it’s a good thing that the incredibly important Chromium project will still have the nearly unlimited financial support that only Alphabet can give it, and it’s another good thing that there will still be financial backing for Mozilla’s Firefox browser (and, by extension, for the numerous other FOSS projects that exist only because Firefox’s Gecko engine does). By the way, the ruling itself won’t go into effect for perhaps years due to the appellate process, but nearly all the legal opinions I’ve seen on that particular aspect agree that (what I consider to be) the good parts won’t change.
Starting in mid-2020, Netlify’s free tier allowed 300 minutes per month of website deployments. I long ago wrote about how to get around this by using an external CI/CD provider, rather than Netlify’s servers, to build a Netlify-hosted site. However, Netlify’s recently announced overhaul to its pricing plans has changed that. Now, a project on the free plan gets 300 credits per month, and each deployment — even if the build itself comes from elsewhere — costs fifteen of those, meaning you get a maximum of twenty deploys a month on the free plan. This will be problematic for some and a nothing-burger for others. Just sayin’. And, in case you’re wondering: Netlify-hosted projects that were on the previous free plan prior to this change are grandfathered with the old 300-minute limit that’s unrelated to credits; but, going forward, the 300-credit free plan is the new normal.
Those who work with anything built on npm-hosted dependencies have been reminded and re-reminded in recent weeks that the resulting supply chain can, um, have its moments. Two different supply chain attacks using especially crafty social-engineering ploys briefly made the use of numerous popular dependencies problematic. GitHub (which, like npm, is owned by Microsoft) announced a plan to improve the situation, but there inevitably will be ways of getting around even the “best laid” plans.
Apple released its latest major OS versions on September 15, and I have made peace with them for the most part. I am not a huge fan of the much-maligned Liquid Glass look but, after tweaking things here and there, have managed to live with it without a whole lot of pain. Based on some of the videos I saw from the earliest betas of these OSs a few months ago, it could’ve been a lot worse. And there are some new things I really like, such as being able to use a real Phone app on the Mac rather than an awkward interaction with the FaceTime app whenever I want to do an audio-only speakerphone call using my monitor’s audio system.
I learned only this week of yet another Chromium-based browser in the wild, called Helium. It’s fully FOSS — consider it a cooler, easier, and more updates-friendly way to use ungoogled-chromium — and is an attractive, lean, and quick performer. Helium is still in beta and the project has a few quirks that make it not yet ready for daily driving (at least mine), but it’s promising. If you can abide listening to Theo Browne on YouTube, this link will take you to the relevant part of a browsers-comparison video where he discussed Helium and explained his confidence in those behind this project, to which he apparently donated some funding. Or, for an alternative take, you can also look at the (mostly negative and, I feel, often off-topic) comments in this Hacker News thread.
2025-08-30 01:21:00
Distro choices, uses, and a few continuing nits to pick.
Two years have passed since I began telling you about putting Linux on the 2017 Intel iMac I’d recently shelved in favor of a 2023 Apple Silicon Mac Studio. Apparently, some of the posts I wrote about this have become among my most frequently visited content, so here’s a brief update on how things are going with Linux on the older Mac.
My distro-hopping of the early days — mainly between Fedora and Arch — settled on Fedora in late 2023, while I ran what turned out to be a short-lived project for testing web browsers. I found Fedora easier for that, because several browser-makers provide official versions for not only Red Hat-based distros like Fedora but also Debian-based distros; you need only add the appropriate repositories.
Even after I ceased worrying about testing browsers, I still judged Fedora more convenient for the access it gives to those official versions, not only for browsers but also for other apps, such as 1Password. That remains true today. For other apps, I generally rely on either the official Fedora repository or Flathub.
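As a generic illustration of that “add the vendor’s repo, then install” flow on Fedora, it typically amounts to something like the following. The repo URL, key, and package name here are placeholders rather than any particular vendor’s real values:

# Hypothetical vendor repo; substitute the vendor's documented values.
sudo tee /etc/yum.repos.d/example-vendor.repo >/dev/null <<'EOF'
[example-vendor]
name=Example Vendor
baseurl=https://example.com/rpm/stable/$basearch
enabled=1
gpgcheck=1
gpgkey=https://example.com/keys/example.asc
EOF
sudo dnf install example-app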
I should add that I use “vanilla” Fedora Workstation. While I have tried immutable distros, they felt a bit limiting in some ways. (To be fair, I’m sure that’s at least part of the intent for data-securing purposes, and it probably does work better for many folks.)
My main use of Linux these days is as a gaming platform, thanks to the continuing advances of the Proton project. However, I’ve also found a handy weather radar app, Supercell Wx, which I can highly recommend.
I wish I could tell you I found solutions to some of the Linux-on-Mac issues I reported back in 2023, but that’s not the case. Moreover, I suspect those solutions won’t be forthcoming for the reasons I outlined at that time. Specifically . . .
You may have expected that I’d have more to report in this regard, but last year’s health problems kept me mostly off the old Mac for months at a time, so it’s really only this year that I have resumed any degree of truly active use of Linux on that device.
While the Apple Silicon Mac remains my daily driver, I fully anticipate continuing to use Linux on the Intel iMac as long as I’m able. Since Linux on Apple Silicon seems problematic for now and may remain so into the foreseeable future, I will inevitably have to decide what to do whenever Apple EOLs macOS for my newer Mac, just as it did in 2023 for the older one. On the other hand: since that event is likely several years out and I’m already about to turn seventy, will I even care by then? (Eyes, typing fingers, and brain cells tend to fail at some point. One can hold off Father Time only so long.) Perhaps I’ll find out someday.