Predrag Gruevski

An independent software engineer applying compiler technology to the data space. I most often write about Rust, compilers, performance optimizations, and data querying technology.

Is this trait sealed, or not sealed — that is the question

2024-09-03 08:00:00

cargo-semver-checks v0.35 can determine whether Rust traits are "sealed", allowing it to catch many tricky new instances of SemVer breakage. Why is accurate sealed trait detection so important, and why is implementing it correctly so hard?

How to Query (Almost) Everything

2024-07-22 08:00:00

In 2022, I gave a talk at a virtual conference with an unforgettable name: HYTRADBOI, which stands for "Have You Tried Rubbing a Database On It?" The conference's goal was to discuss unconventional uses of database-like technology, and it featured many excellent talks.

My talk "How to Query (Almost) Everything" received copious praise. It describes the Trustfall query engine's architecture, and includes real-world examples of how my (now-former) employer relies on it to statically catch and prevent cross-domain bugs across a monorepo with hundreds of services and shared libraries.

The Wi-Fi only works when it's raining

2024-04-01 08:00:00

Happy April 1st! This post is part of April Cools Club: an April 1st effort to publish genuine essays on unexpected topics. Please enjoy this true story, and rest assured that the tech content will be back soon!

That's what my dad said when I asked what was wrong with our home internet connection. "The Wi-Fi only works when it's raining."

Illustration of a Wi-Fi antenna attached to the exterior of an upper floor of an apartment building. It's currently raining, and the Wi-Fi is working flawlessly.

Let's back up a few steps, so we're all on the same page about the utter ridiculousness of this situation.

At the time, I was still a college student — this was over 10 years ago. I had come back home to spend a couple of weeks with my parents before the fall semester kicked off. I hadn't been back home in almost a full year, because home and school were on different continents.

My dad is an engineer who had already been tinkering with networking gear longer than I'd been alive. Through the company he started, he had designed and deployed all sorts of complex network systems at institutions across the country — everything from gigabit Ethernet for an office building, to inter-city connections over line-of-sight microwave links.

He is the last person on Earth who would say a "magical thinking" phrase like that.

"What?" I uttered, stunned. "The Wi-Fi only works while it's raining," he repeated patiently. "It started a couple of weeks ago, and I haven't had a chance to look into it yet."

"No way," I said. If anything, rain makes wireless signal quality worse, not better. Never better!

Two weeks without reliable internet? I started a speed-run through the stages of grief...

Denial

I pulled open my laptop and started poking at the network.

Pinging any website had a 98% packet loss rate. The internet connection was still up, but only in the most annoying "technically accurate" sense. Nothing loads when you have a 98% packet loss rate! The network may as well have been dead.

I was upset. I had just started dating someone a few months prior, and she was currently on the other side of the planet! How was I to explain that I couldn't stay in touch because it wasn't raining? Mobile data at the time was exorbitantly expensive, so much so that I didn't have a data plan at all for my cell service at home. I couldn't just use my phone's data plan to work around the problem, like one might do today in a similar situation.

I was pacing around the house, fuming. Grief, stage two!

That's when the rain started.

Bargaining

Like a miracle, within 5 minutes of the rain starting, the packet loss rate was down to 0%!

I couldn't believe my eyes! I was ready for the connection to die at any second, so I opened a million tabs at once — as if I don't normally do that anyway...

The rain held up for about an hour, and so did the internet connection.

Then, 15 minutes or so after the rain stopped, the packet loss rate shot back up to 90%+. The internet connection went back to being unusable.

I was ready to do just about anything to get more rain.

Thankfully, the weather stayed grey and murky for the next few days. Each time, the pattern stayed the same:

  • The rain starts, and not even a few minutes later the internet connection is crisp and fast.
  • The rain stops, and within 15 minutes the internet connection is unusable again.

As much as I hated to admit it, the evidence was solid. The Wi-Fi only works when it's raining!

At this point, I had a choice to make.

I could keep going through the stages of grief: I could sulk and plan my calls with my girlfriend around the weather forecast.

Or, I could break out of that downward spiral and get to the bottom of what was going on.

"Magical thinking be damned! Am I an engineer or what?" I told myself.

That settled it. I wasn't going to take this lying down.

Determination

Some context on our home networking setup is in order.

Remember how my dad's company had extensive experience with networking solutions? Well, we had a fancy networking setup at home too — and it had worked flawlessly for the best part of 10 years!

My dad's office had a very expensive, very fast (for the time, of course) commercial internet connection. The home internet options, meanwhile, weren't great! In my family, we are often stubbornly against settling for less unless there's absolutely no other choice.

The office and our apartment were a few blocks away from each other along a small hill, with our second-floor apartment holding the higher ground. With a bit of work, my dad set up a line-of-sight Wi-Fi bridge — a couple of high-gain directional Wi-Fi antennas pointed at each other — between the office and our apartment. This let us enjoy the faster commercial internet connection at home!

I started poking around the network to figure out where the connection was breaking down.

The local Wi-Fi router at home was working well — no packets lost. The local end of the Wi-Fi bridge was fine too.

But pinging the remote end of the Wi-Fi bridge was showing a 90%+ packet loss rate — and so did pinging any other network device behind it. Aha, there's something wrong with the Wi-Fi bridge!

But what? And why now, when the system had been working fine for almost 10 years, rain or shine? Maybe years of work experience isn't a good metric here either 😄

How can a rain storm fix a Wi-Fi bridge, anyway?

So many confusing questions. Time to get some answers!

Debugging

Like any experienced engineer, the first thing I tried was turning everything off and then on again. It didn't work.

Then I checked all the devices on the network individually:

  • Maybe one of the devices has gone bad with age? Nope. I physically connected my laptop to each device's local Ethernet, then ran diagnostics, pinged the devices over the wired connection, etc.
  • Maybe a cable got unseated or came loose? Nope.
  • Maybe a power brick has become faulty over time? Nope.
  • Maybe an automatic firmware update failed and broke something? Nope.
  • Maybe an antenna connector has corroded from spending years outdoors? Nope.

Unlike debugging software, a lot of this hardware debugging was annoyingly physical. I had to climb up ladders, trace cables that hadn't been touched in 10 years, and do a lot of walking back and forth between our home and my dad's office.

On my umpteenth back-and-forth walk, as I was bored and exasperated, I started noticing how much our neighborhood had changed in the many years I hadn't been living at home full-time. Before college, I spent four years at a boarding high school. I was on our national math and programming teams (for the IMO and IOI), so I even spent most of each summer away from home at prep camps and at the competitions themselves. Many of the little neighborhood shops were new. Many houses had gotten a fresh coat of paint. Trees that used to be barely more than saplings had grown tall and strong.

Then it hit me.

Realization

I ran home and climbed up onto the scaffolding holding up the Wi-Fi bridge's antenna. I was hanging precariously off the side of our apartment building, two stories up in the air. In retrospect, a safety harness would have been a good idea... Things people do for internet! Don't forget, a girl was involved too — I wasn't doing this merely for Netflix or Twitter.

Then I looked downhill, at the antenna that formed the second half of the Wi-Fi bridge.

Or at least, toward the antenna, because I couldn't see it — a tree in a neighbor's yard was in the way! Its topmost branches were swaying back and forth in the line-of-sight between the antenna pair.

Bingo!

The Problem and the Fix

Here's what was going on.

Many years ago, we installed the Wi-Fi bridge. For a long time, everything was great!

But every year, our neighbor's tree grew taller and taller. Shortly before I came back home that summer, its topmost branches had managed to reach high enough to interfere with our Wi-Fi signal.

It was only barely tall enough to interfere with the signal, though!

Every time it rained, the rain collected on its leaves and branches and weighed them down. The extra weight bent them out of the way of the Wi-Fi line-of-sight! Interestingly, objects outside the straight line between antennas can still cause interference! For best signal quality, the Fresnel zone between the antennas should be clear of obstructions. But perfection isn't achievable in practice, so RF equipment like Wi-Fi uses techniques like error-correcting codes so that it can still work without a perfectly clear Fresnel zone.

Each time the rain stopped, the rainwater would continue to drip off the tree. Slowly, over the course of 15ish minutes, that would unburden the tree — letting it rise back up into the path of our bits and bytes. That's when the Wi-Fi would stop working.

The fix was easy: upgrade our hardware. We replaced our old 802.11g devices with new 802.11n ones, which took advantage of new magic math and physics to make signals more resistant to interference. One such piece of magic new to 802.11n Wi-Fi is called "beamforming" — it's when a transmitter can use multiple antennas transmitting on the same frequency to shape and steer the signal in a way that improves the effective range and signal quality. Modern Wi-Fi does beamforming with only a few antenna elements, but if we scale that number way up we get a phased array antenna. Ever wondered how come Starlink antennas are flat and not a "dish" like old satellite TV antennas? They use phased arrays to aim their signal at the Starlink satellites streaking across the sky — without any moving parts. Magic! Physics!

A few days later, the new gear arrived and I eagerly climbed back up the scaffolding to install the new antennas.

A few screws, zip ties, and cable connections later, the Wi-Fi's "link established" lights flashed green once again.

This time, it wasn't raining.

All was well once again.

Hope you enjoyed this true story! April Cools is about surprising our readers with fun posts on topics outside our usual beat. Check out the other April Cools posts on our website, and consider making your own blog part of April Cools Club next year!

If you liked this post, consider subscribing or following me on social media.

Thanks to Hillel Wayne and Jeremy Kun for reading drafts of this post. All mistakes are my own.

SemVer in Rust: Tooling, Breakage, and Edge Cases — FOSDEM 2024

2024-03-18 08:00:00

Last month, I gave a talk titled "SemVer in Rust: Breakage, Tooling, and Edge Cases" at the FOSDEM 2024 conference.

The talk is a practical look at what semantic versioning (SemVer) buys us, why SemVer goes wrong in practice, and how the cargo-semver-checks linter can help prevent the damage caused by SemVer breakage.

TL;DR: SemVer is impossibly hard for humans, but automated tools can cover our greatest weaknesses. This is a common theme in Rust, isn't it? At scale, lots of problems are too hard for humans. Memory safety is too hard, so Rust has the borrow checker. Parallelism is too hard, so we have the compiler help us. And so on...

Full talk abstract (click to expand)

In theory, semantic versioning (SemVer) is simple: breaking changes require major versions. SemVer rules do not change over time. Crates always adhere to SemVer. Careful coding is enough to avoid accidental breaking changes.

None of those statements are true!

In practice, SemVer is complex and accidental breakage is common: 1 in 6 of the top 1000 Rust crates has violated semantic versioning at least once, frustrating both users and maintainers alike.

If you write Rust but don't have the time for a PhD in SemVer, this talk is for you. We'll take a practical look at SemVer in Rust: what it buys us, how Rust's features lead to strange SemVer edge cases, and how we can prevent accidental breakage using a SemVer linter called cargo-semver-checks.

You can watch my talk on YouTube, or embedded below. An A/V equipment failure caused 10 minutes of my talk to be missing from the official FOSDEM recording. I re-recorded the missing portion and edited it into a complete video of the talk — that's the version I'm including here. Read on for an annotated version of the talk (I believe Simon Willison coined the term "annotated talk" and described it on his blog; I like this idea and am broadly aiming to follow the same approach), covering the same ideas in written form and including some additional content that did not make it into the talk due to time constraints.

The talk video and outline are below, so you can jump ahead or switch between the written and video formats as you like.

Outline

What semantic versioning (SemVer) buys us

Jump to this chapter in the video.

SemVer is about communication.

It's a way for library maintainers to communicate with users, and with the tooling those users use. It sets expectations on the amount and nature of work required to adopt a new version of a library.

If the changes are substantial and may require action from the user of the library, we say that's a major change. The maintainer would bump the major version number, and users will know that this version upgrade might require a bit of work to adopt. Automated tooling will usually avoid making this kind of upgrade on its own. Some ecosystems and companies have created "codemod" systems, which can automatically refactor downstream code to make it comply with breaking API changes. This makes it possible to apply major changes automatically, but it requires a substantial amount of extra work on top of a large amount of pre-existing infrastructure.

Otherwise, if the library remains compatible with the previous version, users expect their automated tooling to take care of upgrading them. This is great! They benefit from performance upgrades, security patches, and new functionality — and (in the ideal case) no human time was spent to get those benefits.

Here's a concrete example.

GitHub pull request with description: "Automation to keep dependencies in Cargo.lock current. The following is the output from "cargo update", followed by 25 libraries being bumped to new non-major versions. The pull request has passed tests and is merged.

Many of my projects have a job that runs cargo update once a week, commits the results, opens a pull request, and merges it automatically if CI passes.

In this example, we just got 25 libraries' worth of improvements — without requiring any time investment from this project's maintainers. Excellent! This frees up maintainers to invest their limited time elsewhere, starting a virtuous cycle that leaves the entire community better off. To see why this is such a big deal, imagine manually bumping versions in a project with a dependency tree as big as this one — yikes! 😱

But this only works as long as none of these dependencies have accidentally violated SemVer. And so long as they use SemVer in the first place. SemVer is not the only versioning scheme, but it's overwhelmingly common in Rust since cargo update by default assumes all crates adhere to SemVer. In other language ecosystems, this kind of automation might not work as well as it does in Rust.

If a breaking change has accidentally slipped into one of these versions, then our CI run fails, the pull request doesn't get merged, and a maintainer has to intervene to fix the problem manually. Our automation didn't work, so we're back to square one.

Overview: SemVer is hard, but automation can help

Jump to this chapter in the video.

I'm going to convince you of two major things.

"SemVer is so hard, no mere mortals can uphold it." above an image of the Rust urchin looking spiky and dangerous.

First, that semantic versioning in practice is so hard that no mere mortals can uphold it. None of us are good enough to do it on a consistent basis.

I'll show you that the rules of semantic versioning are much more complex than they seem.

I'll show you that even the rules that seem simple have a ton of non-obvious edge cases.

And I'll show you empirical evidence based on real world data that this is not a skill issue. It's not something that can be solved with more experience, or with harder work, or just by caring more about your projects and your users.

"Computers are no mere mortals. They are really good at SemVer." next to a pair of Ferris claws carefully holding up a crate to inspect it.

Then I'll show you that computers are really good at semantic versioning.

We can use linters like cargo-semver-checks to address almost all of the problems we're going to run into as part of this talk.

And I'll even show you how cargo-semver-checks works under the hood, so you can trust its results and so you can contribute to it for the benefit of all of us in the Rust community.

SemVer is hard — we keep breaking it by accident

Jump to this chapter in the video.

Throughout this talk, we'll go through a series of falsehoods about SemVer in Rust. Each of those statements will sound plausible and reasonable, but is actually false. This is how we'll get a sense of how hard SemVer really is.

Our first falsehood: Rust crates always adhere to SemVer.

Slide titled "Falsehoods we believed about SemVer." First bullet point: crates always adhere to SemVer. There is copious space left over on the slide, hinting that there will be many more items on the list by the end of the talk.

If you've been part of the Rust ecosystem for long enough, you know this to be false.

Issues reporting breaking changes get opened everywhere all the time.

Slide packed with screenshots of GitHub issues with titles: "Version 0.5.1 breaks SemVer guarantees"; "Backwards incompatibility for ArgMatches and UnwindSafe"; "SemVer breaking change caused by mio upgrade to 0.8"; "Semver violation in 0.18.12 — please re-export 'git2' since it's part of public API"
Another GitHub issue overlaid on all the ones from the previous slide. This one is titled "tracing 0.1.38 included accidentally-breaking change from added Drop impl." The "accidentally-breaking" portion of the title is circled, and a spiky Rust urchin is looking at it.

This last one perfectly sums up the issue: the breakage wasn't intended, it was accidental.

No maintainer wakes up in the morning and says: "I'm going to break the entire ecosystem today."

Everyone loses when accidental SemVer breakage happens

Jump to this chapter in the video.

SemVer breakage is a lose-lose all around.

Everyone is worse off: maintainers, downstream users, and the community as a whole.

A large sobbing emoji over the pile of GitHub issues reporting accidental SemVer breakage.

From a maintainer's perspective, nobody likes to see an issue like this get reported.

None of us like realizing that we accidentally broke the entire ecosystem.

A large sobbing emoji next to our automated workflow for updating project dependencies, which has been stamped with a big red "denied" symbol.

As a user, we lose because our automation doesn't work and our project's build might be broken.

We no longer get improvements "for free." Instead, we have to update our dependencies manually.

In a large project with many dependencies, this could be a huge amount of work.

A large sobbing emoji next to the activity log on a GitHub issue about an accidental breaking change. The activity log shows that dozens of people had to make changes to their own projects to rectify the breakage. Items in the activity log have titles like "Can't build 0.23.0" or "Revert tracing from 0.1.38 back to 0.1.37." The screenshot shows ten such items, and UI elements indicate the activity log continues past the bottom edge of the screenshot.

From an ecosystem perspective, the breakage means a lot of work across many projects needs to happen just to make everyone's build start passing again.

The screenshot above is just a fraction of all the issues and commits referencing that particular accidental breakage.

This work is stressful, disruptive, and ultimately unproductive. Maintainers have to drop what they were doing, and instead do work that doesn't lead to any new features nor performance improvements.

It's pure wasted effort, community-wide.

A large sobbing emoji next to the text: "SemVer violations are miscommunication"

SemVer is about communication, so SemVer violations are miscommunication.

When a release goes out with the wrong version number, it sets incorrect expectations with users and their tooling. Then the tooling fails and we all end up frustrated.

This is expensive miscommunication! All of us would be much better off if it didn't happen. Even if our own projects aren't directly affected by a given breakage incident, we'd all prefer if the maintainers of our tools and dependencies could invest their limited time toward more productive endeavors.

So why does breakage keep happening?

Why "just be more careful" won't fix it

Jump to this chapter in the video.

At this point, one might think that maybe we should "just" be more careful. Maybe this is a skill issue! Maybe the answer is to "just get good." For any readers unfamiliar with the phrase, this is a reference to "git gud", a phrase coined in gaming culture. It's used as an unconstructive response, implying that "real" gamers (in our case, serious maintainers) don't have the indicated problem — they learn to overcome it through hard work and skill. In SemVer's case, that won't work — "would that it were so simple!"

The same slide titled "Falsehoods we believed about SemVer" from earlier. The first bullet point, "crates always adhere to SemVer," is crossed off. The next bullet point says: "Careful coding is enough to avoid violating SemVer."

This is another falsehood.

Careful coding is not enough to avoid SemVer violations.

"1 in 6 of the top 1000 crates have broken SemVer at least once," in large text above the cargo-semver-checks logo. The bottom of the slide says: "Joint work with Tomasz Nowak, Mieszko Grodzicki, Bartosz Smolarczyk, Michał Staniewski"

That's right. More than 1 in 6 of our most popular crates have shipped a SemVer violation at least once.

These are the crates that are maintained by the most experienced, most careful maintainers in our entire community. Without a doubt, they've personally experienced the pain of accidental SemVer breakage. If they can't get semantic versioning right day in and day out, what hope is there for the rest of us?!

This is data that we gathered by running cargo-semver-checks. We worked hard to ensure our results are faithful and not just the result of false-positives. Regular readers may remember reading about our process on this blog, or seeing the discussion about our results on r/rust. For example, the maintainer of the time crate requested to see our findings for their crate, and we discussed them here.

"Over 3% of the 14000 scanned releases had at least one SemVer violation," in large text above the cargo-semver-checks logo. The bottom of the slide says: "Joint work with Tomasz Nowak, Mieszko Grodzicki, Bartosz Smolarczyk, Michał Staniewski"

As part of the study, we scanned more than 14000 releases.

More than 3% of them had at least one semantic versioning violation that cargo-semver-checks discovered and would have prevented.

To put this 3% number in context:

The pull request automatically updating dependencies from earlier in the presentation. An arrow next to the 25 upgraded dependency versions points to the text: "Statistically, there's a SemVer violation somewhere in here..."

Statistically, we shouldn't be surprised if a SemVer violation is lurking somewhere in these updated crates.

Now, this pull request happened to pass our tests just fine. Maybe we got lucky, and there is no SemVer violation. Maybe we just weren't affected by it this time.

But luck is not a strategy. With a breakage rate that high, many pull requests like this one will fail due to accidental breakage. That's just the cost for one project — multiply it out across the entire community and the cost quickly gets out of hand.

SemVer's rules are much more complex than they seem

Jump to this chapter in the video.

The same slide titled "Falsehoods we believed about SemVer" from earlier. This time the second bullet point, "careful coding is enough to avoid violating SemVer," is also crossed off. A third bullet point says "Breaking changes always require major versions." There is still plenty of blank space left where more items can be added to the list.

Another surprising falsehood: that breaking changes always require major versions. In Rust, not all breaking changes require major versions.

For this, we need to consult Rust's API evolution RFC 1105.

Screenshot from Rust's "API evolution" RFC 1105. It defines the terms "major change" and "minor change" as requiring a major and minor SemVer bump, respectively, and defines the term "breaking change" to mean a change that strictly speaking can cause downstream code to fail to compile. Of the remaining text, two portions are highlighted: "in Rust today, almost any change is technically a breaking change" and "all major changes are breaking, but not all breaking changes are major."

In Rust today, almost any change is technically a breaking change! Regular readers may recall my blog post on this exact topic.

A rules-first approach would require almost every new release of a Rust crate to come with a major version bump. This isn't helpful! This is why SemVer isn't about the rules — it's about communication.

For example, if almost every release is a major bump, then our dependency-updating automation still wouldn't work. And all this because of some changes that are technically breaking — but where that breakage in practice is avoidable, is extremely rare, or is only triggered by particularly convoluted code that is inadvisable to write in the first place.

This is why not all breaking changes are major. Surprisingly, this isn't unique to Rust! What's unique to Rust is that this rule is explicitly written down in an easy-to-cite place. You'll see shortly that many of the "breaking but not major" cases clearly apply to other programming languages too.

The rules of SemVer are meant to serve users, not vice versa. This is the choice that best serves users.

Here are some of the breaking changes that are not major. These are merely the most common edge cases — there are more!

The same API evolution RFC slide as before, with a text bubble showing examples of breaking changes that are not major. There's room for four bullet points in the bubble, but only two bullet points are shown at the moment: "adding new items to a module" and "changes that break type inference, requiring type annotations in downstream code."

The first one is that adding new items to a module is technically a breaking change.

This is because of some quirks related to glob imports.

I think we'd all agree that adding new functionality to a library should not in general be a major change, so it makes sense that this is considered minor even though it's breaking. Nearly all languages that support glob imports have the same breakage case! For example, exposing a new non-underscored function in Python is also a breaking change — a one-to-one Python translation of the Rust code here will demonstrate it. Most languages implicitly agree that this type of breakage "doesn't count" for SemVer purposes; Rust merely made that rule explicit by writing it down.
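Here's a minimal, self-contained sketch of the glob-import hazard. All of the names are hypothetical (they are not from the talk), and the `upstream` module stands in for a dependency:

```rust
mod upstream {
    pub struct Circle;
    // Uncommenting this line simulates the dependency's next minor release
    // adding a new public item. The code below then stops compiling.
    // pub struct Square;
}

mod shapes {
    pub struct Square;
}

// Two glob imports that, today, bring in disjoint sets of names.
use crate::shapes::*;
use crate::upstream::*;

fn main() {
    let _c = Circle;
    // Once `upstream::Square` exists, the name `Square` is supplied by two
    // glob imports at once, and the compiler rejects this line as ambiguous.
    let _s = Square;
}
```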

Another example is that breaking type inference is not considered major.

This is because it's possible to avoid being broken by such a change by adding explicit type annotations in downstream code. In principle, better tooling should be able to add these kinds of type annotations when they become necessary. In the future, this change might no longer be breaking — so it's a reasonable choice to make it non-major today.

The same API evolution RFC slide as before, with the remaining two bullet points added to the text bubble. They say: "reverting API changes" and "critical soundness or security fixes, subject to the maintainer's judgment call."

A third example is reverting accidental API changes. Again, not unique to Rust! Rust just wrote it down explicitly.

This is something we ran into as part of our SemVer study. A few times, a maintainer had accidentally caused a private portion of their library to become public API. It would be extremely unfortunate if undoing that accident required a major bump, even if it was caused and corrected mere minutes apart.

The last example is that critical soundness or security fixes can be published in minor changes even if they are breaking. Not unique to Rust either!

This again comes back to "SemVer is about communication."

Semantic versioning allows the maintainer to make a judgment call about what is the lesser evil: whether it's more dangerous to risk letting the soundness or security vulnerability persist, or to break everyone's build.

If the vulnerability is bad enough, forcing faster adoption by breaking everyone's build might be the better outcome overall.

This is not a complete list of all the edge cases! For more details, including a code example of how adding a new public item is a breaking change, check out this post.

The same slide titled "Falsehoods we believed about SemVer" from earlier. All three bullet points shown so far are crossed off: "crates always adhere to SemVer" / "careful coding is enough to avoid violating SemVer" / "breaking changes always require major versions." There is still plenty of blank space left where more items can be added to the list.

Zooming out — we've already seen three reasonable-looking statements that turned out to be false. We're just getting started!

The takeaway so far is that SemVer is hard.

There are many rules with many edge cases. Learning all the rules means earning a PhD in SemVer. Following all the rules requires superhuman attention to detail. The odds are stacked against us!

Say we cared about SemVer so much that we forced all maintainers to learn all the rules, then demanded perfect SemVer adherence at all costs. We'd have SemVer, but at what cost? Progress would grind to a halt!

Instead we'd like to accelerate the pace of development. We can only do that by drastically lowering the cost of SemVer adherence.

Automation like cargo-semver-checks is how we do that. This is the way!

How cargo-semver-checks fits into the picture

Jump to this chapter in the video.

Computers are really good at SemVer.

They can't do everything — the Halting Problem gets in the way as usual. But our abilities are complementary: computers are at their best where we do poorly, and vice versa.

The cargo-semver-checks logo, joined by the logos of the tokio and PyO3 projects, the cargo tool's logo, and the logos of Amazon AWS and Google.

cargo-semver-checks is a SemVer linter that is broadly adopted across the Rust ecosystem.

It's used by fundamental Rust crates like tokio and PyO3.

Cargo itself uses cargo-semver-checks to check its own library components.

Companies like Amazon and Google use it to prevent breaking changes in the crates they publish.

The cargo-semver-checks logo, shown together with the intended usage command: `cargo semver-checks && cargo publish`. A text bubble explains that in this invocation, the `cargo semver-checks` command detects the version bump, then scans for API changes inappropriate for that bump.

cargo-semver-checks is designed to be used as: cargo semver-checks && cargo publish.

It detects the kind of version bump that you're making (major, minor, or patch), then scans for API changes that might be inappropriate for that bump.

You can get cargo-semver-checks through cargo install, or by downloading a pre-built binary.

Screenshot from the README of the "release-plz" release manager for Rust. It shows capabilities like automatic changelog generation with git-cliff, version bumps in Cargo.toml, and automatic scanning for API breaking changes with cargo-semver-checks.

Release managers like release-plz can automatically run cargo-semver-checks as part of publishing your crate, and we have a GitHub Action designed to be used in CI. Today, that GitHub Action is most suitable for use as part of a CI publishing pipeline, and is not a great fit for running on individual pull requests. This is something we plan to fix! The limiting factor is finding a sustainable source of funding for the project. We'd love your help!

Example: Can deleting a pub fn not be a breaking change?

Jump to this chapter in the video.

A GitHub pull request showing a public function called "add" being deleted from a Rust file.

Say a crate exposes a public function called add, and a pull request deletes that function.

This is obviously a breaking change, and cargo-semver-checks will point that out:

Output of running `cargo semver-checks` on the aforementioned pull request. It indicates the failure of a lint called `function_missing` which says that the function `easy_01::add` previously at line 1 in file `src/lib.rs` is no longer part of the public API of the crate. The output indicates this is a major breaking change, and the total runtime was 0.012 seconds.

This is great! But maybe we didn't need a tool here — we would have caught this "by eye" too.

Not so fast!

Deletions of public items are not always a major breaking change!

The same slide titled "Falsehoods we believed about SemVer" from earlier. A fourth bullet point says "Deletions of public items are always a major breaking change." The three prior bullet points are crossed off, and there is still plenty of space left for more bullet points.

There are at least two ways to delete a public function without a breaking change.

Two blocks of code. The first shows a public function defined inside a private module — even though it's public, the function cannot be imported from outside its crate. Deleting it is not a breaking change. The second block shows a public module marked `#[doc(hidden)]`, and a public function defined inside it. Even though this function can be imported, it is not considered public API since it would have to be imported from a `#[doc(hidden)]` module. Deleting it is not a SemVer major change either.

One way is if the public function is inside a private module. The function isn't reachable — there's no way to import it. Nothing outside its crate could have used it, so deleting it can't break anyone.

The other way is trickier: it involves the #[doc(hidden)] attribute. This is a way to mark a piece of your crate's public surface area as not being public API.

#[doc(hidden)] is most often used by crates that define macros: macro-generated code lives in the downstream crate, so it can only access public items from the crate that defined the macro. But those publicly-visible implementation details are intended to be used only by the macro — they are not public API on their own. That's why they are marked #[doc(hidden)].

If our public function is #[doc(hidden)], or if it must be imported from a #[doc(hidden)] module, then it isn't public API and its deletion is not a major breaking change.
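To make the slide's two cases concrete, here is a sketch; the module and function names are illustrative, not the exact code from the slide:

```rust
// Case 1: a public function inside a private module. It cannot be imported
// from outside this crate at all, so deleting it cannot break anyone.
mod internal {
    pub fn helper() {}
}

// Case 2: a public function inside a `#[doc(hidden)]` public module. It *can*
// be imported, but only via a path that goes through a hidden item, so it
// isn't public API and deleting it is not a SemVer-major change.
#[doc(hidden)]
pub mod __private {
    pub fn helper() {}
}
```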

So is it safe to say "oh, this function is defined inside a #[doc(hidden)] module so it must not be public API?"

Surprisingly, no!

Another block of code. It shows a public module marked `#[doc(hidden)]` containing a public function. A line of code adjacent to the module performs a re-export (`pub use`) of the public function, making it possible for other crates to import the function without touching any non-public APIs. Neither the function nor its re-export are themselves `#[doc(hidden)]`, so this function is public API under the path `this_crate::example`. Its deletion would be a major breaking change.

Here we have a public module that's #[doc(hidden)] and a public function inside it.

But that public function is public API, because it's re-exported without #[doc(hidden)]. Users of this crate could have imported it that way without using any #[doc(hidden)] items.
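A sketch of that trickier situation, following the slide's description (the `example` name comes from the slide; the hidden module's name is illustrative):

```rust
#[doc(hidden)]
pub mod __private {
    pub fn example() {}
}

// The re-export itself is not `#[doc(hidden)]`, so downstream crates can write
// `use this_crate::example;` without touching any hidden items. That makes
// `example` public API, and deleting it is a major breaking change.
pub use crate::__private::example;
```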

Who knew that a simple question like "is it breaking if I delete a public function" could have so many edge cases! There are even more edge cases than I've mentioned here. Properly handling #[doc(hidden)] in cargo-semver-checks was hard! For example, #[doc(hidden)] can be applied to enum variants, or even individual fields within a struct or enum variant. In that case, the struct or enum itself is public API but some of its components are not. Another example is that maintainers often apply #[doc(hidden)] on deprecated items in order to hide them from documentation such as docs.rs — but deprecating an item is not a major breaking change, and in this case #[doc(hidden)] does not exempt that item from the public API. For even more edge cases, check out my post on how we check SemVer in the presence of hidden items.

So far, we've seen that:

  • Deleting a public function might not be breaking.
  • Adding a public function is definitely breaking, but not SemVer-major.

This might seem completely backwards, but it's accurate! SemVer is hard.

We found hundreds of SemVer violations here while scanning the top 1000 Rust crates.

cargo-semver-checks handles all these cases correctly. As of this writing, there's a rare edge case that the tool sometimes doesn't handle correctly: re-exporting an item defined in another crate. All cross-crate analysis is currently blocked on upstream functionality. Thankfully, this is not something crates often do, so at the moment this is an occasional annoyance rather than a show-stopping bug. Computers easily outperform humans here.

Example: Can adding fields to a struct be a breaking change?

Jump to this chapter in the video.

GitHub pull request showing a `pub struct Foo` with two existing public fields called `first` and `second`, and a new public field `third` of type `Option<String>` being added as part of the pull request. The struct has a public constructor `Foo::new()`, and this pull request does not modify its function signature. Instead, it ensures that the created struct sets the new `third` field to a default value of `None`.

Here we have a pull request that is adding a new field to an existing public struct Foo.

The author of this pull request was quite careful! They noticed the struct has a constructor Foo::new(), and they made sure the new field doesn't cause a change in the constructor. Instead, they initialized the new field to a default value.

This seems entirely reasonable! None of the methods are broken. All the prior public fields still work. This is a purely additive change. It's a solid pull request, merge it!

The same slide titled "Falsehoods we believed about SemVer" from earlier. A fifth bullet point says "Adding fields to a struct can only be breaking via changes to its methods." The four prior bullet points are crossed off, and there is space left for more bullet points.

Oops! 💥

A breaking change just slipped past us.

Annotations over the code in the aforementioned pull request. They point out that the `pub struct Foo` was not marked `#[non_exhaustive]`, and that all its prior fields were public, therefore downstream users were allowed to construct `Foo` values with struct literal notation: `Foo { first: 0, second: false }` Such uses are broken by this pull request, since they don't specify any value for the new field named `third`.

The issue is that this struct is not marked #[non_exhaustive], and all of its prior fields were public. This means downstream crates could have constructed the struct directly via a struct literal, by specifying values for all its fields instead of calling Foo::new().

Adding a new field will break that code since it doesn't specify what value the new field should have — that's a compile error.
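Here's a sketch of the situation, reconstructed from the slides; the concrete field types are assumptions beyond what the slides show:

```rust
// The struct after this pull request: not `#[non_exhaustive]`, all fields public.
pub struct Foo {
    pub first: i64,
    pub second: bool,
    pub third: Option<String>, // the newly added field
}

impl Foo {
    pub fn new() -> Self {
        // The constructor was carefully kept compatible...
        Foo { first: 0, second: false, third: None }
    }
}

// ...but a downstream crate may have constructed `Foo` directly. This struct
// literal compiled against the previous release, and now fails with
// "missing field `third` in initializer of `Foo`":
//
//     let value = Foo { first: 0, second: false };
```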

This is not at all obvious! No human is perfect, and this could easily slip through code review. We found breakage like this hundreds of times in our SemVer study of the top 1000 Rust crates.

Output of running `cargo semver-checks` on the aforementioned pull request. It indicates the failure of a lint called `constructible_struct_adds_field` which says that a new field has been added to a struct constructible with a literal, which requires that all literals of that struct must be updated to include the new field. The lint identifies the problematic field as `Foo::third` at line 4 in file `src/lib.rs`. The output indicates this is a major breaking change, and the total runtime was 0.010 seconds.

cargo-semver-checks will catch this issue 100% of the time. In fact, cargo-semver-checks even differentiates between two ways to cause breakage here: adding a new public field will require specifying the field in struct literals, while adding a new private field will disallow using struct literals altogether. Anecdotally, many Rustaceans I've spoken to were surprised to learn that structs could be marked non-exhaustive at all! If you use cargo-semver-checks, you don't need to be an expert in Rust — the necessary expertise is distilled into the tool and is a few keystrokes away.

Adding fields to a struct can sometimes be a breaking change; terms and conditions apply. If you use cargo-semver-checks, you don't have to remember this fact — let alone its terms and conditions.

Example: Can modifying a private item cause a breaking change?

Jump to this chapter in the video.

GitHub pull request showing a private struct `Foo` that holds a `&'static str` value, and derives `Clone`. The pull request changes the struct's field from `&'static str` to `Rc<str>`, mentioning that this adds support for non-static strings while also preserving cheap cloning via the ref-counted string type.

Here we have a private struct Foo, and we're just changing some internal implementation details. It used to hold a &'static str, and we now want to support non-'static strings.

The struct is Clone, so to keep cloning cheap we're going to use a reference-counted string type: Rc<str>.

We changed private implementation details of a private type. We didn't touch any public API. Surely we couldn't have broken any public API? If I didn't touch it, I didn't break it!

The same slide titled "Falsehoods we believed about SemVer" from earlier. A sixth bullet point says "If I didn't touch it, I didn't break it!" The five prior bullet points are all crossed off.

Darn! 💥

But ... how?! What broke?

Run cargo semver-checks and let's see what it says.

Output of running `cargo semver-checks` on the aforementioned pull request. It indicates the failure of a lint called `auto_trait_impl_removed` which means that a public type has stopped implementing one or more auto traits, which may break downstream code that depends on those traits being implemented. The lint identifies that type `Bar` is no longer `Send` nor `Sync`, on line 16 of file `src/lib.rs`. The output indicates this is a major breaking change, and the total runtime was 0.010 seconds.

How strange! The pull request changed the private struct Foo, but cargo-semver-checks complains about a public type Bar.

Our pull request didn't change any type Bar!

GitHub pull request UI and cargo-semver-checks output laid out side by side. The pull request UI shows the only changes are in `struct Foo`, while cargo-semver-checks mentions a breaking change in type `Bar`. An annotation saying `type Bar is in here` sits on top of the pull request UI, pointing to a clickable UI element used to display the rest of the file — which was hidden by default since it wasn't modified.

Bar's definition isn't even shown in the pull request review screen, so surely it's irrelevant here? Maybe this is a false-positive in cargo-semver-checks?

Unfortunately, no such luck. We did cause a breaking change, and since the broken API was never shown in the UI, we were never likely to spot it during code review.

Here's what happened.

A series of annotations over source code. They point out that `pub struct Bar` is a public type which contains a value of type `Foo`. As a public type, `Bar`'s implemented traits are public API as well. Another annotation says that auto traits are automatically implemented whenever possible: a type implements an auto trait if all its constituents also implement the trait. The `&'static str` was both `Send` and `Sync` (the two auto traits cargo-semver-checks identified as being no longer implemented), but `Rc<str>` is neither. This means `Foo` stopped being `Send` and `Sync`, which made `Bar` stop being `Send` or `Sync`.

pub struct Bar exists elsewhere in our library, and contains a Foo value.

As a public struct, the traits it implements are public API as well.

Rust has a small group of traits called auto traits, which are automatically implemented for types whenever possible. Send and Sync are the most commonly used auto traits. We've previously discussed auto traits and the SemVer breakage they might cause in this post. Here's how the Rustonomicon describes them:

Send and Sync are also automatically derived traits. This means that, unlike every other trait, if a type is composed entirely of Send or Sync types, then it is Send or Sync. Almost all primitives are Send and Sync, and as a consequence pretty much all types you'll ever interact with are Send and Sync. Major exceptions include: [...] Rc isn't Send or Sync (because the refcount is shared and unsynchronized).

The struct Foo's original &'static str field implemented both Send and Sync, whereas Rc<str> implements neither. That change makes struct Foo no longer implement Send or Sync, so pub struct Bar is no longer Send nor Sync either.
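In code, the whole chain looks roughly like this; the field names are assumptions, while the types come straight from the slides:

```rust
use std::rc::Rc;

// The private struct changed by the pull request.
#[derive(Clone)]
struct Foo {
    // Before: text: &'static str  -- `&'static str` is both Send and Sync.
    text: Rc<str>, // After: `Rc<str>` is neither Send nor Sync.
}

// A public struct elsewhere in the library, untouched by the pull request.
pub struct Bar {
    inner: Foo,
}

// Auto traits propagate through a type's fields, so `Bar` silently stopped
// being Send and Sync -- a breaking change to its public API:
//
//     fn assert_send_sync<T: Send + Sync>() {}
//     assert_send_sync::<Bar>(); // compiled before; now a compile error
```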

This change in the traits of a public API type is breaking!

Rust playground output showing a compilation error: `Rc<str>` cannot be shared between threads safely. The error occurs at the point where a function called `use_parallelism()` attempts to use a value of type `Bar` while requiring it to implement the `Sync` trait. The `Bar` value does not implement the trait `Sync`. The compiler helpfully points out that `Bar` is not `Sync` because its contained `Foo` is not `Sync`, whose contained `Rc<str>` in turn is the underlying source of the problem.

Our downstream users might have been using Bar in a use case that relies on parallelism. Some Bar may have been shared across threads, or passed between threads.

Their code is now broken. Instead of working code, they will see an error like the above.

We saw hundreds of accidental breaking changes like this in our SemVer study of the top 1000 Rust crates. But this wasn't a skill issue!

Not only do the maintainers of those crates know about auto traits — they've certainly been on the receiving end of breakage caused by auto traits. They have the skills — but they weren't set up for success here.

This is a case where private code can break public API via "spooky action at a distance," where the affected public API is never displayed during code review. None of us stand a chance in such circumstances — without automated help, shipping breakage like this is a question of time.

cargo-semver-checks uses the Rust compiler's own machinery to determine the auto traits each type implements.

It will catch this issue 100% of the time.

cargo-semver-checks lints are database queries in disguise

Jump to this chapter in the video.

Now that you've seen some of the issues cargo-semver-checks can flag, let's talk about how it works and why you should trust what it can find.

Let's come back to the earlier example of determining whether deleting a public function is a major breaking change.

Checklist of conditions that must be true if a function's removal is a major breaking change: previously, the function must have been public; another crate could have imported and used it; that import did not rely on any `#[doc(hidden)]` items, and now that same import name no longer satisfies the previous conditions. Finding all such functions sounds like a database query...

We have a major breaking change only if all of those conditions are true.

It's breaking because we've found a case where an import of a public API component from an older version no longer works in the newer version. Either the function is no longer publicly available, or it can't be imported anymore, or it's #[doc(hidden)] meaning it isn't public API anymore. In any case, that's a major breaking change.

Say we want to find all such functions that have caused a breaking change.

One could read the rule on this slide as "select functions where X and Y and Z ..."

That sounds like a database query!

Diagram showing a visual representation of the aforementioned query. It shows a pair of crate versions. A function at a public importable path is singled out in the old crate version. In the new crate version, a function at the same importable path is also shown, and a circle around it is annotated with the condition `count = 0`, indicating that no such function in the new crate version exists.

Structurally, it looks like this.

We are comparing a pair of versions: old version on the left, new one on the right.

We're looking for public functions that are importable and public API on the left. We're going to try to match them to public functions in the new version, at the same import path as the function that we were just looking at.

If we can't find any such matching function in the new version of the crate (i.e. if "we count zero matching functions") then we've found a breaking change: we've found a specific function that previously could be imported and used, but now it can't be imported and used anymore.

This is exactly what cargo-semver-checks runs under the hood.

The same slide with the checklist of conditions required for a function's removal to be a major breaking change. The left side of the slide has the checklist, while the right side shows a database query in the Trustfall query language. An arrow annotation connects each condition in the checklist to its corresponding clause in the query, demonstrating that they both describe the same operation.

We aren't going to dig into the query syntax here.

But at a glance, we can see the query does the same thing we described in plain language earlier:

  • It looks at public functions in the old version of the crate.
  • It ensures that some public API path could be used to import them.
  • In the new version, it attempts to match each function to a corresponding public function at the same public API import path.
  • It sets the count = 0 condition on the number of such matching functions in the new crate.
  • Along the way, it outputs some of the values that will be handy when constructing our error message: we want to know which function was the problem, at which path, etc.

We just wrote down the SemVer rule in human language, we translated it into a database query, and we called it a day. The business logic of SemVer can be entirely ignorant of how we run the query, or how we obtain the information on the public API.
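To make that rule concrete without getting into query syntax, here is the same checklist written as ordinary Rust over hypothetical data types. This is not how cargo-semver-checks implements the lint (the real implementation is the Trustfall query shown above); it's just the plain-language conditions made executable:

```rust
/// A public function in one version of the crate (a hypothetical type,
/// standing in for data that really comes from rustdoc JSON).
struct PublicFunction {
    name: String,
    /// Importable paths that do not pass through any `#[doc(hidden)]` item.
    public_api_paths: Vec<Vec<String>>,
}

/// Old functions whose removal is a major breaking change: they had at least
/// one public-API import path before, and some such path no longer resolves
/// to any public-API function now (the "count = 0" condition).
fn missing_functions<'a>(
    old: &'a [PublicFunction],
    new: &[PublicFunction],
) -> Vec<&'a PublicFunction> {
    old.iter()
        .filter(|old_fn| {
            old_fn.public_api_paths.iter().any(|path| {
                !new.iter()
                    .any(|new_fn| new_fn.public_api_paths.contains(path))
            })
        })
        .collect()
}
```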

This is pretty nice!

Layer diagram of the cargo-semver-checks architecture. The top layer is titled "cargo-semver-checks" and contains all the lint logic. It's connected to the middle layer, titled "Trustfall" and representing the query engine. That layer in turn is connected to multiple blocks titled "Rust 1.73 rustdoc" through "Rust 1.76 rustdoc", which are collectively titled "adapters" and hold the format-specific logic. The diagram shows that the lint logic is not related at all to the format logic — the cargo-semver-checks lints don't know anything about the format of the data they are querying!

Here's how that works under the hood.

cargo-semver-checks on top is where all the lints are stored. Each lint consists of a query, some string templating for forming the user-facing diagnostic message, and some metadata such as a reference link where the user can learn more about the type of breaking change that was detected. This layer doesn't know where the data is coming from, or what format it's in.

At the bottom is all the logic related to the incoming data format. We use Rust's built-in rustdoc tool to generate JSON describing the API of each version of the crate being checked. This JSON format is not stable — it changes often — so we have different code paths in order to support multiple formats.

In the middle lies the Trustfall query engine. cargo-semver-checks runs its lints as Trustfall queries, and Trustfall in turn uses small pieces of code called adapters that understand the nuances of each rustdoc JSON format we support.

This separation between the SemVer logic and the underlying data format is the key to cargo-semver-checks' success:

  • Support for multiple stable Rust versions. Unlike many other tools that use rustdoc JSON, cargo-semver-checks does not require using a specific nightly Rust version. Any reasonably recent Rust stable release would do, as would most pre-releases.
  • Lints are easy to write. They query a high-level schema which talks about Rust structs and fields, enums and variants, functions and their arguments — the familiar concepts of the Rust language, instead of implementation details of a specific JSON format. No prior knowledge of static analysis or query optimization is required to write lints or ensure they run quickly! For a deeper dive into how the query optimizations work, check out my "Speeding up Rust semver-checking by over 2000x" post
  • Maintenance is easy. When the JSON format gets changed, we do not need to change any lints. There are dozens of lints (more are added every week!), so it would be prohibitive if we had to update them on every format change. Instead, we make a copy of the previous adapter, tweak it as needed to accommodate the changes in the format, and everything just works. cargo-semver-checks isn't the first SemVer linter for Rust! Prior attempts at linting SemVer were either abandoned due to excessive maintenance burden, or require a specific nightly Rust version to work — or both.

A peek at Trustfall, an engine for querying everything

Jump to this chapter in the video.

Slide titled "Trustfall: Turn everything into a database!" Represent data as a graph, then query any data sources. Battle-tested: 7+ years in production. Engine built in Rust; adapters can be Rust, Python, JavaScript, or WebAssembly. You can query APIs, databases, arbitrary file formats — all in-place and without ETL. The project is free & open-source on GitHub: https://github.com/obi1kenobi/trustfall

Trustfall is another project I started.

It allows us to represent data as a graph and query any kind of data sources. It is not something that's specific to Rust or rustdoc at all.

Its first iteration was deployed to production in late 2016 (the modern Trustfall query engine is a from-the-ground-up Rust rewrite of a Python project called graphql-compiler, which my previous employer open-sourced), so it's had 7+ years in production use.

It can be used to query any kind of API, database, file format, etc. It can run queries in-place, without needing ETL or any similarly heavyweight process.

Its adapters can be written in Rust, Python, JavaScript, or WASM — or any other language that can have bindings to Rust.

I've given two prior talks related to Trustfall, including "How to Query (Almost) Everything" from HYTRADBOI 2022.

You can try Trustfall in our playgrounds over rustdoc JSON or over the HackerNews REST APIs.

The rustdoc JSON playground uses the same exact code that powers cargo-semver-checks, and lets you find out interesting things about a variety of Rust crates — such as which Rust or clippy lints they've disabled and where.

The HackerNews playground lets you check, for example, which Twitter or GitHub users comment on stories about OpenAI.

In both of these cases, the Trustfall query engine is compiled to WASM and runs entirely in your browser. So feel free to run any query you like, no matter how expensive — it's your CPU and your bandwidth that's used to compute it 😁

Conclusion: Solving maintainability led to a tool that users love

Jump to this chapter in the video.

Slide titled: "Trustfall makes cargo-semver-checks possible." The slide has the following text: "Focus on linting and ergonomics, not rustdoc JSON format changes. 58 lints and growing — twice as many as a year ago. 32 contributors and growing — many new lints are first-time contributions! Our users love us!" Below this text is a screenshot of a GitHub comment from user "thomaseizinger" saying: "CI just caught an accidental breaking change! How good is this 😍"

There are hundreds of ways to accidentally break semantic versioning rules in Rust.

That problem is hard enough to solve by itself, without also worrying about JSON format changes breaking your implementation. We want to have a working SemVer linter, and we also want the rustdoc maintainers to be able to freely change the JSON format if that has benefits across the Rust community! The Rust language is still growing — for example, Rust 1.75 added async fn in traits — and rustdoc JSON has to be able to express these new concepts. The rustdoc team has a hard enough job as it is, and we don't want to tie their hands further by restricting which kinds of format changes may happen when.

Trustfall makes cargo-semver-checks possible.

It lets us prevent an ever-growing number of accidental breaking changes, while also making lint-writing approachable to people of all backgrounds. Many lints are first-time contributions from our community members who had no prior experience writing linters!

Most importantly, our users love us. Everyone prefers to find out about accidentally-breaking changes before they get pushed to production, instead of finding out when someone opens an issue like "hey, you broke my project."

Slide titled: "Toward fearless cargo update." SemVer is valuable, but impossible without automated help. cargo-semver-checks is a solution with lots of happy users. How you can help: contribute code and lints to cargo-semver-checks. Sponsor its development: https://github.com/sponsors/obi1kenobi . Use cargo-semver-checks when others depend on your packages.

Hopefully by this point I've convinced you that:

  • Semantic versioning is valuable, but it's impossible without automated help.
  • cargo-semver-checks is a solution to this problem that has lots of happy users.

If you'd like to help, you can contribute code, lints, and funding to cargo-semver-checks.

There are dozens more breaking changes that we need to write lints for, and lots of other not-yet-built functionality as well. So please consider becoming a GitHub Sponsor — either personally or via your company.

Finally, for the sake of everyone in the Rust community, please try to avoid accidental breaking changes. Nobody will blame you for them, but it's a lot better for everyone if you find them before you ship the new release.

cargo update should be fearless — cargo-semver-checks is here to help!

Four challenges cargo-semver-checks has yet to tackle

2024-01-23 08:00:00

My last post covered the key cargo-semver-checks achievements from 2023. Here are the biggest challenges that lie ahead!

Many of the remaining challenges in cargo-semver-checks are obvious: we all want more lints, fewer false-positives, etc. etc. Let's set those aside.

Instead, let's talk about four non-obvious challenges we have yet to tackle: