2025-04-01 08:00:00
When people say Rust is a “safe language”, they often mean memory safety. And while memory safety is a great start, it’s far from all it takes to build robust applications.
In this article, I want to show you a few common gotchas in safe Rust that the compiler doesn’t detect and how to avoid them.
Even in safe Rust code, you still need to handle various risks and edge cases. You need to address aspects like input validation and making sure that your business logic is correct.
Here are just a few categories of bugs that Rust doesn't protect you from:

- Panics caused by unwrap or expect
- build.rs scripts in third-party crates that run arbitrary code at build time

Let's look at ways to avoid some of the more common problems. The tips are roughly ordered by how likely you are to encounter them.
Overflow errors can happen pretty easily:

// DON'T: Use unchecked arithmetic
fn calculate_total(price: u64, quantity: u64) -> u64 {
    price * quantity // Can overflow!
}

If price and quantity are large enough, the result will overflow.
Rust will panic in debug mode, but in release mode, it will silently wrap around.
To avoid this, use checked arithmetic operations:

// DO: Use checked arithmetic operations
fn calculate_total(price: u64, quantity: u64) -> Option<u64> {
    price.checked_mul(quantity)
}
Static checks are not disabled in release mode, since they don't affect the performance of the generated code. So if the compiler is able to detect the problem at compile time, it will do so:
The error message will be:

error: this arithmetic operation will overflow
 --> src/main.rs:4:13
  |
4 |     let z = x * y; // Compile-time error!
  |             ^^^^^ attempt to compute `2_u8 * 128_u8`, which would overflow
  |
  = note: `#[deny(arithmetic_overflow)]` on by default
For all other cases, use checked_add, checked_sub, checked_mul, and checked_div, which return None instead of wrapping around on underflow or overflow.
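For example, here is how the checked variants behave at the boundaries of u8 (a small illustrative sketch):

```rust
fn main() {
    let x: u8 = 200;
    assert_eq!(x.checked_add(50), Some(250)); // Fits into u8
    assert_eq!(x.checked_add(100), None);     // 300 > u8::MAX, no silent wrap-around
    assert_eq!(x.checked_sub(201), None);     // Would underflow below 0
    assert_eq!(x.checked_mul(2), None);       // 400 > u8::MAX
}
```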
Quick Tip: Enable Overflow Checks In Release Mode
Rust carefully balances performance and safety. In scenarios where a performance hit is acceptable, memory safety takes precedence.
Integer overflows can lead to unexpected results, but they are not inherently unsafe. On top of that, overflow checks can be expensive, which is why Rust disables them in release mode.
However, you can re-enable them if your application can trade the last 1% of performance for better overflow detection.
Put this into your Cargo.toml:

[profile.release]
overflow-checks = true # Enable integer overflow checks in release mode
This will enable overflow checks in release mode. As a consequence, the code will panic if an overflow occurs.
See the docs for more details.
Avoid as For Numeric Conversions

While we're on the topic of integer arithmetic, let's talk about type conversions.
Casting values with as is convenient but risky unless you know exactly what you are doing.
let x: i32 = 42;
let y: i8 = x as i8; // Can overflow!
There are three main ways to convert between numeric types in Rust:

⚠️ Using the as keyword: This approach works for both lossless and lossy conversions. In cases where data loss might occur (like converting from i64 to i32), it will simply truncate the value.

Using From::from(): This method only allows lossless conversions. For example, you can convert from i32 to i64 since all 32-bit integers fit within 64 bits. However, you cannot convert from i64 to i32 using this method since it could potentially lose data.

Using TryFrom: This method returns a Result, which is useful when you want to handle potential data loss gracefully instead of truncating.
Quick Tip: Safe Numeric Conversions

If in doubt, prefer From::from() and TryFrom over as:

- Use From::from() when you can guarantee no data loss.
- Use TryFrom when you need to handle potential data loss gracefully.
- Use as only when you're comfortable with potential truncation, or when you know the values will fit within the target type's range and performance is absolutely critical.

(Adapted from a StackOverflow answer by delnan and additional context.)
The as operator is not safe for narrowing conversions. It will silently truncate the value, leading to unexpected results.
What is a narrowing conversion? It's when you convert a larger type to a smaller type, e.g. i32 to i8.
For example, see how as chops off the high bits from our value:
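A small self-contained sketch of that truncation (my own values, chosen to make the bit pattern obvious):

```rust
fn main() {
    let x: u16 = 257; // 0b1_0000_0001 needs 9 bits
    let y = x as u8;  // Keeps only the low 8 bits: 0b0000_0001
    assert_eq!(y, 1);

    let z: i32 = 300;
    assert_eq!(z as i8, 44); // 300 - 256 = 44
}
```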
So, coming back to our first example above, instead of writing
let x: i32 = 42;
let y: i8 = x as i8; // Can overflow!
use TryFrom instead and handle the error gracefully:

let y = i8::try_from(x)?; // Returns an error if the value doesn't fit into an i8
Bounded types make it easier to express invariants and avoid invalid states.
E.g. if you have a numeric type and 0 is never a correct value, use std::num::NonZeroUsize instead.
You can also create your own bounded types:
// DON'T: Use raw numeric types for domain values
struct Order {
    quantity: u32, // 0 would be an accepted, but invalid, value
}

// DO: Create bounded types
use std::num::NonZeroU32;

struct Order {
    quantity: NonZeroU32, // Guaranteed to be at least 1
}
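A hand-rolled bounded type might look like this (the Percentage name and the 0–100 range are illustrative assumptions):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Percentage(u8);

impl Percentage {
    /// Only values between 0 and 100 are representable.
    fn new(value: u8) -> Option<Self> {
        if value <= 100 {
            Some(Self(value))
        } else {
            None
        }
    }

    fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    assert_eq!(Percentage::new(42).map(Percentage::get), Some(42));
    assert_eq!(Percentage::new(101), None); // Out of range
}
```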
Whenever I see the following, I get goosebumps 😨:
let arr = vec![1, 2, 3];
let elem = arr[4]; // Panic!
That’s a common source of bugs. Unlike C, Rust does check array bounds and prevents a security vulnerability, but it still panics at runtime.
Instead, use the get method:

let elem = arr.get(4);

It returns an Option which you can now handle gracefully.
See this blog post for more info on the topic.
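Handling the Option from get explicitly might look like this:

```rust
fn main() {
    let arr = vec![1, 2, 3];
    match arr.get(4) {
        Some(elem) => println!("Found element: {elem}"),
        None => println!("Index out of bounds, no panic"),
    }

    // Or provide a fallback value instead of matching:
    let elem = arr.get(4).copied().unwrap_or(0);
    assert_eq!(elem, 0);
}
```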
Use split_at_checked Instead Of split_at
This issue is related to the previous one.
Say you have a slice and you want to split it at a certain index.
A typical mistake is to use split_at:

let arr = [1, 2, 3, 4];
let (left, right) = arr.split_at(5);
The above code will panic because the index is out of bounds!
To handle that more gracefully, use split_at_checked instead:

let arr = [1, 2, 3, 4];
// This returns an Option
match arr.split_at_checked(5) {
    Some((left, right)) => {
        // Process left and right
    }
    None => {
        // Handle the out-of-bounds case
    }
}
Again, this returns an Option which allows you to handle the error case.
More info about split_at_checked here.
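To see the difference in action, here is a small sketch comparing the two outcomes (the slice contents are arbitrary):

```rust
fn main() {
    let arr = [1, 2, 3, 4];

    // split_at_checked returns None instead of panicking
    assert!(arr.split_at_checked(5).is_none());

    // A valid index gives us both halves
    let (left, right) = arr.split_at_checked(2).unwrap();
    assert_eq!(left, [1, 2]);
    assert_eq!(right, [3, 4]);
}
```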
It’s very tempting to use primitive types for everything. Especially Rust beginners fall into this trap.
// DON'T: Use primitive types for usernames
fn create_user(username: String) { /* ... */ }
However, do you really accept any string as a valid username? What if it’s empty? What if it contains emojis or special characters?
You can create a custom type for your domain instead:

struct Username(String);
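A sketch of what such validation could look like (the rules here, non-empty plus a restricted character set, are made-up examples):

```rust
#[derive(Debug)]
struct Username(String);

impl Username {
    fn new(name: &str) -> Result<Self, &'static str> {
        if name.is_empty() {
            return Err("username must not be empty");
        }
        if !name.chars().all(|c| c.is_ascii_alphanumeric() || c == '_') {
            return Err("username contains invalid characters");
        }
        Ok(Self(name.to_string()))
    }
}

fn main() {
    assert!(Username::new("ferris_42").is_ok());
    assert!(Username::new("").is_err());   // Empty
    assert!(Username::new("🦀").is_err()); // Emoji rejected
}
```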
The next point is closely related to the previous one.
Can you spot the bug in the following code?
// DON'T: Allow invalid combinations
struct Connection {
    ssl: bool,
    ssl_cert: Option<String>,
}
The problem is that you can have ssl set to true but ssl_cert set to None.
That’s an invalid state! If you try to use the SSL connection, you can’t because there’s no certificate.
This issue can be prevented at compile-time by using types to enforce valid states:

// DO: Make invalid states unrepresentable
// First, let's define the possible states for the connection
enum ConnectionState {
    Insecure,
    Ssl { cert: String },
}

struct Connection {
    state: ConnectionState,
}
In comparison to the previous section, the bug was caused by an invalid combination of closely related fields. To prevent that, clearly map out all possible states and transitions between them. A simple way is to define an enum with optional metadata for each state.
If you’re curious to learn more, here is a more in-depth blog post on the topic.
It’s quite common to add a blanket Default
implementation to your types.
But that can lead to unforeseen issues.
For example, here’s a case where the port is set to 0 by default, which is not a valid port number:
// DON'T: Implement `Default` without consideration
// Might create invalid states!
#[derive(Default)]
struct ServerConfig {
    port: u16, // Defaults to 0, which is not a valid port!
}
Instead, consider if a default value makes sense for your type.
// DO: Make Default meaningful or don't implement it
impl Default for ServerConfig {
    fn default() -> Self {
        Self { port: 8080 } // A sensible, valid default
    }
}
Implement Debug Safely

If you blindly derive Debug for your types, you might expose sensitive data.
Instead, implement Debug manually for types that contain sensitive information.

// DON'T: Expose sensitive data in debug output
#[derive(Debug)]
struct User {
    username: String,
    password: String, // Will be printed in debug output!
}
Instead, you could write:
// DO: Implement Debug manually
impl std::fmt::Debug for User {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("User")
            .field("username", &self.username)
            .field("password", &"[REDACTED]")
            .finish()
    }
}

This prints something like User { username: "alice", password: "[REDACTED]" }.
For production code, use a crate like secrecy.
However, it’s not black and white either:
If you implement Debug manually, you might forget to update the implementation when your struct changes. A common pattern is to destructure the struct in the Debug implementation to catch such errors.
Instead of this:

// don't
impl std::fmt::Debug for User {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("User")
            .field("username", &self.username)
            // A new field added to `User` would silently be missing here
            .finish()
    }
}

How about destructuring the struct to catch changes?

// do
impl std::fmt::Debug for User {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // Compilation fails if a field is added or removed
        let Self { username, password: _ } = self;
        f.debug_struct("User")
            .field("username", username)
            .finish()
    }
}
Thanks to Wesley Moore (wezm) for the hint and to Simon Brüggen (m3t0r) for the example.
Don’t blindly derive Serialize
and Deserialize
– especially for sensitive data.
The values you read/write might not be what you expect!
// DON'T: Blindly derive Serialize and Deserialize
#[derive(Serialize, Deserialize)]
struct UserCredentials {
    username: String,
    password: String,
}

When deserializing, the fields might be empty. Empty credentials could potentially pass validation checks if not properly handled.
On top of that, the serialization behavior could also leak sensitive data.
By default, Serialize will include the password field in the serialized output, which could expose sensitive credentials in logs, API responses, or debug output.
A common fix is to implement your own custom serialization and deserialization methods by using impl<'de> Deserialize<'de> for UserCredentials.
The advantage is that you have full control over input validation. However, the disadvantage is that you need to implement all the logic yourself.
An alternative strategy is to use the #[serde(try_from = "FromType")] attribute.
Let's take the Password field as an example.
Start by using the newtype pattern to wrap the standard types and add custom validation:

// Tell serde to call `Password::try_from` with a `String`
#[derive(Deserialize)]
#[serde(try_from = "String")]
struct Password(String);
Now implement TryFrom for Password:
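The TryFrom implementation might look like this (the minimum length of 8 characters is an assumption for illustration):

```rust
struct Password(String);

impl TryFrom<String> for Password {
    type Error = String;

    fn try_from(value: String) -> Result<Self, Self::Error> {
        // Reject passwords that fail our validation rules
        if value.len() < 8 {
            return Err("password too short".to_string());
        }
        Ok(Password(value))
    }
}

fn main() {
    assert!(Password::try_from("pw".to_string()).is_err());
    assert!(Password::try_from("long enough password".to_string()).is_ok());
}
```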
With this trick, you can no longer deserialize invalid passwords:

// Panic: password too short!
let password: Password = serde_json::from_str(r#""pw""#).unwrap();
(Try it on the Rust Playground)
Credits go to EqualMa’s article on dev.to and to Alex Burka (durka) for the hint.
This is a more advanced topic, but it’s important to be aware of it. TOCTOU (time-of-check to time-of-use) is a class of software bugs caused by changes that happen between when you check a condition and when you use a resource.
// DON'T: Vulnerable approach with separate check and use
if !path.is_symlink() {
    // Another process could replace `path` with a symlink right here!
    fs::remove_dir_all(path)?;
}
The safer approach opens the directory first, ensuring we operate on what we checked:
// DO: Safer approach that opens first, then checks
let dir = fs::File::open(path)?;
let metadata = dir.metadata()?; // Inspects the handle we hold, not the path
if metadata.is_dir() {
    // Operate via the open handle; the directory underneath
    // can no longer be swapped out for a symlink
}
Here’s why it’s safer: while we hold the handle, the directory can’t be replaced with a symlink. This way, the directory we’re working with is the same as the one we checked. Any attempt to replace it won’t affect us because the handle is already open.
You’d be forgiven if you overlooked this issue before.
In fact, even the Rust core team missed it in the standard library.
What you saw is a simplified version of an actual bug in the std::fs::remove_dir_all
function.
Read more about it in this blog post about CVE-2022-21658.
Timing attacks are a nifty way to extract information from your application. The idea is that the time it takes to compare two values can leak information about them. For example, the time it takes to compare two strings can reveal how many characters are correct. Therefore, for production code, be careful with regular equality checks when handling sensitive data like passwords.
// DON'T: Use regular equality for sensitive comparisons
if user_input == stored_password {
    // The comparison time leaks how many characters matched
}

// DO: Use constant-time comparison
use subtle::ConstantTimeEq;

if bool::from(user_input.as_bytes().ct_eq(stored_password.as_bytes())) {
    // Comparison takes the same time regardless of the input
}
Protect Against Denial-of-Service Attacks with Resource Limits. These happen when you accept unbounded input, e.g. a huge request body which might not fit into memory.
// DON'T: Accept unbounded input
let mut body = Vec::new();
stream.read_to_end(&mut body)?; // A huge request could exhaust memory!
Instead, set explicit limits for your accepted payloads:
const MAX_REQUEST_SIZE: usize = 1024 * 1024; // 1MiB
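With std's Read::take, enforcing such a limit could look like this (the in-memory "stream" stands in for a real network connection):

```rust
use std::io::Read;

const MAX_REQUEST_SIZE: usize = 1024 * 1024; // 1MiB

// Read at most MAX_REQUEST_SIZE bytes, no matter how large the input is.
// A real server would typically reject over-long requests instead of
// silently truncating them.
fn read_body(stream: impl Read) -> std::io::Result<Vec<u8>> {
    let mut body = Vec::new();
    stream.take(MAX_REQUEST_SIZE as u64).read_to_end(&mut body)?;
    Ok(body)
}

fn main() {
    // A 2 MiB "request" gets capped at the 1 MiB limit
    let huge_request = vec![0u8; 2 * 1024 * 1024];
    let body = read_body(&huge_request[..]).unwrap();
    assert_eq!(body.len(), MAX_REQUEST_SIZE);
}
```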
Path::join With Absolute Paths

If you use Path::join to join a relative path with an absolute path, it will silently replace the relative path with the absolute path.
use std::path::Path;

let path = Path::new("/usr").join("/local/bin");
assert_eq!(path, Path::new("/local/bin")); // The left-hand side is gone!
This is because Path::join will return the second path if it is absolute.
I was not the only one who was confused by this behavior. Here’s a thread on the topic, which also includes an answer by Johannes Dahlström:
The behavior is useful because a caller […] can choose whether it wants to use a relative or absolute path, and the callee can then simply absolutize it by adding its own prefix and the absolute path is unaffected which is probably what the caller wanted. The callee doesn’t have to separately check whether the path is absolute or not.
And yet, I still think it’s a footgun.
It’s easy to overlook this behavior when you use user-provided paths.
Perhaps join should return a Result instead?
In any case, be aware of this behavior.
cargo-geiger
So far, we’ve only covered issues with your own code. For production code, you also need to check your dependencies. Especially unsafe code would be a concern. This can be quite challenging, especially if you have a lot of dependencies.
cargo-geiger is a neat tool that checks your dependencies for unsafe code. It can help you identify potential security risks in your project. Install it with cargo install cargo-geiger and run cargo geiger in your project root.
This will give you a report of how many unsafe functions are in your dependencies. Based on this, you can decide if you want to keep a dependency or not.
Phew, that was a lot of pitfalls! How many of them did you know about?
Even if Rust is a great language for writing safe, reliable code, developers still need to be disciplined to avoid bugs.
A lot of the common mistakes we saw have to do with Rust being a systems programming language: In computing systems, a lot of operations are performance critical and inherently unsafe. We are dealing with external systems outside of our control, such as the operating system, hardware, or the network. The goal is to build safe abstractions on top of an unsafe world.
Rust shares an FFI interface with C, which means that it can ultimately do anything C can do. So, while many operations that Rust allows are perfectly legal, they can still lead to unexpected results.
But not all is lost! If you are aware of these pitfalls, you can avoid them.
That’s why testing, fuzzing, and static analysis are still important in Rust.
For maximum robustness, combine Rust’s safety guarantees with strict checks and strong verification methods.
Let an Expert Review Your Rust Code
I hope you found this article helpful! If you want to take your Rust code to the next level, consider a code review by an expert. I offer code reviews for Rust projects of all sizes. Get in touch to learn more.
2025-03-06 08:00:00
Want to finally learn Rust?
When I ask developers what they look for in a Rust learning resource, I tend to get the same answers:
All of the above are valid points, especially for learning Rust – a language known for its notoriously steep learning curve.
If you’ve been thinking about learning Rust for a while now and perhaps you’ve already started dabbling with it, now’s the time to fully commit. I’ve put together my favorite Rust learning resources for 2025 to help you jumpstart your Rust journey. I made sure to include a healthy mix of self-paced resources, interactive exercises, and hands-on workshops.
Key Points:
The classic Rust learning resource. If it were a cocktail, it would be an Old Fashioned. Rustlings works great for beginners and for anyone wanting a quick refresher on specific Rust concepts.
You can run Rustlings from your command line, and it guides you through a series of exercises. All it takes is running a few commands in your terminal:
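At the time of writing, the setup looks roughly like this (check the official Rustlings repository for the current instructions):

```shell
# Install the rustlings binary via cargo
cargo install rustlings

# Initialize the exercises in a new `rustlings/` directory
rustlings init
cd rustlings/

# Start the interactive exercise runner
rustlings
```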
Go to the official Rustlings repository to learn more.
Key Points:
Rustfinity is a bit like Rustlings, but more modern and structured. It gives you a browser interface that guides you through each exercise and runs tests to check your solutions. No need to set up anything locally, which makes it great for workshops or learning on the go.
You start with “Hello, World!” and work your way up to more complex exercises. It’s a relatively new resource, but I’ve tried several exercises myself and enjoyed the experience.
They also host “Advent of Rust” events with some more challenging problems available here.
Learn more at the Rustfinity website.
Key Points:
This is another relatively new resource by Luca Palmieri, who is the author of the popular “Zero To Production” book. It’s a collection of 100 exercises that help you learn Rust. The course is based on the “learning by doing” principle and designed to be interactive and hands-on.
You can work through the material in your browser, download a PDF, or buy a printed copy for offline reading. The course comes with a local CLI tool called wr that verifies your solutions.
You can learn more about the course here.
Key Points:
If you already know how to code some Rust and want to take it one step further, CodeCrafters is currently the best resource for learning advanced Rust concepts in your own time. I like that they focus on real-world systems programming projects, which is where Rust is really strong.
You’ll learn how to build your own shell, HTTP server, Redis, Kafka, Git, SQLite, or DNS server from scratch.
Most people work on the projects on evenings and weekends, and it takes a few days or weeks to complete a project, but the sense of accomplishment when you finish is incredible: you’ll complete a project that teaches you both Rust and the inner workings of systems you use daily.
Try CodeCrafters For Free
CodeCrafters is the platform I genuinely recommend to friends after they’ve learned the basics of Rust. It’s the next best thing after a personal mentor or workshop.
You can try CodeCrafters for free here and get 40% off if you upgrade to a paid plan later. Full disclosure: I receive a commission for new subscriptions, but I would recommend CodeCrafters even if I didn’t.
On top of that, most companies will reimburse educational resources through their L&D budget, so check with your manager about getting reimbursed.
Key Points:
I’m biased here, but nothing beats a hands-on workshop with a Rust expert. Learning from experienced Rust developers is probably the quickest way to get up to speed with Rust – especially if you plan to use Rust at work. That’s because trainers can provide you with personalized feedback and guidance, and help you avoid common pitfalls. It’s like having a personal trainer for your Rust learning journey.
My workshops are designed to be hands-on and tailored to the needs of the participants. I want people to walk away with a finished project they can extend.
All course material is open source and freely available on GitHub:
You can go through the material on your own to see if it fits your needs. Once you’re ready, feel free to reach out about tailoring the content for you and your team.
Speed Up Your Learning Process
Is your company considering a switch to Rust?
Rust is known for its steep learning curve, but with the right resources and guidance, you can become proficient in a matter of weeks. I offer hands-on workshops and training for teams and individuals who want to accelerate their learning process.
Check out my services page or send me an email to learn more.
2025-02-06 08:00:00
You know the drill by now. It’s time for another recap!
Sit back, get a warm beverage and look back at the highlights of Season 3 with us.
We’ve been at this for a while now (three seasons, one year, and 24 episodes to be exact). We had guests from a wide range of industries: from automotive to CAD software, and from developer tooling to systems programming.
Our focus this time around was on the technical details of Rust in production, especially integration of Rust into existing codebases and ecosystem deep dives. Thanks to everyone who participated in the survey last season, which helped us dial in our content. Let us know if we hit the mark or missed it!
2025-01-28 08:00:00
I’ve been working with many clients lately who host their Rust projects on GitHub. CI is typically a bottleneck in the development process since it can significantly slow down feedback loops. However, there are several effective ways to speed up your GitHub Actions workflows!
Want a Real-World Example?
Check out this production-ready GitHub Actions workflow that implements all the tips from this article: click here.
Also see Arpad Borsos’ workflow templates for Rust projects.
This is easily my most important recommendation on this list.
My friend Arpad Borsos, also known as Swatinem, has created a cache action specifically tailored for Rust projects. It’s an excellent way to speed up any Rust CI build and requires no code changes to your project.
name: CI
on:
push:
branches:
- main
pull_request:
jobs:
build:
runs-on: ubuntu-latest-arm64
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
# The secret sauce!
- uses: Swatinem/rust-cache@v2
- run: |
cargo check
cargo test
cargo build --release
The action requires no additional configuration and works out of the box. There’s no need for a separate step to store the cache — this happens automatically through a post-action. This approach ensures that broken builds aren’t cached, and for subsequent builds, you can save several minutes of build time.
Here’s the documentation where you can learn more.
Use The --locked Flag

When running cargo build, cargo test, or cargo check, you can pass the --locked flag to prevent Cargo from updating the Cargo.lock file.
This is particularly useful for CI operations since you save the time to update dependencies. Typically you want to test the exact dependency versions specified in your lock file anyway.
On top of that, it ensures reproducible builds, which is crucial for CI. From the Cargo documentation:
The --locked flag can be used to force Cargo to use the packaged Cargo.lock file if it is available. This may be useful for ensuring reproducible builds, to use the exact same set of dependencies that were available when the package was published.
Here’s how you can use it in your GitHub Actions workflow:
- run: cargo check --locked
- run: cargo test --locked
cargo-chef For Docker Builds

For Rust Docker images, cargo-chef can significantly speed up the build process by leveraging Docker's layer caching:
FROM lukemathwalker/cargo-chef:latest-rust-1 AS chef
WORKDIR /app
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo build --release --bin app
# We do not need the Rust toolchain to run the binary!
FROM debian:bookworm-slim AS runtime
WORKDIR /app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]
Alternatively, if you don’t mind a little extra typing, you can write your own Dockerfile without cargo-chef
:
FROM rust:1.81-slim-bookworm AS builder
WORKDIR /usr/src/app
# Copy the Cargo files to cache dependencies
COPY Cargo.toml Cargo.lock ./
# Create a dummy main.rs to build dependencies
RUN mkdir src && \
echo 'fn main() { println!("Dummy") }' > src/main.rs && \
cargo build --release && \
rm src/main.rs
# Now copy the actual source code
COPY src ./src
# Build for release
RUN touch src/main.rs && cargo build --release
# Runtime stage
FROM debian:bookworm-slim
# Install minimal runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*
# Copy the build artifact from the build stage
COPY --from=builder /usr/src/app/target/release/your-app /usr/local/bin/
# Set the startup command to run our binary
CMD ["your-app"]
Rust provides environment flags to disable incremental compilation. While incremental compilation speeds up local development builds, in CI it can actually slow down the process due to dependency tracking overhead and negatively impact caching. So it’s better to switch it off:
name: Build
on:
pull_request:
push:
branches:
- main
env:
# Disable incremental compilation for faster from-scratch builds
CARGO_INCREMENTAL: 0
jobs:
build:
runs-on: ...
steps:
...
While debug info is valuable for debugging, it significantly increases the size of the ./target directory, which can harm caching efficiency.
It’s easy to switch off:
env:
CARGO_PROFILE_TEST_DEBUG: 0
cargo nextest

cargo nextest enables parallel test execution, which can substantially speed up your CI process.
While they claim a 3x speedup over cargo test, in CI environments I typically observe around 40% because the runners don't have as many cores as a developer machine. It's still a nice speedup.
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@stable
- uses: taiki-e/install-action@nextest
- uses: Swatinem/rust-cache@v2
- name: Compile
run: cargo check --locked
- name: Test
run: cargo nextest run
Cargo.toml Settings

These release profile settings can significantly improve build times and binary size:

[profile.release]
lto = true
codegen-units = 1
codegen-units = 1 trades parallel compilation for better optimization opportunities. While this might make local builds slower, it often speeds up CI builds by reducing memory pressure on resource-constrained runners.

If you only want to apply these settings in CI, you can use the CARGO_PROFILE_RELEASE_LTO and CARGO_PROFILE_RELEASE_CODEGEN_UNITS environment variables:
jobs:
build:
runs-on: ubuntu-latest
env:
CARGO_PROFILE_RELEASE_LTO: true
CARGO_PROFILE_RELEASE_CODEGEN_UNITS: 1
steps:
- uses: actions/checkout@v4
- name: Build
run: cargo build --release --locked
GitHub Actions has recently announced that Linux ARM64 hosted runners are now available for free in public repositories. Here’s the announcement.
Switching to ARM64 provides up to 40% performance improvement and is straightforward. Simply replace ubuntu-latest with ubuntu-latest-arm64 in your workflow file:
jobs:
test:
runs-on: ubuntu-latest-arm64
However, in my tests, the downside was that it took a long time until a runner was allocated to the job. The waiting time dwarfed the actual build time. I assume GitHub will add more runners in the future to mitigate this issue.
If you are using Rust for production workloads, it’s worth looking into dedicated VMs. These are not free, but in comparison to the small GitHub runners, you can get a significant uplift on build times.
Any provider will do, as long as you get a VM with a decent amount of CPU cores (16+ is recommended) and a good amount of RAM (32GB+). Hetzner Cloud is a popular choice for this purpose because of its competitive pricing. Spot instances or server auctions can be a good way to save money. Here are some setup resources to get you started:
There are services like Depot, which host runners for you. They promise large speedups for Rust builds, but I haven’t tested them myself.
Set up Dependabot or Renovate to automate dependency updates. Instead of manually creating PRs for updates and waiting for CI, these bots handle this automatically, creating PRs that you can merge when ready.
Renovate has a bit of an edge over Dependabot in terms of configurability and features.
release-plz automates release creation when PRs are merged. This GitHub action eliminates the manual work of creating releases and is highly recommended for maintaining a smooth workflow.
If you’ve implemented all these optimizations and your builds are still slow, it’s time to optimize the Rust code itself. I’ve compiled many tips in my other blog post here.
Remember that each project is unique.
Start with the easier wins like Swatinem’s cache action and --locked
flag, then progressively implement more advanced optimizations as needed. Monitor your CI metrics to ensure the changes are having the desired effect.
Need Professional Support?
Is your Rust CI still too slow despite implementing these optimizations? I can help you identify and fix performance bottlenecks in your build pipeline. Book a free consultation to discuss your specific needs.
2025-01-27 08:00:00
It’s 2019, and Hubstaff’s engineering team is sketching out plans for their new webhook system. The new system needs to handle millions of events and work reliably at scale. The safe choice would be to stick with their trusty Ruby on Rails stack – after all, it had served them well so far. But that’s not the path they chose.
About Hubstaff
Hubstaff helps distributed teams track time and manage their workforce. With 500,000+ active users across 112,000+ businesses, they needed their systems to scale reliably. As a remote-first company with 120 team members, they understand the importance of robust, efficient software.
When I sat down with Alex, Hubstaff’s CTO, he painted a vivid picture of that moment. “Our entire application stack was powered by Ruby and JavaScript,” he told me. “It worked, but we knew we needed something different for this new challenge.”
The team stood at a crossroads. Go, with its simplicity and familiar patterns, felt like a safe harbor. But there was another path – one less traveled at the time:
“We chose to proceed with Rust,” Alex recalled. “Not just because it was efficient, but because it would push us to think in fundamentally different ways.”
Fast-forward to today.
The webhook system is processing ten times the initial load without breaking a sweat. Of course, the team had to make adjustments along the way, not to their Rust code, but to their SQL queries.
“Since its launch, we’ve had to optimize SQL queries multiple times to keep up with demand,” Alex shared, “but we’ve never faced any issues with the app’s memory or CPU consumption. Not once.”
Over time, more and more microservices got ported to Rust.
Instead of going all-in on Rust, Hubstaff found wisdom in balance.
Here’s their reasoning:
But what about Rust’s infamous learning curve?
“Once developers are up to speed,” Alex noted, “there’s no noticeable slowdown in development. The Rust ecosystem has matured to the point where we’re not constantly reinventing the wheel.”
Once the team gained enough confidence in Rust, they started rewriting their desktop application. This was an area of the business that was traditionally dominated by C++, but the team was already sold on the idea:
The transition to Rust was surprisingly smooth. I think a big reason for that was the collective frustration with our existing C++ codebase. Rust felt like a breath of fresh air, and the idea naturally resonated with the team.
This quote is from Artur Jakubiec, Technical Lead at Hubstaff, who was leading the desktop app migration.
But Rust wasn’t an obvious choice for their desktop app. The easy path would have been Electron – the tried-and-true choice for companies looking to provide a desktop client from their web app. However, Hubstaff had learned to trust that Rust would get the job done.
“Electron simply wasn’t an option,” Artur stated firmly. “We needed something lightweight, something that could bridge our future with our past. That’s why we chose Tauri.”
“It’s still early days for this approach, as we’re currently in the process of migrating our desktop app. However, we’re already envisioning some compelling synergies emerging from this setup. For example, many of the APIs used by our desktop and mobile apps are high-load systems, and following our strategy, they’re slated to be migrated to Rust soon. With the desktop team already familiarizing themselves with Rust during this transition, they’ll be better equipped to make contributions or changes to these APIs, which will reduce reliance on the server team.” added Alex.
Today, Hubstaff’s architecture is a mix of Ruby on Rails, Rust, and JavaScript. Their webhooks system, backend services, and desktop app are all powered by Rust and they keep expanding their Rust footprint across the stack for heavy-load operations.
Of course, there were moments of doubt. Adding a new language to an already complex tech stack isn’t a decision teams make lightly.
“There was skepticism,” Artur Jakubiec, their Desktop Tech Lead, admitted. “Not about Rust itself, but about balancing our ecosystem.”
But instead of letting doubt win, Artur took action. He spent weeks building prototypes, gathering data, and crafting a vision of what could be. It wasn’t just about convincing management – it was about showing his team a glimpse of the future they could build together.
Especially the build system caused some headaches:
One thing I really wish existed when we started was better C++-Rust integration, not just at the language level but especially in the build systems. Oddly enough, integrating Rust into CMake/C++ workflows (using tools like Corrosion) was relatively straightforward, but going the other way — embedding C++ into Rust—proved much more challenging. A more seamless and standardized approach for bidirectional integration would have saved us a lot of time and effort.
Artur adds:
Of course, challenges remain, particularly in ensuring seamless knowledge transfer and establishing best practices across teams. But the potential for closer collaboration and a unified stack makes this an exciting step forward.
For developers coming from interpreted languages like Ruby, two main insights stood out from our conversation:
Initially, switching to a compiled language felt like a hustle, but the “aha” moments made it worthwhile. The first came when we realized just how many edge cases the Rust compiler catches for you — it’s like having an additional safety net during development. The second came after deploying Rust applications to production. Seeing how much more resource-efficient the Rust app was compared to its Ruby counterpart was a real eye-opener. It demonstrated the tangible benefits of Rust’s focus on performance, reinforcing why it was worth tackling the learning curve.
But what about the C++ developers who worked on the desktop app? What helped was that the team had prior experience with lower-level concepts from C++.
I believe the team’s strong C++ background made the transition to Rust almost seamless. Many of Rust’s more challenging low-level concepts have parallels in C++, such as the memory model, RAII, move semantics, pointers, references, and even aspects of ADTs (achievable in C++ with tools like
std::optional
andstd::variant
). Similarly, Rust’s ownership system and concepts like lifetimes echo patterns familiar to anyone experienced in managing resources in C++.
Let’s look at the facts:
But perhaps the biggest change is confidence in the codebase:
“With C++, there’s a constant sense of paranoia about every line you write,” Artur revealed. “Rust transformed that fear into confidence. It changed not just how we code, but how we feel about coding.” […] “it’s like stepping into a world where you can trust your tools”
On top of that, Alex added that using Rust across the stack has also opened up new collaboration opportunities across the teams.
Artur adds that the onboarding experience has also been smoother than expected:
So far, onboarding hasn’t been an issue at all. Honestly, there’s no secret sauce — it’s all about getting new team members working on the code as soon as possible.
Today, Hubstaff’s journey continues. Their Rust footprint grows steadily: 4,200 lines of mission-critical server code, 2,000 lines in their desktop app, and a team of passionate Rustaceans that’s still growing.
But when I asked Alex and Artur what they’re most proud of, it wasn’t the technical achievements that topped their list. It was how they got there: thoughtfully, methodically, and together as a team.
What would Alex and Artur recommend to teams standing at their own crossroads?
Here’s what they shared:
Special thanks to Alex Yarotsky, CTO and Artur Jakubiec, Technical Lead at Hubstaff for sharing their journey with Rust.
Want to learn more about Hubstaff? Check out their website.
2025-01-23 08:00:00
The car industry is not known for its rapid adoption of new technologies. Therefore, it’s even more exciting to see a company like Volvo Cars embracing Rust for core components of their software stack.