2025-02-06 08:00:00
You know the drill by now. It’s time for another recap!
Sit back, get a warm beverage and look back at the highlights of Season 3 with us.
We’ve been at this for a while now (three seasons, one year, and 24 episodes to be exact). We had guests from a wide range of industries: from automotive to CAD software, and from developer tooling to systems programming.
Our focus this time around was on the technical details of Rust in production, especially integration of Rust into existing codebases and ecosystem deep dives. Thanks to everyone who participated in the survey last season, which helped us dial in our content. Let us know if we hit the mark or missed it!
2025-01-28 08:00:00
I’ve been working with many clients lately who host their Rust projects on GitHub. CI is typically a bottleneck in the development process since it can significantly slow down feedback loops. However, there are several effective ways to speed up your GitHub Actions workflows!
Want a Real-World Example?
Check out this production-ready GitHub Actions workflow that implements all the tips from this article: click here.
Also see Arpad Borsos’ workflow templates for Rust projects.
This is easily my most important recommendation on this list.
My friend Arpad Borsos, also known as Swatinem, has created a cache action specifically tailored for Rust projects. It’s an excellent way to speed up any Rust CI build and requires no code changes to your project.
```yaml
name: CI
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      CARGO_TERM_COLOR: always
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      # The secret sauce!
      - uses: Swatinem/rust-cache@v2
      - run: |
          cargo check
          cargo test
          cargo build --release
```
The action requires no additional configuration and works out of the box. There’s no need for a separate step to store the cache — this happens automatically through a post-action. This approach ensures that broken builds aren’t cached, and for subsequent builds, you can save several minutes of build time.
Here’s the documentation where you can learn more.
The `--locked` flag

When running `cargo build`, `cargo test`, or `cargo check`, you can pass the `--locked` flag to prevent Cargo from updating the `Cargo.lock` file.
This is particularly useful for CI operations since you save the time to update dependencies. Typically you want to test the exact dependency versions specified in your lock file anyway.
On top of that, it ensures reproducible builds, which is crucial for CI. From the Cargo documentation:
The `--locked` flag can be used to force Cargo to use the packaged `Cargo.lock` file if it is available. This may be useful for ensuring reproducible builds, to use the exact same set of dependencies that were available when the package was published.
Here’s how you can use it in your GitHub Actions workflow:
```yaml
- run: cargo check --locked
- run: cargo test --locked
```
`cargo-chef` for Docker Builds

For Rust Docker images, `cargo-chef` can significantly speed up the build process by leveraging Docker’s layer caching:
```dockerfile
FROM lukemathwalker/cargo-chef:latest-rust-1 AS chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo build --release --bin app

# We do not need the Rust toolchain to run the binary!
FROM debian:bookworm-slim AS runtime
WORKDIR /app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]
```
Alternatively, if you don’t mind a little extra typing, you can write your own Dockerfile without `cargo-chef`:
```dockerfile
FROM rust:1.81-slim-bookworm AS builder
WORKDIR /usr/src/app
# Copy the Cargo files to cache dependencies
COPY Cargo.toml Cargo.lock ./
# Create a dummy main.rs to build dependencies
RUN mkdir src && \
    echo 'fn main() { println!("Dummy") }' > src/main.rs && \
    cargo build --release && \
    rm src/main.rs
# Now copy the actual source code
COPY src ./src
# Build for release
RUN touch src/main.rs && cargo build --release

# Runtime stage
FROM debian:bookworm-slim
# Install minimal runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*
# Copy the build artifact from the build stage
COPY --from=builder /usr/src/app/target/release/your-app /usr/local/bin/
# Set the startup command to run our binary
CMD ["your-app"]
```
Rust provides environment flags to disable incremental compilation. While incremental compilation speeds up local development builds, in CI it can actually slow down the process due to dependency tracking overhead and negatively impact caching. So it’s better to switch it off:
```yaml
name: Build
on:
  pull_request:
  push:
    branches:
      - main
env:
  # Disable incremental compilation for faster from-scratch builds
  CARGO_INCREMENTAL: 0
jobs:
  build:
    runs-on: ...
    steps:
      ...
```
While debug info is valuable for debugging, it significantly increases the size of the `./target` directory, which can harm caching efficiency.
It’s easy to switch off:
```yaml
env:
  CARGO_PROFILE_TEST_DEBUG: 0
```
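If you prefer keeping this setting in version control rather than in the workflow file, the equivalent Cargo profile setting is a sketch like this (same effect as the environment variable above):

```toml
# Cargo.toml: disable debug info for the `test` profile
[profile.test]
debug = 0
```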
cargo nextest

`cargo nextest` enables parallel test execution, which can substantially speed up your CI process. While they claim a 3x speedup over `cargo test`, in CI environments I typically observe around 40%, because the runners don’t have as many cores as a developer machine. It’s still a nice speedup.
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - uses: taiki-e/install-action@nextest
      - uses: Swatinem/rust-cache@v2
      - name: Compile
        run: cargo check --locked
      - name: Test
        run: cargo nextest run
```
Cargo.toml Settings

These release profile settings can significantly improve build times and binary size:

```toml
[profile.release]
lto = true
codegen-units = 1
```

`codegen-units = 1` trades parallel compilation for better optimization opportunities. While this might make local builds slower, it often speeds up CI builds by reducing memory pressure on resource-constrained runners. If you only want to apply these settings in CI, you can use the `CARGO_PROFILE_RELEASE_LTO` and `CARGO_PROFILE_RELEASE_CODEGEN_UNITS` environment variables:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      CARGO_PROFILE_RELEASE_LTO: true
      CARGO_PROFILE_RELEASE_CODEGEN_UNITS: 1
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: cargo build --release --locked
```
GitHub Actions has recently announced that Linux ARM64 hosted runners are now available for free in public repositories. Here’s the announcement.

Switching to ARM64 provides up to 40% performance improvement and is straightforward. Simply replace `ubuntu-latest` with an ARM64 runner label such as `ubuntu-24.04-arm` in your workflow file:

```yaml
jobs:
  test:
    runs-on: ubuntu-24.04-arm
```
However, in my tests, the downside was that it took a long time until a runner was allocated to the job. The waiting time dwarfed the actual build time. I assume GitHub will add more runners in the future to mitigate this issue.
If you are using Rust for production workloads, it’s worth looking into dedicated VMs. These are not free, but in comparison to the small GitHub runners, you can get a significant uplift on build times.
Any provider will do, as long as you get a VM with a decent amount of CPU cores (16+ is recommended) and a good amount of RAM (32GB+). Hetzner Cloud is a popular choice for this purpose because of its competitive pricing. Spot instances or server auctions can be a good way to save money. Here are some setup resources to get you started:
There are services like Depot, which host runners for you. They promise large speedups for Rust builds, but I haven’t tested them myself.
Implement Dependabot or Renovate to automate dependency updates. Instead of manually creating PRs for updates and waiting for CI, these bots handle this automatically, creating PRs that you can merge when ready. Renovate has a bit of an edge over Dependabot in terms of configurability and features.
`release-plz` automates release creation when PRs are merged. This GitHub action eliminates the manual work of creating releases and is highly recommended for maintaining a smooth workflow.
If you’ve implemented all these optimizations and your builds are still slow, it’s time to optimize the Rust code itself. I’ve compiled many tips in my other blog post here.
Remember that each project is unique.
Start with the easier wins like Swatinem’s cache action and the `--locked` flag, then progressively implement more advanced optimizations as needed. Monitor your CI metrics to ensure the changes are having the desired effect.
Need Professional Support?
Is your Rust CI still too slow despite implementing these optimizations? I can help you identify and fix performance bottlenecks in your build pipeline. Book a free consultation to discuss your specific needs.
2025-01-27 08:00:00
It’s 2019, and Hubstaff’s engineering team is sketching out plans for their new webhook system. The new system needs to handle millions of events and work reliably at scale. The safe choice would be to stick with their trusty Ruby on Rails stack – after all, it had served them well so far. But that’s not the path they chose.
About Hubstaff
Hubstaff helps distributed teams track time and manage their workforce. With 500,000+ active users across 112,000+ businesses, they needed their systems to scale reliably. As a remote-first company with 120 team members, they understand the importance of robust, efficient software.
When I sat down with Alex, Hubstaff’s CTO, he painted a vivid picture of that moment. “Our entire application stack was powered by Ruby and JavaScript,” he told me. “It worked, but we knew we needed something different for this new challenge.”
The team stood at a crossroads. Go, with its simplicity and familiar patterns, felt like a safe harbor. But there was another path – one less traveled at the time:
“We chose to proceed with Rust,” Alex recalled. “Not just because it was efficient, but because it would push us to think in fundamentally different ways.”
Fast-forward to today.
The webhook system is processing ten times the initial load without breaking a sweat. Of course, the team had to make adjustments along the way, not to their Rust code, but to their SQL queries.
“Since its launch, we’ve had to optimize SQL queries multiple times to keep up with demand,” Alex shared, “but we’ve never faced any issues with the app’s memory or CPU consumption. Not once.”
Over time, more and more microservices got ported to Rust.
Instead of going all-in on Rust, Hubstaff found wisdom in balance.
Here’s their reasoning:
But what about Rust’s infamous learning curve?
“Once developers are up to speed,” Alex noted, “there’s no noticeable slowdown in development. The Rust ecosystem has matured to the point where we’re not constantly reinventing the wheel.”
Once the team gained enough confidence in Rust, they started rewriting their desktop application. This was an area of the business that was traditionally governed by C++, but the team was already sold on the idea:
The transition to Rust was surprisingly smooth. I think a big reason for that was the collective frustration with our existing C++ codebase. Rust felt like a breath of fresh air, and the idea naturally resonated with the team.
This quote is from Artur Jakubiec, Technical Lead at Hubstaff, who was leading the desktop app migration.
But Rust wasn’t an obvious choice for their desktop app. The easy path would have been Electron – the tried-and-true choice for companies looking to provide a desktop client from their web app. However, Hubstaff had learned to trust that Rust would get the job done.
“Electron simply wasn’t an option,” Artur stated firmly. “We needed something lightweight, something that could bridge our future with our past. That’s why we chose Tauri.”
“It’s still early days for this approach, as we’re currently in the process of migrating our desktop app. However, we’re already envisioning some compelling synergies emerging from this setup. For example, many of the APIs used by our desktop and mobile apps are high-load systems, and following our strategy, they’re slated to be migrated to Rust soon. With the desktop team already familiarizing themselves with Rust during this transition, they’ll be better equipped to make contributions or changes to these APIs, which will reduce reliance on the server team.” added Alex.
Today, Hubstaff’s architecture is a mix of Ruby on Rails, Rust, and JavaScript. Their webhooks system, backend services, and desktop app are all powered by Rust and they keep expanding their Rust footprint across the stack for heavy-load operations.
Of course, there were moments of doubt. Adding a new language to an already complex tech stack isn’t a decision teams make lightly.
“There was skepticism,” Artur admitted. “Not about Rust itself, but about balancing our ecosystem.”
But instead of letting doubt win, Artur took action. He spent weeks building prototypes, gathering data, and crafting a vision of what could be. It wasn’t just about convincing management – it was about showing his team a glimpse of the future they could build together.
Especially the build system caused some headaches:
One thing I really wish existed when we started was better C++-Rust integration, not just at the language level but especially in the build systems. Oddly enough, integrating Rust into CMake/C++ workflows (using tools like Corrosion) was relatively straightforward, but going the other way — embedding C++ into Rust—proved much more challenging. A more seamless and standardized approach for bidirectional integration would have saved us a lot of time and effort.
Artur adds:
Of course, challenges remain, particularly in ensuring seamless knowledge transfer and establishing best practices across teams. But the potential for closer collaboration and a unified stack makes this an exciting step forward.
For developers coming from interpreted languages like Ruby, two main insights stood out from our conversation:
Initially, switching to a compiled language felt like a hustle, but the “aha” moments made it worthwhile. The first came when we realized just how many edge cases the Rust compiler catches for you — it’s like having an additional safety net during development. The second came after deploying Rust applications to production. Seeing how much more resource-efficient the Rust app was compared to its Ruby counterpart was a real eye-opener. It demonstrated the tangible benefits of Rust’s focus on performance, reinforcing why it was worth tackling the learning curve.
But what about the C++ developers who worked on the desktop app? What helped was that the team had prior experience with lower-level concepts from C++.
I believe the team’s strong C++ background made the transition to Rust almost seamless. Many of Rust’s more challenging low-level concepts have parallels in C++, such as the memory model, RAII, move semantics, pointers, references, and even aspects of ADTs (achievable in C++ with tools like `std::optional` and `std::variant`). Similarly, Rust’s ownership system and concepts like lifetimes echo patterns familiar to anyone experienced in managing resources in C++.
Let’s look at the facts:
But perhaps the biggest change is confidence in the codebase:
“With C++, there’s a constant sense of paranoia about every line you write,” Artur revealed. “Rust transformed that fear into confidence. It changed not just how we code, but how we feel about coding.” […] “it’s like stepping into a world where you can trust your tools”
On top of that, Alex added that using Rust across the stack has also opened up new collaboration opportunities across the teams.
Artur adds that the onboarding experience has also been smoother than expected:
So far, onboarding hasn’t been an issue at all. Honestly, there’s no secret sauce — it’s all about getting new team members working on the code as soon as possible.
Today, Hubstaff’s journey continues. Their Rust footprint grows steadily: 4,200 lines of mission-critical server code, 2,000 lines in their desktop app, and a team of passionate Rustaceans that’s still growing.
But when I asked Alex and Artur what they’re most proud of, it wasn’t the technical achievements that topped their list. It was how they got there: thoughtfully, methodically, and together as a team.
What would Alex and Artur recommend to teams standing at their own crossroads?
Here’s what they shared:
Special thanks to Alex Yarotsky, CTO, and Artur Jakubiec, Technical Lead at Hubstaff, for sharing their journey with Rust.
Want to learn more about Hubstaff? Check out their website.
2025-01-23 08:00:00
The car industry is not known for its rapid adoption of new technologies. Therefore, it’s even more exciting to see a company like Volvo Cars embracing Rust for core components of their software stack.
2025-01-21 08:00:00
Rust’s focus on expressions is an underrated aspect of the language. Code feels more natural to compose once you embrace expressions as a core mechanic in Rust. I would go as far as to say that expressions shaped the way I think about control flow in general.
“Everything is an expression” is a bit of an exaggeration, but it’s a useful mental model while you internalize the concept.1
But what’s so special about them?
The difference between expressions and statements can easily be dismissed as a minor detail. Underneath the surface, though, the fact that expressions return values has a profound impact on the ergonomics of a language.
In Rust, most things produce a value: literals, variables, function calls, blocks, and control flow constructs like `if`, `match`, and `loop`. Even `&` and `*` are expressions in Rust.
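A tiny sketch of what that means in practice (the variable names here are mine, not from the article):

```rust
fn main() {
    let x = 5;
    let r = &x;     // `&x` is an expression that produces a reference
    let y = *r + 1; // `*r` is an expression that produces the value 5
    assert_eq!(y, 6);
}
```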
Rust inherits expressions from its functional roots in the ML family of languages; they are not so common in other languages. Go, C++, Java, and TypeScript have them, but they pale in comparison to Rust.
In Go, for example, an `if` statement is… well, a statement and not an expression. This has some surprising side effects. For example, you can’t use an `if` in a ternary-style expression like you would in Rust:

```go
// This is not valid Go code!
var x = if condition { 1 } else { 2 }
```

Instead, you’d have to write a full-blown `if` statement along with a slightly unfortunate upfront variable declaration:

```go
var x int
if condition {
    x = 1
} else {
    x = 2
}
```
Since `if` is an expression in Rust, using it in a ternary-style expression is perfectly normal:

```rust
let x = if condition { 1 } else { 2 };
```

That explains the absence of the ternary operator in Rust (i.e. there is no syntax like `x = condition ? 1 : 2;`). No special syntax is needed because `if` is comparably concise. Also note that in comparison to Go, our variable `x` does not need to be mutable. As we will see, Rust’s expressions often lead to less mutable code.
In combination with pattern matching, expressions in Rust become even more powerful:

```rust
let (a, b) = if condition { (1, 2) } else { (3, 4) };
```

Here, the left side of the assignment, `(a, b)`, is a pattern that destructures the tuple returned by the `if-else` expression.
What if you deal with more complex control flow? That’s not a problem. `match` is an expression, too. It is common to assign the result of a `match` expression to a variable:

```rust
let color = match duck {
    Duck::Huey => "red",
    Duck::Dewey => "blue",
    Duck::Louie => "green",
};
```
`match` and `if` Expressions

Let’s say you want to return a duck’s color, but you want to return the correct color based on the year. (In the early Disney comics, the nephews were wearing different colors.)

```rust
let color = match duck {
    // Arm guards: the color depends on the year (the cutoff year is illustrative)
    Duck::Huey if year < 1940 => "green",
    Duck::Huey => "red",
    Duck::Dewey => "blue",
    Duck::Louie => "green",
};
```

Neat, right? You can combine `match` and `if` expressions to create complex logic in a few lines of code. Note: those `if`s are called match arm guards, and they really are full-fledged `if` expressions. You can put anything in there just like in a regular `if`.
`break` is an expression

You can return a value from a loop with `break`:

```rust
let foo = loop {
    break 1;
};
// foo is 1
```

More commonly, you’d use it like this:

```rust
let mut counter = 0;
let result = loop {
    counter += 1;
    if counter == 10 {
        break counter * 2;
    }
};
// result is 20
```
`dbg!()` returns the value of the inner expression

You can wrap any expression with `dbg!()` without changing the behavior of your code (aside from the debug output).

```rust
let x = dbg!(1 + 2);
// x is 3
```
So far, I showed you some fancy expression tricks, but how do you apply this in practice?
To illustrate this, imagine you have a `Config` struct that reads a configuration file from a given path:
/// Configuration for the application
/// Creates a new Config with the given path
///
/// The path is resolved against the home directory if relative.
/// Validates that the path exists and has the correct extension.
Here’s how you might implement the `with_config_path` method in an imperative style:
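A minimal sketch of such an imperative version (the `Config` shape, the `home` parameter, and the error strings are assumptions for illustration):

```rust
use std::path::PathBuf;

/// Configuration for the application (fields assumed for illustration)
struct Config {
    path: PathBuf,
}

// Imperative sketch: note the `mut` variable and the `is_none()`/`unwrap()` pair
fn with_config_path(path: PathBuf, home: Option<PathBuf>) -> Result<Config, String> {
    let mut config_path;
    if path.is_absolute() {
        config_path = path;
    } else {
        if home.is_none() {
            return Err("could not find home directory".to_string());
        }
        config_path = home.unwrap().join(path);
    }
    if config_path.is_file() {
        // Panics if the file has no extension at all!
        if config_path.extension().unwrap() != "conf" {
            return Err("invalid configuration file extension".to_string());
        }
    } else {
        return Err("configuration file does not exist".to_string());
    }
    return Ok(Config { path: config_path });
}

fn main() {
    // A relative path with no home directory fails
    let res = with_config_path(PathBuf::from("app.conf"), None);
    assert!(res.is_err());
}
```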
There are a few things we can improve here:

- `mut` variables
- `is_none()`/`unwrap()` calls

It’s always a good idea to examine `unwrap()` calls and find safer alternatives. While we “only” have two `unwrap()` calls here, both point at flaws in our design.
Here’s the first one:
```rust
let mut config_path;
if path.is_absolute() {
    config_path = path;
} else {
    if home.is_none() {
        return Err("could not find home directory".to_string());
    }
    config_path = home.unwrap().join(path);
}
```
We know that `home` is not `None` when we `unwrap` it, because we checked it right above.
But what if we refactor the code? We might forget the check and introduce a bug.
This can be rewritten as:

```rust
let config_path = if path.is_absolute() {
    path
} else {
    home.ok_or("could not find home directory")?.join(path)
};
```

Or, if we introduce a custom error type:

```rust
// `ConfigError::HomeDirNotFound` is an assumed variant name
let config_path = if path.is_absolute() {
    path
} else {
    home.ok_or(ConfigError::HomeDirNotFound)?.join(path)
};
```
The other `unwrap` is also unnecessary and makes the happy path harder to read. Here is the original code:

```rust
if config_path.is_file() {
    if config_path.extension().unwrap() != "conf" {
        return Err("invalid configuration file extension".to_string());
    }
} else {
    return Err("configuration file does not exist".to_string());
}
```

We can rewrite this as:

```rust
if config_path.is_file() {
    match config_path.extension() {
        Some(ext) if ext == "conf" => {}
        _ => return Err("invalid configuration file extension".to_string()),
    }
} else {
    return Err("configuration file does not exist".to_string());
}
```
Or we return early to avoid the nested `if`:

```rust
if !config_path.is_file() {
    return Err("configuration file does not exist".to_string());
}
let Some(ext) = config_path.extension() else {
    return Err("invalid configuration file extension".to_string());
};
if ext != "conf" {
    return Err("invalid configuration file extension".to_string());
}
```
Get rid of `mut`s

Usually, my next step is to get rid of as many `mut` variables as possible. Note how there are no more `mut` keywords after our first refactoring. This is a typical pattern in Rust: often when we get rid of an `unwrap()`, we can remove a `mut` as well. Nevertheless, it is always a good idea to look for all `mut` variables and see if they are really necessary.
The last expression in a block is implicitly returned, and `return` is itself an expression, so you can often get rid of explicit `return` statements. In our case that means:

```rust
return Ok(config);
```

becomes

```rust
Ok(config)
```
Another simple heuristic is to hunt for `return`s and semicolons in the middle of your code. These are like “seams” in our program; stop signs which break the natural data flow. Almost effortlessly, removing these blockers often improves the flow; it’s like magic. For example, the above validation code can also be written without returns:

```rust
match config_path {
    path if !path.is_file() => Err("configuration file does not exist".to_string()),
    path if path.extension() != Some(std::ffi::OsStr::new("conf")) => {
        Err("invalid configuration file extension".to_string())
    }
    path => Ok(Config { path }),
}
```

I like that, because we avoid one error message duplication and all conditions start on the left. Whether you prefer that over `let-else` is a matter of taste. 2
Remember when I said “everything is an expression”? Don’t take this too far or people will stop inviting you to dinner parties. It’s fun to know that you could use `then_some`, `unwrap_or_else`, and `map_or` to chain expressions together, but don’t use them just to show off.
Warning
The below code is correct, but the combinators get in the way of readability. It feels more like a Lisp program than Rust code.
Keep your friends and colleagues in mind when writing code. Find a balance between expressiveness and readability.
If you find that your code doesn’t feel idiomatic, see if expressions can help. They tend to guide you towards more ergonomic Rust code.
Once you find the right balance, expressions are a joy to use – especially in smaller contexts where data flow is key. The “trifecta” of iterators, expressions, and pattern matching is the foundation of data transformations in Rust. I wrote a complementary article about iterators here.
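Here is a small sketch of how the three play together (the data and names are made up for illustration):

```rust
// Iterators + expressions + pattern matching working together
fn main() {
    let readings = vec![Some(3), None, Some(7), Some(1)];
    let total: i32 = readings
        .into_iter()
        .map(|r| match r {
            Some(v) if v > 2 => v, // keep readings above the threshold
            _ => 0,                // ignore missing or small readings
        })
        .sum();
    assert_eq!(total, 10); // 3 + 7
}
```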
Of course, it’s not forbidden to mix expressions and statements!
For example, I personally like to use `let-else` statements when it makes my code easier to understand.
If you’re unsure about whether using an expression is worth it, seek feedback from someone less familiar with Rust.
If they look confused, you probably tried to be too clever.
Now, try to refactor some code to train that muscle.
2025-01-15 08:00:00
Programming is an iterative process - as much as we would like to come up with the perfect solution from the start, it rarely works that way.
Good programs often start as quick prototypes. The bad ones stay prototypes, but the best ones evolve into production code.
Whether you’re writing games, CLI tools, or designing library APIs, prototyping helps tremendously in finding the best approach before committing to a design. It helps reveal the patterns behind more idiomatic code.
For all its explicitness, Rust is surprisingly ergonomic when iterating on ideas. Contrary to popular belief, it is a joy for building prototypes.
You don’t need to be a Rust expert to be productive - in fact, many of the techniques we’ll discuss specifically help you sidestep Rust’s more advanced features. If you focus on simple patterns and make use of Rust’s excellent tooling, even less experienced Rust developers can quickly bring their ideas to life.
Things you’ll learn
The common narrative goes like this:
When you start writing a program, you don’t know what you want and you change your mind pretty often. Rust pushes back when you change your mind because the type system is very strict. On top of that, getting your idea to compile takes longer than in other languages, so the feedback loop is slower.
I’ve found that developers not yet too familiar with Rust often share this preconception. These developers stumble over the strict type system and the borrow checker while trying to sketch out a solution. They believe that with Rust you’re either at 0% or 100% done (everything works and has no undefined behavior) and there’s nothing in between.
Here are some typical misbeliefs:
These are all common misconceptions and they are not true.
It turns out you can avoid all of these pitfalls and still get a lot of value from prototyping in Rust.
If you’re happy with a scripting language like Python, why bother with Rust?
That’s a fair question! After all, Python is known for its quick feedback loop and dynamic type system, and you can always rewrite the code in Rust later.
Yes, Python is a great choice for prototyping. But I’ve been a Python developer for long enough to know that I’ll very quickly grow out of the “prototype” phase – which is when the language falls apart for me.
One thing I found particularly challenging in Python was hardening my prototype into a robust, production-ready codebase. I’ve found that the really hard bugs in Python are often type-related: deep down in your call chain, the program crashes because you just passed the wrong type to a function. Because of that, I find myself wanting to switch to something more robust as soon as my prototype starts to take shape.
The problem is that switching languages is a huge undertaking – especially mid-project. Maybe you’ll have to maintain two codebases simultaneously for a while. On top of that, Rust follows different idioms than Python, so you might have to rethink the software architecture. And to add insult to injury, you have to change build systems, testing frameworks, and deployment pipelines as well.
Wouldn’t it be nice if you could use a single language for prototyping and production?
Using a single language across your entire project lifecycle is great for productivity. Rust scales from proof-of-concept to production deployment and that eliminates costly context switches and rewrites. Rust’s strong type system catches design flaws early, but we will see how it also provides pragmatic escape hatches if needed. This means prototypes can naturally evolve into production code; even the first version is often production-ready.
But don’t take my word for it. Here’s what Discord had to say about migrating from Go to Rust:
Remarkably, we had only put very basic thought into optimization as the Rust version was written. Even with just basic optimization, Rust was able to outperform the hyper hand-tuned Go version. This is a huge testament to how easy it is to write efficient programs with Rust compared to the deep dive we had to do with Go. – From Why Discord is switching from Go to Rust
If you start with Rust, you get a lot of benefits out of the box: a robust codebase, a strong type system, and built-in linting.
All without having to change languages mid-project! It saves you the context switch between languages once you’re done with the prototype.
Python has a few good traits that we can learn from:
The goal is to get as close to that experience in Rust as possible while staying true to Rust’s core principles. Let’s make changes quick and painless and rapidly iterate on our design without painting ourselves into a corner. (And yes, there will still be a compilation step, but hopefully, a quick one.)
Even while prototyping, the type system is not going away. There are ways to make this a blessing rather than a curse.
Use simple types like `i32`, `String`, and `Vec` in the beginning. We can always make things more complex later if we have to – the reverse is much harder.
Here’s a quick reference for common prototype-to-production type transitions:
| Prototype | Production | When to switch |
|---|---|---|
| `String` | `&str` | When you need to avoid allocations or store string data with a clear lifetime |
| `Vec<T>` | `&[T]` | When the owned vector becomes too expensive to clone or you can’t afford the heap |
| `Box<T>` | `&T` or `&mut T` | When `Box` becomes a bottleneck or you don’t want to deal with heap allocations |
| `Rc<T>` | `&T` | When the reference counting overhead becomes too expensive or you need mutability |
| `Arc<Mutex<T>>` | `&mut T` | When you can guarantee exclusive access and don’t need thread safety |
These owned types sidestep most ownership and lifetime issues, but they do it by allocating memory on the heap - just like Python or JavaScript would.
You can always refactor when you actually need the performance or tighter resource usage, but chances are you won’t.1
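For example, a prototype can simply pass owned values around (the `Event` type and helper function below are made up for illustration):

```rust
// Prototype style: owned String/Vec everywhere, no lifetimes to fight with
#[derive(Clone, Debug)]
struct Event {
    name: String,
    tags: Vec<String>,
}

// Takes and returns owned values; cloning would also be fine at this stage
fn with_tag(events: Vec<Event>, tag: &str) -> Vec<Event> {
    events
        .into_iter()
        .filter(|e| e.tags.iter().any(|t| t == tag))
        .collect()
}

fn main() {
    let events = vec![
        Event { name: "deploy".to_string(), tags: vec!["ci".to_string()] },
        Event { name: "lint".to_string(), tags: vec![] },
    ];
    let ci_events = with_tag(events, "ci");
    assert_eq!(ci_events.len(), 1);
    assert_eq!(ci_events[0].name, "deploy");
}
```

Later, a production version might borrow `&[Event]` and return references instead, per the table above.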
Rust is a statically, strongly typed language. It would be a deal-breaker to write out all the types all the time if it weren’t for Rust’s type inference.
You can often omit (“elide”) the types and let the compiler figure it out from the context.
```rust
let x = 42;            // inferred as i32
let y = "hello";       // inferred as &str
let z = vec![1, 2, 3]; // inferred as Vec<i32>
```
This is a great way to get started quickly and defer the decision about types to later. The system scales well with more complex types, so you can use this technique even in larger projects.
```rust
let x: Vec<i32> = vec![1, 2, 3];
let y: Vec<i32> = vec![4, 5, 6];
// From the context, Rust knows that `z` needs to be a `Vec<i32>`
// The `_` is a placeholder for the type that Rust will infer
let z: Vec<_> = x.into_iter().chain(y).collect();
```
Here’s a more complex example which shows just how powerful Rust’s type inference can be:
```rust
use std::collections::HashMap;

// Start with some nested data (the values are illustrative)
let data = vec![
    ("fruits", vec!["apple", "banana"]),
    ("vegetables", vec!["carrot", "potato"]),
];

// Let Rust figure out this complex transformation
// Can you tell what the type of `categorized` is?
let categorized = data
    .into_iter()
    .flat_map(|(category, items)| items.into_iter().map(move |item| (item, category)))
    .collect::<HashMap<_, _>>();

// categorized is now a HashMap<&str, &str> mapping items to their categories
println!("{categorized:?}");
```
It’s not easy to visualize the structure of `categorized` in your head, but Rust can figure it out.
You probably already know about the Rust Playground. The playground doesn’t support auto-complete, but it’s still great when you’re on the go or you’d like to share your code with others.
I find it quite useful for quickly jotting down a bunch of functions or types to test out a design idea.
Use `unwrap` Liberally

It’s okay to use `unwrap` in the early stages of your project. An explicit `unwrap` is like a stop sign that tells you “here’s something you need to fix later.” You can easily grep for `unwrap` and replace it with proper error handling later when you polish your code. This way, you get the best of both worlds: quick iteration cycles and a clear path to robust error handling.
There’s also a clippy lint that points out all the `unwrap`s in your code.
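If you want that lint switched on project-wide, one way is the `[lints]` table in `Cargo.toml` (available since Rust 1.74; the lint name is `clippy::unwrap_used`):

```toml
# Warn on every `unwrap()` call across the crate
[lints.clippy]
unwrap_used = "warn"
```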
```rust
use std::fs;
use std::path::PathBuf;

// A sketch with liberal unwraps (the file path and env var are illustrative)
fn main() {
    let home = std::env::var("HOME").unwrap();
    let config_path = PathBuf::from(home).join(".config/app.conf");
    let config = fs::read_to_string(&config_path).unwrap();
    println!("{config}");
}
```
See all those unwraps? To more experienced Rustaceans, they stand out like a sore thumb – and that’s a good thing!
Compare that to languages like JavaScript which can throw exceptions your way at any time. It’s much harder to ensure that you handle all the edge-cases correctly. At the very least, it costs time. Time you could spend on more important things.
While prototyping with Rust, you can safely ignore error handling and focus on the happy path without losing track of improvement areas.
Add anyhow to your prototypes

I like to add anyhow pretty early during the prototyping phase to get more fine-grained control over my error handling. This way, I can use bail! and with_context to quickly add more context to my errors without losing momentum. Later on, I can revisit each error case and see if I can handle it more gracefully.
use anyhow::{bail, Context, Result};
use std::env;

fn main() -> Result<()> {
    // Here's how to use `with_context` to add more context to an error
    let home = env::var("HOME")
        .with_context(|| "could not read the HOME environment variable")?;
    println!("home directory: {home}");

    // ...alternatively, use `bail!` to return an error immediately
    let Ok(home) = env::var("HOME") else {
        bail!("HOME is not set");
    };
    println!("home directory: {home}");
    Ok(())
}
The great thing about anyhow is that it’s a solid choice for error handling in production code as well, so you don’t have to rewrite your error handling logic later on.
There is great IDE support for Rust.
IDEs can help you with code completion and refactoring, which keep you in the flow and help you write code faster. Autocompletion is so much better with Rust than with dynamic languages because the type system gives the IDE a lot more information to work with.
As a corollary to the previous section, be sure to enable inlay hints (also called inline type hints) in your editor. This way, you can quickly see the inferred types right inside your IDE and make sure they match your expectations. There’s support for this in most Rust IDEs, including RustRover and Visual Studio Code.
Use bacon for quick feedback cycles

Rust is not a scripting language; there is a compile step! However, for small projects, the compile times are negligible. Without extra tooling, you have to manually run cargo check every time you make a change, or rely on rust-analyzer in your editor for instant feedback. To fill the gap, you can use external tools like bacon, which automatically recompiles and runs your code whenever you make a change. This way, you get almost the same experience as with a REPL in, say, Python or Ruby.
The setup is simple:
# Install bacon
cargo install --locked bacon
# Run bacon in your project directory
bacon
And just like that, you can get some pretty compilation output alongside your code editor.
Oh, and in case you were wondering, cargo-watch was another popular tool for this purpose, but it has since been deprecated.
cargo-script is awesome

Did you know that cargo can also run scripts? For example, put this into a file called script.rs:
#!/usr/bin/env cargo +nightly -Zscript

fn main() {
    println!("Hello from cargo-script!");
}
Now you can make the file executable with chmod +x script.rs and run it with ./script.rs, which will compile and execute your code!
This allows you to quickly test out ideas without having to create a new project.
There is support for dependencies as well.
At the moment, cargo-script is a nightly feature, but it will be released on stable Rust soon. You can read more about it in the RFC.
You have to try really really hard to write slow code in Rust. Use that to your advantage: during the prototype phase, try to keep the code as simple as possible.
I gave a talk titled “The Four Horsemen of Bad Rust Code” where I argue that premature optimization is one of the biggest sins in Rust.
Especially experienced developers coming from C or C++ are tempted to optimize too early.
Rust makes code perform well by default - you get memory safety at virtually zero runtime cost. When developers try to optimize too early, they often run up against the borrow checker by using complex lifetime annotations and intricate reference patterns in pursuit of better performance. This leads to harder-to-maintain code that may not actually run faster.
Resist the urge to optimize too early! You will thank yourself later. 2
Use println! and dbg! for debugging

I find that printing values is pretty handy while prototyping. It’s one less context switch to make compared to starting a debugger. Most people use println! for that, but dbg! has a few advantages:
- It prints the file name and line number of the call, so you can find (and remove) it quickly.
- It prints both the expression and its resulting value.
- It’s less typing than println!; e.g. dbg!(x) vs. println!("{x:?}").

Where dbg! really shines is in recursive functions or when you want to see the intermediate values during an iteration:
fn factorial(n: u32) -> u32 {
    if dbg!(n <= 1) {
        dbg!(1)
    } else {
        dbg!(n * factorial(n - 1))
    }
}

fn main() {
    dbg!(factorial(4));
}
The output is nice and tidy:
[src/main.rs:2:8] n <= 1 = false
[src/main.rs:2:8] n <= 1 = false
[src/main.rs:2:8] n <= 1 = false
[src/main.rs:2:8] n <= 1 = true
[src/main.rs:3:9] 1 = 1
[src/main.rs:5:9] n * factorial(n - 1) = 2
[src/main.rs:5:9] n * factorial(n - 1) = 6
[src/main.rs:5:9] n * factorial(n - 1) = 24
[src/main.rs:10:5] factorial(4) = 24
Note that you should not keep the dbg! calls in your final code, as they will also be executed in release mode. If you’re interested, here are more details on how to use the dbg! macro.
Quite frankly, the type system is one of the main reasons I love Rust. It feels great to express my ideas in types and see them come to life. I would encourage you to heavily lean into the type system during the prototyping phase.
In the beginning, you won’t have a good idea of the types in your system. That’s fine! Start with something, quickly sketch out solutions, and gradually add constraints to model the business requirements. Don’t stop until you find a version that feels just right. You know you’ve found a good abstraction when your types “click” with the rest of the code. 3 Try to build up a vocabulary of concepts and your own types which describe your system.
Wrestling with Rust’s type system might feel slower at first compared to more dynamic languages, but it often leads to fewer iterations overall. Think of it this way: in a language like Python, each iteration might be quicker since you can skip type definitions, but you’ll likely need more iterations as you discover edge cases and invariants that weren’t immediately obvious. In Rust, the type system forces you to think through these relationships up front. Although each iteration takes longer, you typically need fewer of them to arrive at a robust solution.
This is exactly what we’ll see in the following example.
Say you’re modeling course enrollments in a student system. You might start with something simple:
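A minimal sketch of such a starting point (the field names and ID types here are illustrative):

```rust
// A first, naive model: a single boolean tracks enrollment.
// Field names and ID types are illustrative.
struct Enrollment {
    student_id: u32,
    course_id: u32,
    enrolled: bool,
}

fn main() {
    let e = Enrollment {
        student_id: 42,
        course_id: 7,
        enrolled: true,
    };
    println!(
        "student {} in course {}: enrolled={}",
        e.student_id, e.course_id, e.enrolled
    );
}
```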
But then requirements come in: some courses are very popular. More students want to enroll than there are spots available, so the school decides to add a waitlist.
Easy, let’s just add another boolean flag!
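For illustration, a sketch of that second flag:

```rust
// A second flag -- but now nothing stops both from being true at once.
struct Enrollment {
    student_id: u32,
    course_id: u32,
    enrolled: bool,
    waitlisted: bool,
}

fn main() {
    // This compiles fine, but it's a contradictory state:
    let e = Enrollment {
        student_id: 42,
        course_id: 7,
        enrolled: true,
        waitlisted: true,
    };
    println!("enrolled={} waitlisted={}", e.enrolled, e.waitlisted);
}
```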
The problem is that both boolean flags could be set to true! This design allows invalid states where a student could be both enrolled and waitlisted.
Think for a second how we could leverage Rust’s type system to make this impossible…
Here’s one attempt:
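One illustrative sketch uses an enum whose waitlist variant carries the position as a payload:

```rust
// The enum makes "enrolled" and "waitlisted" mutually exclusive,
// and a waitlisted student always carries a position.
enum EnrollmentStatus {
    Enrolled,
    Waitlisted { position: u32 },
}

struct Enrollment {
    student_id: u32,
    course_id: u32,
    status: EnrollmentStatus,
}

fn main() {
    let e = Enrollment {
        student_id: 42,
        course_id: 7,
        status: EnrollmentStatus::Waitlisted { position: 3 },
    };
    match e.status {
        EnrollmentStatus::Enrolled => println!("enrolled"),
        EnrollmentStatus::Waitlisted { position } => println!("waitlisted at #{position}"),
    }
}
```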
Now we have a clear distinction between an active enrollment and a waitlisted enrollment. What’s better is that we encapsulate the details of each state in the enum variants. We can never have someone on the waitlist without a position in said list.
Just think about how much more complicated this would be in a dynamic language or a language that doesn’t support tagged unions like Rust does.
In summary, iterating on your data model is the crucial part of any prototyping phase. The result of this phase is not the code, but a deeper understanding of the problem domain itself. You can harvest this knowledge to build a more robust and maintainable solution.
It turns out you can model a surprisingly large system in just a few lines of code.
So, never be afraid to play around with types and refactor your code as you go.
The todo! Macro

One of the cornerstones of prototyping is that you don’t have to have all the answers right away. In Rust, I find myself reaching for the todo! macro to express that idea. I routinely just scaffold out the functions or a module and then fill in the blanks later.
// We don't know yet how to process the data,
// but we're pretty certain that we need a function
// that takes a Vec<i32> and returns an i32
fn process_data(data: Vec<i32>) -> i32 {
    todo!()
}

// There exists a function that loads the data and returns a Vec<i32>
// How exactly it does that is not important right now
fn load_data() -> Vec<i32> {
    todo!()
}

fn main() {
    let data = load_data();
    let _result = process_data(data);
}
We did not do much here, but we have a clear idea of what the program should do.
Now we can go and iterate on the design.
For example, should process_data take a reference to the data?
Should we create a struct to hold the data and the processing logic?
How about using an iterator instead of a vector?
Should we introduce a trait to support algorithms for processing the data?
These are all helpful questions that we can answer without having to worry about the details of the implementation. And yet our code is type-safe, compiles, and is ready for refactoring.
Use unreachable! for unreachable branches

On a related note, you can use the unreachable! macro to mark branches of your code that should never be reached. This is a great way to document your assumptions about the code. The result is the same as if you had used todo!, but it’s more explicit about the fact that this branch should never be reached:
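Here’s an illustrative sketch (the broomstick scenario is just an example, chosen to match the “Witchcraft!” panic message below):

```rust
fn describe_broomsticks(n: u32) -> &'static str {
    match n {
        // We "know" nobody owns more than 9000 broomsticks...
        0..=9000 => "a plausible number of broomsticks",
        _ => unreachable!("Witchcraft!"),
    }
}

fn main() {
    // Triggering the "impossible" branch panics:
    println!("{}", describe_broomsticks(9001));
}
```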
thread 'main' panicked at src/main.rs:6:18:
internal error: entered unreachable code: Witchcraft!
Note that we added a message to the unreachable! macro to make it clear what the assumption is.
Use assert! for invariants

Another way to document your assumptions is to use the assert! macro. This is especially useful for invariants that should hold true at runtime. For example, the above code could be rewritten like this:
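A sketch of that rewrite (the broomstick check is an illustrative example):

```rust
fn describe_broomsticks(n: u32) -> &'static str {
    // The invariant is checked at runtime; a violation panics with our message.
    assert!(n <= 9000, "Witchcraft!");
    "a plausible number of broomsticks"
}

fn main() {
    println!("{}", describe_broomsticks(3));
}
```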
During prototyping, this can be helpful to catch logic bugs early on without having to write a lot of tests, and you can safely carry these assertions over to your production code.
Consider using debug_assert! for expensive invariant checks that should only run in test/debug builds.
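As a sketch, an expensive check behind debug_assert! runs in debug builds but compiles away with --release (the sorted-input invariant here is illustrative):

```rust
fn binary_search_sketch(haystack: &[i32], needle: i32) -> Option<usize> {
    // Expensive O(n) invariant check: only evaluated in debug builds.
    debug_assert!(
        haystack.windows(2).all(|w| w[0] <= w[1]),
        "input must be sorted"
    );
    haystack.binary_search(&needle).ok()
}

fn main() {
    let data = [1, 3, 5, 7];
    println!("{:?}", binary_search_sketch(&data, 5));
}
```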
Chances are, you won’t know which parts of your application should be generic in the beginning. Therefore it’s better to be conservative and use concrete types instead of generics until necessary.
So instead of writing this:
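As an illustrative sketch, an eagerly generic version might look like this (the function is my own example, not from the original):

```rust
use std::fmt::Debug;

// Generic from day one -- more flexibility than the prototype needs yet.
fn describe_all<T: Debug>(items: &[T]) -> Vec<String> {
    items.iter().map(|item| format!("{item:?}")).collect()
}

fn main() {
    println!("{:?}", describe_all(&[1, 2, 3]));
}
```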
Write this:
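An illustrative concrete version could be as simple as:

```rust
// Concrete first: easy to read, and easy to copy-paste for a second type later.
fn describe_all(items: &[i32]) -> Vec<String> {
    items.iter().map(|item| item.to_string()).collect()
}

fn main() {
    println!("{:?}", describe_all(&[1, 2, 3]));
}
```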
If you need the same function for a different type, feel free to just copy and paste the function and change the type. This way, you avoid the trap of settling on the wrong kind of abstraction too early. Maybe the two functions only differ by type signature for now, but they might serve a completely different purpose. If the function is not generic from the start, it’s easier to remove the duplication later.
Only introduce generics when you see a clear pattern emerge in multiple places. I personally avoid generics up until the very last moment. I want to feel the “pain” of duplicated logic before I abstract it away. In 50% of the cases, I find that the problem is not missing generics, but that there’s a better algorithm or data structure that solves the problem more elegantly.
Also avoid “fancy” generic type signatures:
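A sketch of such a signature (the function name is illustrative):

```rust
// Flexible, but the signature is harder to scan at a glance.
fn set_title(title: impl Into<String>) -> String {
    title.into()
}

fn main() {
    // Accepts both &str and String:
    let a = set_title("draft");
    let b = set_title(String::from("draft"));
    println!("{a} {b}");
}
```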
Yes, this allows you to pass in a &str or a String, but at the cost of readability.
Just use an owned type for your first implementation:
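Sketched with the same kind of illustrative function, that first implementation is plain:

```rust
// First implementation: just take an owned String.
fn set_title(title: String) -> String {
    title
}

fn main() {
    let t = set_title(String::from("draft"));
    println!("{t}");
}
```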
Chances are, you won’t need the flexibility after all.
In summary, generics are powerful, but they can make the code harder to read and write. Avoid them until you have a clear idea of what you’re doing.
One major blocker for rapid prototyping is Rust’s ownership system. If the compiler constantly reminds you of borrows and lifetimes, it can ruin your flow. For example, it’s cumbersome to deal with references when you’re just trying to get something to work.
// First attempt with references - compiler error!
struct Note {
    title: &str,
    content: &str,
}
This code doesn’t compile because the references are not valid outside of the function.
   Compiling playground v0.0.1
error[E0106]: missing lifetime specifier
 --> src/lib.rs:7:26
  |
7 |     title: &str,
  |            ^ expected named lifetime parameter
A simple way around that is to avoid lifetimes altogether. They are not necessary in the beginning. Use owned types like String and Vec, and just .clone() wherever you need to pass data around.
// Much simpler with owned types
struct Note {
    title: String,
    content: String,
}
If you have a type that you need to move between threads (i.e. it needs to be Send), you can use an Arc<Mutex<T>> to get around the borrow checker. If you’re worried about performance, remember that other languages like Python or Java do this implicitly behind your back.
use std::sync::{Arc, Mutex};
use std::thread;

let note = Arc::new(Mutex::new(String::from("My note")));
let note_clone = Arc::clone(&note);
thread::spawn(move || {
    note_clone.lock().unwrap().push_str(" (edited)");
});
If you feel like you have to use Arc<Mutex<T>> too often, there might be a design issue. For example, you might be able to avoid sharing state between threads.
main.rs is your best friend while prototyping. Stuff your code in there – no need for modules or complex organization yet. This makes it easy to experiment and move things around.
Once you have a better feel for your code’s structure, Rust’s mod keyword becomes a handy tool for sketching out potential organization. You can nest modules right in your main file.
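For example, here’s a quick sketch with illustrative module names:

```rust
// Sketching structure with inline modules in main.rs
// (module and function names are illustrative)
mod storage {
    pub fn save(note: &str) -> usize {
        // Pretend to persist the note; return an id for now.
        note.len()
    }
}

mod api {
    pub fn create_note(content: &str) -> usize {
        crate::storage::save(content)
    }
}

fn main() {
    let id = api::create_note("hello");
    println!("created note {id}");
}
```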
This inline module structure lets you quickly test different organizational patterns. You can easily move code between scopes with cut and paste, and experiment with different APIs and naming conventions. Once a particular structure feels right, you can move modules into their own files.
The key is to keep things simple until it calls for more complexity. Start flat, then add structure incrementally as your understanding of the problem grows.
See also Matklad’s article on large Rust workspaces.
Allow yourself to ignore some of the best practices for production code for a while.
It’s possible, but you need to switch off your inner critic who always wants to write perfect code from the beginning. Rust enables you to comfortably defer perfection. You can make the rough edges obvious so that you can sort them out later. Don’t let perfect be the enemy of good.
One of the biggest mistakes I observe is an engineer’s perfectionist instinct to jump on minor details which don’t have a broad enough impact to warrant the effort. It’s better to have a working prototype with a few rough edges than a perfect implementation of a small part of the system.
Remember: you are exploring! Use a coarse brush to paint the landscape first. Try to get into a flow state where you can quickly iterate. Don’t get distracted by the details too early. During this phase, it’s also fine to throw away a lot of failed attempts.
There’s some overlap between prototyping and “easy Rust.”
The beauty of prototyping in Rust is that your “rough drafts” have the same memory safety and performance as polished code.
Even when I liberally use unwrap(), stick everything in main.rs, and reach for owned types everywhere, the resulting code is on par with a Python prototype in reliability, but easily outperforms it.
This makes it perfect for experimenting with real-world workloads, even before investing time in proper error handling.
Let’s see how Rust stacks up against Python for prototyping:
| Aspect | Python | Rust |
|---|---|---|
| Initial Development Speed | ✓ Very quick to write initial code<br>✓ No compilation step<br>✓ Dynamic typing speeds up prototyping<br>✓ File watchers available | ⚠️ Slightly slower initial development<br>✓ Type inference helps<br>✓ Tools like bacon provide quick feedback |
| Standard Library | ✓ Batteries included<br>✓ Rich ecosystem | ❌ Smaller standard library<br>✓ Growing ecosystem of high-quality crates |
| Transition to Production | ❌ Need extensive testing to catch type errors<br>❌ Bad performance might require extra work or a rewrite in another language | ✓ Minimal changes needed beyond error handling<br>✓ Already has good performance<br>✓ Memory safety guaranteed |
| Maintenance | ❌ Type errors surface during runtime<br>❌ Refactoring is risky | ✓ Compiler catches most issues<br>✓ Safe refactoring with type system |
| Code Evolution | ❌ Hard to maintain large codebases<br>❌ Type issues compound | ✓ Compiler guides improvements<br>✓ Types help manage complexity |
Quite frankly, Rust makes for an excellent prototyping language if you embrace its strengths. Yes, the type system will make you think harder about your design up front - but that’s actually a good thing! Each iteration might take a bit longer than in Python or JavaScript, but you’ll typically need fewer iterations from prototype to production.
I’ve found that my prototypes in other languages often hit a wall where I need to switch to something more robust. With Rust, I can start simple and gradually turn that proof-of-concept into production code, all while staying in the same language and ecosystem.
If you have any more tips or tricks for prototyping in Rust, get in touch and I’ll add them to the list!