Blog of Matthias Endler

A Rust developer and open-source maintainer with 20 years of experience in software development.

Season 3 - Finale

2025-02-06 08:00:00

You know the drill by now. It’s time for another recap!

Sit back, get a warm beverage and look back at the highlights of Season 3 with us.

We’ve been at this for a while now (three seasons, one year, and 24 episodes to be exact). We had guests from a wide range of industries: from automotive to CAD software, and from developer tooling to systems programming.

Our focus this time around was on the technical details of Rust in production, especially integration of Rust into existing codebases and ecosystem deep dives. Thanks to everyone who participated in the survey last season, which helped us dial in our content. Let us know if we hit the mark or missed it!

Tips for Faster Rust CI Builds

2025-01-28 08:00:00

I’ve been working with many clients lately who host their Rust projects on GitHub. CI is typically a bottleneck in the development process since it can significantly slow down feedback loops. However, there are several effective ways to speed up your GitHub Actions workflows!

Want a Real-World Example?

Check out this production-ready GitHub Actions workflow that implements all the tips from this article: click here.

Also see Arpad Borsos’ workflow templates for Rust projects.

Use Swatinem’s cache action

This is easily my most important recommendation on this list.

My friend Arpad Borsos, also known as Swatinem, has created a cache action specifically tailored for Rust projects. It’s an excellent way to speed up any Rust CI build and requires no code changes to your project.

name: CI

on: 
  push:
    branches:
      - main
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest

    env:
      CARGO_TERM_COLOR: always

    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable

      # The secret sauce!
      - uses: Swatinem/rust-cache@v2

      - run: |
          cargo check
          cargo test
          cargo build --release

The action requires no additional configuration and works out of the box. There’s no need for a separate step to store the cache — this happens automatically through a post-action. This approach ensures that broken builds aren’t cached, and for subsequent builds, you can save several minutes of build time.

Here’s the documentation where you can learn more.

Use the --locked flag

When running cargo build, cargo test, or cargo check, you can pass the --locked flag to prevent Cargo from updating the Cargo.lock file.

This is particularly useful for CI operations since you save the time to update dependencies. Typically you want to test the exact dependency versions specified in your lock file anyway.

On top of that, it ensures reproducible builds, which is crucial for CI. From the Cargo documentation:

The --locked flag can be used to force Cargo to use the packaged Cargo.lock file if it is available. This may be useful for ensuring reproducible builds, to use the exact same set of dependencies that were available when the package was published.

Here’s how you can use it in your GitHub Actions workflow:

- run: cargo check --locked
- run: cargo test --locked

Use cargo-chef For Docker Builds

For Rust Docker images, cargo-chef can significantly speed up the build process by leveraging Docker’s layer caching:

FROM lukemathwalker/cargo-chef:latest-rust-1 AS chef
WORKDIR /app

FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder 
COPY --from=planner /app/recipe.json recipe.json
# Build dependencies - this is the caching Docker layer!
RUN cargo chef cook --release --recipe-path recipe.json
# Build application
COPY . .
RUN cargo build --release --bin app

# We do not need the Rust toolchain to run the binary!
FROM debian:bookworm-slim AS runtime
WORKDIR /app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]

Alternatively, if you don’t mind a little extra typing, you can write your own Dockerfile without cargo-chef:

FROM rust:1.81-slim-bookworm AS builder

WORKDIR /usr/src/app

# Copy the Cargo files to cache dependencies
COPY Cargo.toml Cargo.lock ./

# Create a dummy main.rs to build dependencies
RUN mkdir src && \
    echo 'fn main() { println!("Dummy") }' > src/main.rs && \
    cargo build --release && \
    rm src/main.rs

# Now copy the actual source code
COPY src ./src

# Build for release
RUN touch src/main.rs && cargo build --release

# Runtime stage
FROM debian:bookworm-slim

# Install minimal runtime dependencies
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*

# Copy the build artifact from the build stage
COPY --from=builder /usr/src/app/target/release/your-app /usr/local/bin/

# Set the startup command to run our binary
CMD ["your-app"]

Environment Flags To Disable Incremental Compilation

Cargo provides an environment variable to disable incremental compilation. While incremental compilation speeds up local development builds, in CI it can actually slow things down due to dependency-tracking overhead, and the extra artifacts hurt caching. So it’s better to switch it off:

name: Build

on:
  pull_request:
  push:
    branches:
      - main

env:
  # Disable incremental compilation for faster from-scratch builds
  CARGO_INCREMENTAL: 0

jobs:
  build:
    runs-on: ... 
    steps:
      ...

Disable Debug Info

While debug info is valuable for debugging, it significantly increases the size of the ./target directory, which can harm caching efficiency. It’s easy to switch off:

env:
  CARGO_PROFILE_TEST_DEBUG: 0

Use cargo nextest

cargo-nextest is a drop-in replacement test runner that executes tests faster than cargo test. While its authors claim up to a 3x speedup, in CI environments I typically observe around 40%, because the runners don’t have as many cores as a developer machine. It’s still a nice speedup.

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - uses: taiki-e/install-action@nextest
      - uses: Swatinem/rust-cache@v2
      - name: Compile
        run: cargo check --locked
      - name: Test
        run: cargo nextest run

Cargo.toml Settings

These release profile settings can significantly improve build times and binary size:

[profile.release]
lto = true
codegen-units = 1
  • LTO (Link Time Optimization) performs optimizations across module boundaries, which can reduce binary size and improve runtime performance.
  • Setting codegen-units = 1 trades parallel compilation for better optimization opportunities. While this might make local builds slower, it often speeds up CI builds by reducing memory pressure on resource-constrained runners.

If you only want to apply these settings in CI, you can use the CARGO_PROFILE_RELEASE_LTO and CARGO_PROFILE_RELEASE_CODEGEN_UNITS environment variables:

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      CARGO_PROFILE_RELEASE_LTO: true
      CARGO_PROFILE_RELEASE_CODEGEN_UNITS: 1
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: cargo build --release --locked

Use Beefier Runners

GitHub Actions has recently announced that Linux ARM64 hosted runners are now available for free in public repositories. Here’s the announcement.

Switching to ARM64 provides up to 40% performance improvement and is straightforward. Simply replace ubuntu-latest with ubuntu-latest-arm64 in your workflow file:

jobs:
  test:
    runs-on: ubuntu-latest-arm64

However, in my tests, the downside was that it took a long time until a runner was allocated to the job. The waiting time dwarfed the actual build time. I assume GitHub will add more runners in the future to mitigate this issue.

If you are using Rust for production workloads, it’s worth looking into dedicated VMs. These are not free, but in comparison to the small GitHub runners, you can get a significant uplift on build times.

Any provider will do, as long as you get a VM with a decent amount of CPU cores (16+ is recommended) and a good amount of RAM (32 GB+). Hetzner Cloud is a popular choice for this purpose because of its competitive pricing; spot instances or server auctions can be a good way to save money.

There are services like Depot, which host runners for you. They promise large speedups for Rust builds, but I haven’t tested them myself.

Automate Dependency Updates

Set up Dependabot or Renovate to automate dependency updates. Instead of manually creating PRs for updates and waiting for CI, these bots handle this automatically, creating PRs that you can merge when ready.

Renovate has a bit of an edge over dependabot in terms of configurability and features.
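For reference, a minimal Dependabot configuration for a Cargo project lives at .github/dependabot.yml; the weekly interval below is just an example, pick whatever cadence suits your team:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "cargo"   # watch Cargo.toml / Cargo.lock
    directory: "/"               # location of the manifest
    schedule:
      interval: "weekly"         # daily / weekly / monthly
```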

Streamline Release Creation

release-plz automates release creation when PRs are merged. This GitHub action eliminates the manual work of creating releases and is highly recommended for maintaining a smooth workflow.

release-plz website screenshot

Optimize Your Rust Code

If you’ve implemented all these optimizations and your builds are still slow, it’s time to optimize the Rust code itself. I’ve compiled many tips in my other blog post here.

Conclusion

Remember that each project is unique. Start with the easier wins like Swatinem’s cache action and --locked flag, then progressively implement more advanced optimizations as needed. Monitor your CI metrics to ensure the changes are having the desired effect.

Need Professional Support?

Is your Rust CI still too slow despite implementing these optimizations? I can help you identify and fix performance bottlenecks in your build pipeline. Book a free consultation to discuss your specific needs.

Hubstaff - From Rails to Rust

2025-01-27 08:00:00

It’s 2019, and Hubstaff’s engineering team is sketching out plans for their new webhook system. The new system needs to handle millions of events and work reliably at scale. The safe choice would be to stick with their trusty Ruby on Rails stack – after all, it had served them well so far. But that’s not the path they chose.

About Hubstaff

Hubstaff helps distributed teams track time and manage their workforce. With 500,000+ active users across 112,000+ businesses, they needed their systems to scale reliably. As a remote-first company with 120 team members, they understand the importance of robust, efficient software.

Why Hubstaff Chose Rust When Rails Was Working Fine

When I sat down with Alex, Hubstaff’s CTO, he painted a vivid picture of that moment. “Our entire application stack was powered by Ruby and JavaScript,” he told me. “It worked, but we knew we needed something different for this new challenge.”

The team stood at a crossroads. Go, with its simplicity and familiar patterns, felt like a safe harbor. But there was another path – one less traveled at the time:

“We chose to proceed with Rust,” Alex recalled. “Not just because it was efficient, but because it would push us to think in fundamentally different ways.”

Alex Yarotsky, CTO at Hubstaff

A Webhook System That Scaled 10x

Fast-forward to today.

The webhook system is processing ten times the initial load without breaking a sweat. Of course, the team had to make adjustments along the way, not to their Rust code, but to their SQL queries.

“Since its launch, we’ve had to optimize SQL queries multiple times to keep up with demand,” Alex shared, “but we’ve never faced any issues with the app’s memory or CPU consumption. Not once.”

Over time, more and more microservices got ported to Rust.

A screenshot of Hubstaff's Webhook System

When to Use Rust And When to Stick With Rails

Instead of going all-in on Rust, Hubstaff found wisdom in balance.

Here’s their reasoning:

  1. High-Load Operations → Rust
  2. Lightweight APIs and Dashboard Backend → Rails
  3. Communication through standardized APIs and message queues

But what about Rust’s infamous learning curve?

“Once developers are up to speed,” Alex noted, “there’s no noticeable slowdown in development. The Rust ecosystem has matured to the point where we’re not constantly reinventing the wheel.”

From Server to Desktop

Once the team gained enough confidence in Rust, they started rewriting their desktop application. This was an area of the business traditionally governed by C++, but the team was already sold on the idea:

The transition to Rust was surprisingly smooth. I think a big reason for that was the collective frustration with our existing C++ codebase. Rust felt like a breath of fresh air, and the idea naturally resonated with the team.

This quote is from Artur Jakubiec, Technical Lead at Hubstaff, who was leading the desktop app migration.

Artur Jakubiec, Technical Lead at Hubstaff

But Rust wasn’t an obvious choice for their desktop app. The easy path would have been Electron – the tried-and-true choice for companies looking to provide a desktop client from their web app. However, Hubstaff had learned to trust that Rust would get the job done.

“Electron simply wasn’t an option,” Artur stated firmly. “We needed something lightweight, something that could bridge our future with our past. That’s why we chose Tauri.”

“It’s still early days for this approach, as we’re currently in the process of migrating our desktop app. However, we’re already envisioning some compelling synergies emerging from this setup. For example, many of the APIs used by our desktop and mobile apps are high-load systems, and following our strategy, they’re slated to be migrated to Rust soon. With the desktop team already familiarizing themselves with Rust during this transition, they’ll be better equipped to make contributions or changes to these APIs, which will reduce reliance on the server team.” added Alex.

The Current Architecture

Today, Hubstaff’s architecture is a mix of Ruby on Rails, Rust, and JavaScript. Their webhooks system, backend services, and desktop app are all powered by Rust and they keep expanding their Rust footprint across the stack for heavy-load operations.

Simplified Overview of Hubstaff's Current Architecture

Was It All Flowers And Sunshine?

Of course, there were moments of doubt. Adding a new language to an already complex tech stack isn’t a decision teams make lightly.

“There was skepticism,” Artur Jakubiec, their Desktop Tech Lead, admitted. “Not about Rust itself, but about balancing our ecosystem.”

But instead of letting doubt win, Artur took action. He spent weeks building prototypes, gathering data, and crafting a vision of what could be. It wasn’t just about convincing management – it was about showing his team a glimpse of the future they could build together.

Especially the build system caused some headaches:

One thing I really wish existed when we started was better C++-Rust integration, not just at the language level but especially in the build systems. Oddly enough, integrating Rust into CMake/C++ workflows (using tools like Corrosion) was relatively straightforward, but going the other way — embedding C++ into Rust—proved much more challenging. A more seamless and standardized approach for bidirectional integration would have saved us a lot of time and effort.

Artur adds:

Of course, challenges remain, particularly in ensuring seamless knowledge transfer and establishing best practices across teams. But the potential for closer collaboration and a unified stack makes this an exciting step forward.

Was It All Worth It?

For developers coming from interpreted languages like Ruby, two main insights stood out from our conversation:

Initially, switching to a compiled language felt like a hustle, but the “aha” moments made it worthwhile. The first came when we realized just how many edge cases the Rust compiler catches for you — it’s like having an additional safety net during development. The second came after deploying Rust applications to production. Seeing how much more resource-efficient the Rust app was compared to its Ruby counterpart was a real eye-opener. It demonstrated the tangible benefits of Rust’s focus on performance, reinforcing why it was worth tackling the learning curve.

But what about the C++ developers who worked on the desktop app? It helped that the team had prior experience with lower-level concepts from C++.

I believe the team’s strong C++ background made the transition to Rust almost seamless. Many of Rust’s more challenging low-level concepts have parallels in C++, such as the memory model, RAII, move semantics, pointers, references, and even aspects of ADTs (achievable in C++ with tools like std::optional and std::variant). Similarly, Rust’s ownership system and concepts like lifetimes echo patterns familiar to anyone experienced in managing resources in C++.

Let’s look at the facts:

  • Desktop developers now contribute to backend services, breaking down old silos.
  • Five years without a single memory-related issue in production.
  • Their C++ developers are on-board with Rust’s safety guarantees as well.
  • Infrastructure costs stayed flat despite 10x growth.

But perhaps the biggest change is confidence in the codebase:

“With C++, there’s a constant sense of paranoia about every line you write,” Artur revealed. “Rust transformed that fear into confidence. It changed not just how we code, but how we feel about coding.” […] “it’s like stepping into a world where you can trust your tools”

On top of that, Alex added that using Rust across the stack has also opened up new collaboration opportunities across the teams.

Artur adds that the onboarding experience has also been smoother than expected:

So far, onboarding hasn’t been an issue at all. Honestly, there’s no secret sauce — it’s all about getting new team members working on the code as soon as possible.

Hubstaff Team
Parts of the Hubstaff Team at their 2022 team offsite in Punta Cana

Where They Are Today

Today, Hubstaff’s journey continues. Their Rust footprint grows steadily: 4,200 lines of mission-critical server code, 2,000 lines in their desktop app, and a team of passionate Rustaceans that’s still growing.

But when I asked Alex and Artur what they’re most proud of, it wasn’t the technical achievements that topped their list. It was how they got there: thoughtfully, methodically, and together as a team.

  • 2019: First steps into Rust - Webhook system prototype. 🌱
  • 2020: Webhook system processes first million events. 🌿
  • 2024: Started desktop app migration to Rust/Tauri. 🪴
  • 2025: Expanding Rust across their entire stack. Public release of new Rust-powered desktop app. 🌳

Key Lessons for Teams Considering Rust

What would Alex and Artur recommend to teams standing at their own crossroads?
Here’s what they shared:

  1. Start with a clear mission, not just a technical preference
  2. Invest in your team’s journey through learning and support
  3. Make data-driven decisions
  4. Build bridges between the old and the new
  5. Look for opportunities for collaboration and knowledge sharing

Thanks

Special thanks to Alex Yarotsky, CTO and Artur Jakubiec, Technical Lead at Hubstaff for sharing their journey with Rust.

Want to learn more about Hubstaff? Check out their website.

Volvo

2025-01-23 08:00:00

The car industry is not known for its rapid adoption of new technologies. Therefore, it’s even more exciting to see a company like Volvo Cars embracing Rust for core components of their software stack.

Thinking in Expressions

2025-01-21 08:00:00

Rust’s focus on expressions is an underrated aspect of the language. Code feels more natural to compose once you embrace expressions as a core mechanic in Rust. I would go as far as to say that expressions shaped the way I think about control flow in general.

“Everything is an expression” is a bit of an exaggeration, but it’s a useful mental model while you internalize the concept.1

But what’s so special about them?

Expressions produce values, statements do not.

The difference between expressions and statements can easily be dismissed as a minor detail. Underneath the surface, though, the fact that expressions return values has a profound impact on the ergonomics of a language.

In Rust, most things produce a value: literals, variables, function calls, blocks, and control flow statements like if, match, and loop. Even & and * are expressions in Rust.
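To make this concrete, here is a small self-contained sketch (the values are purely illustrative, not from any of the article's examples):

```rust
fn main() {
    // A block is an expression; its last line (no semicolon) is its value
    let x = {
        let a = 2;
        a * 3
    };
    assert_eq!(x, 6);

    // Taking a reference (&) and dereferencing (*) are expressions, too
    let y = *&x + 1;
    assert_eq!(y, 7);
}
```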

Expressions In Rust vs other languages

Rust inherits expressions from its functional roots in the ML family of languages; they are not so common in other languages. Go, C++, Java, and TypeScript have them, but they pale in comparison to Rust.

In Go, for example, an if statement is… well, a statement and not an expression. This has some surprising side effects. For example, you can’t assign the result of an if directly to a variable the way you can in Rust:

// This is not valid Go code!
var x = if condition { 1 } else { 2 };

Instead, you’d have to write a full-blown if statement along with a slightly unfortunate upfront variable declaration:

var x int
if condition {
  x = 1
} else {
  x = 2
}

Since if is an expression in Rust, using it in place of a ternary is perfectly normal.

let x = if condition { 1 } else { 2 };

That explains the absence of the ternary operator in Rust (i.e. there is no syntax like x = condition ? 1 : 2;). No special syntax is needed because if is comparably concise.

Also note that in comparison to Go, our variable x does not need to be mutable. As we will see, Rust’s expressions often lead to less mutable code.

In combination with pattern matching, expressions in Rust become even more powerful:

let (a, b) = if condition { ("first", true) } else { ("second", false) };

Here, the left side of the assignment (a, b) is a pattern that destructures the tuple returned by the if-else expression.

What if you deal with more complex control flow? That’s not a problem. match is an expression, too. It is common to assign the result of a match expression to a variable.

let color = match duck {
    Duck::Huey => "Red",
    Duck::Dewey => "Blue",
    Duck::Louie => "Green",
};

Combining match and if Expressions

Let’s say you want to return a duck’s color, but you want to return the correct color based on the year. (In the early Disney comics, the nephews were wearing different colors.)

let color = match duck {
    // In early comic books, the
    // ducks were colored randomly
    _ if year < 1980 => random_color(),
    
    // In the early 80s, Huey's cap was pink
    Duck::Huey if year < 1982 => "Pink",
    
    // Since 1982, the ducks have dedicated colors
    Duck::Huey => "Red",
    Duck::Dewey => "Blue",
    Duck::Louie => "Green",
};

Neat, right? You can combine match and if expressions to create complex logic in a few lines of code.

Note: those ifs are called match arm guards, and they really are full-fledged if expressions. You can put anything in there just like in a regular if.

Lesser known facts about expressions

break is an expression

You can return a value from a loop with break:

let foo = loop { break 1 };
// foo is 1

More commonly, you’d use it like this:

let mut counter = 0;
let result = loop {
    counter += 1;
    if counter == 10 {
        break counter * 2;
    }
};
// result is 20

dbg!() returns the value of the inner expression

You can wrap any expression with dbg!() without changing the behavior of your code (aside from the debug output).

let x = dbg!(compute_complex_value());

Real-World Refactoring With Expressions

So far, I showed you some fancy expression tricks, but how do you apply this in practice?

To illustrate this, imagine you have a Config struct that reads a configuration file from a given path:

/// Configuration for the application
pub struct Config {
    config_path: PathBuf,
}

impl Config {
    /// Creates a new Config with the given path
    ///
    /// The path is resolved against the home directory if relative.
    /// Validates that the path exists and has the correct extension.
    pub fn with_config_path(path: PathBuf) -> Result<Self, std::io::Error> {
        todo!()
    }
}

Here’s how you might implement the with_config_path method in an imperative style:

impl Config {
    pub fn with_config_path(path: PathBuf) -> Result<Self, std::io::Error> {
        // First determine the base path
        let mut config_path;
        if path.is_absolute() {
            config_path = path;
        } else {
            let home = get_home_dir();
            if home.is_none() {
                return Err(io::Error::new(
                    io::ErrorKind::NotFound,
                    "Home directory not found",
                ));
            }
            config_path = home.unwrap().join(path);
        }

        // Do validation
        if !config_path.exists() {
            return Err(io::Error::new(
                io::ErrorKind::NotFound,
                "Config path does not exist",
            ));
        }

        if config_path.is_file() {
            let ext = config_path.extension();
            if ext.is_none() {
                return Err(io::Error::new(
                    io::ErrorKind::InvalidInput,
                    "Config file must have .conf extension",
                ));
            }
            if ext.unwrap().to_str() != Some("conf") {
                return Err(io::Error::new(
                    io::ErrorKind::InvalidInput,
                    "Config file must have .conf extension",
                ));
            }
        }

        return Ok(Self { config_path });
    }
}

There are a few things we can improve here:

  • The code is quite imperative
  • Lots of temporary variables
  • Explicit mutation with mut
  • Nested if statements
  • Manual unwrapping with is_none()/unwrap()

Tip 1: Remove the unwraps

It’s always a good idea to examine unwrap() calls and find safer alternatives. While we “only” have two unwrap() calls here, both point at flaws in our design. Here’s the first one:

let mut config_path;
if path.is_absolute() {
    config_path = path;
} else {
    let home = get_home_dir();
    if home.is_none() {
        return Err(io::Error::new(
            io::ErrorKind::NotFound,
            "Home directory not found",
        ));
    }
    config_path = home.unwrap().join(path);
}

We know that home is not None when we unwrap it, because we checked it right above. But what if we refactor the code? We might forget the check and introduce a bug.

This can be rewritten as:

let config_path = if path.is_absolute() {
    path
} else {
    let home = get_home_dir().ok_or_else(|| io::Error::new(
        io::ErrorKind::NotFound,
        "Home directory not found",
    ))?;
    home.join(path)
};

Or, if we introduce a custom error type:

let config_path = if path.is_absolute() {
    path
} else {
    let home = get_home_dir().ok_or_else(|| ConfigError::HomeDirNotFound)?;
    home.join(path)
};

The other unwrap is also unnecessary and makes the happy path harder to read. Here is the original code:

if config_path.is_file() {
    let ext = config_path.extension();
    if ext.is_none() {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "Config file must have .conf extension",
        ));
    }
    if ext.unwrap().to_str() != Some("conf") {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "Config file must have .conf extension",
        ));
    }
}

We can rewrite this as:

if config_path.is_file() {
    let Some(ext) = config_path.extension() else {
        return Err(...);
    };
    if ext != "conf" {
        return Err(...);
    }
}

Or we return early to avoid the nested if:

if !config_path.is_file() {
    return Err(...);
}

let Some(ext) = config_path.extension() else {
    return Err(...);
}

if ext != "conf" {
    return Err(io::Error::new(...));
}

(Playground)

Tip 2: Remove the muts

Usually, my next step is to get rid of as many mut variables as possible.

Note how there are no more mut keywords after our first refactoring. This is a typical pattern in Rust: often when we get rid of an unwrap(), we can remove a mut as well.

Nevertheless, it is always a good idea to look for all mut variables and see if they are really necessary.

Tip 3: Remove the explicit return statements

The last expression in a block is implicitly returned and that return is an expression itself, so you can often get rid of explicit return statements. In our case that means:

return Ok(Self { config_path });

becomes

Ok(Self { config_path })

Another simple heuristic is to hunt for returns and semicolons in the middle of your code. These are like “seams” in the program: stop signs that break the natural data flow. Removing these blockers often improves the flow almost effortlessly; it’s like magic.

For example, the above validation code can also be written without returns:

match config_path {
    path if !path.is_file() => Err(io::Error::new(
        io::ErrorKind::InvalidInput,
        "Config path is not a file",
    )),
    path if path.extension() != Some(OsStr::new("conf")) => Err(io::Error::new(
        io::ErrorKind::InvalidInput,
        "Config file must have .conf extension",
    )),
    _ => Ok(())
}

I like that, because we avoid one error message duplication and all conditions start on the left. Whether you prefer that over let-else is a matter of taste. 2

Don’t take it too far

Remember when I said “everything is an expression”? Don’t take this too far or people will stop inviting you to dinner parties.

It’s fun to know that you could use then_some, unwrap_or_else, and map_or to chain expressions together, but don’t use them just to show off.

Warning

The below code is correct, but the combinators get in the way of readability. It feels more like a Lisp program than Rust code.

impl Config {
    pub fn with_config_path(path: PathBuf) -> Result<Self, io::Error> {
        (if path.is_absolute() {
            Ok(path)
        } else {
            get_home_dir()
                .ok_or_else(/* error */)
                .map(|home| home.join(path))
        })
        .and_then(|config_path| {
            (!config_path.exists())
                .then_some(Err(/* error */))
                .unwrap_or_else(|| {
                    config_path
                        .is_file()
                        .then(|| {
                            (!config_path
                                .extension()
                                .map_or(false, |ext| ext == "conf"))
                                .then_some(Err(/* error */))
                                .unwrap_or(Ok(()))
                        })
                        .unwrap_or(Ok(()))
                        .map(|_| config_path)
                })
        })
        .map(|config_path| Self { config_path })
    }
}

Keep your friends and colleagues in mind when writing code. Find a balance between expressiveness and readability.

Conclusion

If you find that your code doesn’t feel idiomatic, see if expressions can help. They tend to guide you towards more ergonomic Rust code.

Once you find the right balance, expressions are a joy to use – especially in smaller contexts where data flow is key. The “trifecta” of iterators, expressions, and pattern matching is the foundation of data transformations in Rust. I wrote a complementary article about iterators here.

Of course, it’s not forbidden to mix expressions and statements! For example, I personally like to use let-else statements when it makes my code easier to understand. If you’re unsure about whether using an expression is worth it, seek feedback from someone less familiar with Rust. If they look confused, you probably tried to be too clever.

Now, try to refactor some code to train that muscle.

  1. The Rust Reference puts it like this: “Rust is primarily an expression language. This means that most forms of value-producing or effect-causing evaluation are directed by the uniform syntax category of expressions. Each kind of expression can typically nest within each other kind of expression, and rules for evaluation of expressions involve specifying both the value produced by the expression and the order in which its sub-expressions are themselves evaluated.”

  2. By the way, let-else is not an expression, but a statement. That’s because the else branch doesn’t produce a value. Instead, it moves the “failure” case into the body block, while allowing the “success” case to continue in the surrounding context without additional nesting. I recommend reading the RFC for more details.

Prototyping in Rust

2025-01-15 08:00:00

Programming is an iterative process - as much as we would like to come up with the perfect solution from the start, it rarely works that way.

Good programs often start as quick prototypes. The bad ones stay prototypes, but the best ones evolve into production code.

Whether you’re writing games, CLI tools, or designing library APIs, prototyping helps tremendously in finding the best approach before committing to a design. It helps reveal the patterns behind more idiomatic code.

For all its explicitness, Rust is surprisingly ergonomic when iterating on ideas. Contrary to popular belief, it is a joy for building prototypes.

You don’t need to be a Rust expert to be productive - in fact, many of the techniques we’ll discuss specifically help you sidestep Rust’s more advanced features. If you focus on simple patterns and make use of Rust’s excellent tooling, even less experienced Rust developers can quickly bring their ideas to life.

Things you’ll learn

  • How to prototype rapidly in Rust while keeping its safety guarantees
  • Practical techniques to maintain a quick feedback loop
  • Patterns that help you evolve prototypes into production code

Why People Think Rust Is Not Good For Prototyping

The common narrative goes like this:

When you start writing a program, you don’t know what you want and you change your mind pretty often. Rust pushes back when you change your mind because the type system is very strict. On top of that, getting your idea to compile takes longer than in other languages, so the feedback loop is slower.

I’ve found that developers who aren’t yet familiar with Rust often share this preconception. These developers stumble over the strict type system and the borrow checker while trying to sketch out a solution. They believe that with Rust you’re either at 0% or 100% done (everything works and has no undefined behavior) and there’s nothing in between.

Here are some typical misbeliefs:

  1. “Memory safety and prototyping just don’t go together.”
  2. “Ownership and borrowing take the fun out of prototyping.”
  3. “You have to get all the details right from the beginning.”
  4. “Rust always requires you to handle errors.”

All of these are common misconceptions.

It turns out you can avoid all of these pitfalls and still get a lot of value from prototyping in Rust.

Problems with Prototyping in Other Languages

If you’re happy with a scripting language like Python, why bother with Rust?

That’s a fair question! After all, Python is known for its quick feedback loop and dynamic type system, and you can always rewrite the code in Rust later.

Yes, Python is a great choice for prototyping. But I’ve been a Python developer for long enough to know that I’ll very quickly grow out of the “prototype” phase – which is when the language falls apart for me.

One thing I found particularly challenging in Python was hardening my prototype into a robust, production-ready codebase. I’ve found that the really hard bugs in Python are often type-related: deep down in your call chain, the program crashes because you just passed the wrong type to a function. Because of that, I find myself wanting to switch to something more robust as soon as my prototype starts to take shape.

The problem is that switching languages is a huge undertaking – especially mid-project. Maybe you’ll have to maintain two codebases simultaneously for a while. On top of that, Rust follows different idioms than Python, so you might have to rethink the software architecture. And to add insult to injury, you have to change build systems, testing frameworks, and deployment pipelines as well.

Wouldn’t it be nice if you could use a single language for prototyping and production?

What Makes Rust Great for Prototyping?

Using a single language across your entire project lifecycle is great for productivity. Rust scales from proof-of-concept to production deployment and that eliminates costly context switches and rewrites. Rust’s strong type system catches design flaws early, but we will see how it also provides pragmatic escape hatches if needed. This means prototypes can naturally evolve into production code; even the first version is often production-ready.

But don’t take my word for it. Here’s what Discord had to say about migrating from Go to Rust:

Remarkably, we had only put very basic thought into optimization as the Rust version was written. Even with just basic optimization, Rust was able to outperform the hyper hand-tuned Go version. This is a huge testament to how easy it is to write efficient programs with Rust compared to the deep dive we had to do with Go. – From Why Discord is switching from Go to Rust

What A Solid Rust Prototyping Workflow Looks Like

If you start with Rust, you get a lot of benefits out of the box: a robust codebase, a strong type system, and built-in linting.

All without having to change languages mid-project! It saves you the context switch between languages once you’re done with the prototype.


Python has a few good traits that we can learn from:

  • fast feedback loop
  • changing your mind is easy
  • it’s simple to use (if you ignore the edge cases)
  • very little boilerplate
  • it’s easy to experiment and refactor
  • you can do something useful in just a few lines
  • no compilation step

The goal is to get as close to that experience in Rust as possible while staying true to Rust’s core principles. Let’s make changes quick and painless and rapidly iterate on our design without painting ourselves into a corner. (And yes, there will still be a compilation step, but hopefully, a quick one.)

Tips And Tricks For Prototyping In Rust

Use simple types

Even while prototyping, the type system is not going away. There are ways to make this a blessing rather than a curse.

Use simple types like i32, String, Vec in the beginning. We can always make things more complex later if we have to – the reverse is much harder.

Here’s a quick reference for common prototype-to-production type transitions:

| Prototype | Production | When to switch |
|-----------|------------|----------------|
| String | &str | When you need to avoid allocations or store string data with a clear lifetime |
| Vec&lt;T&gt; | &[T] | When the owned vector becomes too expensive to clone or you can’t afford the heap |
| Box&lt;T&gt; | &T or &mut T | When Box becomes a bottleneck or you don’t want to deal with heap allocations |
| Rc&lt;T&gt; | &T | When the reference counting overhead becomes too expensive or you need mutability |
| Arc&lt;Mutex&lt;T&gt;&gt; | &mut T | When you can guarantee exclusive access and don’t need thread safety |

These owned types sidestep most ownership and lifetime issues, but they do it by allocating memory on the heap - just like Python or JavaScript would.

You can always refactor when you actually need the performance or tighter resource usage, but chances are you won’t.1
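As a small sketch of such a transition (the function is hypothetical), here is the same logic written first with owned prototype types and then with the tighter borrowed signature it might evolve into:

```rust
// Prototype version: owned types everywhere, clone freely.
fn longest_word(words: Vec<String>) -> String {
    words
        .into_iter()
        .max_by_key(|w| w.len())
        .unwrap_or_default()
}

// Later, if profiling shows the allocations matter, borrow instead.
fn longest_word_ref(words: &[String]) -> &str {
    words
        .iter()
        .max_by_key(|w| w.len())
        .map(String::as_str)
        .unwrap_or("")
}

fn main() {
    let words = vec!["hi".to_string(), "hello".to_string()];
    println!("{}", longest_word_ref(&words)); // hello
    println!("{}", longest_word(words));      // hello
}
```

The call sites change, but the logic stays the same – which is exactly why starting with owned types is cheap.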

Make use of type inference

Rust is a statically, strongly typed language. It would be a deal-breaker to write out all the types all the time if it weren’t for Rust’s type inference.

You can often omit (“elide”) the types and let the compiler figure it out from the context.

let x = 42;
let y = "hello";
let z = vec![1, 2, 3];

This is a great way to get started quickly and defer the decision about types to later. The system scales well with more complex types, so you can use this technique even in larger projects.

let x: Vec<i32> = vec![1, 2, 3];
let y: Vec<i32> = vec![4, 5, 6];

// From the context, Rust knows that `z` needs to be a `Vec<i32>`
// The `_` is a placeholder for the type that Rust will infer
let z = x.into_iter().chain(y.into_iter()).collect::<Vec<_>>();

Here’s a more complex example which shows just how powerful Rust’s type inference can be:

use std::collections::HashMap;

// Start with some nested data
let data = vec![
    ("fruits", vec!["apple", "banana"]),
    ("vegetables", vec!["carrot", "potato"]),
];

// Let Rust figure out this complex transformation
// Can you tell what the type of `categorized` is?
let categorized = data
    .into_iter()
    .flat_map(|(category, items)| {
        items.into_iter().map(move |item| (item, category))
    })
    .collect::<HashMap<_, _>>();

// categorized is now a HashMap<&str, &str> mapping items to their categories
println!("What type is banana? {}", categorized.get("banana").unwrap());

(Playground)

It’s not easy to visualize the structure of categorized in your head, but Rust can figure it out.

Use the Rust playground

You probably already know about the Rust Playground. The playground doesn’t support auto-complete, but it’s still great when you’re on the go or you’d like to share your code with others.

I find it quite useful for quickly jotting down a bunch of functions or types to test out a design idea.

Use unwrap Liberally

It’s okay to use unwrap in the early stages of your project. An explicit unwrap is like a stop sign that tells you “here’s something you need to fix later.” You can easily grep for unwrap and replace it with proper error handling later when you polish your code. This way, you get the best of both worlds: quick iteration cycles and a clear path to robust error handling. There’s also a clippy lint that points out all the unwraps in your code.

use std::fs;
use std::path::PathBuf;

fn main() {
    // Quick and dirty path handling during prototyping
    let home = std::env::var("HOME").unwrap();
    let config_path = PathBuf::from(home).join(".config").join("myapp");
    
    // Create config directory if it doesn't exist
    fs::create_dir_all(&config_path).unwrap();
    
    // Read the config file, defaulting to empty string if it doesn't exist
    let config_file = config_path.join("config.json");
    let config_content = fs::read_to_string(&config_file)
        .unwrap_or_default();
    
    // Parse the JSON config
    let config: serde_json::Value = if !config_content.is_empty() {
        serde_json::from_str(&config_content).unwrap()
    } else {
        serde_json::json!({})
    };
    
    println!("Loaded config: {:?}", config);
}

See all those unwraps? To more experienced Rustaceans, they stand out like a sore thumb – and that’s a good thing!

Compare that to languages like JavaScript which can throw exceptions your way at any time. It’s much harder to ensure that you handle all the edge-cases correctly. At the very least, it costs time. Time you could spend on more important things.

While prototyping with Rust, you can safely ignore error handling and focus on the happy path without losing track of improvement areas.

Add anyhow to your prototypes

I like to add anyhow pretty early during the prototyping phase, to get more fine-grained control over my error handling. This way, I can use bail! and with_context to quickly add more context to my errors without losing momentum. Later on, I can revisit each error case and see if I can handle it more gracefully.

use anyhow::{bail, Context, Result};

// Here's how to use `with_context` to add more context to an error
fn home_dir() -> Result<String> {
    let home = std::env::var("HOME")
        .with_context(|| "Could not read HOME environment variable")?;
    Ok(home)
}

// ...alternatively, use `bail!` to return an error immediately
fn home_dir_bail() -> Result<String> {
    let Ok(home) = std::env::var("HOME") else {
        bail!("Could not read HOME environment variable");
    };
    Ok(home)
}

The great thing about anyhow is that it’s a solid choice for error handling in production code as well, so you don’t have to rewrite your error handling logic later on.

Use a good IDE

There is great IDE support for Rust.

IDEs can help you with code completion and refactoring, which keep you in the flow and help you write code faster. Autocompletion is so much better with Rust than with dynamic languages because the type system gives the IDE a lot more information to work with.

As a corollary to the previous section, be sure to enable inlay hints (or inline type hints) in your editor. This way, you can quickly see the inferred types right inside your IDE and make sure the types match your expectations. There’s support for this in most Rust IDEs, including RustRover and Visual Studio Code.

Inlay hints in Rust Rover

Use bacon for quick feedback cycles

Rust is not a scripting language; there is a compile step!

However, for small projects, the compile times are negligible. Unfortunately, you have to manually run cargo check every time you make a change or use rust-analyzer in your editor to get instant feedback.

To fill the gap, you can use external tools like bacon which automatically recompiles and runs your code whenever you make a change. This way, you can get almost the same experience as with a REPL in, say, Python or Ruby.

The setup is simple:

# Install bacon
cargo install --locked bacon

# Run bacon in your project directory
bacon

And just like that, you can get some pretty compilation output alongside your code editor.

bacon

Oh, and in case you were wondering, cargo-watch was another popular tool for this purpose, but it’s since been deprecated.

cargo-script is awesome

Did you know that cargo can also run scripts?

For example, put this into a file called script.rs:

#!/usr/bin/env -S cargo +nightly -Zscript

fn main() {
    println!("Hello prototyping world");
}

Now you can make the file executable with chmod +x script.rs and run it with ./script.rs, which will compile and execute your code! This allows you to quickly test out ideas without having to create a new project. There is support for dependencies as well.

At the moment, cargo-script is a nightly feature, but it will be released soon on stable Rust. You can read more about it in the RFC.

Don’t worry about performance

You have to try really really hard to write slow code in Rust. Use that to your advantage: during the prototype phase, try to keep the code as simple as possible.

I gave a talk titled “The Four Horsemen of Bad Rust Code” where I argue that premature optimization is one of the biggest sins in Rust.

Especially experienced developers coming from C or C++ are tempted to optimize too early.

Rust makes code perform well by default - you get memory safety at virtually zero runtime cost. When developers try to optimize too early, they often fight the borrow checker, reaching for complex lifetime annotations and intricate reference patterns in pursuit of better performance. This leads to harder-to-maintain code that may not actually run faster.

Resist the urge to optimize too early! You will thank yourself later. 2

Use println! and dbg! for debugging

I find that printing values is pretty handy while prototyping. It’s one less context switch to make compared to starting a debugger.

Most people use println! for that, but dbg! has a few advantages:

  • It prints the file name and line number where the macro is called. This helps you quickly find the source of the output.
  • It outputs the expression as well as its value.
  • It’s less syntax-heavy than println!; e.g. dbg!(x) vs. println!("{x:?}").

Where dbg! really shines is in recursive functions or when you want to see the intermediate values during an iteration:

fn factorial(n: u32) -> u32 {
    // `dbg!` returns the argument, 
    // so you can use it in the middle of an expression
    if dbg!(n <= 1) {
        dbg!(1)
    } else {
        dbg!(n * factorial(n - 1))
    }
}

fn main() {
    dbg!(factorial(4));
}

The output is nice and tidy:

[src/main.rs:2:8] n <= 1 = false
[src/main.rs:2:8] n <= 1 = false
[src/main.rs:2:8] n <= 1 = false
[src/main.rs:2:8] n <= 1 = true
[src/main.rs:3:9] 1 = 1
[src/main.rs:7:9] n * factorial(n - 1) = 2
[src/main.rs:7:9] n * factorial(n - 1) = 6
[src/main.rs:7:9] n * factorial(n - 1) = 24
[src/main.rs:9:1] factorial(4) = 24

Note that you should not keep the dbg! calls in your final code as they will also be executed in release mode. If you’re interested, here are more details on how to use the dbg! macro.

Design through types

Quite frankly, the type system is one of the main reasons I love Rust. It feels great to express my ideas in types and see them come to life. I would encourage you to heavily lean into the type system during the prototyping phase.

In the beginning, you won’t have a good idea of the types in your system. That’s fine! Start with something and quickly sketch out solutions and gradually add constraints to model the business requirements. Don’t stop until you find a version that feels just right. You know you’ve found a good abstraction when your types “click” with the rest of the code. 3 Try to build up a vocabulary of concepts and own types which describe your system.

Wrestling with Rust’s type system might feel slower at first compared to more dynamic languages, but it often leads to fewer iterations overall. Think of it this way: in a language like Python, each iteration might be quicker since you can skip type definitions, but you’ll likely need more iterations as you discover edge cases and invariants that weren’t immediately obvious. In Rust, the type system forces you to think through these relationships up front. Although each iteration takes longer, you typically need fewer of them to arrive at a robust solution.

This is exactly what we’ll see in the following example.

Say you’re modeling course enrollments in a student system. You might start with something simple:

struct Enrollment {
    student: StudentId,
    course: CourseId,
    is_enrolled: bool,
}

But then requirements come in: some courses are very popular. More students want to enroll than there are spots available, so the school decides to add a waitlist.

Easy, let’s just add another boolean flag!

struct Enrollment {
    student: StudentId,
    course: CourseId,
    is_enrolled: bool,
    is_waitlisted: bool, // 🚩 uh oh
}

The problem is that both boolean flags could be set to true! This design allows invalid states where a student could be both enrolled and waitlisted.

Think for a second how we could leverage Rust’s type system to make this impossible…

Here’s one attempt:

enum EnrollmentStatus {
    Active {
        date: DateTime<Utc>,
    },
    Waitlisted {
        position: u32,
    },
}

struct Enrollment {
    student: StudentId,
    course: CourseId,
    status: EnrollmentStatus,
}

Now we have a clear distinction between an active enrollment and a waitlisted enrollment. What’s better is that we encapsulate the details of each state in the enum variants. We can never have someone on the waitlist without a position in said list.

Just think about how much more complicated this would be in a dynamic language or a language that doesn’t support tagged unions like Rust does.
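To see how this pays off, here is a sketch of a function consuming the enum. The field types are simplified so the example stands alone (a plain String instead of chrono’s DateTime&lt;Utc&gt;, and simple newtype IDs):

```rust
struct StudentId(u32);
struct CourseId(u32);

enum EnrollmentStatus {
    Active { date: String },
    Waitlisted { position: u32 },
}

struct Enrollment {
    student: StudentId,
    course: CourseId,
    status: EnrollmentStatus,
}

// The match forces us to handle both states explicitly;
// a student can never be enrolled and waitlisted at once.
fn describe(enrollment: &Enrollment) -> String {
    match &enrollment.status {
        EnrollmentStatus::Active { date } => format!("enrolled since {date}"),
        EnrollmentStatus::Waitlisted { position } => {
            format!("waitlisted at position {position}")
        }
    }
}

fn main() {
    let enrollment = Enrollment {
        student: StudentId(1),
        course: CourseId(42),
        status: EnrollmentStatus::Waitlisted { position: 3 },
    };
    println!("{}", describe(&enrollment));
}
```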

In summary, iterating on your data model is the crucial part of any prototyping phase. The result of this phase is not the code, but a deeper understanding of the problem domain itself. You can harvest this knowledge to build a more robust and maintainable solution.

It turns out you can model a surprisingly large system in just a few lines of code.

So, never be afraid to play around with types and refactor your code as you go.

The todo! Macro

One of the cornerstones of prototyping is that you don’t have to have all the answers right away. In Rust, I find myself reaching for the todo! macro to express that idea.

I routinely just scaffold out the functions or a module and then fill in the blanks later.

// We don't know yet how to process the data
// but we're pretty certain that we need a function
// that takes a Vec<i32> and returns an i32
fn process_data(data: Vec<i32>) -> i32 {
    todo!()
}

// There exists a function that loads the data and returns a Vec<i32>
// How exactly it does that is not important right now
fn load_data() -> Vec<i32> {
    todo!()
}

fn main() {
    // Given that we have a function to load the data
    let data = load_data();
    // ... and a function to process it
    let result = process_data(data);
    // ... we can print the result
    println!("Result: {}", result);
}

We did not do much here, but we have a clear idea of what the program should do. Now we can go and iterate on the design. For example, should process_data take a reference to the data? Should we create a struct to hold the data and the processing logic? How about using an iterator instead of a vector? Should we introduce a trait to support algorithms for processing the data?

These are all helpful questions that we can answer without having to worry about the details of the implementation. And yet our code is typesafe and compiles, and it is ready for refactoring.

unreachable! for unreachable branches

On a related note, you can use the unreachable! macro to mark branches of your code that should never be reached.

fn main() {
    let age: u8 = 170;
    
    match age {
        0..150 => println!("Normal human age"),
        150.. => unreachable!("Witchcraft!"),
    }
}

This is a great way to document your assumptions about the code. The result is the same as if you had used todo!, but it’s more explicit about the fact that this branch should never be reached:

thread 'main' panicked at src/main.rs:6:18:
internal error: entered unreachable code: Witchcraft!

Note that we added a message to the unreachable! macro to make it clear what the assumption is.

Use assert! for invariants

Another way to document your assumptions is to use the assert! macro. This is especially useful for invariants that should hold true at runtime.

For example, the above code could be rewritten like this:

fn main() {
    let age: u8 = 170;
    
    assert!(age < 150, "This is very unlikely to be a human age");
    
    println!("Normal human age");
}

During prototyping, this can be helpful to catch logic bugs early on without having to write a lot of tests and you can safely carry them over to your production code.

Consider using debug_assert! for expensive invariant checks that should only run in test/debug builds.
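For example, an expensive precondition check might look like this (the helper and function names are made up for this sketch):

```rust
// O(n) helper - too expensive to run on every call in release builds.
fn is_sorted(v: &[i32]) -> bool {
    v.windows(2).all(|w| w[0] <= w[1])
}

fn binary_search_prototype(v: &[i32], target: i32) -> Option<usize> {
    // Checked in debug/test builds, compiled out with --release.
    debug_assert!(is_sorted(v), "binary search requires sorted input");
    v.binary_search(&target).ok()
}

fn main() {
    let data = vec![1, 3, 5, 7];
    println!("{:?}", binary_search_prototype(&data, 5)); // Some(2)
}
```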

Avoid generics

Chances are, you won’t know which parts of your application should be generic in the beginning. Therefore it’s better to be conservative and use concrete types instead of generics until necessary.

So instead of writing this:

fn foo<T>(x: T) -> T {
    // ...
}

Write this:

fn foo(x: i32) -> i32 {
    // ...
}

If you need the same function for a different type, feel free to just copy and paste the function and change the type. This way, you avoid the trap of settling on the wrong kind of abstraction too early. Maybe the two functions only differ by type signature for now, but they might serve a completely different purpose. If the function is not generic from the start, it’s easier to remove the duplication later.

Only introduce generics when you see a clear pattern emerge in multiple places. I personally avoid generics up until the very last moment. I want to feel the “pain” of duplicated logic before I abstract it away. In 50% of the cases, I find that the problem is not missing generics, but that there’s a better algorithm or data structure that solves the problem more elegantly.

Also avoid “fancy” generic type signatures:

fn foo<T: AsRef<str>>(x: T) -> String {
    // ...
}

Yes, this allows you to pass in a &str or a String, but at the cost of readability.

Just use an owned type for your first implementation:

fn foo(x: String) -> String {
    // ...
}

Chances are, you won’t need the flexibility after all.

In summary, generics are powerful, but they can make the code harder to read and write. Avoid them until you have a clear idea of what you’re doing.

Avoid Lifetimes

One major blocker for rapid prototyping is Rust’s ownership system. If the compiler constantly reminds you of borrows and lifetimes it can ruin your flow. For example, it’s cumbersome to deal with references when you’re just trying to get something to work.

// First attempt with references - compiler error!
struct Note<'a> {
    title: &'a str,
    content: &'a str,
}

fn create_note() -> Note<'_> {  // ❌ lifetime error
    let title = String::from("Draft");
    let content = String::from("My first note");
    Note {
        title: &title,
        content: &content
    }
}

This code doesn’t compile because the references are not valid outside of the function.

   Compiling playground v0.0.1 (/playground)
error[E0106]: missing lifetime specifier
 --> src/lib.rs:7:26
  |
7 | fn create_note() -> Note<'_> {  // ❌ lifetime error
  |                          ^^ expected named lifetime parameter
  |
  = help: this function's return type contains a borrowed value, but there is no value for it to be borrowed from
help: consider using the `'static` lifetime, but this is uncommon unless you're returning a borrowed value from a `const` or a `static`, or if you will only have owned values
  |

(Playground)

A simple way around that is to avoid lifetimes altogether. They are not necessary in the beginning. Use owned types like String and Vec. Just .clone() wherever you need to pass data around.

// Much simpler with owned types
struct Note {
    title: String,
    content: String,
}

fn create_note() -> Note {  // ✓ just works
    Note {
        title: String::from("Draft"),
        content: String::from("My first note")
    }
}

If you have a type that you need to move between threads (i.e. it needs to be Send), you can use an Arc<Mutex<T>> to get around the borrow checker. If you’re worried about performance, remember that other languages like Python or Java do this implicitly behind your back.

use std::sync::{Arc, Mutex};
use std::thread;

let note = Arc::new(Mutex::new(Note {
    title: String::from("Draft"),
    content: String::from("My first note")
}));

let note_clone = Arc::clone(&note);
thread::spawn(move || {
    let mut note = note_clone.lock().unwrap();
    note.content.push_str(" with additions");
});

(Playground)

If you feel like you have to use Arc<Mutex<T>> too often, there might be a design issue. For example, you might be able to avoid sharing state between threads.
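For example, message passing often removes the need for shared state entirely. Here is a minimal sketch using a standard-library channel instead of Arc&lt;Mutex&lt;T&gt;&gt;:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // The worker thread owns its data exclusively - no Mutex needed.
    let handle = thread::spawn(move || {
        let mut content = String::from("My first note");
        content.push_str(" with additions");
        tx.send(content).unwrap();
    });

    // Ownership transfers back through the channel.
    let note = rx.recv().unwrap();
    handle.join().unwrap();
    println!("{note}"); // My first note with additions
}
```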

Keep a flat hierarchy

main.rs is your best friend while prototyping.

Stuff your code in there – no need for modules or complex organization yet. This makes it easy to experiment and move things around.

First draft: everything in main.rs

struct Config {
    port: u16,
}
fn load_config() -> Config {
    Config { port: 8080 }
}
struct Server {
    config: Config,
}
impl Server {
    fn new(config: Config) -> Self {
        Server { config }
    }
    fn start(&self) {
        println!("Starting server on port {}", self.config.port);
    }
}
fn main() {
    let config = load_config();
    let server = Server::new(config);
    server.start();
}

Once you have a better feel for your code’s structure, Rust’s mod keyword becomes a handy tool for sketching out potential organization. You can nest modules right in your main file.

Later: experiment with module structure in the same file

mod config {
    pub struct Config {
        pub port: u16,
    }
    pub fn load() -> Config {
        Config { port: 8080 }
    }
}

mod server {
    use crate::config;
    pub struct Server {
        config: config::Config,
    }
    impl Server {
        pub fn new(config: config::Config) -> Self {
            Server { config }
        }
        pub fn start(&self) {
            println!("Starting server on port {}", self.config.port);
        }
    }
}

This inline module structure lets you quickly test different organizational patterns. You can easily move code between scopes with cut and paste, and experiment with different APIs and naming conventions. Once a particular structure feels right, you can move modules into their own files.

The key is to keep things simple until it calls for more complexity. Start flat, then add structure incrementally as your understanding of the problem grows.

See also Matklad’s article on large Rust workspaces.

Start small

Allow yourself to ignore some of the best practices for production code for a while.

It’s possible, but you need to switch off your inner critic who always wants to write perfect code from the beginning. Rust enables you to comfortably defer perfection. You can make the rough edges obvious so that you can sort them out later. Don’t let perfect be the enemy of good.

One of the biggest mistakes I observe is an engineer’s perfectionist instinct to jump on minor details which don’t have a broad enough impact to warrant the effort. It’s better to have a working prototype with a few rough edges than a perfect implementation of a small part of the system.

Remember: you are exploring! Use a coarse brush to paint the landscape first. Try to get into a flow state where you can quickly iterate. Don’t get distracted by the details too early. During this phase, it’s also fine to throw away a lot of failed attempts.

There’s some overlap between prototyping and “easy Rust.”

Summary

The beauty of prototyping in Rust is that your “rough drafts” have the same memory safety and performance as polished code. Even when I liberally use unwrap(), stick everything in main.rs, and reach for owned types everywhere, the resulting code is on par with a Python prototype in reliability, but outperforms it easily. This makes it perfect for experimenting with real-world workloads, even before investing time in proper error handling.

Let’s see how Rust stacks up against Python for prototyping:

| Aspect | Python | Rust |
|--------|--------|------|
| Initial Development Speed | ✓ Very quick to write initial code; ✓ No compilation step; ✓ Dynamic typing speeds up prototyping; ✓ File watchers available | ⚠️ Slightly slower initial development; ✓ Type inference helps; ✓ Tools like bacon provide quick feedback |
| Standard Library | ✓ Batteries included; ✓ Rich ecosystem | ❌ Smaller standard library; ✓ Growing ecosystem of high-quality crates |
| Transition to Production | ❌ Need extensive testing to catch type errors; ❌ Bad performance might require extra work or a rewrite in another language | ✓ Minimal changes needed beyond error handling; ✓ Already has good performance; ✓ Memory safety guaranteed |
| Maintenance | ❌ Type errors surface during runtime; ❌ Refactoring is risky | ✓ Compiler catches most issues; ✓ Safe refactoring with type system |
| Code Evolution | ❌ Hard to maintain large codebases; ❌ Type issues compound | ✓ Compiler guides improvements; ✓ Types help manage complexity |

Quite frankly, Rust makes for an excellent prototyping language if you embrace its strengths. Yes, the type system will make you think harder about your design up front - but that’s actually a good thing! Each iteration might take a bit longer than in Python or JavaScript, but you’ll typically need fewer iterations from prototype to production.

I’ve found that my prototypes in other languages often hit a wall where I need to switch to something more robust. With Rust, I can start simple and gradually turn that proof-of-concept into production code, all while staying in the same language and ecosystem.

If you have any more tips or tricks for prototyping in Rust, get in touch and I’ll add them to the list!

  1. More experienced Rust developers might find themselves reaching for an impl IntoIterator<Item=T> where &[T]/Vec<T> would do. Keep it simple!

  2. In the talk, I show an example where early over-optimization led to the wrong abstraction and made the code slower. The actual bottleneck was elsewhere and hard to uncover without profiling.

  3. I usually know when I found a good abstraction once I can use all of Rust’s features like expression-oriented programming and pattern matching together with my own types.