
Python to Rust for Performance-Critical Code: Where It Makes Sense (and Where It Doesn't)

Rust is fast. Python is not. Migration solved, right?

Not quite. The Python-to-Rust conversation has gotten louder since Discord's blog post about moving their Read States service from Python to Rust and dropping tail latencies from 60ms to 5ms. But Discord's situation was specific: a CPU-bound service with tight latency requirements and a small, well-defined surface area. Most Python codebases don't look like that.

Before you start rewriting anything, you need to answer one question honestly: what's actually slow?

Profile First, Migrate Never (Maybe)

Most Python performance problems aren't Python problems. They're I/O problems, database query problems, or algorithmic problems wearing Python's clothes.

# This is slow because of the N+1 query, not because of Python
# (the f-string interpolation is also a SQL injection risk)
for user in users:
    orders = db.query(f"SELECT * FROM orders WHERE user_id = {user.id}")
    process(orders)

Moving this to Rust gives you a faster loop around the same slow database calls. Batching the query in Python alone would likely give you the 10x improvement you were hoping a rewrite would deliver.
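A hedged sketch of the fix, using sqlite3 in place of whatever database layer the original snippet wraps; the schema and the `process` step are illustrative assumptions, not part of the original code:

```python
import sqlite3
from collections import defaultdict

def process(orders):
    # stand-in for the original per-user processing
    return sum(amount for _, amount in orders)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL);
    INSERT INTO orders (user_id, amount) VALUES (1, 10.0), (1, 20.0), (2, 5.0);
""")

user_ids = [1, 2, 3]

# One parameterized query replaces len(user_ids) round trips
placeholders = ",".join("?" for _ in user_ids)
rows = conn.execute(
    f"SELECT user_id, id, amount FROM orders WHERE user_id IN ({placeholders})",
    user_ids,
).fetchall()

# Group in memory, then process per user exactly as before
orders_by_user = defaultdict(list)
for user_id, order_id, amount in rows:
    orders_by_user[user_id].append((order_id, amount))

totals = {uid: process(orders_by_user[uid]) for uid in user_ids}
print(totals)
```

Same result, one round trip. No Rust required.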

Run py-spy, cProfile, or scalene before you write a single line of Rust. You're looking for CPU-bound hotspots: tight numerical loops, data serialization, image processing, parsing. If the flame graph shows 80% of time in socket.recv() or cursor.execute(), Rust won't help.
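py-spy and scalene attach to a live process, but the stdlib's cProfile works inline when you want a quick answer. A minimal harness (the `hot_loop` function is a placeholder for your suspect code path):

```python
import cProfile
import io
import pstats

def hot_loop():
    # deliberately CPU-bound work, standing in for your suspect hot path
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
hot_loop()
profiler.disable()

# Render the top entries sorted by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)
```

If the top of that report is your own computation, Rust is on the table. If it's driver or socket code, it isn't.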

What Translates Cleanly

Pure computation. Number crunching, data transformation pipelines, parsing — anything where the logic is input-in, output-out with minimal external dependencies. Pydantic's decision to rewrite their core validation in Rust (pydantic-core) is the canonical example: a CPU-bound hot path with a clean interface boundary.

Data processing pipelines. Polars, a DataFrame library built on a Rust core, routinely outperforms Pandas on exactly these workloads. If you're building data processing that needs to handle millions of records with low latency, Rust is genuinely transformative.

# Python hot path — candidate for Rust
def compute_risk_scores(
    positions: list[Position], risk_factors: list[Factor]
) -> list[float]:
    return [
        sum(p.weight * factor.value for factor in risk_factors)
        / p.notional
        for p in positions
    ]

// Rust equivalent — 10-50x faster for large position lists
pub fn compute_risk_scores(positions: &[Position], risk_factors: &[Factor]) -> Vec<f64> {
    positions.iter().map(|p| {
        risk_factors.iter()
            .map(|f| p.weight * f.value)
            .sum::<f64>() / p.notional
    }).collect()
}
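Before porting a hot path like this, it's worth timing the Python version to confirm it actually dominates the profile. A quick timeit harness; the `Position` and `Factor` dataclasses here are minimal stand-ins for whatever the real types look like:

```python
import timeit
from dataclasses import dataclass

@dataclass
class Position:
    weight: float
    notional: float

@dataclass
class Factor:
    value: float

positions = [Position(weight=1.5, notional=100.0) for _ in range(10_000)]
risk_factors = [Factor(value=0.01 * i) for i in range(20)]

def compute_risk_scores(positions, risk_factors):
    return [
        sum(p.weight * f.value for f in risk_factors) / p.notional
        for p in positions
    ]

scores = compute_risk_scores(positions, risk_factors)
elapsed = timeit.timeit(
    lambda: compute_risk_scores(positions, risk_factors), number=10
)
print(f"10 runs: {elapsed:.3f}s")
```

If ten runs take milliseconds, the 10-50x speedup buys you nothing a user would notice. If they take seconds on production-sized inputs, you have your candidate.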

What Requires Fundamental Redesign

Python's duck typing → Rust's ownership model. This isn't a syntax swap. Python objects are reference-counted heap allocations that can be shared freely. Rust's ownership and borrowing rules force you to rethink data flow. Code that casually passes mutable references around in Python needs architectural changes in Rust.

Async models. Python's asyncio and Rust's tokio are conceptually similar but structurally different. asyncio runs all coroutines on a single thread by default; tokio's default runtime schedules tasks across a multi-threaded, work-stealing pool. The conversion isn't mechanical; it's an architectural decision about shared state and synchronization.
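The single-threaded half of that claim is easy to see from Python. Every coroutine scheduled on the default event loop reports the same thread, so asyncio concurrency is cooperative rather than parallel:

```python
import asyncio
import threading

async def which_thread():
    # yield to the event loop, then report where we actually ran
    await asyncio.sleep(0)
    return threading.current_thread().name

async def main():
    # 100 "concurrent" coroutines, all multiplexed on one thread
    return await asyncio.gather(*(which_thread() for _ in range(100)))

names = asyncio.run(main())
print(set(names))
```

Under tokio's default runtime, the equivalent experiment would report several worker threads, which is exactly why shared mutable state that was safe in asyncio needs `Send`/`Sync` discipline in Rust.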

The Hybrid Approach: PyO3 and FFI

For most teams, full conversion isn't the right call. The sweet spot is keeping Python for the application layer and dropping to Rust for performance-critical modules via PyO3:

use pyo3::prelude::*;

#[pyfunction]
fn fast_compute(data: Vec<f64>) -> PyResult<Vec<f64>> {
    Ok(data.iter().map(|x| x.powi(2) + x.sqrt()).collect())
}

// Register the function so Python can `import fast_module`
#[pymodule]
fn fast_module(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(fast_compute, m)?)
}

This gives you Rust's performance where it matters and Python's productivity everywhere else. It's the pattern behind pydantic-core and Polars' Python bindings; tools like ruff (a Python linter) and uv (a package manager) went a step further and are written entirely in Rust, exposed to Python users as prebuilt binaries.

When Full Conversion Makes Sense

Full Python → Rust conversion is justified when: the entire service is CPU-bound (not just a hot path), latency requirements are strict (sub-millisecond), you need predictable performance without GC pauses, or the service is small and self-contained. Dropbox's rewrite of its sync engine in Rust is the best-known example of a full conversion paying off.

AI-assisted conversion can generate a compilable Rust skeleton from Python source, saving days of boilerplate. But be honest about the state of automated tooling here — Python → Rust is an experimental conversion pair because the semantic gap is wide. Use the output as a starting point, not a finished product. B&G CodeFoundry rates this pair at quality level 1 (experimental) and generates a working skeleton with automated syntax verification.

The right answer for most teams: profile, identify the hot path, rewrite that module in Rust with PyO3, and keep the rest in Python.


References: Discord's Rust migration blog post; Dropbox's sync engine rewrite; pydantic-core architecture; Polars documentation; PyO3 project.