Mar 2026 • 5 min read
Rewriting the Silver Searcher in Rust
A first benchmark pass on a Rust rewrite of ag: roughly 2× faster than ag on one measured workload, but still behind rg and ugrep.
Rewrite study
First benchmark pass
First pass on the literal-simple workload. rust-ag roughly halves ag's median runtime, but ripgrep and ugrep still lead.
ag → rust-ag
1.96×–2.03× faster
Parity
8/8 smoke scenarios
Relative rank
Still behind rg and ugrep
Coverage
1 of 38 scenarios
Claim gate
Fails on reproducibility
- On the only measured performance scenario, rust-ag cut the local median from 19.64 ms to 9.69 ms.
- rg and ugrep were still faster on the same workload, at 7.74 ms and 4.02 ms median.
- This is a narrow result: one workload, three measured samples per tool, one Apple M4 machine, and an overall claim gate that still fails on reproducibility bundle validation.
I rewrote the core search path of The Silver Searcher in Rust and then measured it against the original tool. The first result is real: on the only measured performance workload, rust-ag was roughly 2× faster than ag.
That is the honest headline, but not the whole story. rg and ugrep were still faster, the performance data covers only 1 of 38 registered scenarios, and the overall claim gate still returned fail because the reproducibility bundle validation step is not finished. So this is a promising first result, not a victory lap.
The result in one view
The measured workload was the literal-simple scenario: search for "foo" across the repository working tree.
| Tool | Local median | Relative to ag | What to keep in mind |
|---|---|---|---|
| ag | 19.64 ms | 1.00× | Baseline |
| rust-ag | 9.69 ms | 2.03× faster | Apples-to-apples comparison target |
| rg | 7.74 ms | 2.54× faster | Faster here, but it searches a different file set by default |
| ugrep | 4.02 ms | 4.89× faster | Fastest here, also with different default traversal semantics |
The key comparison is ag versus rust-ag. That pair was checked for output parity. The cross-tool numbers are still useful context, but they are not the same kind of comparison because rg and ugrep make different default choices about traversal and ignore handling.
What was actually measured
This first pass is intentionally narrow.
- Scenario: literal-simple
- Query: search for "foo"
- Corpus: the repository working tree
- Sampling: 1 warmup iteration and 3 measured samples per tool
- Environment: Apple M4, 16 GiB RAM, macOS Darwin 25.3.0
- Statistics: median runtime, IQR, and 95% bootstrap percentile confidence intervals
- Bias control: tools ran in interleaved-random order
That setup is enough to say something useful about this workload. It is not enough to claim a universal ranking for every kind of search.
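The summary statistics named above can be sketched in a few lines. This is a minimal illustration, not the actual harness code: the function names are mine, and the three sample values are hypothetical numbers chosen to center on the reported 19.64 ms median.

```rust
// Sketch of the reported statistics: median, IQR, and a 95%
// percentile-bootstrap confidence interval for the median.
fn median(samples: &[f64]) -> f64 {
    let mut s = samples.to_vec();
    s.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let n = s.len();
    if n % 2 == 1 { s[n / 2] } else { (s[n / 2 - 1] + s[n / 2]) / 2.0 }
}

fn iqr(samples: &[f64]) -> f64 {
    let mut s = samples.to_vec();
    s.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Nearest-rank quartiles; a real harness would likely interpolate.
    let q = |p: f64| s[((s.len() - 1) as f64 * p).round() as usize];
    q(0.75) - q(0.25)
}

/// Percentile bootstrap: resample with replacement, recompute the
/// median each time, and take the 2.5th and 97.5th percentiles of
/// the resampled medians.
fn bootstrap_ci_95(samples: &[f64], reps: usize) -> (f64, f64) {
    let mut rng: u64 = 0x9E3779B97F4A7C15; // fixed seed: deterministic sketch
    let mut medians = Vec::with_capacity(reps);
    for _ in 0..reps {
        let resample: Vec<f64> = (0..samples.len())
            .map(|_| {
                // xorshift64: a tiny stand-in for a real RNG crate
                rng ^= rng << 13;
                rng ^= rng >> 7;
                rng ^= rng << 17;
                samples[(rng % samples.len() as u64) as usize]
            })
            .collect();
        medians.push(median(&resample));
    }
    medians.sort_by(|a, b| a.partial_cmp(b).unwrap());
    (medians[(reps as f64 * 0.025) as usize],
     medians[(reps as f64 * 0.975) as usize])
}

fn main() {
    // Hypothetical: three measured samples, as in the first pass.
    let ag = [19.41, 19.64, 20.02];
    let (lo, hi) = bootstrap_ci_95(&ag, 1000);
    println!("median={:.2} ms, IQR={:.2} ms, 95% CI=({:.2}, {:.2})",
             median(&ag), iqr(&ag), lo, hi);
}
```

With only three samples per tool, the bootstrap interval is necessarily coarse, which is part of why the variance caveat below matters.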
Correctness came first
Before looking at speed, I checked whether the rewrite still behaves like ag on the smoke scenarios that matter most for day-to-day use.
rust-ag matched ag across all 8 smoke scenarios that were checked:
- literal simple
- literal word
- literal no match
- regex simple
- literal nocase
- context before/after
- count matches
- files with matches
That matters because it makes the ag versus rust-ag speedup meaningful. rg and ugrep landed in different output clusters during the correctness gate, which is expected: their default traversal and ignore rules select different file sets.
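A parity check of this shape boils down to normalizing each tool's output and comparing exactly. The sketch below is hypothetical (the function names and sample outputs are mine, not the actual gate); it assumes order-insensitive comparison, since ag and rust-ag may visit files in different orders.

```rust
use std::collections::BTreeSet;

/// Normalize a tool's output so file-visit order does not matter:
/// one entry per line, trailing whitespace dropped, set-compared.
/// A real gate would also normalize paths and handle duplicates.
fn normalize(output: &str) -> BTreeSet<String> {
    output.lines().map(|l| l.trim_end().to_string()).collect()
}

/// Two tools land in the same output cluster when their normalized
/// outputs are identical.
fn outputs_match(a: &str, b: &str) -> bool {
    normalize(a) == normalize(b)
}

fn main() {
    // Hypothetical outputs: same matches, different traversal order.
    let ag_out = "src/main.c:12:foo\nsrc/util.c:3:foo\n";
    let rust_ag_out = "src/util.c:3:foo\nsrc/main.c:12:foo\n";
    assert!(outputs_match(ag_out, rust_ag_out));
    println!("parity: ok");
}
```

Under this kind of comparison, rg and ugrep naturally fall into separate clusters: their normalized outputs contain different file sets, not just a different ordering.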
Speedup ranges across local, nightly, and manual runs
The 2× result was not a one-off. Across the three run types, rust-ag stayed between 1.96× and 2.03× faster than ag.
| Pair | Local | Nightly | Manual |
|---|---|---|---|
| rust-ag vs ag | 2.03× | 1.96× | 1.96× |
| rg vs ag | 2.54× | 2.45× | 2.48× |
| ugrep vs ag | 4.89× | 4.76× | 4.95× |
| rg vs rust-ag | 1.25× | 1.25× | 1.27× |
| ugrep vs rust-ag | 2.41× | 2.43× | 2.52× |
The improvement over ag looks stable inside this test window. The ranking also stayed stable: ugrep → rg → rust-ag → ag.
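For the local column, each ratio is just one median divided by another. A quick sketch with the local medians from the results table shows how the 2.03× figure falls out (the helper name here is illustrative):

```rust
/// Speedup of `fast` relative to `slow`, formatted to two decimals.
fn speedup(slow_ms: f64, fast_ms: f64) -> String {
    format!("{:.2}×", slow_ms / fast_ms)
}

fn main() {
    // Local medians from the results table, in milliseconds.
    let (ag, rust_ag, rg, ugrep) = (19.64, 9.69, 7.74, 4.02);
    println!("rust-ag vs ag:    {}", speedup(ag, rust_ag));
    println!("rg vs rust-ag:    {}", speedup(rust_ag, rg));
    println!("ugrep vs rust-ag: {}", speedup(rust_ag, ugrep));
}
```

The nightly and manual columns come from independent runs with their own medians, which is why the rust-ag ratio drifts between 1.96× and 2.03× rather than repeating exactly.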
Scope limits that matter
The right way to read this study is: the rewrite looks promising, and the limits are real.
- Only 1 of 38 registered scenarios has measured performance data. A regex-heavy or output-heavy workload could change the shape of the ranking.
- Each tool only has 3 measured samples after 1 warmup. That is enough for a first read, but not enough to stop worrying about variance.
- Everything ran on one Apple M4 machine. Linux, x86_64, or a different filesystem could move the numbers.
- The overall claim gate still failed. The parity checks, threshold checks, and cross-run agreement passed individually, but the full gate stayed red because the reproducibility bundle validation step is still incomplete.
That last point matters. I do not want to quietly skip it just because the speedup looks good.
Bottom line
This rewrite clears a useful bar: on the measured workload, it preserves ag's behavior in the smoke checks and cuts median runtime roughly in half.
It does not clear the stronger bar of being the fastest tool in the comparison set. rg and ugrep are still ahead, and this benchmark window is much too small to pretend otherwise.
That is why this is worth publishing as a rewrite study: the interesting result is not just that Rust helped, but that the evidence still forces a careful conclusion.