OpenChainBench

Methodology

How every benchmark is measured, reported and reproduced.

Design principles

  1. Identical inputs. Every provider sees the same request — same pair, same notional, same destination — submitted at the same moment from the same region. If inputs differ, we say so.

  2. Honest aggregates. We report p50, p90 and p99 latency along with success rate. Means are reported but never used as a headline — tail behaviour is what users feel.

  3. Auditable runs. Raw metrics are stored in Prometheus and exposed publicly. Anyone can re-run the harness against the same endpoints and verify the numbers match.

  4. No cherry-picking. The benchmark plan is committed before each run: providers, routes, cadence, timeout. Adding or removing providers after seeing results requires a published correction.

  5. Live leader. The leader on every page is computed at render time from the lowest p50. No spec marks a winner ahead of time.
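The live-leader rule above can be sketched in a few lines. This is a hypothetical helper, not the site's actual render code; it assumes per-provider p50 latencies arrive as a plain dict.

```python
def leader(p50_by_provider: dict[str, float]) -> str:
    """Pick the live leader: the provider with the lowest p50 latency.

    Computed at render time from the measured numbers — no provider
    is marked as the winner ahead of the run.
    """
    return min(p50_by_provider, key=p50_by_provider.get)


# Example: provider "B" has the lowest p50, so it leads the page.
print(leader({"A": 120.0, "B": 95.5, "C": 140.2}))
```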

Statistical conventions

Latency aggregates
Reported as p50, p90, p99 and arithmetic mean over the run window. Failed requests (timeout, 5xx, malformed response) are excluded from latency aggregates but remain in the success-rate denominator.
Success rate
Share of requests returning a usable result within the published timeout. The only metric that includes failures.
Region normalisation
Wherever a benchmark is multi-region, the headline figure is the cross-region median. Per-region figures appear in Fig. 3 of each report.
Significance
Differences smaller than the within-provider standard deviation are flagged as inside the noise envelope and reported without a leader.
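The conventions above can be summarised as a short aggregation sketch — hypothetical names, stdlib only; the real harness stores raw metrics in Prometheus. Note the asymmetry: failures are dropped from the latency percentiles but kept in the success-rate denominator, and differences inside the noise envelope yield no leader.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class Sample:
    latency_ms: float  # wall-clock latency of the request
    ok: bool           # usable result within the published timeout


def percentile(sorted_vals: list[float], p: float) -> float:
    """Nearest-rank percentile over a pre-sorted list."""
    k = round(p / 100 * (len(sorted_vals) - 1))
    return sorted_vals[k]


def aggregate(samples: list[Sample]) -> dict[str, float]:
    """p50/p90/p99/mean over successes only; success rate counts everything."""
    lat = sorted(s.latency_ms for s in samples if s.ok)
    return {
        "p50": percentile(lat, 50),
        "p90": percentile(lat, 90),
        "p99": percentile(lat, 99),
        "mean": mean(lat),
        "success_rate": sum(s.ok for s in samples) / len(samples),
    }


def inside_noise(a_ms: float, b_ms: float, sigma_ms: float) -> bool:
    """Flag differences smaller than the within-provider stddev: no leader."""
    return abs(a_ms - b_ms) < sigma_ms
```

For example, nine successes at 100–180 ms plus one timeout give a p50 of 140 ms and a success rate of 0.9 — the timeout moves the success rate but never the percentiles.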

Reproducing a result

  1. Clone the harness from the link at the bottom of any benchmark report.
  2. Set API keys for the providers you want to include. Public endpoints work for most aggregators; some bridges require allow-listing.
  3. Run the harness for at least 24 hours to get a comparable sample size (n typically ≥ 1,000 per provider per region).
  4. Compare your aggregates to the published numbers. If they diverge, open an issue — we'll publish a correction or refine the methodology.
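Steps 3 and 4 above amount to two checks before filing an issue: is the sample comparable, and is the gap bigger than noise? A minimal sketch, with hypothetical function names — the published reports, not this snippet, define the authoritative thresholds:

```python
MIN_SAMPLES = 1_000  # per provider per region, per step 3 of the methodology


def comparable(n_samples: int) -> bool:
    """Step 3: enough samples to compare against the published numbers?"""
    return n_samples >= MIN_SAMPLES


def diverges(published_ms: float, reproduced_ms: float, noise_sigma_ms: float) -> bool:
    """Step 4: a gap larger than the within-provider standard deviation
    is worth an issue; anything smaller sits inside the noise envelope."""
    return abs(published_ms - reproduced_ms) > noise_sigma_ms
```

If `comparable(n)` and `diverges(...)` both hold, open an issue with your run window and region so the numbers can be compared like for like.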

Corrections

Found a number you can't reproduce? Open an issue at github.com/OpenChainBench/OpenChainBench/issues. Material errors are corrected in place with a dated note.