Numerai Medals and Grandmasters: How the Ranking Works

How Numerai distributes gold, silver, and bronze medals across rounds, what MMC you need to earn them, and why consistency beats peak performance for Grandmaster rank.

Numerai awards gold, silver, and bronze medals to top-performing models in each round. Medals accumulate over time and feed the Grandmasters ranking on the leaderboard, which sorts participants into tiers from Newcomer through Grandmaster. The system rewards sustained performance rather than one-off lucky rounds.

What does the data say? How are medals distributed across rounds and models? What MMC (meta-model contribution) thresholds do you need to clear? And does consistency predict medal counts better than peak scores?

A note on methodology: the stored medal columns in our database are entirely NULL, so the charts below compute medals directly from raw MMC values using per-round percentile thresholds — gold for the top 1%, silver for the top 3%, bronze for the top 15%. These cutoffs match Numerai's published tiering and let us reconstruct medal counts from the underlying score data.
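
As a concrete sketch of that reconstruction: rank each round's raw MMC, then apply the fixed cutoffs. The schema below (round, model, mmc columns) and the synthetic data are assumptions for illustration, not Numerai's actual export format.

```python
import numpy as np
import pandas as pd

def assign_medals(scores: pd.DataFrame) -> pd.DataFrame:
    """Reconstruct medals from per-round MMC percentile ranks:
    gold = top 1%, silver = top 3%, bronze = top 15%."""
    out = scores.copy()
    # Percentile rank within each round; 1.0 is the round's best MMC.
    pct = out.groupby("round")["mmc"].rank(pct=True)
    out["medal"] = None
    out.loc[pct > 0.85, "medal"] = "bronze"  # top 15%
    out.loc[pct > 0.97, "medal"] = "silver"  # top 3%, overrides bronze
    out.loc[pct > 0.99, "medal"] = "gold"    # top 1%, overrides silver
    return out

# Synthetic demo (assumed data): 3 rounds of 200 models each.
rng = np.random.default_rng(0)
scores = pd.DataFrame({
    "round": np.repeat([1000, 1001, 1002], 200),
    "model": np.tile([f"m{i}" for i in range(200)], 3),
    "mmc": rng.normal(0.005, 0.02, size=600),
})
medals = assign_medals(scores)
print(medals["medal"].value_counts())  # expect 72 bronze, 12 silver, 6 gold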

How many medals per round?

Total medals per round rise and fall with field size.

Stacked area chart of gold, silver, and bronze medals awarded per Numerai round

Bronze medals dominate each round, with silver and gold forming thinner bands on top. Totals climbed from near zero around round 200 to roughly 2,000 medals per round by round 800, then dropped sharply to about 500 around round 820 — likely a field-size contraction — before recovering back toward 1,800 by round 1200. Because the cutoffs are fixed percentiles (1% / 3% / 15%), the proportional split between tiers stays constant; only the absolute count moves with the number of submitting models.
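
The per-round counts behind the chart are one aggregation away. A minimal sketch, reusing the medals frame from the snippet above:

```python
# Medal counts per round: the stacked-area input. With fixed percentile
# cutoffs, the bronze/silver/gold ratio is constant; only totals move.
per_round = (
    medals.dropna(subset=["medal"])
          .groupby(["round", "medal"])
          .size()
          .unstack(fill_value=0)
          .reindex(columns=["bronze", "silver", "gold"])
)
print(per_round)
```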

What score do you need?

The minimum MMC for each tier moves with the field's performance distribution.

Line chart showing the minimum MMC threshold required for gold, silver, and bronze medals by round

The percentile cutoffs are fixed, but the raw MMC values they correspond to swing round by round. Gold (top 1%) typically sits between about 0.02 and 0.04 MMC, silver (top 3%) hovers near 0.015 to 0.025, and bronze (top 15%) tracks closer to 0.01 to 0.02. Early rounds (before round 300) were much noisier, with gold thresholds spiking to 0.06. A medal is always a relative achievement: you have to beat your peers, not hit an absolute bar.

The gap between bronze and gold is narrower than you might expect — often just 0.01 to 0.02 MMC separates a bronze-qualifying score from a gold one. Small differences in MMC translate into very different medal outcomes.
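
Both the cutoff lines and that gap come straight from per-round quantiles. A sketch using the assumed scores frame from earlier:

```python
# Minimum MMC implied by each fixed percentile, per round.
thresholds = scores.groupby("round")["mmc"].quantile([0.85, 0.97, 0.99]).unstack()
thresholds.columns = ["bronze", "silver", "gold"]
# Spread between a gold- and a bronze-qualifying score, round by round.
print((thresholds["gold"] - thresholds["bronze"]).describe())
```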

The medal distribution across models

How are total medal counts spread across all models?

Log-scale histogram of total medals earned per model, showing a heavy right-skewed distribution

The distribution is heavily right-skewed; the chart needs a log scale just to make the tail visible. The median model earns just 7 medals, and nearly 10,000 models sit in the lowest bucket with only one or two. The tail thins quickly: only a handful of models clear 80 medals, and the maximum observed is around 120. Two forces drive the shape — longevity (more rounds means more chances) and skill (consistently strong models win round after round). You can see both at work in any individual model's performance history.
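
The histogram's input is a single per-model count. A sketch on the same assumed frames, with matplotlib supplying the log axis:

```python
import matplotlib.pyplot as plt

# Total medals per model; longevity and skill both push this count up.
per_model = medals.dropna(subset=["medal"]).groupby("model").size()
print(per_model.median())

plt.hist(per_model, bins=30)
plt.yscale("log")                     # log counts expose the thin tail
plt.xlabel("total medals per model")
plt.ylabel("number of models (log)")
plt.show()
```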

Consistency vs brilliance

Does the system reward steady above-average performance or occasional exceptional rounds?

Scatter plot of average MMC vs total medals per model, colored by rounds played

Total medal count rises with average MMC: models clustered near zero average MMC rarely clear 20 medals, while the 80-plus medal tier is almost entirely made up of models with positive average MMC (roughly 0.005 and above). The color gradient — rounds played, from purple (low) up to yellow (300+) — shows the biggest hauls come from long-tenured models that also maintain a positive mean. A long tenure without a positive average still yields middling medal counts at best.

A model that consistently lands just inside the top 15% will accumulate bronze medals steadily; one that alternates between the top 1% and the bottom half will earn fewer medals overall, because below-average rounds contribute nothing. Percentile thresholds turn reliable above-average performance into steady medal accumulation, which is why reducing variance tends to pay off more than chasing peak scores.
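
One way to check this is to ask whether a model's mean MMC (consistency) or its max MMC (brilliance) tracks medal totals better. A rough sketch on the assumed frames above, reusing per_model from the histogram snippet; exact values depend on the real data:

```python
# Per-model mean (consistency) vs max (brilliance) MMC, against medal totals.
stats = scores.groupby("model")["mmc"].agg(["mean", "max"])
stats["medals"] = per_model.reindex(stats.index).fillna(0).astype(int)
# Spearman rank correlation of each predictor with the medal count.
print(stats[["mean", "max"]].corrwith(stats["medals"], method="spearman"))
```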

Takeaways

Medals are relative, not absolute. Thresholds shift with the field. You earn medals by outperforming peers.

Longevity drives medal accumulation. More rounds means more medals, a pattern reinforced by the data on model survival.

Consistency beats peak performance. Average MMC predicts medal count better than maximum MMC. This connects to MMC's role in payouts and the meta-model's need for stable inputs.

The ranking aligns incentives. Rewarding consistency and longevity pushes participants toward the kind of signal a good meta-model contributor provides — the same bar benchmark models are measured against. For the round-level data behind these charts, browse the rounds list or the trends dashboard.