
Quality Consensus Mechanism

Quality Consensus is Orbinum's proprietary mechanism for reaching agreement on the quality of AI outputs. It is the bedrock of the network's Economic Security, ensuring that the value of the $ON token is backed by tangible, high-quality utility.

Note: For a complete and up-to-date table of slashing penalties and offenses for both miners and validators, see the Slashing (Negative Incentives) section.

Note: This page provides technical implementation details. For a user-friendly overview, see Quality Evaluation.

The Challenge

In a decentralized AI network, how do you objectively measure and reward quality when:

  • Miners can't self-report (they would cheat)
  • No central authority exists (defeats decentralization)
  • Token holders lack technical expertise (can't evaluate AI quality)
  • Quality is often subjective (what's "good" varies by use case)

Orbinum's Solution

Stake-Weighted Validator Assessment

Quality Consensus delegates evaluation to Validators, who are economically incentivized to be honest through:

  1. Stake Requirements: Validators must lock significant capital ($ON tokens).
  2. Consensus Alignment: Validators earn more when they agree with the majority.
  3. Slashing Risk: Dishonest validators lose their stake.

The Process

1. Weight Matrix (W)

Each validator evaluates miners and produces a weight matrix where W_ij represents Validator i's score for Miner j.

W = \begin{bmatrix}
w_{1,1} & w_{1,2} & w_{1,3} & \dots \\
w_{2,1} & w_{2,2} & w_{2,3} & \dots \\
w_{3,1} & w_{3,2} & w_{3,3} & \dots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}
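For illustration, here is a minimal Python sketch of such a matrix, with one row per validator and one column per miner; the dimensions and scores are placeholders, not real network data.

```python
import numpy as np

# Illustrative dimensions only: 3 validators scoring 4 miners.
num_validators, num_miners = 3, 4

# W[i, j] = Validator i's quality score for Miner j, normalized to [0, 1].
W = np.array([
    [0.90, 0.75, 0.10, 0.60],  # Validator 1's scores
    [0.85, 0.80, 0.15, 0.55],  # Validator 2's scores
    [0.95, 0.70, 0.05, 0.65],  # Validator 3's scores
])

assert W.shape == (num_validators, num_miners)
```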

2. Stake Weighting (S)

Not all validators have equal influence. Each validator's opinion is weighted by their stake S_i.

Higher stake = More influence on consensus

This ensures that validators with more "skin in the game" have proportionally more say.

3. Consensus Calculation

The network calculates a Consensus Score C_j for each miner j:

C_j = \frac{\sum_{i} S_i \cdot W_{ij}}{\sum_{i} S_i}

This is the stake-weighted average of all validator assessments.
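A minimal sketch of this calculation, reusing the illustrative weight matrix above together with a hypothetical stake vector S (the stake amounts are placeholders, not real parameters):

```python
import numpy as np

# W[i, j]: Validator i's score for Miner j (same illustrative matrix as above).
W = np.array([
    [0.90, 0.75, 0.10, 0.60],
    [0.85, 0.80, 0.15, 0.55],
    [0.95, 0.70, 0.05, 0.65],
])

# S[i]: Validator i's stake in $ON (placeholder amounts).
S = np.array([10_000.0, 50_000.0, 25_000.0])

# C[j] = sum_i(S_i * W_ij) / sum_i(S_i): stake-weighted consensus per miner.
C = (S @ W) / S.sum()
print(C.round(3))  # ~[0.885, 0.765, 0.115, 0.585]
```

Note how Validator 2's opinion moves the result the most: with 50,000 $ON staked, it carries more than half of the total weight in this example.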

4. Reward Distribution

Miners receive emissions proportional to their consensus score:

\text{Miner } j\text{'s Reward} = \text{Total Emissions} \times \frac{C_j}{\sum_{k} C_k}

Top-ranked miners receive the majority of rewards.
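A sketch of this proportional split, continuing the example above (total_emissions is a placeholder figure, not an actual emission schedule):

```python
import numpy as np

# Consensus scores per miner from the previous step.
C = np.array([0.885, 0.765, 0.115, 0.585])

total_emissions = 1_000.0  # placeholder: $ON emitted to this Orbit per Tempo

# Each miner's share is its consensus score divided by the sum of all scores.
rewards = total_emissions * (C / C.sum())
print(rewards.round(1))  # ~[376.6, 325.5, 48.9, 248.9]
```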

5. Validator Incentives

Validators are rewarded for honest evaluation (a sketch of one possible alignment rule follows this list):

  • Validators whose scores align with the consensus earn high dividends.
  • Validators who diverge (outliers) earn reduced rewards.
  • Extreme divergence triggers slashing.
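The exact dividend curve and slashing threshold are protocol parameters not specified on this page; the following is a hedged sketch of one possible alignment rule, in which the linear penalty factor and SLASH_THRESHOLD are illustrative assumptions.

```python
import numpy as np

# W, S, and C as in the earlier sketches.
W = np.array([
    [0.90, 0.75, 0.10, 0.60],
    [0.85, 0.80, 0.15, 0.55],
    [0.95, 0.70, 0.05, 0.65],
])
S = np.array([10_000.0, 50_000.0, 25_000.0])
C = (S @ W) / S.sum()

# Mean absolute divergence of each validator from the consensus vector.
divergence = np.abs(W - C).mean(axis=1)

# Illustrative rule: full dividend at zero divergence, reduced linearly
# as a validator drifts away from consensus.
base_dividend = 100.0  # placeholder $ON per Tempo
dividends = base_dividend * np.clip(1.0 - 5.0 * divergence, 0.0, 1.0)

# Illustrative trigger for extreme divergence (not a real protocol constant).
SLASH_THRESHOLD = 0.5
flagged_for_slashing = divergence > SLASH_THRESHOLD
```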

Game Theory

Schelling Point

Quality Consensus creates a focal point where rational validators converge on honest evaluation:

Why validators cooperate:

  1. Prediction Game: Best strategy is to predict what other high-stake validators will score.
  2. Truth as Focal Point: "Accurate quality" is the most obvious prediction.
  3. Economic Punishment: Deviating from consensus reduces earnings.

Why collusion is expensive:

  • Requires >50% of total stake.
  • Slashing destroys colluding validators' capital.
  • Other validators can detect and report manipulation.

Nash Equilibrium

For Validators:

  • Honest Strategy: Evaluate accurately → Align with consensus → Earn maximum dividends.
  • Dishonest Strategy: Accept bribes/manipulate → Diverge from consensus → Earn reduced rewards + risk slashing.
  • Equilibrium: Honesty is the dominant strategy.

For Miners:

  • Quality Strategy: Optimize AI models → High scores → Maximum emissions.
  • Gaming Strategy: Try to fool validators → Low scores → Deregistration.
  • Equilibrium: Quality is the dominant strategy.

Technical Implementation

Evaluation Frequency

Quality assessments occur every Tempo (e.g., 360 blocks ≈ 36 minutes).

Moving Averages

To prevent volatility, scores use Exponential Moving Averages (EMA):

Score_new = α × Current_Evaluation + (1 - α) × Score_previous

Where α (smoothing factor) is typically 0.1-0.3.
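A minimal sketch of this update rule (the starting score and evaluations are placeholders; α = 0.2 is simply a value in the typical range):

```python
def ema_update(previous_score: float, current_evaluation: float, alpha: float = 0.2) -> float:
    """Exponential moving average: new evaluations shift the score gradually."""
    return alpha * current_evaluation + (1.0 - alpha) * previous_score

score = 0.50                        # placeholder starting score
for evaluation in (0.9, 0.9, 0.9):  # three consecutive strong evaluations
    score = ema_update(score, evaluation)
print(round(score, 3))              # 0.695 -- rising toward 0.9 without jumping
```

A single strong (or weak) evaluation therefore moves a miner's score only partway, which dampens both lucky streaks and transient failures.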

Immunity Period

New miners receive a grace period (7,200 blocks ≈ 12 hours, sketched after this list) during which they:

  • Cannot be deregistered.
  • Build initial reputation.
  • Learn network dynamics.
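A sketch of how such a check might look, assuming roughly 6-second blocks and treating the block heights below as placeholders for the actual on-chain values:

```python
IMMUNITY_PERIOD = 7_200  # blocks (~12 hours at ~6 s per block)

def is_immune(registration_block: int, current_block: int) -> bool:
    """A miner cannot be deregistered while inside its immunity window."""
    return current_block - registration_block < IMMUNITY_PERIOD

# Example: a miner registered at block 100_000.
print(is_immune(100_000, 104_500))  # True  -- still protected
print(is_immune(100_000, 110_000))  # False -- immunity has expired
```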

Quality Metrics

Validators evaluate miners across four key dimensions:

Metric             | Weight | Description
-------------------|--------|-------------------------------------------------------------
Output Quality     | 40%    | Accuracy, relevance, coherence, domain-specific correctness
Latency            | 30%    | Response time from request to result submission
Availability       | 20%    | Uptime percentage and request success rate
Cost-Effectiveness | 10%    | Competitive pricing relative to Orbit average

These weights are configurable per Orbit and can be adjusted through governance.
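As a sketch of how these weights might combine into a single evaluation (the metric values and their normalization are illustrative; real Orbits may normalize latency and cost differently):

```python
# Default per-Orbit weights (governance-adjustable).
WEIGHTS = {
    "output_quality":     0.40,
    "latency":            0.30,
    "availability":       0.20,
    "cost_effectiveness": 0.10,
}

def composite_score(metrics: dict) -> float:
    """Weighted sum of per-metric scores, each already normalized to [0, 1]."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

# Illustrative evaluation of one miner.
miner_metrics = {
    "output_quality":     0.92,  # accuracy / relevance / coherence
    "latency":            0.80,  # faster responses score closer to 1.0
    "availability":       0.99,  # uptime and request success rate
    "cost_effectiveness": 0.70,  # pricing relative to the Orbit average
}
print(round(composite_score(miner_metrics), 3))  # 0.876
```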

Security Guarantees

Sybil Resistance

  • Stake requirement prevents cheap identity creation.
  • Multiple identities don't increase influence without proportional stake.

Collusion Resistance

  • Requires majority stake acquisition (expensive).
  • Slashing makes attacks economically irrational.
  • Transparent on-chain scores enable community monitoring.

Censorship Resistance

  • No single validator can block a miner.
  • Consensus requires majority agreement.
  • Miners can appeal to governance if unfairly scored.

Next Steps