Quality Evaluation

In Orbinum Network, quality determines rewards. Unlike traditional Proof-of-Work (where hash power earns rewards) or simple Proof-of-Stake (where capital earns rewards), Orbinum implements Proof-of-Intelligence: miners are rewarded based on the value of their AI inference outputs, not just the quantity of work performed.

Why Quality Matters

Quality evaluation ensures that:

  • Users receive accurate AI inference from high-performing miners
  • Rewards flow to the best miners, incentivizing excellence
  • Low-quality miners are filtered out, maintaining network standards
  • The network remains competitive with centralized AI providers

How Evaluation Works

The Process

1. Request & Response

  • Miner receives an inference request (synthetic test or real user request)
  • Miner processes the request and returns (see the sketch after this list):
    • The inference result (text, image, prediction, etc.)
    • Cryptographic proof of execution
    • Metadata (processing time, model version)
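
As a rough illustration of that response payload, a minimal sketch follows; the class and field names here are hypothetical, not Orbinum's actual wire format:

```python
from dataclasses import dataclass

@dataclass
class InferenceResponse:
    """Illustrative shape of a miner's response; all field names are hypothetical."""
    result: bytes            # the inference output: text, image bytes, a prediction, etc.
    execution_proof: bytes   # cryptographic proof that the model was actually run
    processing_time_ms: int  # metadata: how long inference took
    model_version: str       # metadata: which model produced the output
```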

2. Validator Assessment

  • Multiple validators independently evaluate the miner's response
  • Each validator scores the miner across multiple quality dimensions
  • Validators submit their scores to the blockchain

3. Consensus

  • The network aggregates validator scores using stake-weighted consensus (sketched after this list)
  • Higher-stake validators have more influence on final scores
  • Final consensus score determines miner ranking and rewards
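
In its simplest form, stake weighting is a weighted average in which each validator's weight is its share of the total stake behind the submitted scores. A minimal sketch, assuming scores in [0, 1]; function and variable names are illustrative:

```python
def stake_weighted_score(scores: dict[str, float], stakes: dict[str, float]) -> float:
    """Aggregate per-validator scores into one consensus score.

    `scores` maps validator id -> score in [0, 1];
    `stakes` maps validator id -> staked amount.
    Each validator's weight is its share of the total stake behind the scores.
    """
    total_stake = sum(stakes[v] for v in scores)
    if total_stake == 0:
        raise ValueError("no stake behind any submitted score")
    return sum(scores[v] * stakes[v] for v in scores) / total_stake

# Example: the 100-stake validator pulls the consensus toward its own score.
print(stake_weighted_score({"v1": 0.9, "v2": 0.6}, {"v1": 100.0, "v2": 25.0}))  # 0.84
```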

4. Rewards Distribution

  • Higher-ranked miners receive more inference requests
  • Block emissions are distributed in proportion to quality scores (see the sketch below)
  • User fees flow primarily to top-performing miners
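
Proportional distribution can be sketched as follows. This assumes a simple linear split by score; the actual emission formula may apply smoothing or thresholds:

```python
def split_emission(emission: float, quality_scores: dict[str, float]) -> dict[str, float]:
    """Split one block's emission among miners in proportion to quality score."""
    total = sum(quality_scores.values())
    if total == 0:
        return {miner: 0.0 for miner in quality_scores}  # empty or all-zero Orbit
    return {m: emission * s / total for m, s in quality_scores.items()}

# Example: a miner with twice the quality score earns twice the share.
shares = split_emission(1000.0, {"m1": 0.8, "m2": 0.4})
print(shares)  # m1 receives about 666.7, m2 about 333.3
```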

Quality Dimensions

Validators evaluate miners across four key dimensions; a combined-score sketch follows these subsections:

Output Quality (40%)

Does the output meet the request requirements?

  • NLP Orbits: Semantic coherence, factual accuracy, prompt adherence
  • Vision Orbits: Image fidelity, prompt alignment, artifact-free generation
  • Prediction Orbits: Accuracy against ground truth or benchmarks
  • Code Orbits: Correctness, security, efficiency

Latency (30%)

How fast was the response?

  • Measured from request timestamp to response timestamp
  • Orbit-specific latency thresholds
  • Faster responses earn higher latency scores (one possible mapping is sketched after this list)
  • Encourages infrastructure optimization
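
How raw latency maps to a score is Orbit-specific and not specified on this page. One plausible mapping, shown purely for illustration, gives full credit at or below the threshold and decays linearly to zero at four times the threshold:

```python
def latency_score(latency_ms: float, threshold_ms: float) -> float:
    """Map a measured latency to a score in [0, 1].

    Illustrative curve: full score at or below the Orbit's threshold,
    linear decay to zero at 4x the threshold. The real scoring curve
    is Orbit-specific and may differ.
    """
    if latency_ms <= threshold_ms:
        return 1.0
    return max(0.0, 1.0 - (latency_ms - threshold_ms) / (3 * threshold_ms))

print(latency_score(500, threshold_ms=1000))   # 1.0 (under threshold)
print(latency_score(2500, threshold_ms=1000))  # 0.5 (halfway to the 4x cutoff)
```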

Availability (20%)

Is the miner reliable?

  • Uptime: Percentage of time the miner is reachable
  • Success Rate: Percentage of requests processed without error
  • Consistent availability is required for top rankings (a simple combination of these signals is sketched below)
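
How the two reliability signals are combined on-chain is not specified here; multiplying them is one simple possibility, shown only as a sketch:

```python
def availability_score(uptime: float, success_rate: float) -> float:
    """Combine the two reliability signals (both in [0, 1]) into one score.
    Multiplication is an assumption for illustration; the real rule may differ."""
    return uptime * success_rate

print(availability_score(uptime=0.999, success_rate=0.98))  # ~0.979
```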

Cost-Effectiveness (10%)

Is the service priced competitively?

  • Comparison against Orbit average pricing
  • Encourages operational efficiency
  • Balances quality with affordability
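
Putting the four dimensions together, the percentages above suggest a weighted sum. A minimal sketch, assuming each dimension has already been normalized to [0, 1]:

```python
# Dimension weights from this page: output quality 40%, latency 30%,
# availability 20%, cost-effectiveness 10%.
WEIGHTS = {"output_quality": 0.40, "latency": 0.30, "availability": 0.20, "cost": 0.10}

def composite_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of per-dimension scores, each assumed to be in [0, 1]."""
    return sum(WEIGHTS[dim] * dimension_scores[dim] for dim in WEIGHTS)

# Example: strong output and availability, middling latency, average pricing.
print(round(composite_score({
    "output_quality": 0.95,
    "latency": 0.70,
    "availability": 0.99,
    "cost": 0.50,
}), 3))  # 0.838
```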

Ranking & Request Routing

Miners are ranked within each Orbit based on their consensus quality score:

  • Top 25%: Receive 70% of all requests
  • 25-50%: Receive 20% of all requests
  • 50-75%: Receive 8% of all requests
  • Bottom 25%: Receive 2% of all requests

Higher rankings translate directly to more earnings through both request volume and emission rewards.
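
One way to realize this tiered routing is to pick a quartile with the stated probabilities, then a miner within it. A sketch under that assumption; the production router is likely more sophisticated:

```python
import random

# Request share per ranking quartile, matching the split above (assumed routing model).
QUARTILE_SHARES = [0.70, 0.20, 0.08, 0.02]

def route_request(ranked_miners: list[str]) -> str:
    """Pick a miner for one request: draw a quartile by its share, then a
    uniformly random miner inside it. `ranked_miners` is sorted best-first
    by consensus quality score."""
    n = len(ranked_miners)
    q = random.choices([0, 1, 2, 3], weights=QUARTILE_SHARES)[0]
    lo, hi = q * n // 4, (q + 1) * n // 4
    return random.choice(ranked_miners[lo:max(hi, lo + 1)])

miners = [f"miner-{i}" for i in range(16)]  # already sorted best-first
print(route_request(miners))  # usually one of miner-0 .. miner-3
```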

Protections for New Miners

Immunity Period

New miners receive a 12-hour grace period (7,200 blocks) upon registration:

  • Protection: Cannot be deregistered during this period
  • Initial Score: Assigned median quality score for request routing
  • Purpose: Allows time to prove performance before facing full competition

This prevents the "cold start" problem where new miners can't build reputation without receiving requests.
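
The immunity window is straightforward to express in block heights. A sketch using the 7,200-block figure above, which implies a 6-second block time:

```python
IMMUNITY_BLOCKS = 7_200  # 12 hours at a 6-second block time

def is_immune(registration_block: int, current_block: int) -> bool:
    """A newly registered miner cannot be deregistered while immune."""
    return current_block - registration_block < IMMUNITY_BLOCKS

print(is_immune(registration_block=100_000, current_block=106_500))  # True
print(is_immune(registration_block=100_000, current_block=107_200))  # False
```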

Staying Competitive

For Miners

Optimize Infrastructure:

  • Match GPU/CPU resources to Orbit requirements
  • Minimize latency through geographic proximity and network optimization
  • Maintain high uptime with redundant systems

Monitor Performance:

  • Track quality scores via Orbit Explorer dashboard
  • Watch for failed requests or timeouts
  • Compare metrics against top-ranked miners

Update Models:

  • Keep AI models updated to latest versions
  • Fine-tune for Orbit-specific tasks
  • Benchmark against competitor performance

Diversify:

  • Participate in multiple Orbits to spread risk
  • Balance specialization with portfolio diversity

Consequences of Low Quality

Reduced Earnings:

  • Lower rankings mean fewer requests
  • Lower emission share based on quality score

Deregistration Risk:

  • When an Orbit is full, lowest-ranked miners are replaced
  • New miners can claim slots from bottom performers (see the sketch after this list)
  • Competitive "survival of the fittest" dynamic
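
The replacement rule can be sketched as follows, assuming the weakest non-immune miner is evicted when an Orbit is at capacity; function and parameter names are illustrative:

```python
def try_register(new_miner: str, orbit: list[str], capacity: int,
                 scores: dict[str, float], immune: set[str]) -> list[str]:
    """Admit a new miner to an Orbit, evicting the weakest if the Orbit is full.

    Immune miners (inside their grace period) are never evicted. Returns the
    updated roster; the new miner is rejected if no one can be replaced.
    """
    if len(orbit) < capacity:
        return orbit + [new_miner]
    candidates = [m for m in orbit if m not in immune]
    if not candidates:
        return orbit  # everyone is protected; registration fails
    weakest = min(candidates, key=lambda m: scores[m])
    return [m for m in orbit if m != weakest] + [new_miner]

roster = try_register(
    "new", orbit=["a", "b", "c"], capacity=3,
    scores={"a": 0.9, "b": 0.2, "c": 0.7}, immune={"c"},
)
print(roster)  # ['a', 'c', 'new']: 'b' had the lowest score and no immunity
```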

Technical Deep Dive

For detailed information on the mathematical consensus algorithm, stake-weighting mechanics, and game theory analysis, see:

Quality Consensus Mechanism - Technical implementation details including:

  • Stake-weighted validator consensus algorithm
  • Weight matrices and consensus calculations
  • Game theory and Nash equilibrium analysis
  • Security guarantees (Sybil resistance, collusion resistance)
  • Moving averages and score smoothing (a generic example follows this list)
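
The exact smoothing used on-chain is documented on that page. As a generic flavor, an exponential moving average blends each new observation into the running score, so a single unlucky response moves a miner's ranking only slightly; the smoothing factor below is illustrative:

```python
def smooth_score(previous: float, observed: float, alpha: float = 0.1) -> float:
    """Exponential moving average of a quality score.
    `alpha` is an illustrative smoothing factor, not Orbinum's actual value."""
    return (1 - alpha) * previous + alpha * observed

score = 0.80
for observed in (0.95, 0.95, 0.95):  # three strong responses in a row
    score = smooth_score(score, observed)
print(round(score, 3))  # 0.841: a gradual climb toward 0.95
```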

Next Steps