
Node Operations

Orbinum is built on Substrate with Frontier for EVM compatibility. This means node operation follows standard Substrate patterns — if you've run a Polkadot, Kusama, or any Substrate-based node, the concepts are familiar.

This guide covers Orbinum-specific configuration and operational considerations beyond standard Substrate node management.

For Complete Setup Instructions

See the Getting Started section for installation, build steps, and quick start guides.


Node Types

Orbinum supports three standard Substrate node types, each serving different network roles.

Full Node

Validates all blocks and stores current state. Does not participate in consensus.

Requirements:
4+ cores | 16 GB RAM | 500 GB SSD

Archive Node

Maintains complete historical state for every block since genesis. Required for explorers and analytics.

Requirements:
8+ cores | 32 GB RAM | 2+ TB NVMe

Validator Node

Participates in consensus (Aura + GRANDPA) by producing blocks and finalizing chains. Requires staking.

Requirements:
8+ cores | 32 GB RAM | 1 TB NVMe | 99%+ uptime
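
At the command line the three roles differ mainly by a handful of flags; a minimal sketch (chain name and binary path are illustrative, matching the examples later in this guide):

# Full node (default): pruned state, no block authoring
orbinum-node --chain mainnet --pruning 1000

# Archive node: retain all historical state
orbinum-node --chain mainnet --pruning archive

# Validator: author blocks and vote in GRANDPA (requires session keys and stake)
orbinum-node --chain mainnet --validator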


Orbinum-Specific Configuration

Verification Keys (Built-in)

Unlike standard Substrate nodes, Orbinum includes ZK circuit verification keys directly compiled into the runtime. These are embedded as Rust code in the primitives/zk-verifier crate.

How verification keys are generated and embedded:

  • Generated from circuit compilation in orbinum/circuits
  • Embedded in primitives/zk-verifier/src/infrastructure/storage/verification_keys/
  • No external files or downloads required at runtime
  • VKs are part of the binary and WASM runtime

Supported circuits:

  • transfer.rs - Private transfers (2 inputs → 2 outputs)
  • unshield.rs - Withdraw from shielded pool
  • disclosure.rs - Selective disclosure proofs

No External Artifacts Needed

The node does not read circuit artifacts from disk. Verification keys are compiled into the runtime bytecode. Client-side tooling (wallets, dApps) may need circuit artifacts for proof generation.

RPC Configuration for Privacy Operations

Orbinum extends standard Substrate RPC with privacy-specific methods for querying the shielded pool.

Enable shielded pool RPC:

--rpc-methods Safe  # Already includes shieldedPool_* methods

Available privacy RPCs:

  • shieldedPool_getMerkleTreeInfo() - Get current Merkle tree state
  • shieldedPool_getMerkleProof(leaf_index) - Get proof for specific commitment
  • shieldedPool_getMerkleProofForCommitment(commitment) - Find proof by commitment
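
For example, the Merkle tree info and proof methods can be queried over JSON-RPC with curl. The method names are those listed above; the exact parameter and response shapes are assumptions to verify against the node's RPC documentation.

# Query the shielded pool Merkle tree state (response shape assumed)
curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "shieldedPool_getMerkleTreeInfo", "params": []}' \
  http://localhost:9944 | jq

# Fetch a Merkle proof for a known leaf index (parameter format assumed)
curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "shieldedPool_getMerkleProof", "params": [0]}' \
  http://localhost:9944 | jq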

EVM RPC (optional for dApp integration):

--ethapi=debug,trace,txpool  # Enable Ethereum debug APIs
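
To confirm the Ethereum-compatible RPC is responding, a standard eth_blockNumber call can be sent to the same endpoint (in a typical Frontier setup the eth_* methods are served on the same RPC port):

# Returns the latest block number as a hex string
curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "eth_blockNumber", "params": []}' \
  http://localhost:9944 | jq '.result'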

Data Directory Structure

Orbinum follows standard Substrate conventions:

/data/orbinum/
├── chains/
│   └── mainnet/
│       ├── db/                # RocksDB (state + blocks)
│       │   ├── full/          # State database
│       │   └── parachains/    # (unused - Orbinum is not a parachain)
│       ├── keystore/          # Session keys (validators only)
│       └── network/           # P2P node identity and DHT data

Testnet Configuration

For testnet setup, see docker/testnet/ in the repository. Includes Docker Compose configuration and chain spec generation scripts.
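
A minimal invocation might look like the following, assuming the Compose file in docker/testnet/ is named docker-compose.yml (check the repository for the actual file and script names):

# Bring up the local testnet defined in the repository (file name assumed)
docker compose -f docker/testnet/docker-compose.yml up -d

# Follow node logs
docker compose -f docker/testnet/docker-compose.yml logs -f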


System Service Configuration

Production nodes should run as systemd services. The configuration is standard Substrate with Orbinum-specific flags.

Example systemd Unit

Create /etc/systemd/system/orbinum.service:

[Unit]
Description=Orbinum Node
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=orbinum
Group=orbinum
WorkingDirectory=/home/orbinum

ExecStart=/usr/local/bin/orbinum-node \
--chain mainnet \
--base-path /data/orbinum \
--name "MyOrbinumNode" \
--port 30333 \
--rpc-port 9944 \
--rpc-external \
--rpc-methods Safe \
--rpc-cors "https://myapp.com" \
--prometheus-port 9615 \
--prometheus-external \
--pruning 1000

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Restart policy
Restart=on-failure
RestartSec=10s
KillSignal=SIGINT
TimeoutStopSec=90s

# Logging
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

Enable and manage:

sudo systemctl daemon-reload
sudo systemctl enable orbinum
sudo systemctl start orbinum
sudo systemctl status orbinum

# View logs
sudo journalctl -u orbinum -f

Monitoring

Substrate Metrics

Orbinum exposes standard Substrate Prometheus metrics plus privacy-specific metrics.

Standard metrics:

  • substrate_block_height{status="best"} - Latest block received
  • substrate_block_height{status="finalized"} - Latest finalized block
  • substrate_sub_libp2p_peers_count - Connected peers
  • process_cpu_seconds_total - CPU usage
  • process_resident_memory_bytes - Memory usage

Privacy-specific metrics:

  • orbinum_shielded_pool_commitments - Total commitments in Merkle tree
  • orbinum_shielded_pool_nullifiers - Total spent nullifiers
  • orbinum_merkle_tree_depth - Current tree depth
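
To verify these metrics are actually exported, scrape the Prometheus endpoint directly and filter for the orbinum_ prefix (metric names as listed above):

# List privacy-related metrics exposed on the Prometheus port
curl -s http://localhost:9615/metrics | grep '^orbinum_'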

Prometheus scrape config:

scrape_configs:
  - job_name: 'orbinum'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9615']

Health Checks

RPC health endpoint:

curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_health"}' \
http://localhost:9944

Expected response:

{
  "result": {
    "isSyncing": false,
    "peers": 23,
    "shouldHavePeers": true
  }
}

Troubleshooting



Sync Stalled

Symptoms: Block height not increasing, isSyncing: true

Diagnosis:

# Check peer count
curl -s http://localhost:9944 -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_health"}' | jq '.result.peers'

# Check sync state
curl -s http://localhost:9944 -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_syncState"}' | jq

Solutions:

# Add bootnodes if peer count is low
--bootnodes /dns/bootnode1.orbinum.io/tcp/30333/p2p/12D3KooW...

# Open P2P port in firewall
sudo ufw allow 30333/tcp

High Memory Usage

Symptoms: Node killed by OOM, high swap usage

Solutions:

# Reduce state cache
--state-cache-size 536870912 # 512 MB

# Enable pruning (if not archive node)
--pruning 256

# Increase system swap temporarily
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Security Considerations

Network Security

Firewall configuration:

# P2P port (required)
sudo ufw allow 30333/tcp

# RPC port (restrict to trusted IPs)
sudo ufw allow from 192.168.1.0/24 to any port 9944

# Prometheus (monitoring server only)
sudo ufw allow from 10.0.1.50 to any port 9615

RPC Security

Production setup:

  • Use --rpc-methods Safe (never Unsafe in production)
  • Restrict CORS with --rpc-cors "https://specific-domain.com"
  • Use reverse proxy (nginx) with rate limiting for public endpoints
  • Enable SSL/TLS termination at proxy level

Validator Security

Key management:

  • Store session keys in encrypted keystore directory
  • Never expose author_* RPC methods on validator nodes
  • Use sentry node architecture (validators behind private network)
  • Regular keystore backups with GPG encryption

Generate session keys:

curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "author_rotateKeys"}' \
http://localhost:9944

Maintenance

Software Updates

Update node binary:

# Stop node
sudo systemctl stop orbinum

# Update repository
cd node
git fetch --tags
git checkout v0.2.0

# Rebuild
cargo build --release

# Replace binary
sudo cp target/release/orbinum-node /usr/local/bin/

# Restart
sudo systemctl start orbinum

Runtime upgrades:

  • Runtime updates occur via on-chain governance
  • Nodes automatically apply new runtime when finalized
  • No manual intervention required
  • Monitor logs for ✨ Imported #X with runtime upgrade

Database Backups

For full nodes:

sudo systemctl stop orbinum
tar -czf orbinum-backup-$(date +%Y%m%d).tar.gz /data/orbinum/chains/mainnet/db
sudo systemctl start orbinum

For validators (critical):

# Backup keystore
tar -czf keystore-$(date +%Y%m%d).tar.gz /data/orbinum/chains/mainnet/keystore
gpg --symmetric --cipher-algo AES256 keystore-*.tar.gz

Validator Keys

Loss of validator keystore means loss of validator status and potential slashing. Store encrypted backups offline.


Learn More

For detailed Substrate node operation, see the official Substrate documentation.


Configuration Best Practices

Data Directory Structure

When running a node, Orbinum organizes data in the following structure:

/data/orbinum/
├── chains/
│   └── mainnet/           # Chain-specific data
│       ├── db/            # RocksDB database (state + blocks)
│       ├── keystore/      # Local keys (validators only)
│       └── network/       # P2P network state
└── artifacts/             # Circuit artifacts (optional local copy)

Location recommendations:

  • Development: Use --tmp for temporary storage (auto-deleted)
  • Production: Use dedicated SSD/NVMe partition mounted at /data
  • Validators: Separate partition for keystore with restricted permissions
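
For the validator keystore, a minimal example of restricting permissions (paths follow the layout above and assume the orbinum service user from the systemd setup below; adjust to your mount points):

# Keystore should be readable only by the node's service user
sudo chown -R orbinum:orbinum /data/orbinum/chains/mainnet/keystore
sudo chmod 700 /data/orbinum/chains/mainnet/keystore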

Essential Command-Line Flags

# --chain             Network: mainnet | testnet | dev
# --base-path         Data directory
# --name              Node name (visible on telemetry)
# --port              P2P libp2p port
# --rpc-port          WebSocket RPC port
# --prometheus-port   Metrics endpoint
# --pruning           Blocks of state to keep (use 'archive' for full history)
# --state-cache-size  State cache in bytes (adjust based on RAM)
./target/release/orbinum-node \
  --chain mainnet \
  --base-path /data/orbinum \
  --name "MyNode" \
  --port 30333 \
  --rpc-port 9944 \
  --prometheus-port 9615 \
  --pruning 1000 \
  --state-cache-size 1073741824

RPC Configuration Modes

Production RPC (public-facing):

# Allow external connections, expose only safe RPC methods, restrict the
# allowed origin, and cap concurrent connections.
--rpc-external \
--rpc-methods Safe \
--rpc-cors "https://myapp.com" \
--rpc-max-connections 100

Development RPC (local only):

# Enable unsafe methods (key management and node admin) and allow all
# origins - for local development only.
--rpc-methods Unsafe \
--rpc-cors all

Production Security

Never use --rpc-methods Unsafe in production. Unsafe mode exposes key-management and node-administration methods such as author_insertKey and author_rotateKeys over RPC.


Running as a System Service

Production nodes should run as systemd services for automatic startup, restart on failure, and centralized logging.

systemd Service Configuration

Create /etc/systemd/system/orbinum.service:

[Unit]
Description=Orbinum Full Node
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=orbinum
Group=orbinum
WorkingDirectory=/home/orbinum
ExecStart=/usr/local/bin/orbinum-node \
--chain mainnet \
--base-path /data/orbinum \
--name "MyNode" \
--port 30333 \
--rpc-port 9944 \
--rpc-external \
--rpc-methods Safe \
--rpc-cors "https://myapp.com" \
--prometheus-port 9615 \
--prometheus-external \
--pruning 1000

# Resource limits
LimitNOFILE=65536
LimitNPROC=4096

# Restart policy
Restart=on-failure
RestartSec=10s
KillSignal=SIGINT
TimeoutStopSec=90s

# Logging
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

Enable and manage:

# Create dedicated user
sudo useradd -m -s /bin/bash orbinum
sudo chown -R orbinum:orbinum /data/orbinum

# Enable service
sudo systemctl daemon-reload
sudo systemctl enable orbinum
sudo systemctl start orbinum

# Check status
sudo systemctl status orbinum

# View logs
sudo journalctl -u orbinum -f --since "10 minutes ago"

Monitoring and Observability

Prometheus Metrics

Orbinum exposes Substrate and custom metrics on the --prometheus-port (default: 9615).

Key metrics to monitor:

| Metric | Description | Alert Threshold |
| --- | --- | --- |
| substrate_block_height{status="best"} | Latest block received | - |
| substrate_block_height{status="finalized"} | Latest finalized block | Lag > 100 blocks |
| substrate_sub_libp2p_peers_count | Connected peers | < 5 peers |
| substrate_sub_txpool_validations_finished | Processed transactions | - |
| process_cpu_seconds_total | CPU usage | > 80% sustained |
| process_resident_memory_bytes | RAM usage | > 90% of available |

Prometheus scrape configuration:

scrape_configs:
  - job_name: 'orbinum-node'
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9615']
        labels:
          instance: 'mainnet-node-1'

Health Check Endpoints

System health:

curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_health"}' \
http://localhost:9944

Expected response:

{
  "jsonrpc": "2.0",
  "result": {
    "isSyncing": false,
    "peers": 23,
    "shouldHavePeers": true
  },
  "id": 1
}

Sync status:

curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_syncState"}' \
http://localhost:9944

Alerting Rules

Example Prometheus alerts (/etc/prometheus/alerts/orbinum.yml):

groups:
  - name: orbinum-critical
    interval: 30s
    rules:
      - alert: NodeOffline
        expr: up{job="orbinum-node"} == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Orbinum node {{ $labels.instance }} is offline"

      - alert: LowPeerCount
        expr: substrate_sub_libp2p_peers_count < 5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Node has only {{ $value }} peers"

      - alert: FinalityStalled
        expr: |
          (substrate_block_height{status="best"} -
           substrate_block_height{status="finalized"}) > 100
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Finality lagging {{ $value }} blocks behind"

      - alert: HighMemoryUsage
        expr: |
          (process_resident_memory_bytes /
           node_memory_MemTotal_bytes) > 0.9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Memory usage at {{ $value | humanizePercentage }}"

Maintenance Operations

Database Management

Pruning Strategies

State pruning (default):

--pruning 1000  # Keeps last 1000 blocks, saves ~80% disk space

Archive mode (no pruning):

--pruning archive  # Keeps all historical state, requires 2+ TB

Check database size:

du -sh /data/orbinum/chains/mainnet/db/

Database Backups

For full nodes (state backup):

# Stop node gracefully
sudo systemctl stop orbinum

# Backup database
tar -czf orbinum-db-backup-$(date +%Y%m%d).tar.gz \
-C /data/orbinum/chains/mainnet db/

# Upload to secure storage
aws s3 cp orbinum-db-backup-*.tar.gz s3://my-backups/

# Restart node
sudo systemctl start orbinum

For validators (keystore backup):

# Backup keys (CRITICAL - keep secure!)
tar -czf keystore-$(date +%Y%m%d).tar.gz \
-C /data/orbinum/chains/mainnet keystore/

# Encrypt before storing
gpg --symmetric --cipher-algo AES256 keystore-*.tar.gz

Validator Key Security

Never expose validator keys over RPC or network. Store backups in encrypted, offline storage. Loss of keys means loss of validator status and potential slashing.

Software Updates

Update node binary:

# Stop node
sudo systemctl stop orbinum

# Backup current binary
sudo cp /usr/local/bin/orbinum-node /usr/local/bin/orbinum-node.backup

# Update repository
cd node
git fetch --tags
git checkout v0.2.0 # Specific version tag

# Rebuild
cargo build --release

# Replace binary
sudo cp target/release/orbinum-node /usr/local/bin/

# Restart node
sudo systemctl start orbinum

# Verify version
orbinum-node --version

# Monitor logs
sudo journalctl -u orbinum -f

Runtime upgrades:

  • Runtime upgrades occur via on-chain governance
  • Nodes automatically apply new runtimes when finalized
  • No manual intervention required
  • Monitor logs for ✨ Imported #X with runtime upgrade messages
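
You can confirm which runtime a node is executing with the standard state_getRuntimeVersion RPC; the specVersion should increase once a governance upgrade has been finalized and imported:

# Check the current runtime spec name and version
curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "state_getRuntimeVersion"}' \
  http://localhost:9944 | jq '.result | {specName, specVersion}'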

Troubleshooting

Common Issues

Node Won't Start

Symptoms: Service fails immediately after start

Diagnosis:

# Check logs
sudo journalctl -u orbinum -n 50 --no-pager

# Common errors:
# - "Address already in use" → Port conflict
# - "Permission denied" → Data directory permissions
# - "Circuit artifacts not found" → Missing artifacts

Solutions:

# Fix port conflict
--port 30334 --rpc-port 9945

# Fix permissions
sudo chown -R orbinum:orbinum /data/orbinum

# Download artifacts
./scripts/sync-circuit-artifacts.sh v0.1.0

Sync Stalled or Slow

Symptoms: Block height not increasing, isSyncing: true for extended period

Diagnosis:

# Check sync status
curl -s http://localhost:9944 -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_syncState"}' | jq

# Check peer count
curl -s http://localhost:9944 -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "system_health"}' | jq '.result.peers'

Solutions:

# Add bootnodes if peer count < 5
--bootnodes /dns/bootnode1.orbinum.io/tcp/30333/p2p/12D3KooW...

# Open firewall port
sudo ufw allow 30333/tcp

# Force database rebuild (LAST RESORT - removes local state)
sudo systemctl stop orbinum
sudo rm -rf /data/orbinum/chains/mainnet/db
sudo systemctl start orbinum

High Memory Usage

Symptoms: OOM killer terminating node, swap usage high

Diagnosis:

# Check memory usage
free -h
sudo journalctl -u orbinum | grep -i "out of memory"

Solutions:

# Reduce state cache
--state-cache-size 536870912 # 512 MB instead of 1 GB

# Enable aggressive pruning
--pruning 256 # Keep only 256 blocks

# Increase system swap (temporary fix)
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Disk Space Exhausted

Symptoms: Node crashes with write errors, database corruption

Diagnosis:

df -h /data/orbinum
du -sh /data/orbinum/chains/mainnet/db/

Solutions:

# Enable pruning (requires resync)
sudo systemctl stop orbinum
sudo rm -rf /data/orbinum/chains/mainnet/db
# Edit systemd service to add: --pruning 1000
sudo systemctl daemon-reload
sudo systemctl start orbinum

# Or expand disk partition
sudo lvextend -L +100G /dev/mapper/vg-data
sudo resize2fs /dev/mapper/vg-data

Security Hardening

Network Security

Firewall configuration (UFW):

# Default policies
sudo ufw default deny incoming
sudo ufw default allow outgoing

# P2P port (required for all nodes)
sudo ufw allow 30333/tcp comment 'Orbinum P2P'

# RPC port (restrict to trusted IPs only)
sudo ufw allow from 192.168.1.0/24 to any port 9944 comment 'RPC'

# Prometheus (restrict to monitoring server)
sudo ufw allow from 10.0.1.50 to any port 9615 comment 'Metrics'

# SSH (change default port)
sudo ufw allow 2222/tcp comment 'SSH'

sudo ufw enable

DDoS protection with fail2ban:

# /etc/fail2ban/jail.d/orbinum.conf
[orbinum-rpc]
enabled = true
port = 9944
filter = orbinum-rpc
logpath = /var/log/syslog
maxretry = 10
bantime = 3600

RPC Endpoint Protection

Use reverse proxy (nginx):

# /etc/nginx/sites-available/orbinum-rpc
upstream orbinum_rpc {
    server 127.0.0.1:9944;
}

limit_req_zone $binary_remote_addr zone=rpc_limit:10m rate=100r/m;

server {
    listen 443 ssl http2;
    server_name rpc.mynode.com;

    ssl_certificate /etc/letsencrypt/live/rpc.mynode.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rpc.mynode.com/privkey.pem;

    location / {
        limit_req zone=rpc_limit burst=20 nodelay;

        proxy_pass http://orbinum_rpc;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Timeouts
        proxy_connect_timeout 7d;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
    }
}

Validator-Specific Security

Session key management:

# Generate new session keys
curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "author_rotateKeys"}' \
http://localhost:9944

# Backup keystore immediately
tar -czf keystore-backup.tar.gz /data/orbinum/chains/mainnet/keystore/
gpg --symmetric --cipher-algo AES256 keystore-backup.tar.gz

Sentry node architecture (recommended for validators):

  • Run validator behind private network
  • Expose sentry nodes (full nodes) to public internet
  • Validator only connects to trusted sentry nodes
  • Reduces validator exposure to DDoS attacks
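
A sketch of the relevant flags under that architecture (hostnames and multiaddrs below are placeholders): the validator restricts its peer set to reserved sentry nodes, while the sentries run as ordinary full nodes with their P2P port publicly reachable.

# On the validator: connect only to explicitly reserved sentry peers
--validator \
--reserved-only \
--reserved-nodes /dns/sentry1.internal/tcp/30333/p2p/12D3KooW... \
--reserved-nodes /dns/sentry2.internal/tcp/30333/p2p/12D3KooW...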

Performance Optimization

Hardware Tuning

NVMe optimization:

# Check current I/O scheduler
cat /sys/block/nvme0n1/queue/scheduler

# Set to none for NVMe (best for SSDs)
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler

# Make permanent in /etc/udev/rules.d/60-scheduler.rules
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="none"

TCP network tuning:

# Add to /etc/sysctl.conf
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = bbr

# Apply
sudo sysctl -p

File descriptor limits:

# Add to /etc/security/limits.conf
orbinum soft nofile 65536
orbinum hard nofile 65536

# Verify
sudo -u orbinum bash -c 'ulimit -n'

Database Tuning

RocksDB cache size:

# Increase for archive nodes with available RAM
--db-cache 4096 # 4 GB database cache

Offchain worker limits:

--offchain-worker Never  # Disable if not needed
