Runtime Upgrades

Pilier uses forkless upgrades to update blockchain logic without requiring validators to coordinate a hard fork.

This guide explains how runtime upgrades work, how validators should prepare, and what to do if something goes wrong.


What is a Runtime Upgrade?

Overview

The runtime is the blockchain's state transition function (STF). It defines:

What the runtime controls:
├─ Transaction validation rules
├─ Block production logic
├─ Account balances
├─ Governance mechanisms
├─ Staking & rewards
├─ Custom pallets (DPP, agents, etc.)
└─ Fees, limits, parameters

Analogy:

Runtime = Operating system kernel
Node binary = Computer hardware
Upgrade = Installing a kernel update without rebooting

Traditional Upgrades (Hard Forks)

How other blockchains upgrade:

1. Developers release new node software
2. Validators coordinate upgrade time
3. All validators stop old software
4. All validators start new software simultaneously
5. If timing mismatch → chain splits (fork)

Risks:
├─ Coordination overhead (timezones, availability)
├─ Downtime (network halts during upgrade)
├─ Fork risk (if validators don't upgrade in sync)
└─ Requires validator action (manual intervention)

Example (Bitcoin, Ethereum pre-merge):

  • "Upgrade by block height 123,456"
  • Validators must update before that block
  • If some validators don't upgrade → chain splits

Forkless Upgrades (Substrate/Polkadot)

How Pilier upgrades:

1. New runtime code stored in blockchain state
2. Governance approves upgrade (or sudo on testnet)
3. Upgrade executed on-chain (specific block number)
4. All nodes download new runtime from chain
5. Nodes execute new runtime immediately

Benefits:
├─ Zero coordination (automatic)
├─ Zero downtime (seamless transition)
├─ Zero fork risk (all nodes upgrade at same block)
└─ Validators do nothing (just keep node running)

Key insight: Runtime is stored on-chain (not in node binary).


How It Works (Technical)

Block N-1 (old runtime):
├─ Node executes transactions using runtime v1.0
├─ Upgrade proposal on-chain: "At block N, switch to runtime v2.0"
└─ Runtime v2.0 code stored in chain state (wasm blob)

Block N (upgrade block):
├─ Node detects: "New runtime available"
├─ Node loads runtime v2.0 from chain state
├─ Node compiles runtime v2.0 (or uses cached)
├─ Node executes block N using runtime v2.0
└─ All validators switch simultaneously ✓

Block N+1 (new runtime):
├─ All nodes now using runtime v2.0
├─ New features/fixes active
└─ Chain continues without interruption

Execution:

  • Runtime is WebAssembly (Wasm) bytecode
  • Node binary includes a Wasm interpreter
  • Node downloads Wasm from chain → interprets → executes (you can verify this yourself; see below)
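Because the runtime is ordinary chain state, you can fetch it over RPC. A minimal sketch, assuming the local RPC endpoint on port 9933 used elsewhere in this guide; Substrate chains keep the runtime Wasm under the well-known storage key ":code" (hex 0x3a636f6465):

# Fetch the on-chain runtime blob and show its first bytes
# (the response body is a few MB of hex)
curl -s -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method":"state_getStorage", "params":["0x3a636f6465"]}' \
http://127.0.0.1:9933/ | jq -r '.result' | cut -c1-20

# An uncompressed blob starts with 0x0061736d ("\0asm", the Wasm magic
# bytes); most chains store a zstd-compressed blob, which starts with a
# compression prefix instead.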

Runtime vs Node Binary

What's in Each?

Runtime (on-chain, upgradeable):

✅ Contains:
├─ Business logic (pallets)
├─ Transaction validation
├─ State transitions
├─ Governance rules
├─ Fee calculation
└─ Custom features (DPP, agents)

🔄 Upgradeable: Via governance (no validator action)
📦 Format: WebAssembly (Wasm) blob
📍 Stored: On-chain (in chain state)

Node Binary (off-chain, manual update):

✅ Contains:
├─ Networking (P2P)
├─ Consensus (AURA, GRANDPA)
├─ RPC server
├─ Database (storage layer)
├─ Wasm interpreter/executor
└─ Native runtime (optional, for performance)

🔄 Upgradeable: Manual (download new binary)
📦 Format: Compiled binary (ELF, Mach-O)
📍 Stored: Validator server (/usr/local/bin/pilier-node)
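The two components are versioned independently, which you can see by querying each one; paths and ports as used elsewhere in this guide:

# Node binary version (off-chain component)
/usr/local/bin/pilier-node --version

# Runtime version (on-chain component)
curl -s -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method":"state_getRuntimeVersion"}' \
http://127.0.0.1:9933/ | jq '.result | {specName, specVersion}'

A runtime upgrade changes the second output but not the first; a binary update does the opposite.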

Upgrade Matrix

Component    | How to Upgrade                 | Validator Action            | Downtime
-------------|--------------------------------|-----------------------------|------------
Runtime      | On-chain governance            | None (automatic)            | Zero
Node binary  | Download new version, restart  | Manual (systemctl restart)  | ~30 seconds

When Each Needs Updating

Runtime upgrade needed when:

✅ Adding new pallets (e.g., pallet-dpp)
✅ Fixing runtime bugs
✅ Changing transaction fees
✅ Updating governance rules
✅ Modifying staking logic

Node binary upgrade needed when:

✅ New runtime features require new host functions
✅ Consensus changes (AURA/GRANDPA updates)
✅ Performance improvements (database, networking)
✅ Security patches (RPC vulnerabilities)
✅ Substrate framework updates (major versions)

Both needed when:

Major upgrades:
├─ Runtime adds feature requiring new host function
├─ Node binary provides new host function
└─ Upgrade sequence:
   1. Validators update node binary first (manual)
   2. Then runtime upgrade happens (automatic)

Governance-Driven Upgrades (Mainnet)

Proposal Process

Step 1: Proposal submission

Who: Any token holder (requires bond)
What: Propose new runtime Wasm blob
How: Governance pallet (democracy.propose or council.propose)

Example:
├─ Developer compiles new runtime
├─ Generates Wasm blob (runtime.compact.compressed.wasm)
├─ Submits on-chain proposal: "Upgrade to runtime v1.1.0"
├─ Includes hash of Wasm blob (for verification)
└─ Pays proposal bond (returned if approved)

Step 2: Voting period

Testnet: 7 days (fast iteration)
Mainnet: 28 days (thorough review)

Voters:
├─ tPIL token holders (governance token)
├─ Voting weight = tokens locked for voting
└─ Conviction voting (longer lock = more weight)

Outcome:
├─ Approval threshold: >50% (simple majority)
├─ OR: Council fast-track (unanimous consent)
└─ OR: Emergency upgrade (2/3 council + root)

Step 3: Enactment

After approval:
├─ Enactment delay: 7 days (testnet), 28 days (mainnet)
├─ Purpose: Give validators time to update node binary if needed
├─ Runtime stored in chain state (system.setCode)
└─ Automatic execution at specified block height

Example timeline:
├─ Block 10,000: Proposal submitted
├─ Block 110,800: Voting ends (7 days × 14,400 blocks/day)
├─ Block 211,600: Enactment (7 more days)
└─ All nodes switch to new runtime at block 211,600 ✓ (block-height arithmetic below)
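The heights above follow from the 6-second block time: 86,400 seconds per day / 6 seconds per block = 14,400 blocks per day. A shell sketch of the same arithmetic:

# Blocks per day at a 6-second block time
BLOCKS_PER_DAY=$(( 86400 / 6 ))              # 14,400

# Voting ends 7 days after submission at block 10,000
echo $(( 10000 + 7 * BLOCKS_PER_DAY ))       # 110,800

# Enactment a further 7 days later
echo $(( 10000 + 14 * BLOCKS_PER_DAY ))      # 211,600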

Validator Responsibilities

For most runtime upgrades: Do nothing!

Automatic runtime upgrades (no action needed):
├─ Bug fixes
├─ Parameter changes (fees, limits)
├─ New pallets (that don't require new host functions)
└─ Governance updates

Your node automatically:
1. Detects upgrade proposal on-chain
2. Downloads new runtime Wasm from chain state
3. Compiles/caches new runtime
4. Switches at upgrade block

For upgrades requiring node binary update:

Notification channels:
├─ Governance forum post (forum.pilier.net)
├─ Validator Telegram group (validators group)
├─ Email alert (validators@pilier.net mailing list)
└─ GitHub release announcement

Timeline:
├─ Announcement: ~30 days before upgrade
├─ Binary release: ~14 days before upgrade
├─ Upgrade deadline: Upgrade block height
└─ Grace period: the old binary usually keeps working for a few days after the upgrade

Action required:
1. Download new node binary
2. Verify checksum (security!)
3. Stop node (systemctl stop pilier)
4. Replace binary (/usr/local/bin/pilier-node)
5. Start node (systemctl start pilier)
6. Verify node syncing with correct runtime version

Downtime: ~30 seconds (binary restart only)

Example: Major Upgrade (Node + Runtime)

Scenario: Pilier v0.2.0 upgrade (adds pallet-dpp)

Timeline:

Day 0 (Block 100,000): Proposal submitted
├─ Governance forum post: "Proposal: Pilier v0.2.0"
├─ Includes: Runtime changes, new pallet-dpp
└─ Requires: Node binary v0.2.0 (provides new host functions)

Day 7 (Block 200,800): Voting ends
├─ Result: Approved (65% in favor)
└─ Email sent to validators: "Binary update required by Day 35"

Day 14 (Block 301,600): Binary release
├─ GitHub release: pilier-node v0.2.0
├─ Email reminder: "Update node binary in next 14 days"
└─ Validators begin updating (no rush, 14 days available)

Day 28 (Block 503,200): Final reminder
├─ Email: "Upgrade in 7 days - update now if not done"
└─ Most validators already updated (checked via telemetry)

Day 35 (Block 604,000): Upgrade block!
├─ Runtime upgrade executed on-chain
├─ All nodes switch to runtime v0.2.0
├─ Validators with old binary: may fall out of sync (warning: "host function missing")
└─ Validators with new binary: seamless ✓

Day 36+: Laggards update
├─ Validators who missed the deadline update the node binary
├─ Sync back to tip, rejoin validator set
└─ Minor penalty (missed block rewards during downtime)

Sudo-Driven Upgrades (Testnet)

What is Sudo?

Sudo (superuser) is a temporary governance mechanism for testnet.

Purpose: Fast iteration during development
Holder: Pilier Foundation (testnet only)
Powers: Can execute ANY runtime call (including upgrades)

Mainnet: NO sudo (governance only)
Testnet: Sudo enabled (for rapid upgrades)

Why sudo on testnet:

  • Fixes bugs quickly (no 7-day voting period)
  • Tests upgrade process (dry run for mainnet)
  • Experiments with new features (fail fast)

Sudo Upgrade Process

Step 1: Announcement

Channel: Validator Telegram group
Notice: 24-48 hours (testnet), 7 days (for breaking changes)

Example message:
"📢 Testnet Upgrade: v0.1.2
🗓️ Date: Feb 15, 2026, 14:00 UTC (Block ~50,000)
🔧 Changes: Fix fee calculation bug
📦 Binary update: Not required (runtime-only change)
⏱️ Downtime: Zero (forkless upgrade)"

Step 2: Execution

Sudo holder (Pilier Foundation):
1. Compile new runtime (runtime.compact.compressed.wasm)
2. Submit sudo transaction: sudo.sudo(system.setCode(new_runtime))
3. Transaction included in block
4. Runtime upgrades immediately (or at specified block)

Validators: Do nothing (automatic)

Step 3: Verification

Validators should check:
├─ Node logs: "Using runtime version v0.1.2"
├─ Telemetry: All validators on same runtime version
├─ Block production: No interruption
└─ Finality: GRANDPA continuing normally

Check runtime version:
curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method":"state_getRuntimeVersion"}' \
http://127.0.0.1:9933/

Response:
{
  "specName": "pilier",
  "specVersion": 102,       ← Version number (v0.1.2)
  "transactionVersion": 1
}

Preparing for Upgrades

Monitoring Announcements

Subscribe to upgrade notifications:

Essential channels (subscribe to ALL):
├─ GitHub releases: https://github.com/pilier-org/pilier-node/releases
│ └─ Watch → Custom → Releases
├─ Governance forum: https://forum.pilier.net/governance
│ └─ Subscribe to "Governance Proposals" category
├─ Telegram: https://t.me/pilier_validators
│ └─ Enable notifications
└─ Email list: validators@pilier.net
   └─ Send email to subscribe

Pre-Upgrade Checklist

When binary update required:

  • Read release notes (GitHub release page)
      • Breaking changes?
      • New dependencies?
      • Migration steps?

  • Download new binary
      wget https://github.com/pilier-org/pilier-node/releases/download/v0.2.0/pilier-node-linux-x86_64

  • Verify checksum (CRITICAL for security!)
      sha256sum pilier-node-linux-x86_64
      # Compare with published checksum

  • Test on non-validator node first (if available)
      • Sync testnet with new binary
      • Verify no errors for 24 hours
      • Then update validator

  • Backup before upgrade
      # Backup keystore (in case binary incompatible)
      cp -r /var/lib/pilier/chains/*/keystore /secure-backup/

      # Note current block height (returned as hex; see the conversion one-liner after this list)
      curl -s http://127.0.0.1:9933 -H "Content-Type: application/json" \
        -d '{"id":1, "jsonrpc":"2.0", "method":"chain_getBlock"}' \
        | jq -r '.result.block.header.number'

  • Schedule maintenance window
      • Low-traffic time (3-5 AM UTC typical)
      • ~30-60 minute window (actual downtime: ~30 seconds)
      • Notify your team/stakeholders
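Note that chain_getBlock returns the block number as a hex string (e.g. "0xc350"). A one-liner to convert it to decimal, assuming bash and the same endpoint:

# Print the current block height in decimal
printf '%d\n' "$(curl -s http://127.0.0.1:9933 -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method":"chain_getBlock"}' \
| jq -r '.result.block.header.number')"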

Binary Update Procedure

Standard update (no breaking changes):

# 1. Download and verify new binary (see above)

# 2. Stop node
sudo systemctl stop pilier

# 3. Backup old binary (for rollback)
sudo cp /usr/local/bin/pilier-node /usr/local/bin/pilier-node.v0.1.0

# 4. Install new binary
sudo cp pilier-node-linux-x86_64 /usr/local/bin/pilier-node
sudo chmod +x /usr/local/bin/pilier-node

# 5. Verify version
/usr/local/bin/pilier-node --version
# Output: pilier-node 0.2.0-abc123def

# 6. Start node
sudo systemctl start pilier

# 7. Monitor logs
sudo journalctl -u pilier -f

# Expected output:
# 2026-02-15 14:00:00 Pilier Node
# 2026-02-15 14:00:00 version 0.2.0-abc123def
# 2026-02-15 14:00:05 Syncing, target=#50123 ...
# 2026-02-15 14:00:10 Imported #50120
# 2026-02-15 14:00:16 Imported #50121
# 2026-02-15 14:00:20 Using runtime version pilier-102 (v0.1.2) ← Wait for this!
# 2026-02-15 14:01:00 Using runtime version pilier-200 (v0.2.0) ← Upgrade happened!
# 2026-02-15 14:01:10 💤 Idle (5 peers), best: #50125, finalized #50122

Downtime: ~30 seconds (systemctl stop → start)


Runtime Upgrade Execution

What Happens at Upgrade Block

Example: Upgrade at block 50,000

Block 49,999 (last block with old runtime):
├─ All transactions validated with runtime v0.1.0
├─ State root: 0xabc...old
└─ Next runtime code in state: v0.2.0 (waiting)

Block 50,000 (upgrade block):
├─ Node detects: "Scheduled runtime upgrade at this block"
├─ Node loads new runtime from chain state
├─ Node compiles/caches new runtime (if not already done)
├─ CRITICAL: Migration functions execute (if any)
│ ├─ Storage migrations (old format → new format)
│ ├─ Account migrations
│ └─ Pallet initialization (new pallets)
├─ Block 50,000 executed with NEW runtime v0.2.0
└─ State root: 0xdef...new

Block 50,001+ (new runtime active):
├─ All nodes using runtime v0.2.0
├─ New features available
└─ Old runtime never used again

Timing:

  • Block 49,999: ~6 seconds
  • Block 50,000: ~10-30 seconds (longer due to migrations)
  • Block 50,001+: ~6 seconds (back to normal)

Monitoring During Upgrade

Watch telemetry:

https://telemetry.pilier.net

Indicators of successful upgrade:
✅ All validators show same runtime version (e.g., "pilier-200")
✅ Block production continues (new blocks every 6 seconds)
✅ Finality continues (GRANDPA votes visible)
✅ No validators offline (all green)

Red flags:
❌ Some validators stuck on old runtime version
❌ Block production stalled (no new blocks)
❌ Finality stalled (finalized height not increasing)
❌ Many validators offline (red/gray)

Watch your node logs:

sudo journalctl -u pilier -f --since "5 minutes ago"

# Look for:
"New runtime version: spec_version=200" ← Upgrade detected
"Applying 3 storage migrations" ← Migrations running (if any)
"Imported #50000" ← Upgrade block imported
"Using runtime version pilier-200" ← New runtime active
"💤 Idle (5 peers), best: #50001" ← Syncing normally

# Red flags:
"Wasm execution failed" ← Runtime error (likely bug)
"Consensus error" ← Validators disagree on state
"Finality stalled" ← GRANDPA not voting
"Could not find host function" ← Node binary too old!

Check runtime version via RPC:

# Before upgrade (block 49,999):
curl -s -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method":"state_getRuntimeVersion"}' \
http://127.0.0.1:9933/ | jq

# Output:
{
  "specVersion": 100,       ← Old version
  "transactionVersion": 1
}

# After upgrade (block 50,001):
curl -s -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method":"state_getRuntimeVersion"}' \
http://127.0.0.1:9933/ | jq

# Output:
{
  "specVersion": 200,       ← New version ✓
  "transactionVersion": 2
}
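To catch the exact moment of the switch rather than sampling before and after, you can poll specVersion in a loop. A minimal sketch against the same local endpoint:

# Poll the runtime version once per block (~6 s) until it changes
get_spec() {
  curl -s -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method":"state_getRuntimeVersion"}' \
  http://127.0.0.1:9933/ | jq -r '.result.specVersion'
}

OLD=$(get_spec)
while true; do
  NOW=$(get_spec)
  if [ "$NOW" != "$OLD" ]; then
    echo "Runtime upgraded: specVersion $OLD -> $NOW"
    break
  fi
  sleep 6
done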

Troubleshooting

Issue 1: Node Won't Sync After Upgrade

Symptom:

Logs show:
"Could not find host function: ext_foo_bar_version_1"
"Wasm execution failed: Trap"
Node stuck at upgrade block, not progressing

Cause: Node binary too old (doesn't provide required host function).

Fix:

# 1. Update node binary immediately!
# Download latest version from GitHub releases

# 2. Verify release notes mention required binary version
# Example: "Requires node v0.2.0+"

# 3. Stop node
sudo systemctl stop pilier

# 4. Replace binary (see "Binary Update Procedure" above)

# 5. Start node
sudo systemctl start pilier

# 6. Monitor logs - should sync past upgrade block now
sudo journalctl -u pilier -f

Issue 2: Finality Stalled After Upgrade

Symptom:

Telemetry shows:
- Best block: #50,123 (increasing)
- Finalized block: #49,999 (stuck!)

Logs show:
"GRANDPA voter error: No valid votes received"

Cause:

  • Some validators didn't update the node binary (can't execute the new runtime)
  • Fewer than 2/3 of validators are able to vote, so finality cannot proceed

Fix (as validator):

# 1. Ensure YOUR node is updated (see Issue 1)

# 2. Check peer versions
# Via telemetry, see which validators stuck on old runtime

# 3. Report to Pilier team
# Email: validators@pilier.net
# Include: Your validator name, block height, peer IDs stuck

# 4. Wait for other validators to update
# Finality will resume when 2/3+ validators on new runtime

Fix (as network coordinator - Pilier team):

# Contact validators with old binary
# Assist with update if needed
# If validator unresponsive: Consider emergency measures (next section)

Issue 3: Runtime Upgrade Failed (Rare)

Symptom:

All validators stuck at upgrade block
Logs show:
"Migration failed: [error details]"
"State root mismatch"
No blocks produced after upgrade block

Cause: Bug in runtime upgrade or migration logic.

Fix:

Option A: Rollback runtime (if caught immediately)

Testnet (sudo available):
├─ Sudo holder submits: system.setCode(old_runtime_wasm)
├─ Rollback to previous runtime version
└─ Investigate bug, fix, retry upgrade later

Mainnet (governance required):
├─ Emergency council vote (2/3 majority)
├─ Fast-track rollback proposal
├─ Execute rollback via governance
└─ Slower (~7 days), but no sudo on mainnet

Option B: Emergency patch

If rollback not possible (e.g., migrations already applied):
├─ Developers create patch runtime (e.g., v0.2.1)
├─ Fix bug in migration logic
├─ Emergency upgrade to patched runtime
└─ Network resumes

Prevention:

  • Thorough testing on testnet first
  • Multiple testnet upgrades before mainnet
  • Extensive migration testing (test with real testnet state)

Issue 4: Node Binary Incompatible

Symptom:

After updating node binary:
"Database version mismatch"
"Failed to load chain state"
Node won't start

Cause: New node binary requires database migration (rare, but possible).

Fix:

# 1. Check release notes (should mention database migration)

# 2. Backup database (IMPORTANT!)
sudo systemctl stop pilier
sudo cp -r /var/lib/pilier/chains/pilier_testnet/db /secure-backup/db-before-migration

# 3. Run migration command (if provided in release notes)
pilier-node db-migrate --base-path /var/lib/pilier

# 4. Start node
sudo systemctl start pilier

# 5. If migration fails: Restore backup, report bug
sudo rm -rf /var/lib/pilier/chains/pilier_testnet/db
sudo cp -r /secure-backup/db-before-migration /var/lib/pilier/chains/pilier_testnet/db

Issue 5: Missed Upgrade Window

Symptom:

You updated node binary AFTER upgrade block
Node syncing, but showing warnings:
"Runtime version mismatch"
"Expected runtime v200, found v100"

Fix:

# 1. Update node binary (if not already done)
# See "Binary Update Procedure"

# 2. Let node sync
# Node will download historical blocks with old runtime
# Then switch to new runtime at upgrade block
# This is normal! Just wait.

# 3. Monitor logs
# Should see: "Using runtime version pilier-200" after sync passes upgrade block

# 4. Verify caught up
curl -s http://127.0.0.1:9933 -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method":"system_health"}' | jq

# Output:
{
  "isSyncing": false,       ← Should be false
  "peers": 5,
  "shouldHavePeers": true
}

Downtime: Duration of sync (typically 10-30 minutes if a few hours behind).


Best Practices

Validator Hygiene

Do:

  • Subscribe to all announcement channels (GitHub, Telegram, email)
  • Read release notes thoroughly (don't just skim)
  • Test on testnet first (if you run both testnet + mainnet validators)
  • Update during low-traffic periods (3-5 AM UTC)
  • Monitor node for 1 hour after update (catch issues early)
  • Keep 2-3 previous binary versions (for quick rollback)
      /usr/local/bin/pilier-node         ← Current (v0.2.0)
      /usr/local/bin/pilier-node.v0.1.1  ← Previous (backup)
      /usr/local/bin/pilier-node.v0.1.0  ← Older (backup)

Don't:

  • Never update during peak hours (risk to network if issues)
  • Never skip checksum verification (security! a fail-safe sketch follows this list)
  • Never update without reading release notes (might miss critical steps)
  • Never run untested binaries (compile from source if paranoid)
  • Never ignore warnings in logs (early indicators of problems)
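One way to make the checksum step fail-safe is to let sha256sum do the comparison instead of eyeballing hashes. A sketch, assuming (hypothetically) that each release publishes a .sha256 file next to the binary:

# Download the published checksum file (hypothetical asset name)
wget https://github.com/pilier-org/pilier-node/releases/download/v0.2.0/pilier-node-linux-x86_64.sha256

# Prints "pilier-node-linux-x86_64: OK" on a match, exits non-zero otherwise
sha256sum -c pilier-node-linux-x86_64.sha256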

Testing Runtime Upgrades

For developers/advanced operators:

Step 1: Local testnet

# 1. Start local dev chain
pilier-node --dev --tmp

# 2. Compile new runtime
cd runtime/
cargo build --release
# Wasm blob appears under target/release/wbuild/ as <runtime>.compact.compressed.wasm

# 3. Submit runtime upgrade (via Polkadot.js Apps)
# sudo.sudo(system.setCode(new_runtime_wasm))

# 4. Monitor logs, verify upgrade successful

# 5. Test new features work as expected

Step 2: Public testnet

# 1. Deploy validator on testnet
# 2. Submit governance proposal (or coordinate with Pilier team for sudo upgrade)
# 3. Wait for community testing (7 days voting period)
# 4. Execute upgrade
# 5. Monitor testnet stability for 7 days
# 6. If stable → proceed to mainnet proposal

Step 3: Mainnet

# 1. Submit governance proposal
# 2. Campaign for community support (forum posts, validator discussions)
# 3. Pass vote (>50% approval)
# 4. Enactment delay (28 days - validators update node binary)
# 5. Upgrade executes
# 6. Monitor closely for 48 hours
# 7. Post-mortem report (document any issues)

Emergency Procedures

Emergency Runtime Rollback (Testnet)

When: Critical bug discovered immediately after upgrade.

Who: Pilier Foundation (sudo holder).

Process:

1. Identify issue (within 1 hour of upgrade)
2. Retrieve old runtime Wasm from chain history
3. Submit: sudo.sudo(system.setCode(old_runtime_wasm))
4. Rollback executes in next block
5. Announce to validators (Telegram)
6. Investigate bug, fix, schedule new upgrade

Downtime: 1-2 blocks (~6-12 seconds)

Emergency Runtime Rollback (Mainnet)

When: Critical bug AND network consensus.

Who: Emergency council (2/3 majority required).

Process:

1. Council convenes (emergency meeting)
2. Assess severity (is rollback justified?)
3. If yes: Submit emergency proposal (fast-track)
4. Council votes (2/3 threshold, ~6 hours)
5. Rollback executes (if passed)
6. Post-mortem + compensation plan (if users affected)

Downtime: 6-12 hours (council vote + execution)

Criteria for emergency rollback:

Justified:
✅ Chain halted (no blocks produced)
✅ Finality broken (validators can't reach consensus)
✅ Critical security vulnerability (funds at risk)
✅ Data corruption (state root mismatch)

Not justified:
❌ Minor bug (annoying but not critical)
❌ UI issue (frontend only)
❌ Low-impact feature broken
❌ Governance disagreement (should have been caught during proposal)

Node Binary Rollback

When: New node binary causes issues (crashes, won't sync).

Process:

# 1. Stop problematic node
sudo systemctl stop pilier

# 2. Restore previous binary
sudo cp /usr/local/bin/pilier-node.v0.1.0 /usr/local/bin/pilier-node

# 3. Start node
sudo systemctl start pilier

# 4. Verify node syncing normally
sudo journalctl -u pilier -f

# 5. Report bug to developers
# GitHub issue: https://github.com/pilier-org/pilier-node/issues

# 6. Wait for patched binary release

Important:

  • Only rollback node binary (not runtime!)
  • Runtime rollback requires governance (separate process)
  • Node binary rollback: 2 minutes downtime
  • No network impact (only your validator affected)

Upgrade History (Reference)

Testnet Upgrades

Date        | Version | Type           | Changes                       | Downtime
------------|---------|----------------|-------------------------------|---------
2026-02-01  | v0.1.0  | Genesis        | Initial launch                | N/A
2026-02-15  | v0.1.1  | Runtime        | Fix fee calculation           | 0s
2026-03-01  | v0.1.2  | Node + Runtime | Add pallet-timestamp metadata | 30s
2026-04-01  | v0.2.0  | Node + Runtime | Add pallet-dpp (major)        | 45s

Lessons learned:

v0.1.1:
├─ Flawless (runtime-only, no binary update)
└─ All validators upgraded automatically ✓

v0.1.2:
├─ 2 validators late (updated binary 3 hours after upgrade)
├─ Minor finality delay (10 minutes)
└─ Resolved quickly ✓

v0.2.0:
├─ 1 validator never updated (forgotten)
├─ Removed from validator set after 24 hours
└─ Lesson: Better notification system needed

Mainnet Upgrades (Planned)

Date        | Version | Type    | Changes           | Enactment Delay
------------|---------|---------|-------------------|----------------
2026-09-01  | v1.0.0  | Genesis | Mainnet launch    | N/A
TBD         | v1.1.0  | Runtime | (To be announced) | 28 days

FAQ

Do validators vote on runtime upgrades?

Testnet: No (sudo-driven, Pilier Foundation decides).

Mainnet: Yes (governance-driven, tPIL token holders vote).

Validator influence:

  • Validators can vote if they hold tPIL tokens
  • Validators do NOT have special voting rights (their tPIL counts the same as anyone else's)
  • Validators must execute upgrade regardless of vote outcome

What if I miss a node binary update?

Impact:

Short-term (0-24 hours):
├─ Your node may fall out of sync (can't execute new runtime)
├─ You stop producing blocks (downtime)
├─ Miss block rewards (proportional to downtime)
└─ No slashing (just missed rewards)

Long-term (>24 hours):
├─ Governance may remove you from validator set (if unresponsive)
├─ Must re-apply for validator slot
└─ Reputational damage

Recovery:

# Update binary ASAP (see "Binary Update Procedure")
# Let node sync (may take 10-30 minutes)
# You'll rejoin validator set in next session (~1 hour)

Can runtime upgrades introduce bugs?

Yes, but rarely.

Safeguards:

Pre-upgrade:
├─ Extensive testing on testnet
├─ Code review (multiple developers)
├─ Formal verification (for critical pallets)
└─ Community review (28-day voting period on mainnet)

Post-upgrade:
├─ Rollback capability (emergency council on mainnet)
├─ Bug bounty program (incentivize disclosure)
└─ Rapid patching (new runtime within hours if needed)

Historical data (Substrate ecosystem):

  • ~95% of runtime upgrades: flawless
  • ~4%: Minor issues (fixed in follow-up upgrade)
  • ~1%: Major issues (required rollback or emergency patch)

How often do runtime upgrades happen?

Testnet:

Frequency: 2-4 times per month (rapid iteration)
Types: Bug fixes, new features, experiments
Notification: 24-48 hours

Mainnet:

Frequency: 1-2 times per quarter (stable releases)
Types: Security patches, governance-approved features
Notification: 28+ days (enactment delay)

What happens if validators disagree on runtime version?

Scenario: Half of validators on runtime v1, half on v2.

Result:

Block production:
├─ Validators on v1 produce blocks with v1 logic
├─ Validators on v2 produce blocks with v2 logic
└─ Result: two competing chains (fork)

Finality:
├─ GRANDPA requires 2/3+ to finalize
├─ If <2/3 on same runtime → finality stalls
└─ Network halted (no finalized blocks)

Resolution:
├─ Validators must converge on same runtime
├─ Either: Update to v2 (if intended upgrade)
├─ Or: Rollback to v1 (if upgrade failed)
└─ Once >2/3 agree → finality resumes

Prevention:

  • Forkless upgrades (all nodes switch simultaneously)
  • On-chain coordination (upgrade at specific block height)
  • Validator monitoring (telemetry shows runtime versions)

Support

Runtime upgrade questions?
├─ Validator Telegram group: https://t.me/pilier_validators
└─ Email: validators@pilier.net

Emergency during upgrade?
└─ Alert the team in the validator Telegram group, then follow up at validators@pilier.net


Resources

Technical documentation:
└─ GitHub repository: https://github.com/pilier-org/pilier-node

Monitoring:
└─ Telemetry: https://telemetry.pilier.net


Document version: 1.0

Last updated: 2026-01-12

Next review: After first mainnet runtime upgrade