# Lotus Node Clusters

Version 1.19.0 introduces Raft consensus for redundant Lotus node clusters, maintaining consistent state for nonces and published messages in the event of a Lotus node failure.

A minimum of three Lotus nodes is required to enable and use Lotus node clusters.
## Configure the original Lotus node

This guide assumes that you are already operational with at least one Lotus node and miner instance.
- Stop both the miner and daemon instances.
- Browse to your `~/.lotus` repo folder and edit the `config.toml` file, changing `[API] ListenAddress` and `[Libp2p] ListenAddresses`:

```toml
[API]
ListenAddress = "/ip4/127.0.0.1/tcp/4567/http"

[Libp2p]
ListenAddresses = ["/ip4/0.0.0.0/tcp/2222", "/ip6/::/tcp/2222"]
```
## Configure the second Lotus node
- Create a new repo folder for the second node instance, such as `~/.lotus-2`.
- In a new terminal session, set the Lotus path for the second node with `export LOTUS_PATH=/home/username/.lotus-2`.
- Initialize the new node by importing a lightweight snapshot and wait until it has fully synced.
- Stop the second Lotus node and edit the `~/.lotus-2/config.toml` file, changing `[API] ListenAddress` and `[Libp2p] ListenAddresses`:

```toml
[API]
ListenAddress = "/ip4/127.0.0.1/tcp/5678/http"

[Libp2p]
ListenAddresses = ["/ip4/0.0.0.0/tcp/3333", "/ip6/::/tcp/3333"]
```

- Restart the second node and import the Lotus wallet keys from the original node into the second node.
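The wallet step above can be sketched as follows. The address and temp-file path are placeholders, and the `lotus wallet export`/`lotus wallet import` commands are shown commented so nothing runs against a live node by accident:

```shell
# Sketch of copying a wallet key from the original node to the second.
# <address> and /tmp/wallet.hex are placeholders, not values from this guide.

# On the original node (default LOTUS_PATH):
#   lotus wallet export <address> > /tmp/wallet.hex

# On the second node, point the client at the second repo first:
export LOTUS_PATH="$HOME/.lotus-2"
#   lotus wallet import /tmp/wallet.hex
echo "$LOTUS_PATH"
```

The same procedure applies to the third node, with `~/.lotus-3` as the repo path.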
## Configure the third Lotus node
- Create a new repo folder for the third node instance, such as `~/.lotus-3`.
- In a new terminal session, set the Lotus path for the third node with `export LOTUS_PATH=/home/username/.lotus-3`.
- Initialize the new node by importing a lightweight snapshot and wait until it has fully synced.
- Stop the third Lotus node and edit the `~/.lotus-3/config.toml` file, changing `[API] ListenAddress` and `[Libp2p] ListenAddresses`:

```toml
[API]
ListenAddress = "/ip4/127.0.0.1/tcp/6789/http"

[Libp2p]
ListenAddresses = ["/ip4/0.0.0.0/tcp/4444", "/ip6/::/tcp/4444"]
```

- Restart the third node and import the Lotus wallet keys from the original node into the third node.
## Configuring Raft consensus / redundant chain nodes
- There is now a new section in the `config.toml` file for the Lotus node, called `[Cluster]`. If you don't see this section in your own `config.toml`, run `lotus config default` and copy the new section across.
- While all three nodes and your miner are running, configure the `config.toml` for all three nodes as below. You can get the multiaddress for each node by checking the output of `lotus net listen` for all three daemons:

```toml
[Cluster]
ClusterModeEnabled = true
InitPeersetMultiAddr = ["/ip4/127.0.0.1/tcp/2222/p2p/12D3KooWHVawzGL5SG58rS1Ti8m3G8fA9NwEWkfnz1AcRLWq1deF","/ip4/127.0.0.1/tcp/3333/p2p/12D3KooWB2ikW3gvaQiwfdnD8HrFAqBd2Y54gdykLTFybUQsYrBG","/ip4/127.0.0.1/tcp/4444/p2p/12D3KooWHxNgWfmiJGf6sFXbjQhnBHudsXGz9WAuZB1H4LLwxx7V"]
```
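As a sketch, the `InitPeersetMultiAddr` line can be assembled from each node's libp2p port and peer ID. The peer IDs below are the example values from the config above; in practice yours come from the output of `lotus net listen` on each daemon:

```shell
# Build the InitPeersetMultiAddr value from per-node ports and peer IDs.
# Peer IDs here are the example values from above; replace with your own.
ports=(2222 3333 4444)
peers=(12D3KooWHVawzGL5SG58rS1Ti8m3G8fA9NwEWkfnz1AcRLWq1deF \
       12D3KooWB2ikW3gvaQiwfdnD8HrFAqBd2Y54gdykLTFybUQsYrBG \
       12D3KooWHxNgWfmiJGf6sFXbjQhnBHudsXGz9WAuZB1H4LLwxx7V)
entries=""
for i in 0 1 2; do
  # Append a comma separator only after the first entry.
  entries="$entries${entries:+,}\"/ip4/127.0.0.1/tcp/${ports[$i]}/p2p/${peers[$i]}\""
done
line="InitPeersetMultiAddr = [$entries]"
echo "$line"
```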
- On the `lotus-miner`, unset any `LOTUS_PATH` environment variables and add the full node API info for the three daemons: `export FULLNODE_API_INFO=<node0_info>,<node1_info>,<node2_info>`. You can get the API token for each node with `lotus auth api-info --perm admin`. The format of each node's info is `<api_token>:/ip4/<lotus_daemon_ip>/tcp/<lotus_daemon_port>/http`. For example:

```shell
FULLNODE_API_INFO=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.T_meWfWV-F_pX19EPZ1p0uLaRmX3kpE_KFE7nXx9ENs:/ip4/127.0.0.1/tcp/4567/http,eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.lIygxCSIqdSeVvN73aVIme9mRdjOunFsn5eb8K8Q5R8:/ip4/127.0.0.1/tcp/5678/http,eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.arVqeW93VujWC5JlIoumfbRFiHk8BtROp9rsdZPEaVk:/ip4/127.0.0.1/tcp/6789/http
```
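The three per-node strings are simply joined with commas. A sketch with placeholder tokens (on a live setup each value comes from running `lotus auth api-info --perm admin` against that node):

```shell
# Join the three nodes' API info into one FULLNODE_API_INFO value.
# token0..token2 are placeholders for the real JWTs.
node0="token0:/ip4/127.0.0.1/tcp/4567/http"
node1="token1:/ip4/127.0.0.1/tcp/5678/http"
node2="token2:/ip4/127.0.0.1/tcp/6789/http"
export FULLNODE_API_INFO="$node0,$node1,$node2"
echo "$FULLNODE_API_INFO"
```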
- Restart all daemon and miner instances: stop the `lotus-miner` first, followed by the three nodes.
- Start all three nodes, followed by the `lotus-miner`.
- You are now running Raft consensus through node clustering.
- You can check that the cluster is running and determine the current node leader by running `./lotus-shed rpc --version v1 RaftLeader`.
## Cluster config options

You can tune the cluster to your requirements by editing the `[Cluster]` section in the `config.toml` of each of the three nodes.
```toml
[Cluster]
# EXPERIMENTAL. config to enabled node cluster with raft consensus
#
# type: bool
# env var: LOTUS_CLUSTER_CLUSTERMODEENABLED
#ClusterModeEnabled = false

# A folder to store Raft's data.
#
# type: string
# env var: LOTUS_CLUSTER_DATAFOLDER
#DataFolder = ""

# InitPeersetMultiAddr provides the list of initial cluster peers for new Raft
# peers (with no prior state). It is ignored when Raft was already
# initialized or when starting in staging mode.
#
# type: []string
# env var: LOTUS_CLUSTER_INITPEERSETMULTIADDR
#InitPeersetMultiAddr = []

# LeaderTimeout specifies how long to wait for a leader before
# failing an operation.
#
# type: Duration
# env var: LOTUS_CLUSTER_WAITFORLEADERTIMEOUT
#WaitForLeaderTimeout = "15s"

# NetworkTimeout specifies how long before a Raft network
# operation is timed out
#
# type: Duration
# env var: LOTUS_CLUSTER_NETWORKTIMEOUT
#NetworkTimeout = "1m40s"

# CommitRetries specifies how many times we retry a failed commit until
# we give up.
#
# type: int
# env var: LOTUS_CLUSTER_COMMITRETRIES
#CommitRetries = 1

# How long to wait between retries
#
# type: Duration
# env var: LOTUS_CLUSTER_COMMITRETRYDELAY
#CommitRetryDelay = "200ms"

# BackupsRotate specifies the maximum number of Raft's DataFolder
# copies that we keep as backups (renaming) after cleanup.
#
# type: int
# env var: LOTUS_CLUSTER_BACKUPSROTATE
#BackupsRotate = 6

# Tracing enables propagation of contexts across binary boundaries.
#
# type: bool
# env var: LOTUS_CLUSTER_TRACING
#Tracing = false
```
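Per the `env var` comments above, each option can also be set through its environment variable instead of `config.toml`. A sketch (the values here are illustrative, not recommendations):

```shell
# Illustrative overrides using the env vars listed in the comments above.
export LOTUS_CLUSTER_CLUSTERMODEENABLED=true
export LOTUS_CLUSTER_WAITFORLEADERTIMEOUT=30s
export LOTUS_CLUSTER_COMMITRETRIES=3
# A daemon started from this shell would pick these up:
#   lotus daemon
env | grep '^LOTUS_CLUSTER_' | sort
```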