Resilience in small clusters
In smaller clusters, it is most important to be resilient to single-node failures. This section gives some guidance on making your cluster as resilient as possible to the failure of an individual node.
One-node clusters
If your cluster consists of one node, that single node must do everything. To accommodate this, Elasticsearch assigns nodes every role by default.
A single-node cluster is not resilient. If the node fails, the cluster will stop working. Because there are no replicas in a one-node cluster, you cannot store your data redundantly. However, by default at least one replica is required for a green cluster health status. To ensure your cluster can report a green status, override the default by setting index.number_of_replicas to 0 on every index.
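For example, you could apply this override with an update index settings request. This is just a sketch; my-index is a placeholder name, and you can target several indices at once with a comma-separated list or a wildcard pattern:

PUT /my-index/_settings
{
  "index" : {
    "number_of_replicas" : 0
  }
}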
If the node fails, you may need to restore an older copy of any lost indices from a snapshot.
Because they are not resilient to any failures, we do not recommend using one-node clusters in production.
Two-node clusters
If you have two nodes, we recommend they both be data nodes. You should also ensure every shard is stored redundantly on both nodes by setting index.number_of_replicas to 1 on every index. This is the default number of replicas, but it may be overridden by an index template. Auto-expand replicas can also achieve the same thing, but it’s not necessary to use this feature in such a small cluster.
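As a sketch, if you want to make the replica count explicit for new indices, a composable index template (available in recent versions) can pin it; the template name and index pattern below are placeholders:

PUT _index_template/one-replica
{
  "index_patterns": ["my-data-*"],
  "template": {
    "settings": {
      "index.number_of_replicas": 1
    }
  }
}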
We recommend you set node.master: false on one of your two nodes so that it is not master-eligible. This means you can be certain which of your nodes is the elected master of the cluster. The cluster can tolerate the loss of the other master-ineligible node. If you don’t set node.master: false on one node, both nodes are master-eligible. This means both nodes are required for a master election. Since the election will fail if either node is unavailable, your cluster cannot reliably tolerate the loss of either node.
By default, each node is assigned every role. We recommend you assign both nodes all other roles except master eligibility. If one node fails, the other node can handle its tasks.
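A minimal sketch of the corresponding elasticsearch.yml settings, assuming all other roles are left at their defaults:

# On the first node, leave node.master at its default of true;
# no explicit setting is needed, so this node remains master-eligible.

# On the second node, make it master-ineligible:
node.master: false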
You should avoid sending client requests to just one of your nodes. If you do and this node fails, such requests will not receive responses even if the remaining node is a healthy cluster on its own. Ideally, you should balance your client requests across both nodes. A good way to do this is to specify the addresses of both nodes when configuring the client to connect to your cluster. Alternatively, you can use a resilient load balancer to balance client requests across the nodes in your cluster.
Because it’s not resilient to failures, we do not recommend deploying a two-node cluster in production.
Two-node clusters with a tiebreaker
Because master elections are majority-based, the two-node cluster described above is tolerant to the loss of one of its nodes but not the other one. You cannot configure a two-node cluster so that it can tolerate the loss of either node because this is theoretically impossible. You might expect that if either node fails then Elasticsearch can elect the remaining node as the master, but it is impossible to tell the difference between the failure of a remote node and a mere loss of connectivity between the nodes. If both nodes were capable of running independent elections, a loss of connectivity would lead to a split-brain problem and therefore data loss. Elasticsearch avoids this and protects your data by electing neither node as master until that node can be sure that it has the latest cluster state and that there is no other master in the cluster. This could result in the cluster having no master until connectivity is restored.
You can solve this problem by adding a third node and making all three nodes master-eligible. A master election requires only two of the three master-eligible nodes. This means the cluster can tolerate the loss of any single node. This third node acts as a tiebreaker in cases where the two original nodes are disconnected from each other. You can reduce the resource requirements of this extra node by making it a dedicated voting-only master-eligible node, also known as a dedicated tiebreaker. Because it has no other roles, a dedicated tiebreaker does not need to be as powerful as the other two nodes. It will not perform any searches nor coordinate any client requests and cannot be elected as the master of the cluster.
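As a sketch of the dedicated tiebreaker’s elasticsearch.yml, assuming a version that supports the node.voting_only setting and using the same legacy node.* role settings as the rest of this section:

# elasticsearch.yml on the dedicated voting-only tiebreaker node
node.master: true        # master-eligible, so it can cast a vote in elections
node.voting_only: true   # but it can never be elected master itself
node.data: false         # stores no shard data
node.ingest: false       # runs no ingest pipelines
node.ml: false           # runs no machine learning jobs (default distribution only)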
The two original nodes should not be voting-only master-eligible nodes since a resilient cluster requires at least three master-eligible nodes, at least two of which are not voting-only master-eligible nodes. If two of your three nodes are voting-only master-eligible nodes then the elected master must be the third node. This node then becomes a single point of failure.
We recommend assigning both non-tiebreaker nodes all other roles. This creates redundancy by ensuring any task in the cluster can be handled by either node.
You should not send any client requests to the dedicated tiebreaker node. You should also avoid sending client requests to just one of the other two nodes. If you do, and this node fails, then any requests will not receive responses, even if the remaining nodes form a healthy cluster. Ideally, you should balance your client requests across both of the non-tiebreaker nodes. You can do this by specifying the address of both nodes when configuring your client to connect to your cluster. Alternatively, you can use a resilient load balancer to balance client requests across the appropriate nodes in your cluster. The Elastic Cloud service provides such a load balancer.
A two-node cluster with an additional tiebreaker node is the smallest possible cluster that is suitable for production deployments.
Three-node clusters
If you have three nodes, we recommend they all be data nodes and every index should have at least one replica. Nodes are data nodes by default. You may prefer for some indices to have two replicas so that each node has a copy of each shard in those indices. You should also configure each node to be master-eligible so that any two of them can hold a master election without needing to communicate with the third node. Nodes are master-eligible by default. This cluster will be resilient to the loss of any single node.
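For example, assuming a hypothetical index named my-critical-index that you want copied to all three nodes, you could raise its replica count like this:

PUT /my-critical-index/_settings
{
  "index" : {
    "number_of_replicas" : 2
  }
}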
You should avoid sending client requests to just one of your nodes. If you do, and this node fails, then any requests will not receive responses even if the remaining two nodes form a healthy cluster. Ideally, you should balance your client requests across all three nodes. You can do this by specifying the address of multiple nodes when configuring your client to connect to your cluster. Alternatively you can use a resilient load balancer to balance client requests across your cluster. The Elastic Cloud service provides such a load balancer.
Clusters with more than three nodes
Once your cluster grows to more than three nodes, you can start to specialise these nodes according to their responsibilities, allowing you to scale their resources independently as needed. You can have as many data nodes, ingest nodes, machine learning nodes, and so on as needed to support your workload. As your cluster grows larger, we recommend using dedicated nodes for each role, so that you can independently scale the resources for each task.
However, it is good practice to limit the number of master-eligible nodes in the cluster to three. Master nodes do not scale like other node types since the cluster always elects just one of them as the master of the cluster. If there are too many master-eligible nodes then master elections may take a longer time to complete. In larger clusters, we recommend you configure some of your nodes as dedicated master-eligible nodes and avoid sending any client requests to these dedicated nodes. Your cluster may become unstable if the master-eligible nodes are overwhelmed with unnecessary extra work that could be handled by one of the other nodes.
You may configure one of your master-eligible nodes to be a voting-only node so that it can never be elected as the master node. For instance, you may have two dedicated master nodes and a third node that is both a data node and a voting-only master-eligible node. This third voting-only node will act as a tiebreaker in master elections but will never become the master itself.
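A sketch of the elasticsearch.yml role settings for that layout, again using the legacy node.* settings from earlier in this section; the exact mix of other roles is an assumption:

# elasticsearch.yml on each of the two dedicated master-eligible nodes
node.master: true
node.data: false
node.ingest: false
node.ml: false

# elasticsearch.yml on the third node: a data node that is voting-only master-eligible
node.master: true
node.voting_only: true
node.data: true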
Summary
The cluster will be resilient to the loss of any node as long as:
- The cluster health status is green.
- There are at least two data nodes.
- Every index has at least one replica of each shard, in addition to the primary.
- The cluster has at least three master-eligible nodes, as long as at least two of these nodes are not voting-only master-eligible nodes.
- Clients are configured to send their requests to more than one node or are configured to use a load balancer that balances the requests across an appropriate set of nodes. The Elastic Cloud service provides such a load balancer.