Troubleshooting an unstable cluster
Normally, a node will only leave a cluster if deliberately shut down. If a node leaves the cluster unexpectedly, it’s important to address the cause. A cluster in which nodes leave unexpectedly is unstable and can create several issues. For instance:
- The cluster health may be yellow or red.
- Some shards will be initializing and other shards may be failing.
- Search, indexing, and monitoring operations may fail and report exceptions in logs.
- The .security index may be unavailable, blocking access to the cluster.
- The master may appear busy due to frequent cluster state updates.
To troubleshoot a cluster in this state, first ensure the cluster has a stable master. Next, focus on the nodes unexpectedly leaving the cluster ahead of all other issues. It will not be possible to solve other issues until the cluster has a stable master node and stable node membership.
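As a quick point-in-time check (a sketch, assuming the cluster is reachable on localhost:9200 and that you have the necessary credentials), the following requests show which node, if any, is currently elected master and the overall cluster health; for the pattern of behaviour over time, use the logs as described below:
# Show the node that the cluster currently recognises as the elected master
curl -s "localhost:9200/_cat/master?v"
# Overall cluster health, including status and the number of unassigned shards
curl -s "localhost:9200/_cluster/health?pretty"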
Diagnostics and statistics are usually not useful in an unstable cluster. These tools only offer a view of the state of the cluster at a single point in time. Instead, look at the cluster logs to see the pattern of behaviour over time. Focus particularly on logs from the elected master. When a node leaves the cluster, logs for the elected master include a message like this (with line breaks added to make it easier to read):
[2022-03-21T11:02:35,513][INFO ][o.e.c.c.NodeLeftExecutor] [instance-0000000000] node-left: [{instance-0000000004}{bfcMDTiDRkietFb9v_di7w}{aNlyORLASam1ammv2DzYXA}{172.27.47.21}{172.27.47.21:19054}{m}] with reason [disconnected]
This message says that the NodeLeftExecutor on the elected master (instance-0000000000) processed a node-left task, identifying the node that was removed and the reason for its removal. When the node joins the cluster again, logs for the elected master will include a message like this (with line breaks added to make it easier to read):
[2022-03-21T11:02:59,892][INFO ][o.e.c.c.NodeJoinExecutor] [instance-0000000000] node-join: [{instance-0000000004}{bfcMDTiDRkietFb9v_di7w}{UNw_RuazQCSBskWZV8ID_w}{172.27.47.21}{172.27.47.21:19054}{m}] with reason [joining after restart, removed [24s] ago with reason [disconnected]]
This message says that the NodeJoinExecutor on the elected master (instance-0000000000) processed a node-join task, identifying the node that was added to the cluster and the reason for the task.
Other nodes may log similar messages, but report fewer details:
[2020-01-29T11:02:36,985][INFO ][o.e.c.s.ClusterApplierService] [instance-0000000001] removed { {instance-0000000004}{bfcMDTiDRkietFb9v_di7w}{aNlyORLASam1ammv2DzYXA}{172.27.47.21}{172.27.47.21:19054}{m} {tiebreaker-0000000003}{UNw_RuazQCSBskWZV8ID_w}{bltyVOQ-RNu20OQfTHSLtA}{172.27.161.154}{172.27.161.154:19251}{mv} }, term: 14, version: 1653415, reason: Publication{term=14, version=1653415}
These messages are not especially useful for troubleshooting, so focus on the ones from the NodeLeftExecutor and NodeJoinExecutor, which are only emitted on the elected master and which contain more details. If you don’t see the messages from the NodeLeftExecutor and NodeJoinExecutor, check that:
- You’re looking at the logs for the elected master node.
- The logs cover the correct time period.
- Logging is enabled at INFO level.
Nodes will also log a message containing master node changed whenever they start or stop following the elected master. You can use these messages to determine each node’s view of the state of the master over time.
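As a rough sketch (assuming the default log directory of /var/log/elasticsearch; adjust the path and file names for your deployment), you can extract these events from the elected master’s logs and read them as a timeline:
# Collect node-left, node-join, and master-change events in timestamp order
grep -hE "node-left|node-join|master node changed" /var/log/elasticsearch/*.log | sort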
If a node restarts, it will leave the cluster and then join the cluster again.
When it rejoins, the NodeJoinExecutor will log that it processed a node-join task indicating that the node is joining after restart. If a node is unexpectedly restarting, look at the node’s logs to see why it is shutting down.
The Health API on the affected node will also provide some useful information about the situation.
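For example (a sketch, assuming a version of Elasticsearch recent enough to expose the health API and that the node is reachable on localhost:9200), you can request a health report directly from the affected node; where available, the master_is_stable indicator describes that node’s view of the elected master:
# Request the health report from the node itself
curl -s "localhost:9200/_health_report?pretty"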
If the node did not restart then you should look at the reason for its departure more closely. Each reason has different troubleshooting steps, described below. There are three possible reasons:
- disconnected: The connection from the master node to the removed node was closed.
- lagging: The master published a cluster state update, but the removed node did not apply it within the permitted timeout. By default, this timeout is 2 minutes. Refer to Discovery and cluster formation settings for information about the settings which control this mechanism.
- followers check retry count exceeded: The master sent a number of consecutive health checks to the removed node. These checks were rejected or timed out. By default, each health check times out after 10 seconds and Elasticsearch removes the node after three consecutive failed health checks. Refer to Discovery and cluster formation settings for information about the settings which control this mechanism. A sketch after this list shows one way to inspect the values currently in force.
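One way to confirm the timeouts and retry counts in force on your cluster (a sketch, assuming the setting names used by recent versions; check the settings reference for your version) is to list the cluster settings with defaults included and filter for the fault detection and lag settings:
# List every setting, including defaults, and keep the ones that control
# follower checks and the cluster state lag timeout
curl -s "localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty" | grep -E "fault_detection|follower_lag"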
Diagnosing disconnected nodes
Nodes typically leave the cluster with reason disconnected when they shut down, but if they rejoin the cluster without restarting then there is some other problem.
Elasticsearch is designed to run on a fairly reliable network. It opens a number of TCP connections between nodes and expects these connections to remain open forever. If a connection is closed then Elasticsearch will try and reconnect, so the occasional blip may fail some in-flight operations but should otherwise have limited impact on the cluster. In contrast, repeatedly-dropped connections will severely affect its operation.
The connections from the elected master node to every other node in the cluster are particularly important. The elected master never spontaneously closes its outbound connections to other nodes. Similarly, once an inbound connection is fully established, a node never spontaneously closes it unless the node is shutting down.
If you see a node unexpectedly leave the cluster with the disconnected reason, something other than Elasticsearch likely caused the connection to close. A common cause is a misconfigured firewall with an improper timeout or another policy that’s incompatible with Elasticsearch. It could also be caused by general connectivity issues, such as packet loss due to faulty hardware or network congestion. If you’re an advanced user, configure the following loggers to get more detailed information about network exceptions:
logger.org.elasticsearch.transport.TcpTransport: DEBUG
logger.org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport: DEBUG
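These loggers can also be enabled at runtime through the cluster settings API rather than by editing configuration files (a sketch, assuming localhost:9200 and sufficient privileges); set the same keys to null afterwards to restore the default levels:
curl -s -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "logger.org.elasticsearch.transport.TcpTransport": "DEBUG",
    "logger.org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport": "DEBUG"
  }
}'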
If these logs do not show enough information to diagnose the problem, obtain a packet capture simultaneously from the nodes at both ends of an unstable connection and analyse it alongside the Elasticsearch logs from those nodes to determine if traffic between the nodes is being disrupted by another device on the network.
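A capture along the following lines (a sketch, assuming the default transport port of 9300 and that tcpdump is installed; adjust the interface and port for your environment), run at the same time on both ends of the unstable connection, can later be compared in a tool such as Wireshark:
# Record inter-node (transport) traffic to a file for offline analysis
tcpdump -i any -s 0 -w unstable-connection.pcap tcp port 9300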
Diagnosing lagging nodes
Elasticsearch needs every node to process cluster state updates reasonably quickly. If a node takes too long to process a cluster state update, it can be harmful to the cluster. The master will remove these nodes with the lagging reason. Refer to Discovery and cluster formation settings for information about the settings which control this mechanism.
Lagging is typically caused by performance issues on the removed node. However, a node may also lag due to severe network delays. To rule out network delays, ensure that net.ipv4.tcp_retries2 is configured properly. Log messages that contain warn threshold may provide more information about the root cause.
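On Linux you can inspect and adjust this kernel setting as follows (a sketch; the value 5 reflects the commonly recommended reduction from the usual default of 15, but check the guidance for your environment, and persist the change under /etc/sysctl.d so it survives a reboot):
# Show the current TCP retransmission limit
sysctl net.ipv4.tcp_retries2
# Lower it so that broken connections are detected more quickly
sysctl -w net.ipv4.tcp_retries2=5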
If you’re an advanced user, you can get more detailed information about what the node was doing when it was removed by configuring the following logger:
logger.org.elasticsearch.cluster.coordination.LagDetector: DEBUG
When this logger is enabled, Elasticsearch will attempt to run the Nodes hot threads API on the faulty node and report the results in the logs on the elected master. The results are compressed, encoded, and split into chunks to avoid truncation:
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 1]: H4sIAAAAAAAA/x...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 2]: p7x3w1hmOQVtuV...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 3]: v7uTboMGDbyOy+...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 4]: 4tse0RnPnLeDNN...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] (gzip compressed, base64-encoded, and split into 4 parts on preceding log lines)
To reconstruct the output, base64-decode the data and decompress it using gzip. For instance, on Unix-like systems:
cat lagdetector.log | sed -e 's/.*://' | base64 --decode | gzip --decompress
Diagnosing follower check retry count exceeded nodes
Nodes sometimes leave the cluster with reason follower check retry count exceeded when they shut down, but if they rejoin the cluster without restarting then there is some other problem.
Elasticsearch needs every node to respond to network messages successfully and reasonably quickly. If a node rejects requests or does not respond at all then it can be harmful to the cluster. If enough consecutive checks fail then the master will remove the node with reason follower check retry count exceeded and will indicate in the node-left message how many of the consecutive unsuccessful checks failed and how many of them timed out. Refer to Discovery and cluster formation settings for information about the settings which control this mechanism.
Timeouts and failures may be due to network delays or performance problems on the affected nodes. Ensure that net.ipv4.tcp_retries2 is configured properly to eliminate network delays as a possible cause for this kind of instability. Log messages containing warn threshold may give further clues about the cause of the instability.
If the last check failed with an exception then the exception is reported, and typically indicates the problem that needs to be addressed. If any of the checks timed out then narrow down the problem as follows.
- GC pauses are recorded in the GC logs that Elasticsearch emits by default, and also usually by the JvmMonitorService in the main node logs. Use these logs to confirm whether or not the node is experiencing high heap usage with long GC pauses. If so, the troubleshooting guide for high heap usage has some suggestions for further investigation but typically you will need to capture a heap dump and the garbage collector logs during a time of high heap usage to fully understand the problem.
- VM pauses also affect other processes on the same host. A VM pause also typically causes a discontinuity in the system clock, which Elasticsearch will report in its logs. If you see evidence of other processes pausing at the same time, or unexpected clock discontinuities, investigate the infrastructure on which you are running Elasticsearch.
- Packet captures will reveal system-level and network-level faults, especially if you capture the network traffic simultaneously at the elected master and the faulty node and analyse it alongside the Elasticsearch logs from those nodes. The connection used for follower checks is not used for any other traffic so it can be easily identified from the flow pattern alone, even if TLS is in use: almost exactly every second there will be a few hundred bytes sent each way, first the request by the master and then the response by the follower. You should be able to observe any retransmissions, packet loss, or other delays on such a connection.
- Long waits for particular threads to be available can be identified by taking stack dumps of the main Elasticsearch process (for example, using jstack) or a profiling trace (for example, using Java Flight Recorder) in the few seconds leading up to the relevant log message. The Nodes hot threads API sometimes yields useful information, but bear in mind that this API also requires a number of transport_worker and generic threads across all the nodes in the cluster. The API may be affected by the very problem you’re trying to diagnose. jstack is much more reliable since it doesn’t require any JVM threads. The threads involved in discovery and cluster membership are mainly transport_worker and cluster_coordination threads, for which there should never be a long wait. There may also be evidence of long waits for threads in the Elasticsearch logs, particularly looking at warning logs from org.elasticsearch.transport.InboundHandler. See Networking threading model for more information.
By default the follower checks will time out after 30s, so if node departures are unpredictable then capture stack dumps every 15s to be sure that at least one stack dump was taken at the right time.
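A small loop such as the following (a rough sketch, assuming jstack from the bundled JDK is on the PATH and that the Elasticsearch server process can be found by its main class name; substitute the PID directly if the pgrep pattern does not match your setup) collects stack dumps at 15-second intervals:
# Capture a stack dump of the Elasticsearch JVM every 15 seconds
ES_PID="$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)"
while true; do
  jstack "$ES_PID" > "stack-dump-$(date +%Y%m%dT%H%M%S).txt"
  sleep 15
done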
Diagnosing ShardLockObtainFailedException failures
If a node leaves and rejoins the cluster then Elasticsearch will usually shut down and re-initialize its shards. If the shards do not shut down quickly enough then Elasticsearch may fail to re-initialize them due to a ShardLockObtainFailedException.
To gather more information about the reason for shards shutting down slowly, configure the following logger:
logger.org.elasticsearch.env.NodeEnvironment: DEBUG
When this logger is enabled, Elasticsearch will attempt to run the Nodes hot threads API whenever it encounters a ShardLockObtainFailedException. The results are compressed, encoded, and split into chunks to avoid truncation:
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 1]: H4sIAAAAAAAA/x...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 2]: p7x3w1hmOQVtuV...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 3]: v7uTboMGDbyOy+...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 4]: 4tse0RnPnLeDNN...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] (gzip compressed, base64-encoded, and split into 4 parts on preceding log lines)
To reconstruct the output, base64-decode the data and decompress it using gzip. For instance, on Unix-like systems:
cat shardlock.log | sed -e 's/.*://' | base64 --decode | gzip --decompress
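If shards remain unassigned after such a failure, the cluster allocation explain API reports the most recent allocation failure for an unassigned shard, including any ShardLockObtainFailedException (a sketch, assuming localhost:9200):
# Explain why an unassigned shard is not currently allocated,
# including the last reported allocation failure
curl -s "localhost:9200/_cluster/allocation/explain?pretty"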
Diagnosing other network disconnections
Elasticsearch is designed to run on a fairly reliable network. It opens a number of TCP connections between nodes and expects these connections to remain open forever. If a connection is closed then Elasticsearch will try and reconnect, so the occasional blip may fail some in-flight operations but should otherwise have limited impact on the cluster. In contrast, repeatedly-dropped connections will severely affect its operation.
Elasticsearch nodes will only actively close an outbound connection to another node if the other node leaves the cluster. See Troubleshooting an unstable cluster for further information about identifying and troubleshooting this situation. If an outbound connection closes for some other reason, nodes will log a message such as the following:
[INFO ][o.e.t.ClusterConnectionManager] [node-1] transport connection to [{node-2}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] closed by remote
Similarly, once an inbound connection is fully established, a node never spontaneously closes it unless the node is shutting down.
Therefore if you see a node report that a connection to another node closed unexpectedly, something other than Elasticsearch likely caused the connection to close. A common cause is a misconfigured firewall with an improper timeout or another policy that’s incompatible with Elasticsearch. It could also be caused by general connectivity issues, such as packet loss due to faulty hardware or network congestion. If you’re an advanced user, configure the following loggers to get more detailed information about network exceptions:
logger.org.elasticsearch.transport.TcpTransport: DEBUG
logger.org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport: DEBUG
If these logs do not show enough information to diagnose the problem, obtain a packet capture simultaneously from the nodes at both ends of an unstable connection and analyse it alongside the Elasticsearch logs from those nodes to determine if traffic between the nodes is being disrupted by another device on the network.