Fix common cluster issues
This guide describes how to fix common errors and problems with Elasticsearch clusters.
Error: disk usage exceeded flood-stage watermark, index has read-only-allow-delete block
This error indicates a data node is critically low on disk space and has reached the flood-stage disk usage watermark. To prevent a full disk, when a node reaches this watermark, Elasticsearch blocks writes to any index with a shard on the node. If the block affects related system indices, Kibana and other Elastic Stack features may become unavailable.
Elasticsearch will automatically remove the write block when the affected node’s disk usage goes below the high disk watermark. To achieve this, Elasticsearch automatically moves some of the affected node’s shards to other nodes in the same data tier.
To verify that shards are moving off the affected node, use the cat shards API.
GET _cat/shards?v=true
If shards remain on the node, use the cluster allocation explanation API to get an explanation for their allocation status.
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
To immediately restore write operations, you can temporarily increase the disk watermarks and remove the write block.
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}
PUT */_settings?expand_wildcards=all
{
  "index.blocks.read_only_allow_delete": null
}
As a long-term solution, we recommend you add nodes to the affected data tiers or upgrade existing nodes to increase disk space. To free up additional disk space, you can delete unneeded indices using the delete index API.
DELETE my-index
When a long-term solution is in place, reset or reconfigure the disk watermarks.
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null
  }
}
Circuit breaker errors
Elasticsearch uses circuit breakers to prevent nodes from running out of JVM heap memory. If Elasticsearch estimates an operation would exceed a circuit breaker, it stops the operation and returns an error.
By default, the parent circuit breaker triggers at 95% JVM memory usage. To prevent errors, we recommend taking steps to reduce memory pressure if usage consistently exceeds 85%.
Diagnose circuit breaker errors
Error messages
If a request triggers a circuit breaker, Elasticsearch returns an error with a 429 HTTP status code.
{
  "error": {
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<http_request>] would be [123848638/118.1mb], which is larger than the limit of [123273216/117.5mb], real usage: [120182112/114.6mb], new bytes reserved: [3666526/3.4mb]",
    "bytes_wanted": 123848638,
    "bytes_limit": 123273216,
    "durability": "TRANSIENT"
  },
  "status": 429
}
Elasticsearch also writes circuit breaker errors to elasticsearch.log. This is helpful when automated processes, such as allocation, trigger a circuit breaker.
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [num/numGB], which is larger than the limit of [num/numGB], usages [request=0/0b, fielddata=num/numKB, in_flight_requests=num/numGB, accounting=num/numGB]
Check JVM memory usage
If you’ve enabled Stack Monitoring, you can view JVM memory usage in Kibana. In the main menu, click Stack Monitoring. On the Stack Monitoring Overview page, click Nodes. The JVM Heap column lists the current memory usage for each node.
You can also use the cat nodes API to get the current heap.percent for each node.
GET _cat/nodes?v=true&h=name,node*,heap*
To get the JVM memory usage for each circuit breaker, use the node stats API.
GET _nodes/stats/breaker
Prevent circuit breaker errors
Reduce JVM memory pressure
High JVM memory pressure often causes circuit breaker errors. See High JVM memory pressure.
Avoid using fielddata on text fields
For high-cardinality text fields, fielddata can use a large amount of JVM memory. To avoid this, Elasticsearch disables fielddata on text fields by default. If you’ve enabled fielddata and triggered the fielddata circuit breaker, consider disabling it and using a keyword field instead. See the fielddata mapping parameter.
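For example, the following update mapping request is a minimal sketch, assuming a hypothetical index my-index with a text field my_field: it disables fielddata on the field and adds a keyword multi-field that aggregations and sorts can target instead.
PUT my-index/_mapping
{
  "properties": {
    "my_field": {
      "type": "text",
      "fielddata": false,
      "fields": {
        "keyword": {
          "type": "keyword"
        }
      }
    }
  }
}
The new my_field.keyword sub-field is only populated for documents indexed or updated after the mapping change, so existing data may need to be reindexed before switching aggregations to it.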
Clear the fielddata cache
If you’ve triggered the fielddata circuit breaker and can’t disable fielddata, use the clear cache API to clear the fielddata cache. This may disrupt any in-flight searches that use fielddata.
POST _cache/clear?fielddata=true
High CPU usage
Elasticsearch uses thread pools to manage CPU resources for concurrent operations. High CPU usage typically means one or more thread pools are running low on available threads.
If a thread pool is depleted, Elasticsearch will reject requests related to the thread pool. For example, if the search thread pool is depleted, Elasticsearch will reject search requests until more threads are available.
Diagnose high CPU usage
Check CPU usage
From your deployment menu, click Performance. The page’s CPU Usage chart shows your deployment’s CPU usage as a percentage.
High CPU usage can also deplete your CPU credits. CPU credits let Elasticsearch Service provide smaller clusters with a performance boost when needed. The CPU credits chart shows your remaining CPU credits, measured in seconds of CPU time.
You can also use the cat nodes API to get the current CPU usage for each node.
GET _cat/nodes?v=true&s=cpu:desc
The response’s cpu column contains the current CPU usage as a percentage. The name column contains the node’s name.
Check hot threads
If a node has high CPU usage, use the nodes hot threads API to check for resource-intensive threads running on the node.
GET _nodes/my-node,my-other-node/hot_threads
This API returns a breakdown of any hot threads in plain text.
Reduce CPU usage
The following tips outline the most common causes of high CPU usage and their solutions.
Scale your cluster
Heavy indexing and search loads can deplete smaller thread pools. To better handle heavy workloads, add more nodes to your cluster or upgrade your existing nodes to increase capacity.
Spread out bulk requests
While more efficient than individual requests, large bulk indexing or multi-search requests still require CPU resources. If possible, submit smaller requests and allow more time between them.
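As a sketch of what this looks like in practice, assuming a hypothetical my-index, you could replace one very large bulk request with several smaller ones like the following and pause between them:
POST _bulk
{ "index": { "_index": "my-index" } }
{ "message": "document 1" }
{ "index": { "_index": "my-index" } }
{ "message": "document 2" }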
Cancel long-running searches
Long-running searches can block threads in the search thread pool. To check for these searches, use the task management API.
GET _tasks?actions=*search&detailed
The response’s description contains the search request and its queries. running_time_in_nanos shows how long the search has been running.
{ "nodes" : { "oTUltX4IQMOUUVeiohTt8A" : { "name" : "my-node", "transport_address" : "127.0.0.1:9300", "host" : "127.0.0.1", "ip" : "127.0.0.1:9300", "tasks" : { "oTUltX4IQMOUUVeiohTt8A:464" : { "node" : "oTUltX4IQMOUUVeiohTt8A", "id" : 464, "type" : "transport", "action" : "indices:data/read/search", "description" : "indices[my-index], search_type[QUERY_THEN_FETCH], source[{\"query\":...}]", "start_time_in_millis" : 4081771730000, "running_time_in_nanos" : 13991383, "cancellable" : true } } } } }
To cancel a search and free up resources, use the API’s _cancel endpoint.
POST _tasks/oTUltX4IQMOUUVeiohTt8A:464/_cancel
For additional tips on how to track and avoid resource-intensive searches, see Avoid expensive searches.
High JVM memory pressure
High JVM memory usage can degrade cluster performance and trigger circuit breaker errors. To prevent this, we recommend taking steps to reduce memory pressure if a node’s JVM memory usage consistently exceeds 85%.
Diagnose high JVM memory pressure
Check JVM memory pressure
From your deployment menu, click Elasticsearch. Under Instances, each instance displays a JVM memory pressure indicator. When the JVM memory pressure reaches 75%, the indicator turns red.
You can also use the nodes stats API to calculate the current JVM memory pressure for each node.
GET _nodes/stats?filter_path=nodes.*.jvm.mem.pools.old
Use the response to calculate memory pressure as follows:
JVM Memory Pressure = used_in_bytes / max_in_bytes
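For example, assuming the old generation pool reports a hypothetical used_in_bytes of 5,500,000,000 and max_in_bytes of 8,000,000,000:
JVM Memory Pressure = 5,500,000,000 / 8,000,000,000 ≈ 0.69, or roughly 69%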
Check garbage collection logs
As memory usage increases, garbage collection becomes more frequent and takes longer. You can track the frequency and length of garbage collection events in elasticsearch.log. For example, the following event states Elasticsearch spent more than 50% (21 seconds) of the last 40 seconds performing garbage collection.
[timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]
Reduce JVM memory pressure
Reduce your shard count
Every shard uses memory. In most cases, a small set of large shards uses fewer resources than many small shards. For tips on reducing your shard count, see Size your shards.
Avoid expensive searches
Expensive searches can use large amounts of memory. To better track expensive searches on your cluster, enable slow logs.
Expensive searches may have a large size argument, use aggregations with a large number of buckets, or include expensive queries. To prevent expensive searches, consider the following setting changes:
- Lower the size limit using the index.max_result_window index setting.
- Decrease the maximum number of allowed aggregation buckets using the search.max_buckets cluster setting.
- Disable expensive queries using the search.allow_expensive_queries cluster setting.
PUT _settings
{
  "index.max_result_window": 5000
}
PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000,
    "search.allow_expensive_queries": false
  }
}
Prevent mapping explosions
Defining too many fields or nesting fields too deeply can lead to mapping explosions that use large amounts of memory. To prevent mapping explosions, use the mapping limit settings to limit the number of field mappings.
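For example, the following request is a sketch that lowers the total field limit for a hypothetical my-index using the dynamic index.mapping.total_fields.limit setting (the default limit is 1000 fields):
PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 500
}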
Spread out bulk requests
While more efficient than individual requests, large bulk indexing or multi-search requests can still create high JVM memory pressure. If possible, submit smaller requests and allow more time between them.
Upgrade node memory
Heavy indexing and search loads can cause high JVM memory pressure. To better handle heavy workloads, upgrade your nodes to increase their memory capacity.
Red or yellow cluster status
A red or yellow cluster status indicates one or more shards are missing or unallocated. These unassigned shards increase your risk of data loss and can degrade cluster performance.
Diagnose your cluster status
Check your cluster status
Use the cluster health API.
GET _cluster/health?filter_path=status,*_shards
A healthy cluster has a green status and zero unassigned_shards. A yellow status means only replicas are unassigned. A red status means one or more primary shards are unassigned.
View unassigned shards
To view unassigned shards, use the cat shards API.
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
Unassigned shards have a state of UNASSIGNED. The prirep value is p for primary shards and r for replicas. The unassigned.reason describes why the shard remains unassigned.
To get a more in-depth explanation of an unassigned shard’s allocation status, use the cluster allocation explanation API. You can often use details in the response to resolve the issue.
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
Fix a red or yellow cluster status
A shard can become unassigned for several reasons. The following tips outline the most common causes and their solutions.
Re-enable shard allocation
You typically disable allocation during a restart or other cluster maintenance. If you forgot to re-enable allocation afterward, Elasticsearch will be unable to assign shards. To re-enable allocation, reset the cluster.routing.allocation.enable cluster setting.
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}
Recover lost nodes
Shards often become unassigned when a data node leaves the cluster. This can occur for several reasons, ranging from connectivity issues to hardware failure. After you resolve the issue and recover the node, it will rejoin the cluster. Elasticsearch will then automatically allocate any unassigned shards.
To avoid wasting resources on temporary issues, Elasticsearch delays allocation by one minute by default. If you’ve recovered a node and don’t want to wait for the delay period, you can call the cluster reroute API with no arguments to start the allocation process. The process runs asynchronously in the background.
POST _cluster/reroute
Fix allocation settings
Misconfigured allocation settings can result in an unassigned primary shard. These settings include:
- Shard allocation index settings
- Allocation filtering cluster settings
- Allocation awareness cluster settings
To review your allocation settings, use the get index settings and cluster get settings APIs.
GET my-index/_settings?flat_settings=true&include_defaults=true
GET _cluster/settings?flat_settings=true&include_defaults=true
You can change the settings using the update index settings and cluster update settings APIs.
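For example, if a leftover allocation filter is preventing assignment, a request like the following sketch clears it (assuming a hypothetical my-index that has an index.routing.allocation.require._name filter set):
PUT my-index/_settings
{
  "index.routing.allocation.require._name": null
}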
Allocate or reduce replicas
To protect against hardware failure, Elasticsearch will not assign a replica to the same node as its primary shard. If no other data nodes are available to host the replica, it remains unassigned. To fix this, you can:
- Add a data node to the same tier to host the replica.
- Change the index.number_of_replicas index setting to reduce the number of replicas for each primary shard. We recommend keeping at least one replica per primary.
PUT _settings
{
  "index.number_of_replicas": 1
}
Free up or increase disk space
Elasticsearch uses a low disk watermark to ensure data nodes have enough disk space for incoming shards. By default, Elasticsearch does not allocate shards to nodes using more than 85% of disk space.
To check the current disk space of your nodes, use the cat allocation API.
GET _cat/allocation?v=true&h=node,shards,disk.*
If your nodes are running low on disk space, you have a few options:
- Upgrade your nodes to increase disk space.
- Delete unneeded indices to free up space. If you use ILM, you can update your lifecycle policy to use searchable snapshots or add a delete phase. If you no longer need to search the data, you can use a snapshot to store it off-cluster.
- If you no longer write to an index, use the force merge API or ILM’s force merge action to merge its segments into larger ones.
POST my-index/_forcemerge
- If an index is read-only, use the shrink index API or ILM’s shrink action to reduce its primary shard count.
POST my-index/_shrink/my-shrunken-index
- If your node has a large disk capacity, you can increase the low disk watermark or set it to an explicit byte value.
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "30gb"
  }
}
Reduce JVM memory pressure
Shard allocation requires JVM heap memory. High JVM memory pressure can trigger circuit breakers that stop allocation and leave shards unassigned. See High JVM memory pressure.
Recover data for a lost primary shard
If a node containing a primary shard is lost, Elasticsearch can typically replace it using a replica on another node. If you can’t recover the node and replicas don’t exist or are irrecoverable, you’ll need to re-add the missing data from a snapshot or the original data source.
Only use this option if node recovery is no longer possible. This process allocates an empty primary shard. If the node later rejoins the cluster, Elasticsearch will overwrite its primary shard with data from this newer empty shard, resulting in data loss.
Use the cluster reroute API to manually allocate the unassigned primary shard to another data node in the same tier. Set accept_data_loss to true.
POST _cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "my-node",
        "accept_data_loss": "true"
      }
    }
  ]
}
If you backed up the missing index data to a snapshot, use the restore snapshot API to restore the individual index. Alternatively, you can index the missing data from the original data source.
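For example, the following restore request is a minimal sketch, assuming a hypothetical snapshot my-snapshot in a registered repository my-repository that contains the missing index. Delete or close the existing index first so the restore can recreate it.
POST _snapshot/my-repository/my-snapshot/_restore
{
  "indices": "my-index"
}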
Rejected requests
When Elasticsearch rejects a request, it stops the operation and returns an error with a 429 response code. Rejected requests are commonly caused by:
- A depleted thread pool. A depleted search or write thread pool returns a TOO_MANY_REQUESTS error message.
- A circuit breaker error.
- High indexing pressure that exceeds the indexing_pressure.memory.limit (see the example after this list).
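If you suspect indexing pressure, you can check each node’s indexing pressure statistics with the node stats API. The following is a sketch that assumes a recent Elasticsearch version where node stats include an indexing_pressure section, using filter_path to narrow the response:
GET _nodes/stats?filter_path=nodes.*.indexing_pressure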
Check rejected tasks
To check the number of rejected tasks for each thread pool, use the cat thread pool API. A high ratio of rejected to completed tasks, particularly in the search and write thread pools, means Elasticsearch regularly rejects requests.
GET /_cat/thread_pool?v=true&h=id,name,active,rejected,completed
Prevent rejected requests
Fix high CPU and memory usage
If Elasticsearch regularly rejects requests and other tasks, your cluster likely has high CPU usage or high JVM memory pressure. For tips, see High CPU usage and High JVM memory pressure.
Prevent circuit breaker errors
If you regularly trigger circuit breaker errors, see Circuit breaker errors for tips on diagnosing and preventing them.
Task queue backlog
A backlogged task queue can prevent tasks from completing and put the cluster into an unhealthy state. Resource constraints, a large number of tasks being triggered at once, and long-running tasks can all contribute to a backlogged task queue.
Diagnose a task queue backlog
Check the thread pool status
A depleted thread pool can result in rejected requests.
You can use the cat thread pool API to see the number of active threads in each thread pool and how many tasks are queued, how many have been rejected, and how many have completed.
GET /_cat/thread_pool?v&s=t,n&h=type,name,node_name,active,queue,rejected,completed
Inspect the hot threads on each node
If a particular thread pool queue is backed up, you can periodically poll the nodes hot threads API to determine if the thread has sufficient resources to progress and gauge how quickly it is progressing.
GET /_nodes/hot_threads
Look for long running tasks
Long-running tasks can also cause a backlog.
You can use the task management API to get information about the tasks that are running.
Check the running_time_in_nanos to identify tasks that are taking an excessive amount of time to complete.
GET /_tasks?filter_path=nodes.*.tasks
Resolve a task queue backlog
Increase available resources
If tasks are progressing slowly and the queue is backing up, you might need to take steps to Reduce CPU usage.
In some cases, increasing the thread pool size might help. For example, the force_merge thread pool defaults to a single thread. Increasing the size to 2 might help reduce a backlog of force merge requests.
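Thread pool sizes are static node settings, so a change like the following sketch would go in each node’s elasticsearch.yml and take effect after a restart (assuming you want two force merge threads):
thread_pool.force_merge.size: 2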
Cancel stuck tasks
If you find the active task’s hot thread isn’t progressing and there’s a backlog, consider canceling the task.
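To cancel it, pass the task ID reported by the task management API to the task cancellation endpoint, as in this sketch with a hypothetical task ID:
POST _tasks/oTUltX4IQMOUUVeiohTt8A:464/_cancel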