Troubleshooting shards capacity health issues
Elasticsearch limits the maximum number of shards that can be held per node using the cluster.max_shards_per_node and cluster.max_shards_per_node.frozen settings. The current shards capacity of the cluster is available in the shards capacity section of the health API.
Cluster is close to reaching the configured maximum number of shards for data nodes.

The cluster.max_shards_per_node cluster setting limits the maximum number of open shards for a cluster, only counting data nodes that do not belong to the frozen tier.
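The max_shards_in_cluster figure reported by the health API is derived from this setting and the number of non-frozen data nodes (setting value multiplied by node count), so the same setting allows more shards on a larger cluster. If you want to check both inputs yourself, a minimal Console sketch using standard APIs (the query parameters only flatten and trim the output):

# Show the effective value of cluster.max_shards_per_node (falls back to the default of 1000 if unset)
GET _cluster/settings?include_defaults=true&flat_settings=true

# List node roles so you can count the data nodes that are not in the frozen tier
GET _cat/nodes?v=true&h=name,node.role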
This symptom indicates that action should be taken; otherwise, either the creation of new indices or upgrading the cluster could be blocked.
If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the cluster update settings API:
Use Kibana
- Log in to the Elastic Cloud console.
- On the Elasticsearch Service panel, click the name of your deployment.
  If the name of your deployment is disabled, your Kibana instances might be unhealthy, in which case please contact Elastic Support. If your deployment doesn’t include Kibana, all you need to do is enable it first.
- Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to Dev Tools > Console.
- Check the current status of the cluster according to the shards capacity indicator:

  response = client.health_report(
    feature: 'shards_capacity'
  )
  puts response

  GET _health_report/shards_capacity

  The response will look like this:

  {
    "cluster_name": "...",
    "indicators": {
      "shards_capacity": {
        "status": "yellow",
        "symptom": "Cluster is close to reaching the configured maximum number of shards for data nodes.",
        "details": {
          "data": {
            "max_shards_in_cluster": 1000,
            "current_used_shards": 988
          },
          "frozen": {
            "max_shards_in_cluster": 3000,
            "current_used_shards": 0
          }
        },
        "impacts": [ ... ],
        "diagnosis": [ ... ]
      }
    }
  }
- Update the cluster.max_shards_per_node setting with a proper value:

  response = client.cluster.put_settings(
    body: {
      persistent: {
        'cluster.max_shards_per_node' => 1200
      }
    }
  )
  puts response

  PUT _cluster/settings
  {
    "persistent" : {
      "cluster.max_shards_per_node": 1200
    }
  }
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster’s shard count on nodes that do not belong to the frozen tier.
- To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:

  response = client.health_report(
    feature: 'shards_capacity'
  )
  puts response

  GET _health_report/shards_capacity

  The response will look like this:

  {
    "cluster_name": "...",
    "indicators": {
      "shards_capacity": {
        "status": "green",
        "symptom": "The cluster has enough room to add new shards.",
        "details": {
          "data": {
            "max_shards_in_cluster": 1200
          },
          "frozen": {
            "max_shards_in_cluster": 3000
          }
        }
      }
    }
  }
- When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node limit:

  response = client.cluster.put_settings(
    body: {
      persistent: {
        'cluster.max_shards_per_node' => nil
      }
    }
  )
  puts response

  PUT _cluster/settings
  {
    "persistent" : {
      "cluster.max_shards_per_node": null
    }
  }
Self-managed

Check the current status of the cluster according to the shards capacity indicator:

response = client.health_report(
  feature: 'shards_capacity'
)
puts response

GET _health_report/shards_capacity

The response will look like this:

{
  "cluster_name": "...",
  "indicators": {
    "shards_capacity": {
      "status": "yellow",
      "symptom": "Cluster is close to reaching the configured maximum number of shards for data nodes.",
      "details": {
        "data": {
          "max_shards_in_cluster": 1000,
          "current_used_shards": 988
        },
        "frozen": {
          "max_shards_in_cluster": 3000
        }
      },
      "impacts": [ ... ],
      "diagnosis": [ ... ]
    }
  }
}

In this response, max_shards_in_cluster shows the current limit set by cluster.max_shards_per_node, and current_used_shards shows the current number of open shards across the cluster.
Using the cluster settings API, update the cluster.max_shards_per_node setting:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node' => 1200
    }
  }
)
puts response

PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": 1200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster’s shard count on nodes that do not belong to the frozen tier (see the sketch at the end of this section for one way to find which indices hold the most shards).

To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:
response = client.health_report(
  feature: 'shards_capacity'
)
puts response

GET _health_report/shards_capacity

The response will look like this:

{
  "cluster_name": "...",
  "indicators": {
    "shards_capacity": {
      "status": "green",
      "symptom": "The cluster has enough room to add new shards.",
      "details": {
        "data": {
          "max_shards_in_cluster": 1200
        },
        "frozen": {
          "max_shards_in_cluster": 3000
        }
      }
    }
  }
}
When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node limit:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node' => nil
    }
  }
)
puts response

PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": null
  }
}
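If you prefer to reduce the shard count rather than keep a higher limit, it helps to see which indices contribute the most shards. A minimal Console sketch using the cat indices API (the s parameter sorts by primary shard count; the columns are standard cat columns):

# List indices with their primary and replica shard counts, largest primary count first
GET _cat/indices?v=true&h=index,pri,rep,docs.count&s=pri:desc

From there, common options are deleting indices you no longer need, reducing the number of replicas, or shrinking indices that have more primary shards than their data volume requires.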
Cluster is close to reaching the configured maximum number of shards for frozen nodes.

The cluster.max_shards_per_node.frozen cluster setting limits the maximum number of open shards for a cluster, only counting data nodes that belong to the frozen tier.
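As with the data-tier counter, the frozen max_shards_in_cluster value reported by the health API is derived from this setting and the number of frozen-tier data nodes. A quick way to count those nodes is to list node roles; this is a sketch, and in the abbreviated node.role column frozen data nodes are typically shown with an f:

# List node names and roles; count the nodes whose roles include the frozen data role
GET _cat/nodes?v=true&h=name,node.role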
This symptom indicates that action should be taken; otherwise, either the creation of new indices or upgrading the cluster could be blocked.
If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the cluster update settings API:
Use Kibana
- Log in to the Elastic Cloud console.
- On the Elasticsearch Service panel, click the name of your deployment.
  If the name of your deployment is disabled, your Kibana instances might be unhealthy, in which case please contact Elastic Support. If your deployment doesn’t include Kibana, all you need to do is enable it first.
- Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to Dev Tools > Console.
- Check the current status of the cluster according to the shards capacity indicator:

  response = client.health_report(
    feature: 'shards_capacity'
  )
  puts response

  GET _health_report/shards_capacity

  The response will look like this:

  {
    "cluster_name": "...",
    "indicators": {
      "shards_capacity": {
        "status": "yellow",
        "symptom": "Cluster is close to reaching the configured maximum number of shards for frozen nodes.",
        "details": {
          "data": {
            "max_shards_in_cluster": 1000
          },
          "frozen": {
            "max_shards_in_cluster": 3000,
            "current_used_shards": 2998
          }
        },
        "impacts": [ ... ],
        "diagnosis": [ ... ]
      }
    }
  }
- Update the cluster.max_shards_per_node.frozen setting:

  response = client.cluster.put_settings(
    body: {
      persistent: {
        'cluster.max_shards_per_node.frozen' => 3200
      }
    }
  )
  puts response

  PUT _cluster/settings
  {
    "persistent" : {
      "cluster.max_shards_per_node.frozen": 3200
    }
  }
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster’s shard count on nodes that belong to the frozen tier.
- To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:

  response = client.health_report(
    feature: 'shards_capacity'
  )
  puts response

  GET _health_report/shards_capacity

  The response will look like this:

  {
    "cluster_name": "...",
    "indicators": {
      "shards_capacity": {
        "status": "green",
        "symptom": "The cluster has enough room to add new shards.",
        "details": {
          "data": {
            "max_shards_in_cluster": 1000
          },
          "frozen": {
            "max_shards_in_cluster": 3200
          }
        }
      }
    }
  }
- When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node.frozen limit:

  response = client.cluster.put_settings(
    body: {
      persistent: {
        'cluster.max_shards_per_node.frozen' => nil
      }
    }
  )
  puts response

  PUT _cluster/settings
  {
    "persistent" : {
      "cluster.max_shards_per_node.frozen": null
    }
  }
Self-managed

Check the current status of the cluster according to the shards capacity indicator:

response = client.health_report(
  feature: 'shards_capacity'
)
puts response

GET _health_report/shards_capacity

The response will look like this:

{
  "cluster_name": "...",
  "indicators": {
    "shards_capacity": {
      "status": "yellow",
      "symptom": "Cluster is close to reaching the configured maximum number of shards for frozen nodes.",
      "details": {
        "data": {
          "max_shards_in_cluster": 1000
        },
        "frozen": {
          "max_shards_in_cluster": 3000,
          "current_used_shards": 2998
        }
      },
      "impacts": [ ... ],
      "diagnosis": [ ... ]
    }
  }
}

In this response, max_shards_in_cluster under frozen shows the current limit set by cluster.max_shards_per_node.frozen, and current_used_shards shows the current number of open shards used by frozen nodes across the cluster.
Using the cluster settings API, update the cluster.max_shards_per_node.frozen setting:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node.frozen' => 3200
    }
  }
)
puts response

PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node.frozen": 3200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster’s shard count on nodes that belong to the frozen tier (see the sketch at the end of this section for one way to find which frozen-tier indices hold the most shards).

To verify that the change has fixed the issue, you can get the current status of the shards_capacity indicator by checking the data section of the health API:
response = client.health_report(
  feature: 'shards_capacity'
)
puts response

GET _health_report/shards_capacity

The response will look like this:

{
  "cluster_name": "...",
  "indicators": {
    "shards_capacity": {
      "status": "green",
      "symptom": "The cluster has enough room to add new shards.",
      "details": {
        "data": {
          "max_shards_in_cluster": 1000
        },
        "frozen": {
          "max_shards_in_cluster": 3200
        }
      }
    }
  }
}
When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node.frozen limit:

response = client.cluster.put_settings(
  body: {
    persistent: {
      'cluster.max_shards_per_node.frozen' => nil
    }
  }
)
puts response

PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node.frozen": null
  }
}
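If you would rather reduce the frozen-tier shard count than keep a higher limit, it helps to see which partially mounted indices contribute the most shards. A sketch, assuming the default partial- name prefix that ILM’s frozen phase applies when it mounts searchable snapshots; adjust the index pattern if your mounted indices are named differently:

# List partially mounted (frozen-tier) indices with their shard counts, largest primary count first
GET _cat/indices/partial-*?v=true&h=index,pri,rep&s=pri:desc

From there, you can decide which mounted indices you no longer need to keep searchable on the frozen tier.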