
Warning: Not enough nodes to allocate all shard replicas

Distributing copies of the data (index shard replicas) across different nodes lets Elasticsearch process search requests in parallel, speeding up queries. To achieve this, you can increase the number of replica shards up to the maximum value (the total number of nodes minus one), which also protects against hardware failure. If the index has a preferred tier, Elasticsearch will only place the copies of the data for that index on nodes in the target tier.

If you encounter a warning that there are not enough nodes to allocate all shard replicas, you can resolve it by adding more nodes to the cluster (or tier), or by reducing the index.number_of_replicas index setting.
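
Before changing anything, it can help to confirm why a replica is unassigned. The following is a minimal sketch using the cluster allocation explain API; the index name matches the example used throughout this guide, and shard 0 with "primary": false is only an illustrative choice:

GET /_cluster/allocation/explain
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": false
}

The response lists, for each node, the deciders that prevented the replica from being allocated, such as a tier filter mismatch or the rule that a replica cannot share a node with its primary.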

To accomplish this, complete the following steps:

  1. Determine which data tier needs more capacity, so you know where the unassigned shard replicas must be allocated.
  2. Resize your deployment to add capacity and accommodate all shard replicas.
  3. Check the index replicas limit to determine the current value, and reduce it if needed.

You can run the following step using either the API console or direct Elasticsearch API calls.

Use the get index settings API to retrieve the configured value for the index.routing.allocation.include._tier_preference setting:

GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings

The response looks like this:

{
  "my-index-000001": {
    "settings": {
      "index.routing.allocation.include._tier_preference": "data_warm,data_hot"
    }
  }
}
  1. Represents a comma-separated list of data tier node roles this index is allowed to be allocated on. The first tier in the list has the highest priority and is the tier the index is targeting. In this example, the tier preference is data_warm,data_hot, so the index is targeting the warm tier. If the warm tier lacks capacity, the index will fall back to the data_hot tier.
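
As a quick check, you can also list the shards of the affected index to see which replica copies are unassigned. This is a sketch using the cat shards API, again assuming the example index my-index-000001:

GET /_cat/shards/my-index-000001?v&h=index,shard,prirep,state,node,unassigned.reason

Rows with state UNASSIGNED and prirep r are the replica copies that could not be placed on any eligible node.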

After you've identified the tier that needs more capacity, you can resize your deployment to distribute the shard load and allow previously unassigned shards to be allocated.

Warning

In ECE, resizing is limited by your allocator capacity.

To resize your deployment and increase its capacity by expanding a data tier or adding a new one, use the following options:

Option 1: Configure Autoscaling

  1. Log in to the Elastic Cloud console or ECE Cloud UI.
  2. On the home page, find your deployment and select Manage.
  3. Go to Actions > Edit deployment and check that autoscaling is enabled. Adjust the Enable Autoscaling for dropdown menu as needed and select Save.
  4. If autoscaling is successful, the cluster returns to a healthy status. If the warning persists, check whether autoscaling has reached its configured limits (see the API call after this list) and update your autoscaling settings.
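
If you want to inspect the capacity that autoscaling is requesting for each policy, you can query the autoscaling capacity API. This API is primarily intended for use by the orchestrator, so treat the following as an informational sketch rather than a required step:

GET /_autoscaling/capacity

The response reports the current and required capacity per autoscaling policy, which helps confirm whether autoscaling has reached its configured ceiling.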

Option 2: Configure deployment size and tiers

You can increase the deployment capacity by editing the deployment and adjusting the size of the existing data tiers or adding new ones.

  1. In Kibana, open your deployment’s navigation menu (placed under the Elastic logo in the upper left corner) and go to Manage this deployment.
  2. On the right-hand side, expand the Manage dropdown and select Edit deployment from the list of options.
  3. On the Edit page, increase capacity for the data tier you identified earlier by either adding a new tier with + Add capacity or adjusting the size of an existing one. Choose the desired size and availability zones for that tier.
  4. Navigate to the bottom of the page and click the Save button.

Option 3: Change the hardware profiles/deployment templates

You can change the hardware profile of an Elastic Cloud Hosted deployment, or the deployment template of an Elastic Cloud Enterprise cluster, to one with a higher disk-to-memory ratio.

Option 4: Override disk quota

Elastic Cloud Enterprise administrators can temporarily override the disk quota of Elasticsearch nodes in real time as explained in Resource overrides. We strongly recommend making this change only under the guidance of Elastic Support, and only as a temporary measure or for troubleshooting purposes.

To increase the data node capacity in your cluster, you can add more nodes to the cluster and assign the index’s target tier node role to the new nodes, or increase the disk capacity of existing nodes. Disk expansion procedures depend on your operating system and storage infrastructure and are outside the scope of Elastic support. In practice, this is often achieved by removing a node from the cluster and reinstalling it with a larger disk.
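
For the first approach, the new node's data tier role is set in its elasticsearch.yml before it joins the cluster. The following is a minimal sketch; data_warm is only an example, so use the role that matches the index's tier preference:

# elasticsearch.yml on the new node
# data_warm is an assumption for this example; replace it with the tier role
# that matches the index's tier preference (for example data_hot or data_cold)
node.roles: [ data_warm ]

Once the node joins the cluster, Elasticsearch can allocate the unassigned replicas to it, provided the node belongs to the index's target tier.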

To increase the capacity of the data nodes in your Elastic Cloud on Kubernetes cluster, you can either add more data nodes to the desired tier, or increase the storage size of existing nodes.

Option 1: Add more data nodes

  1. Update the count field in your data node nodeSets to add more nodes:

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: quickstart
    spec:
      version: 9.3.0
      nodeSets:
      - name: data-nodes
        count: 5
        config:
          node.roles: ["data"]
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
    1. Increase from previous count
  2. Apply the changes:

    kubectl apply -f your-elasticsearch-manifest.yaml

    ECK automatically creates the new nodes with a data node role and Elasticsearch will relocate shards to balance the load.

    You can monitor the progress using:

    GET /_cat/shards?v&h=state,node&s=state
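
    To confirm that ECK has created the new Pods, you can also check from the Kubernetes side. This is a sketch that assumes the quickstart cluster name from the manifest above:

    kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=quickstart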

Option 2: Increase storage size of existing nodes

  1. If your storage class supports volume expansion (you can verify this with the check after these steps), you can increase the storage size in the volumeClaimTemplates:

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: quickstart
    spec:
      version: 9.3.0
      nodeSets:
      - name: data-nodes
        count: 3
        config:
          node.roles: ["data"]
        volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 200Gi
    1. Increased from previous size
  2. Apply the changes. If the volume driver supports ExpandInUsePersistentVolumes, the filesystem will be resized online without restarting Elasticsearch. Otherwise, you might need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem.
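
If you are unsure whether your storage class supports expansion, you can check it with kubectl before editing the manifest; storage classes that report true in the ALLOWVOLUMEEXPANSION column allow resizing existing persistent volume claims:

kubectl get storageclass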

For more information, refer to Update your deployments and Volume claim templates > Updating the volume claim settings.

Simplify monitoring with AutoOps

AutoOps is a monitoring tool that simplifies cluster management through performance recommendations, resource utilization visibility, and real-time issue detection with resolution paths. Learn more about AutoOps.

If it is not possible to increase capacity by resizing your deployment, you can reduce the number of replicas of your index data. To do this, inspect the index.number_of_replicas index setting and decrease the configured value.

  1. Use the get index settings API to retrieve the configured value for the index.number_of_replicas index setting.

    GET /my-index-000001/_settings/index.number_of_replicas

    The response looks like this:

    {
      "my-index-000001" : {
        "settings" : {
          "index" : {
            "number_of_replicas" : "2"
          }
        }
      }
    }
    1. Represents the currently configured value for the number of replica shards required for the index
  2. Use the _cat/nodes API to find the number of nodes in the target tier:

    GET /_cat/nodes?h=node.role

    The response looks like this, containing one row per node:

    himrst
    mv
    himrst

    You can count the rows containing the letter representing the target tier to know how many nodes you have. See Query parameters for details. The example above has two rows containing h, so there are two nodes in the hot tier.

  3. Use the update index settings API to decrease the value for the total number of replica shards required for this index. Because a replica cannot be allocated to the same node as its primary (high availability requires each copy to live on a different node), the new value must be less than or equal to the number of nodes found above minus one. Since the example above found 2 nodes in the hot tier, the maximum value for index.number_of_replicas is 1.

    PUT /my-index-000001/_settings
    {
      "index" : {
        "number_of_replicas" : 1
      }
    }
    1. The new value for the index.number_of_replicas index setting is decreased from the previous value of 2 to 1. It can be set as low as 0, but configuring it to 0 for indices other than searchable snapshot indices may lead to temporary availability loss during node restarts or permanent data loss in case of data corruption.
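
After decreasing the replica count, you can verify that the previously unassigned replicas are now allocated. The following is a minimal sketch using the cluster health API for the example index; filter_path only narrows the response to the relevant fields:

GET /_cluster/health/my-index-000001?filter_path=status,unassigned_shards

A status of green and an unassigned_shards count of 0 indicate that all configured shard copies have been allocated.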

