Elasticsearch autoscaling
Elasticsearch autoscaling requires a valid Enterprise license or Enterprise trial license. Check the license documentation for more details about managing licenses.
ECK can leverage the autoscaling API introduced in Elasticsearch 7.11 to automatically adjust the number of Pods and the resources allocated to a tier. Currently, autoscaling is supported for Elasticsearch data tiers and machine learning nodes.
Enable autoscaling
To enable autoscaling on an Elasticsearch cluster, you need to define one or more autoscaling policies. Each autoscaling policy applies to one or more NodeSets which share the same set of roles specified in the node.roles setting of the Elasticsearch configuration.
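For example, a policy whose roles are ["data", "ingest", "transform"] only matches NodeSets whose node.roles configuration lists exactly that set of roles. A minimal sketch of a matching NodeSet follows; the NodeSet name and Elasticsearch version are illustrative:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 8.5.0          # illustrative version
  nodeSets:
    - name: data-ingest   # illustrative name
      config:
        # Must exactly match the set of roles declared in the autoscaling policy.
        node.roles: ["data", "ingest", "transform"]
```

The node count of this NodeSet is then managed by the autoscaler within the nodeCount bounds of the matching policy.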
Define autoscaling policies
Autoscaling policies can be defined in an ElasticsearchAutoscaler resource. Each autoscaling policy must have the following fields:
- name is a unique name used to identify the autoscaling policy.
- roles contains a set of node roles, unique across all the autoscaling policies, used to identify the NodeSets to which this policy applies. At least one NodeSet with the exact same set of roles must exist in the Elasticsearch resource specification.
- resources defines the minimum and maximum compute resource usage:
  - nodeCount defines the minimum and maximum number of nodes allowed in the tier.
  - cpu and memory enforce minimum and maximum compute resource usage for the Elasticsearch container.
  - storage enforces minimum and maximum storage requests per PersistentVolumeClaim.
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  ## The name of the Elasticsearch cluster to be scaled automatically.
  elasticsearchRef:
    name: elasticsearch-sample
  ## The autoscaling policies.
  policies:
    - name: data-ingest
      roles: ["data", "ingest", "transform"]
      resources:
        nodeCount:
          min: 3
          max: 8
        cpu:
          min: 2
          max: 8
        memory:
          min: 2Gi
          max: 16Gi
        storage:
          min: 64Gi
          max: 512Gi
    - name: ml
      roles:
        - ml
      resources:
        nodeCount:
          min: 1
          max: 9
        cpu:
          min: 1
          max: 4
        memory:
          min: 2Gi
          max: 8Gi
        storage:
          min: 1Gi
          max: 1Gi
A node role should not be referenced in more than one autoscaling policy.
In the case of storage, the following restrictions apply:

- Scaling the storage size automatically requires the ExpandInUsePersistentVolumes feature to be enabled. It also requires a storage class that supports volume expansion.
- Only one persistent volume claim per Elasticsearch node is supported when autoscaling is enabled.
- Volume size cannot be scaled down.
- Scaling up (vertically) is only supported if the available capacity in a PersistentVolume matches the capacity claimed in the PersistentVolumeClaim. Refer to the next section for more information.
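Automatic storage scaling therefore relies on a storage class that allows volume expansion. A hypothetical storage class enabling this might look as follows; the name and provisioner are placeholders to be replaced with your cluster's CSI driver:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-ssd          # illustrative name
provisioner: ebs.csi.aws.com    # placeholder: use your cluster's CSI provisioner
allowVolumeExpansion: true      # required for automatic storage scaling
```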
Scale Up and Scale Out
To adapt the resources to the workload, the operator first attempts to scale up the resources (CPU, memory, and storage) allocated to each node in the NodeSets. The operator always ensures that the requested resources are within the limits specified in the autoscaling policy.
If each individual node has reached the limits specified in the autoscaling policy, but more resources are required to handle the load, then the operator adds nodes to the NodeSets. Nodes are added up to the max value specified in the nodeCount section of the policy.
Scaling up (vertically) is only supported if the actual storage capacity of the persistent volumes matches the capacity claimed. If the physical capacity of a PersistentVolume can be greater than the capacity claimed in the PersistentVolumeClaim, it is advised to set the same value for the min and max settings of each resource. It is however still possible to let the operator scale out the NodeSets automatically, as in the following example:
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  elasticsearchRef:
    name: elasticsearch-sample
  policies:
    - name: data-ingest
      roles: ["data", "ingest", "transform"]
      resources:
        nodeCount:
          min: 3
          max: 9
        cpu:
          min: 4
          max: 4
        memory:
          min: 16Gi
          max: 16Gi
        storage:
          min: 512Gi
          max: 512Gi
Set the limits
The values set for the memory and CPU limits are computed by applying a ratio to the calculated resource request. The default ratio between the request and the limit for both CPU and memory is 1, which means that the request and the limit have the same value. You can change this default ratio for both the CPU and memory ranges by using the requestsToLimitsRatio field.
For example, you can set a CPU limit to twice the value of the request, as follows:
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  elasticsearchRef:
    name: elasticsearch-sample
  policies:
    - name: data-ingest
      roles: ["data", "ingest", "transform"]
      resources:
        nodeCount:
          min: 2
          max: 5
        cpu:
          min: 1
          max: 2
          requestsToLimitsRatio: 2
        memory:
          min: 2Gi
          max: 6Gi
        storage:
          min: 512Gi
          max: 512Gi
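To illustrate the arithmetic: assuming the autoscaler computes a CPU request of 1 for a node, a requestsToLimitsRatio of 2 yields a CPU limit of 2, while memory keeps the default ratio of 1. The resulting container resources would then be similar to the following sketch (values are illustrative):

```yaml
resources:
  requests:
    cpu: "1"
    memory: 2Gi
  limits:
    cpu: "2"       # request (1) multiplied by requestsToLimitsRatio (2)
    memory: 2Gi    # memory keeps the default ratio of 1
```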
You can find a complete example in the ECK GitHub repository, which also shows how to fine-tune the autoscaling deciders.
Change the polling interval
The operator polls the Elasticsearch autoscaling capacity endpoint every minute. This interval can be controlled using the pollingPeriod field in the autoscaling specification:
apiVersion: autoscaling.k8s.elastic.co/v1alpha1
kind: ElasticsearchAutoscaler
metadata:
  name: autoscaling-sample
spec:
  pollingPeriod: "42s"
  elasticsearchRef:
    name: elasticsearch-sample
  policies:
    - name: data-ingest
      roles: ["data", "ingest", "transform"]
      resources:
        nodeCount:
          min: 2
          max: 5
        cpu:
          min: 1
          max: 2
        memory:
          min: 2Gi
          max: 6Gi
        storage:
          min: 512Gi
          max: 512Gi
Monitoring
Autoscaling status
In addition to the logs generated by the operator, an autoscaling status is maintained in the ElasticsearchAutoscaler resource. This status holds several Conditions that summarize the health and state of the autoscaling mechanism. For example, dedicated Conditions may report if the controller cannot connect to the Elasticsearch cluster, or if a resource limit has been reached:
kubectl get elasticsearchautoscaler autoscaling-sample \
  -o jsonpath='{ .status.conditions }' | jq
[
  {
    "lastTransitionTime": "2022-09-09T08:07:10Z",
    "message": "Limit reached for policies data-ingest",
    "status": "True",
    "type": "Limited"
  },
  {
    "lastTransitionTime": "2022-09-09T07:55:08Z",
    "status": "True",
    "type": "Active"
  },
  {
    "lastTransitionTime": "2022-09-09T08:07:10Z",
    "status": "True",
    "type": "Healthy"
  },
  {
    "lastTransitionTime": "2022-09-09T07:56:22Z",
    "message": "Elasticsearch is available",
    "status": "True",
    "type": "Online"
  }
]
Expected resources
The autoscaler status also contains a policies section which describes the expected resources for each NodeSet managed by an autoscaling policy.
kubectl get elasticsearchautoscaler.autoscaling.k8s.elastic.co/autoscaling-sample \
  -o jsonpath='{ .status.policies }' | jq
[
  {
    "lastModificationTime": "2022-10-05T05:47:13Z",
    "name": "data-ingest",
    "nodeSets": [
      {
        "name": "nodeset-1",
        "nodeCount": 2
      }
    ],
    "resources": {
      "limits": {
        "cpu": "1",
        "memory": "2Gi"
      },
      "requests": {
        "cpu": "500m",
        "memory": "2Gi",
        "storage": "1Gi"
      }
    }
  }
]
Events
Important events are also reported through Kubernetes events, for example when the maximum autoscaling size limit is reached:
> kubectl get events

40m   Warning   HorizontalScalingLimitReached   elasticsearch/sample   Can't provide total required storage 32588740338, max number of nodes is 5, requires 6 nodes
Disable autoscaling
You can disable autoscaling at any time by deleting the ElasticsearchAutoscaler resource. For machine learning, the following settings are not automatically reset:
- xpack.ml.max_ml_node_size
- xpack.ml.max_lazy_ml_nodes
- xpack.ml.use_auto_machine_memory_percent
You should adjust those settings manually to match the size of your deployment when you disable autoscaling.
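As a sketch of what such a manual adjustment could look like, the settings can be declared in the Elasticsearch resource configuration. All values below are illustrative and must be replaced with values matching the actual size of your ML nodes:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 8.5.0    # illustrative version
  nodeSets:
    - name: ml      # illustrative name
      count: 1
      config:
        node.roles: ["ml"]
        # Illustrative values: adjust to the actual size of your deployment.
        xpack.ml.max_ml_node_size: 8Gi
        xpack.ml.max_lazy_ml_nodes: 0
        xpack.ml.use_auto_machine_memory_percent: false
```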