Trained model autoscaling
You can enable autoscaling for each of your trained model deployments. Autoscaling allows Elasticsearch to automatically adjust the resources the model deployment can use based on the workload demand.
There are two ways to enable autoscaling:
- through APIs by enabling adaptive allocations
- in Kibana by enabling adaptive resources
To fully leverage model autoscaling, it is highly recommended to enable Elasticsearch deployment autoscaling.
Enabling autoscaling through APIs - adaptive allocations
Model allocations are independent units of work for NLP tasks. If you set the number of threads and allocations for a model manually, they remain constant even when not all the available resources are fully used or when the load on the model requires more resources. Instead of setting the number of allocations manually, you can enable adaptive allocations to set the number of allocations based on the load on the process. This can help you to manage performance and cost more easily. (Refer to the pricing calculator to learn more about the possible costs.)
When adaptive allocations are enabled, the number of allocations of the model is set automatically based on the current load. When the load is high, a new model allocation is automatically created. When the load is low, a model allocation is automatically removed. You can explicitly set the minimum and maximum number of allocations; autoscaling will occur within these limits.
You can enable adaptive allocations by using:
- the create inference endpoint API for ELSER, E5, and models uploaded through Eland that are used as inference services.
- the start trained model deployment or update trained model deployment APIs for trained models that are deployed on machine learning nodes.
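For example, here is a minimal sketch of enabling adaptive allocations when creating an inference endpoint for ELSER (the endpoint name and the allocation bounds are illustrative):

```
# Endpoint name and allocation bounds are illustrative
PUT _inference/sparse_embedding/my-elser-endpoint
{
  "service": "elser",
  "service_settings": {
    "num_threads": 1,
    "adaptive_allocations": {
      "enabled": true,
      "min_number_of_allocations": 1,
      "max_number_of_allocations": 10
    }
  }
}
```

For a model deployed on machine learning nodes, the same `adaptive_allocations` object goes in the request body of the start (or update) trained model deployment API:

```
# The model ID is illustrative
POST _ml/trained_models/my_model/deployment/_start
{
  "adaptive_allocations": {
    "enabled": true,
    "min_number_of_allocations": 1,
    "max_number_of_allocations": 10
  }
}
```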
If the new allocations fit on the current machine learning nodes, they are started immediately. If more capacity is needed to create new model allocations and machine learning autoscaling is enabled, your machine learning nodes are scaled up to provide enough resources for the new allocations. The number of model allocations can be scaled down to 0. It cannot be scaled up to more than 32 allocations unless you explicitly set a higher maximum number of allocations. Adaptive allocations must be set up independently for each deployment and inference endpoint.
Optimizing for typical use cases
You can optimize your model deployment for typical use cases, such as search and ingest. When you optimize for ingest, the throughput will be higher, which increases the number of inference requests that can be performed in parallel. When you optimize for search, the latency will be lower during search processes.
- If you want to optimize for ingest, set the number of threads to `1` (`"threads_per_allocation": 1`).
- If you want to optimize for search, set the number of threads to greater than `1`. Increasing the number of threads makes the search processes more performant. A sketch of both options follows this list.
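As a sketch, both options can be set through the start trained model deployment API; the model and deployment IDs below are hypothetical:

```
# Ingest-optimized: one thread per allocation, more allocations for throughput
POST _ml/trained_models/my_model/deployment/_start?deployment_id=my_model_ingest&threads_per_allocation=1&number_of_allocations=4

# Search-optimized: more threads per allocation for lower latency
POST _ml/trained_models/my_model/deployment/_start?deployment_id=my_model_search&threads_per_allocation=8&number_of_allocations=1
```

Note that the number of threads per allocation cannot exceed what the hardware of a single machine learning node provides.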
Enabling autoscaling in Kibana - adaptive resources
You can enable adaptive resources for your models when starting or updating the model deployment. Adaptive resources make it possible for Elasticsearch to scale up or down the available resources based on the load on the process. This can help you to manage performance and cost more easily. When adaptive resources are enabled, the number of vCPUs that the model deployment uses is set automatically based on the current load. When the load is high, the number of vCPUs that the process can use is automatically increased. When the load is low, the number of vCPUs that the process can use is automatically decreased.
You can choose from three levels of resource usage for your trained model deployment; autoscaling will occur within the selected level’s range.
Refer to the tables in the Model deployment resource matrix section to find out the settings for the level you selected.
Model deployment resource matrix
The resources used by trained model deployments depend on three factors:
- your cluster environment (Serverless, Cloud, or on-premises)
- the use case you optimize the model deployment for (ingest or search)
- whether model autoscaling is enabled with adaptive allocations/resources to have dynamic resources, or disabled for static resources
If you use Elasticsearch on-premises, vCPU level ranges are derived from the `total_ml_processors` and `max_single_ml_node_processors` values.
Use the get machine learning info API to check these values.
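For example:

```
GET _ml/info
```

The response includes a `limits` object; an abridged excerpt with illustrative values:

```
{
  "limits": {
    "total_ml_processors": 32,
    "max_single_ml_node_processors": 16
  }
}
```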
The following tables show you the number of allocations, threads, and vCPUs available in Cloud when adaptive resources are enabled or disabled.
On Serverless, adaptive allocations are automatically enabled for all project types. However, the "Adaptive resources" control is not displayed in Kibana for Observability and Security projects.
Deployments in Cloud optimized for ingest
For ingest-optimized deployments, we maximize the number of model allocations.
Adaptive resources enabled
| Level | Allocations | Threads | vCPUs |
|---|---|---|---|
| Low | 0 to 2 if available, dynamically | 1 | 0 to 2 if available, dynamically |
| Medium | 1 to 32, dynamically | 1 | 1 to the smaller of 32 or the limit set in the Cloud console, dynamically |
| High | 1 to the limit set in the Cloud console *, dynamically | 1 | 1 to the limit set in the Cloud console, dynamically |
* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads.
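For example, with a vCPU limit of 16 and 1 thread per allocation, the deployment can scale up to 16 allocations; with 2 threads per allocation, up to 8.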
Adaptive resources disabled
| Level | Allocations | Threads | vCPUs |
|---|---|---|---|
| Low | 2 if available, otherwise 1, statically | 1 | 2 if available |
| Medium | the smaller of 32 or the limit set in the Cloud console, statically | 1 | 32 if available |
| High | maximum available set in the Cloud console *, statically | 1 | maximum available set in the Cloud console, statically |
* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads.
Deployments in Cloud optimized for search
For search-optimized deployments, we maximize the number of threads. The maximum number of threads that can be claimed depends on your hardware architecture.
Adaptive resources enabled
| Level | Allocations | Threads | vCPUs |
|---|---|---|---|
| Low | 1 | 2 | 2 |
| Medium | 1 to 2 (if threads=16), dynamically | maximum that the hardware allows (for example, 16) | 1 to 32, dynamically |
| High | 1 to the limit set in the Cloud console *, dynamically | maximum that the hardware allows (for example, 16) | 1 to the limit set in the Cloud console, dynamically |
* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads.
Adaptive resources disabled
| Level | Allocations | Threads | vCPUs |
|---|---|---|---|
| Low | 1 if available, statically | 2 | 2 if available |
| Medium | 2 (if threads=16), statically | maximum that the hardware allows (for example, 16) | 32 if available |
| High | maximum available set in the Cloud console *, statically | maximum that the hardware allows (for example, 16) | maximum available set in the Cloud console, statically |
* The Cloud console doesn’t directly set an allocations limit; it only sets a vCPU limit. This vCPU limit indirectly determines the number of allocations, calculated as the vCPU limit divided by the number of threads.