Clear trained model deployment cache API
Clears the inference cache on all nodes where the deployment is assigned.
Request
POST _ml/trained_models/<deployment_id>/deployment/cache/_clear
Prerequisites
Requires the manage_ml cluster privilege. This privilege is included in the machine_learning_admin built-in role.
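A quick way to verify this prerequisite before calling the API is the has privileges API. The following is a minimal Python sketch; the connection details and credentials are placeholder assumptions, not values from this page:
from elasticsearch import Elasticsearch

# Placeholder connection details; adjust for your cluster.
client = Elasticsearch("https://localhost:9200", api_key="...")

# Check whether the authenticated user holds the manage_ml cluster privilege.
resp = client.security.has_privileges(cluster=["manage_ml"])
print(resp["has_all_requested"])  # True when the user may call this API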
Description
A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
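To make that lifecycle concrete, here is a minimal Python sketch; the model ID, cache size, and connection details are placeholder assumptions, and the cache_size option belongs to the start trained model deployment API rather than to this one:
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder connection

# Start the deployment with an inference cache on each allocated node.
client.ml.start_trained_model_deployment(
    model_id="my_trained_model",  # placeholder model ID
    cache_size="100mb",
)

# ... inference responses are cached on the individual nodes that served them ...

# Clear those per-node caches without restarting the deployment.
resp = client.ml.clear_trained_model_deployment_cache(model_id="my_trained_model")
print(resp["cleared"])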
Path parameters
deployment_id
(Required, string) A unique identifier for the deployment of the model.
Examples
The following example clears the cache for the new deployment of the elastic__distilbert-base-uncased-finetuned-conll03-english trained model:
resp = client.ml.clear_trained_model_deployment_cache(
    model_id="elastic__distilbert-base-uncased-finetuned-conll03-english",
)
print(resp)
response = client.ml.clear_trained_model_deployment_cache(
  model_id: 'elastic__distilbert-base-uncased-finetuned-conll03-english'
)
puts response
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/cache/_clear
The API returns the following results:
{
  "cleared": true
}
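In the Python client, the response behaves like a dictionary, so the flag can be checked directly; a small sketch continuing the resp variable from the example above:
if resp["cleared"]:
    print("Inference cache cleared on all assigned nodes")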