Stop trained model deployment API
Stops a trained model deployment.
Request
POST _ml/trained_models/<deployment_id>/deployment/_stop
Prerequisites
Requires the manage_ml cluster privilege. This privilege is included in the machine_learning_admin built-in role.
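For example, one way to grant the privilege is to assign the built-in role through the create or update users API. The user name and password below are illustrative placeholders, not values required by this API:
PUT _security/user/ml_admin
{
  "password": "<password>",
  "roles": [ "machine_learning_admin" ]
}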
Description
Deployment is required only for trained models that have a PyTorch model_type.
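If you are unsure whether a model was deployed as a PyTorch model, you can retrieve its configuration with the get trained models API and inspect the model_type field. The model ID here reuses the my_model_for_search example from later in this page:
GET _ml/trained_models/my_model_for_search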
Path parameters
<deployment_id>
(Required, string) A unique identifier for the deployment of the model.
Query parameters
allow_no_match
(Optional, Boolean) Specifies what to do when the request:
- Contains wildcard expressions and there are no deployments that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
force
(Optional, Boolean) If true, the deployment is stopped even if it or one of its model aliases is referenced by ingest pipelines. You can’t use these pipelines until you restart the model deployment. See the example request after this list.
finish_pending_work
(Optional, Boolean) If true, the deployment is stopped after any queued work is completed. Defaults to false.
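For instance, to force-stop the my_model_for_search deployment from the example below even though an ingest pipeline still references it, the parameter can be passed in the query string (a sketch based on the parameters documented above):
POST _ml/trained_models/my_model_for_search/deployment/_stop?force=true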
Examples
The following example stops the my_model_for_search deployment:
response = client.ml.stop_trained_model_deployment(
  model_id: 'my_model_for_search'
)
puts response
POST _ml/trained_models/my_model_for_search/deployment/_stop
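When the deployment stops successfully, the API returns an acknowledgement along these lines (shown as an assumed example; the exact body can vary by version):
{
  "stopped": true
}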