Update data frame analytics jobs API
Updates an existing data frame analytics job.
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
Request
POST _ml/data_frame/analytics/<data_frame_analytics_id>/_update
Prerequisites
If the Elasticsearch security features are enabled, you must have the following built-in roles and privileges:
- machine_learning_admin
- source indices: read, view_index_metadata
- destination index: read, create_index, manage and index
For more information, see Built-in roles, Security privileges, and Machine learning security privileges.
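For example, the source and destination index privileges listed above could be granted through a custom role. The following is a minimal sketch using the create or update roles API; the role name and index names (dfa_example_role, my-source-index, my-dest-index) are hypothetical:

PUT _security/role/dfa_example_role
{
  "indices": [
    {
      "names": [ "my-source-index" ],
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "my-dest-index" ],
      "privileges": [ "read", "create_index", "manage", "index" ]
    }
  ]
}

A user who updates the job would hold this role in addition to the built-in machine_learning_admin role.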
The data frame analytics job remembers which roles the user who updated it had at the time of the update. When you start the job, it performs the analysis using those same roles. If you provide secondary authorization headers, those credentials are used instead.
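As a sketch of the secondary authorization case, the alternative credentials can be passed on the update request itself, for example via the es-secondary-authorization HTTP header when calling the API with curl. The endpoint URL, job identifier, and API key value below are placeholders:

curl -X POST "http://localhost:9200/_ml/data_frame/analytics/my-dfa-job/_update" \
  -H "Content-Type: application/json" \
  -H "es-secondary-authorization: ApiKey <base64-encoded-api-key>" \
  -d '{ "description": "Updated using secondary credentials" }'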
Description
This API updates an existing data frame analytics job that performs an analysis on the source indices and stores the outcome in a destination index.
Path parameters
<data_frame_analytics_id>
(Required, string) Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Request body
allow_lazy_start
(Optional, Boolean) Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. The default is false; if a machine learning node with capacity to run the job cannot immediately be found, the API returns an error. However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes setting. See Advanced machine learning settings. If this option is set to true, the API does not return an error and the job waits in the starting state until sufficient machine learning node capacity is available.
description
(Optional, string) A description of the job.
max_num_threads
(Optional, integer) The maximum number of threads to be used by the analysis. The default value is 1. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.
model_memory_limit
(Optional, string) The approximate maximum amount of memory resources that are permitted for analytical processing. The default value for data frame analytics jobs is 1gb. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. For more information, see Machine learning settings.
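Putting the request body together, an update request might look like the following sketch; the job identifier and the values shown are illustrative only:

POST _ml/data_frame/analytics/loganalytics/_update
{
  "description": "Outlier detection on sample log data",
  "model_memory_limit": "200mb",
  "max_num_threads": 2,
  "allow_lazy_start": true
}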