Create data frame analytics jobs API
Instantiates a data frame analytics job.
This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
Request
PUT _ml/data_frame/analytics/<data_frame_analytics_id>
Prerequisites
If the Elasticsearch security features are enabled, you must have the following built-in roles and privileges:

- machine_learning_admin
- source indices: read, view_index_metadata
- destination index: read, create_index, manage and index
For more information, see Built-in roles, Security privileges, and Machine learning security privileges.
The data frame analytics job remembers which roles the user who created it had at the time of creation. When you start the job, it performs the analysis using those same roles. If you provide secondary authorization headers, those credentials are used instead.
Description
This API creates a data frame analytics job that performs an analysis on the source indices and stores the outcome in a destination index.
If the destination index does not exist, it is created automatically when you start the job. See Start data frame analytics jobs.
If you supply only a subset of the regression or classification parameters, hyperparameter optimization occurs. It determines a value for each of the undefined parameters.
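For example, in a hypothetical regression job (the job and index names and the price field below are placeholders, not from this page), supplying only eta leaves the remaining hyperparameters, such as gamma and lambda, to be determined by hyperparameter optimization:

```console
PUT _ml/data_frame/analytics/my-regression-job
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "eta": 0.05
    }
  }
}
```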
Path parameters
<data_frame_analytics_id>
(Required, string) Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Request body
allow_lazy_start
(Optional, Boolean) Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. The default is false; if a machine learning node with capacity to run the job cannot immediately be found, the Start data frame analytics jobs API returns an error. However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes setting. See Advanced machine learning settings. If this option is set to true, the API does not return an error and the job waits in the starting state until sufficient machine learning node capacity is available.
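For instance, a hypothetical request (all identifiers below are placeholders) that lets the job queue for capacity rather than fail:

```console
PUT _ml/data_frame/analytics/my-lazy-job
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "analysis": { "outlier_detection": {} },
  "allow_lazy_start": true
}
```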
analysis
(Required, object) The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression.
Properties of analysis
classification
(Required*, object) The configuration information necessary to perform classification.
Advanced parameters are for fine-tuning classification analysis. They are set automatically by hyperparameter optimization to give the minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.
Properties of classification
class_assignment_objective
(Optional, string) Defines the objective to optimize when assigning class labels: maximize_accuracy or maximize_minimum_recall. When maximizing accuracy, class labels are chosen to maximize the number of correct predictions. When maximizing minimum recall, labels are chosen to maximize the minimum recall for any class. Defaults to maximize_minimum_recall.
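The difference between the two objectives can be illustrated with a small hypothetical example (not part of the API; Elasticsearch performs this optimization internally). On an imbalanced data set, always predicting the majority class can score reasonable accuracy while leaving the rare class with zero recall:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    # Fraction of correct predictions across all documents.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def minimum_recall(y_true, y_pred):
    # Per-class recall = correct predictions for the class / actual members;
    # return the worst recall over all classes.
    totals = Counter(y_true)
    correct = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    return min(correct[c] / totals[c] for c in totals)

# Imbalanced labels: 8 "no" documents, 2 "yes" documents.
y_true = ["no"] * 8 + ["yes"] * 2

majority = ["no"] * 10                  # ignores the rare class entirely
noisy = ["no"] * 6 + ["yes"] * 4        # covers the rare class, misses two "no"

print(accuracy(y_true, majority), minimum_recall(y_true, majority))  # 0.8 0.0
print(accuracy(y_true, noisy), minimum_recall(y_true, noisy))        # 0.8 0.75
```

Both labelings have the same accuracy, but maximize_minimum_recall prefers the one that does not abandon the rare class.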
dependent_variable
(Required, string) Defines which field of the document is to be predicted. This parameter is supplied by field name and must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model is generated for it. It is also known as the target variable.
The data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 30 different values in this field.
eta
(Optional, double) Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, the smaller the value, the longer the training will take. For more information about shrinkage, see this wiki article. By default, this value is calculated during hyperparameter optimization.
feature_bag_fraction
(Optional, double) Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.
gamma
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. The higher the value, the more training will prefer smaller trees. The smaller this parameter, the larger individual trees will be and the longer training will take. By default, this value is calculated during hyperparameter optimization.
lambda
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. The higher the value, the more training will attempt to keep leaf weights small. This makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. The smaller this parameter, the larger individual trees will be and the longer training will take. By default, this value is calculated during hyperparameter optimization.
max_trees
(Optional, integer) Advanced configuration option. Defines the maximum number of trees the forest is allowed to contain. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.
num_top_classes
(Optional, integer) Defines the number of categories for which the predicted probabilities are reported. It must be non-negative. If it is greater than the total number of categories, the API reports all category probabilities. Defaults to 2.
num_top_feature_importance_values
(Optional, integer) Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, it is zero and no feature importance calculation occurs.
prediction_field_name
(Optional, string) Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.
randomize_seed
(Optional, long) Defines the seed to the random generator that is used to pick which documents will be used for training. By default it is randomly generated. Set it to a specific value to ensure the same documents are used for training, assuming other related parameters (e.g. source, analyzed_fields, etc.) are the same.
training_percent
(Optional, integer) Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example, those that contain arrays with more than one value) won't be included in the calculation for used percentage. Defaults to 100.
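Conceptually, the interaction of training_percent and randomize_seed can be sketched as a seeded sampling step (a hypothetical illustration, not the actual internal algorithm):

```python
import random

def pick_training_docs(doc_ids, training_percent, randomize_seed):
    # With a fixed seed and the same inputs, the same subset of
    # documents is chosen every time the job is created.
    rng = random.Random(randomize_seed)
    n_train = round(len(doc_ids) * training_percent / 100)
    return sorted(rng.sample(doc_ids, n_train))

doc_ids = list(range(100))
first = pick_training_docs(doc_ids, 70, randomize_seed=19673948271)
second = pick_training_docs(doc_ids, 70, randomize_seed=19673948271)
assert first == second   # reproducible selection
assert len(first) == 70  # 70% of 100 documents
```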
outlier_detection
(Required*, object) The configuration information necessary to perform outlier detection:
Properties of outlier_detection
compute_feature_influence
(Optional, Boolean) Specifies whether the feature influence calculation is enabled. Defaults to true.
feature_influence_threshold
(Optional, double) The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1 (0.1 by default).
method
(Optional, string) The method that outlier detection uses. Available methods are lof, ldof, distance_kth_nn, distance_knn, and ensemble. The default value is ensemble, which means that outlier detection uses an ensemble of different methods and normalizes and combines their individual outlier scores to obtain the overall outlier score.
n_neighbors
(Optional, integer) Defines the value for how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members. This default behavior helps improve the diversity in the ensemble; only override it if you are confident that the value you choose is appropriate for the data set.
outlier_fraction
(Optional, double) The proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.
standardization_enabled
(Optional, Boolean) If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i). Defaults to true. For more information about this concept, see Wikipedia.
regression
(Required*, object) The configuration information necessary to perform regression.
Advanced parameters are for fine-tuning regression analysis. They are set automatically by hyperparameter optimization to give minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.
Properties of regression
dependent_variable
(Required, string) Defines which field of the document is to be predicted. This parameter is supplied by field name and must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model is generated for it. It is also known as the continuous target variable.
The data type of the field must be numeric.
eta
(Optional, double) Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, the smaller the value, the longer the training will take. For more information about shrinkage, see this wiki article. By default, this value is calculated during hyperparameter optimization.
feature_bag_fraction
(Optional, double) Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.
gamma
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. The higher the value, the more training will prefer smaller trees. The smaller this parameter, the larger individual trees will be and the longer training will take. By default, this value is calculated during hyperparameter optimization.
lambda
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. The higher the value, the more training will attempt to keep leaf weights small. This makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. The smaller this parameter, the larger individual trees will be and the longer training will take. By default, this value is calculated during hyperparameter optimization.
loss_function
(Optional, string) The loss function used during regression. Available options are mse (mean squared error), msle (mean squared logarithmic error), and huber (pseudo-Huber loss). Defaults to mse. Refer to Loss functions for regression analyses to learn more.
loss_function_parameter
(Optional, double) A positive number that is used as a parameter to the loss_function.
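For orientation, the three loss options can be sketched as plain functions, with delta below playing the role of loss_function_parameter for huber. These are the textbook definitions; the exact internal formulations in Elasticsearch may differ:

```python
import math

def mse(actual, predicted):
    # Mean squared error: average of squared residuals.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def msle(actual, predicted):
    # Mean squared logarithmic error; assumes non-negative values.
    return sum((math.log1p(a) - math.log1p(p)) ** 2
               for a, p in zip(actual, predicted)) / len(actual)

def pseudo_huber(actual, predicted, delta=1.0):
    # Quadratic for small residuals, approximately linear for large ones;
    # delta controls where the transition happens.
    return sum(delta ** 2 * (math.sqrt(1.0 + ((a - p) / delta) ** 2) - 1.0)
               for a, p in zip(actual, predicted)) / len(actual)
```

Relative to mse, pseudo_huber is less sensitive to large outlying residuals, and msle penalizes relative rather than absolute differences.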
max_trees
(Optional, integer) Advanced configuration option. Defines the maximum number of trees the forest is allowed to contain. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.
num_top_feature_importance_values
(Optional, integer) Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, it is zero and no feature importance calculation occurs.
prediction_field_name
(Optional, string) Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.
randomize_seed
(Optional, long) Defines the seed to the random generator that is used to pick which documents will be used for training. By default it is randomly generated. Set it to a specific value to ensure the same documents are used for training, assuming other related parameters (e.g. source, analyzed_fields, etc.) are the same.
training_percent
(Optional, integer) Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example, those that contain arrays with more than one value) won't be included in the calculation for used percentage. Defaults to 100.
analyzed_fields
(Optional, object) Specify includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis.
The supported fields for each type of analysis are as follows:
- Outlier detection requires numeric or boolean data to analyze. The algorithms don’t support missing values, therefore fields that have data types other than numeric or boolean are ignored. Documents where included fields contain missing values, null values, or an array are also ignored. Therefore the dest index may contain documents that don’t have an outlier score.
- Regression supports fields that are numeric, boolean, text, keyword, and ip. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don’t contain a results field are not included in the regression analysis.
- Classification supports fields that are numeric, boolean, text, keyword, and ip. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don’t contain a results field are not included in the classification analysis. Classification analysis can be improved by mapping ordinal variable values to a single number. For example, in case of age ranges, you can model the values as "0-14" = 0, "15-24" = 1, "25-34" = 2, and so on.
If analyzed_fields is not set, only the relevant fields will be included. For example, all the numeric fields for outlier detection. For more information about field selection, see Explain data frame analytics API.
Properties of analyzed_fields
excludes
(Optional, array) An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.
includes
(Optional, array) An array of strings that defines the fields that will be included in the analysis.
description
(Optional, string) A description of the job.
dest
(Required, object) The destination configuration, consisting of index and optionally results_field (ml by default).
Properties of dest
index
(Required, string) Defines the destination index to store the results of the data frame analytics job.
results_field
(Optional, string) Defines the name of the field in which to store the results of the analysis. Defaults to ml.
max_num_threads
(Optional, integer) The maximum number of threads to be used by the analysis. The default value is 1. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.
model_memory_limit
(Optional, string) The approximate maximum amount of memory resources that are permitted for analytical processing. The default value for data frame analytics jobs is 1gb. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. For more information, see Machine learning settings.
source
(object) The configuration of how to source the analysis data. It requires an index. Optionally, query and _source may be specified.
Properties of source
index
(Required, string or array) Index or indices on which to perform the analysis. It can be a single index or index pattern as well as an array of indices or patterns.
If your source indices contain documents with the same IDs, only the document that is indexed last appears in the destination index.
query
(Optional, object) The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.
_source
(Optional, object) Specify includes and/or excludes patterns to select which fields will be present in the destination. Fields that are excluded cannot be included in the analysis.
Properties of _source
includes
(array) An array of strings that defines the fields that will be included in the destination.
excludes
(array) An array of strings that defines the fields that will be excluded from the destination.
Examples
Preprocessing actions example
The following example shows how to limit the scope of the analysis to certain fields, specify excluded fields in the destination index, and use a query to filter your data before analysis.
PUT _ml/data_frame/analytics/model-flight-delays-pre
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ],
    "query": {
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    },
    "_source": {
      "includes": [],
      "excludes": [
        "FlightDelay",
        "FlightDelayType"
      ]
    }
  },
  "dest": {
    "index": "df-flight-delays",
    "results_field": "ml-results"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin",
      "training_percent": 90
    }
  },
  "analyzed_fields": {
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  },
  "model_memory_limit": "100mb"
}
In the request above:

- source.index: the source index to analyze.
- query: this query filters out entire documents that will not be present in the destination index.
- _source: defines which fields of the source index are included in or excluded from the destination index.
- dest: defines the destination index that contains the results of the analysis and the fields of the source index specified in the _source object.
- analyzed_fields: specifies fields to be included in or excluded from the analysis. This does not affect whether the fields will be present in the destination index; it only affects whether they are used in the analysis.
In this example, we can see that all the fields of the source index are included in the destination index except FlightDelay and FlightDelayType, because these are defined as excluded fields by the excludes parameter of the _source object. The FlightNum field is included in the destination index, however it is not included in the analysis because it is explicitly specified as an excluded field by the excludes parameter of the analyzed_fields object.
Outlier detection example
The following example creates the loganalytics data frame analytics job; the analysis type is outlier_detection:
PUT _ml/data_frame/analytics/loganalytics
{
  "description": "Outlier detection on log data",
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  }
}
The API returns the following result:
{
  "id": "loganalytics",
  "description": "Outlier detection on log data",
  "source": {
    "index": ["logdata"],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  },
  "model_memory_limit": "1gb",
  "create_time": 1562265491319,
  "version": "7.6.0",
  "allow_lazy_start": false,
  "max_num_threads": 1
}
Regression examples
The following example creates the house_price_regression_analysis data frame analytics job; the analysis type is regression:
PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}
The API returns the following result:
{
  "id": "house_price_regression_analysis",
  "source": {
    "index": [
      "houses_sold_last_10_yrs"
    ],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "house_price_predictions",
    "results_field": "ml"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "training_percent": 100
    }
  },
  "model_memory_limit": "1gb",
  "create_time": 1567168659127,
  "version": "8.0.0",
  "allow_lazy_start": false
}
The following example creates a job and specifies a training percent:
PUT _ml/data_frame/analytics/student_performance_mathematics_0.3
{
  "source": {
    "index": "student_performance_mathematics"
  },
  "dest": {
    "index": "student_performance_mathematics_reg"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "G3",
      "training_percent": 70,
      "randomize_seed": 19673948271
    }
  }
}
Classification example
The following example creates the loan_classification data frame analytics job; the analysis type is classification:
PUT _ml/data_frame/analytics/loan_classification
{
  "source": {
    "index": "loan-applicants"
  },
  "dest": {
    "index": "loan-applicants-classified"
  },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 75,
      "num_top_classes": 2
    }
  }
}