Create data frame analytics jobs API

Instantiates a data frame analytics job.

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Request

PUT _ml/data_frame/analytics/<data_frame_analytics_id>
Prerequisites

If the Elasticsearch security features are enabled, you must have the following built-in roles and privileges:

- machine_learning_admin
- kibana_user (UI only)
- source index: read, view_index_metadata
- destination index: read, create_index, manage, and index
- cluster: monitor (UI only)

For more information, see Security privileges and Built-in roles.
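For illustration, here is a minimal sketch of a custom role granting the index privileges listed above, using the create or update roles API; the role name and index names are hypothetical, and machine_learning_admin must still be assigned to the user separately. The // lines are annotations accepted by the Kibana Console; strip them if you send the request with curl.

// Hypothetical role covering the source and destination index
// privileges above; assign it alongside machine_learning_admin.
POST _security/role/df_analytics_example
{
  "cluster": [ "monitor" ],
  "indices": [
    {
      "names": [ "houses_sold_last_10_yrs" ],
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "house_price_predictions" ],
      "privileges": [ "read", "create_index", "manage", "index" ]
    }
  ]
}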
Description

This API creates a data frame analytics job that performs an analysis on the source index and stores the outcome in a destination index.

If the destination index does not exist, it is created automatically. The index.number_of_shards and index.number_of_replicas settings of the source index are copied over to the destination index. When the source matches multiple indices, these settings are set to the maximum values found across the source indices.

The API also attempts to copy the mappings of the source indices to the destination index; if the mappings of any field differ among the source indices, the attempt fails with an error message.

If the destination index already exists, it is used as is. This makes it possible to set up the destination index in advance with custom settings and mappings.
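For example, a minimal sketch of pre-creating a destination index with custom settings and mappings before creating the job; the index name df-dest and the price field are hypothetical, and the // lines are Kibana Console annotations:

// Hypothetical destination index created in advance; a job whose
// dest.index is "df-dest" will then use these settings as is.
PUT df-dest
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "price": { "type": "integer" }
    }
  }
}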
Hyperparameter optimization

If you don't supply regression or classification parameters, hyperparameter optimization occurs, which sets a value for the undefined parameters. The starting point is calculated for data-dependent parameters by examining the loss on the training data. Subject to the size constraint, this operation provides an upper bound on the improvement in validation loss.

A fixed number of rounds is used for optimization, which depends on the number of parameters being optimized. The optimization starts with random search, then Bayesian optimization is performed targeting maximum expected improvement. If you override any parameters by explicitly setting them, the optimization calculates the value of the remaining parameters accordingly and uses the values you provided for the overridden parameters. The number of rounds is reduced accordingly. The validation error is estimated in each round by using 4-fold cross-validation.
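For instance, a minimal sketch of pinning only eta and letting the optimizer tune the remaining hyperparameters; the job, index, and field names are hypothetical:

// Hypothetical names. Only "eta" is set explicitly, so
// hyperparameter optimization still tunes the other parameters.
PUT _ml/data_frame/analytics/price-regression-eta
{
  "source": { "index": "sales" },
  "dest": { "index": "sales-predictions" },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "eta": 0.05
    }
  }
}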
Path parameters

- <data_frame_analytics_id>
  (Required, string) Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Request body

- allow_lazy_start
  (Optional, boolean) Whether this job should be allowed to start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. The default is false, which means that the start data frame analytics jobs API returns an error if a machine learning node with capacity to run the job cannot immediately be found. (However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes setting; see Advanced machine learning settings.) If this option is set to true, the start data frame analytics jobs API does not return an error and the job waits in the starting state until sufficient machine learning node capacity is available. See the sketch after this parameter list.
- analysis
  (Required, object) The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression.

- analysis.classification
  (Required*, object) The configuration information necessary to perform classification.

  Advanced parameters are for fine-tuning classification analysis. They are set automatically by hyperparameter optimization to give minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

- analysis.classification.dependent_variable
  (Required, string) Defines which field of the document is to be predicted. This parameter is supplied by field name and must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as the target variable.

  The data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean.

- analysis.classification.eta
  (Optional, double) Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have better generalization error. However, the smaller the value, the longer the training will take. For more information about shrinkage, see this wiki article.

- analysis.classification.feature_bag_fraction
  (Optional, double) Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split.

- analysis.classification.maximum_number_trees
  (Optional, integer) Advanced configuration option. Defines the maximum number of trees the forest is allowed to contain. The maximum value is 2000.

- analysis.classification.gamma
  (Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training dataset. Multiplies a linear penalty associated with the size of individual trees in the forest. The higher the value, the more training will prefer smaller trees. The smaller this parameter, the larger individual trees will be and the longer training will take.

- analysis.classification.lambda
  (Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training dataset. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. The higher the value, the more training will attempt to keep leaf weights small. This makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. The smaller this parameter, the larger individual trees will be and the longer training will take.

- analysis.classification.num_top_classes
  (Optional, integer) Defines the number of categories for which the predicted probabilities are reported. It must be non-negative. If it is greater than the total number of categories to predict (in the 7.6.2 version of the Elastic Stack, that is two), all category probabilities are reported. Defaults to 2.

- analysis.classification.prediction_field_name
  (Optional, string) Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.

- analysis.classification.randomize_seed
  (Optional, long) Defines the seed to the random generator that is used to pick which documents will be used for training. By default it is randomly generated. Set it to a specific value to ensure the same documents are used for training, assuming other related parameters (e.g. source, analyzed_fields, etc.) are the same.

- analysis.classification.num_top_feature_importance_values
  (Optional, integer) Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, it is zero and no feature importance calculation occurs.

- analysis.classification.training_percent
  (Optional, integer) Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example, those that contain arrays with more than one value) won't be included in the calculation for used percentage. Defaults to 100.
- analysis.outlier_detection
  (Required*, object) The configuration information necessary to perform outlier detection:

- analysis.outlier_detection.compute_feature_influence
  (Optional, boolean) If true, the feature influence calculation is enabled. Defaults to true.

- analysis.outlier_detection.feature_influence_threshold
  (Optional, double) The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1 (0.1 by default).

- analysis.outlier_detection.method
  (Optional, string) Sets the method that outlier detection uses. If the method is not set, outlier detection uses an ensemble of different methods and normalizes and combines their individual outlier scores to obtain the overall outlier score. We recommend using the ensemble method. Available methods are lof, ldof, distance_kth_nn, and distance_knn.

- analysis.outlier_detection.n_neighbors
  (Optional, integer) Defines the value for how many nearest neighbors each method of outlier detection will use to calculate its outlier score. When the value is not set, different values will be used for different ensemble members. This helps improve diversity in the ensemble. Therefore, only override this if you are confident that the value you choose is appropriate for the data set.

- analysis.outlier_detection.outlier_fraction
  (Optional, double) Sets the proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.

- analysis.outlier_detection.standardization_enabled
  (Optional, boolean) If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i). Defaults to true. For more information, see this wiki page about standardization.
- analysis.regression
  (Required*, object) The configuration information necessary to perform regression.

  Advanced parameters are for fine-tuning regression analysis. They are set automatically by hyperparameter optimization to give minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

- analysis.regression.dependent_variable
  (Required, string) Defines which field of the document is to be predicted. This parameter is supplied by field name and must match one of the fields in the index being used to train. If this field is missing from a document, then that document will not be used for training, but a prediction with the trained model will be generated for it. It is also known as the continuous target variable.

  The data type of the field must be numeric.

- analysis.regression.eta
  (Optional, double) Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have better generalization error. However, the smaller the value, the longer the training will take. For more information about shrinkage, see this wiki article.

- analysis.regression.feature_bag_fraction
  (Optional, double) Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split.

- analysis.regression.maximum_number_trees
  (Optional, integer) Advanced configuration option. Defines the maximum number of trees the forest is allowed to contain. The maximum value is 2000.

- analysis.regression.gamma
  (Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training dataset. Multiplies a linear penalty associated with the size of individual trees in the forest. The higher the value, the more training will prefer smaller trees. The smaller this parameter, the larger individual trees will be and the longer training will take.

- analysis.regression.lambda
  (Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training dataset. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. The higher the value, the more training will attempt to keep leaf weights small. This makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. The smaller this parameter, the larger individual trees will be and the longer training will take.

- analysis.regression.prediction_field_name
  (Optional, string) Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.

- analysis.regression.num_top_feature_importance_values
  (Optional, integer) Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, it is zero and no feature importance calculation occurs.

- analysis.regression.training_percent
  (Optional, integer) Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example, those that contain arrays with more than one value) won't be included in the calculation for used percentage. Defaults to 100.

- analysis.regression.randomize_seed
  (Optional, long) Defines the seed to the random generator that is used to pick which documents will be used for training. By default it is randomly generated. Set it to a specific value to ensure the same documents are used for training, assuming other related parameters (e.g. source, analyzed_fields, etc.) are the same.
- analyzed_fields
  (Optional, object) Specify includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis.

  The supported fields for each type of analysis are as follows:

  - Outlier detection requires numeric or boolean data to analyze. The algorithms don't support missing values, therefore fields that have data types other than numeric or boolean are ignored. Documents where included fields contain missing values, null values, or an array are also ignored. Therefore the dest index may contain documents that don't have an outlier score.
  - Regression supports fields that are numeric, boolean, text, keyword, and ip. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don't contain a results field are not included in the regression analysis.
  - Classification supports fields that are numeric, boolean, text, keyword, and ip. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don't contain a results field are not included in the classification analysis. Classification analysis can be improved by mapping ordinal variable values to a single number. For example, in the case of age ranges, you can model the values as "0-14" = 0, "15-24" = 1, "25-34" = 2, and so on.

  If analyzed_fields is not set, only the relevant fields will be included; for example, all the numeric fields for outlier detection. For more information about field selection, see Explain data frame analytics API.

- analyzed_fields.excludes
  (Optional, array) An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

- analyzed_fields.includes
  (Optional, array) An array of strings that defines the fields that will be included in the analysis.
- description
  (Optional, string) A description of the job.

- dest
  (Required, object) The destination configuration, consisting of index and optionally results_field (ml by default).

- dest.index
  (Required, string) Defines the destination index to store the results of the data frame analytics job.

- dest.results_field
  (Optional, string) Defines the name of the field in which to store the results of the analysis. Defaults to ml.
- model_memory_limit
  (Optional, string) The approximate maximum amount of memory resources that are permitted for analytical processing. The default value for data frame analytics jobs is 1gb. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. For more information, see Machine learning settings. See the sketch after this parameter list.
- source
  (object) The configuration of how to source the analysis data. It requires an index. Optionally, query and _source may be specified.

- source.index
  (Required, string or array) Index or indices on which to perform the analysis. It can be a single index or index pattern as well as an array of indices or patterns.

  If your source indices contain documents with the same IDs, only the document that is indexed last appears in the destination index.

- source.query
  (Optional, object) The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.

- source._source
  (Optional, object) Specify includes and/or excludes patterns to select which fields will be present in the destination. Fields that are excluded cannot be included in the analysis.

- source._source.includes
  (array) An array of strings that defines the fields that will be included in the destination.

- source._source.excludes
  (array) An array of strings that defines the fields that will be excluded from the destination.
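As referenced in the allow_lazy_start and model_memory_limit descriptions above, here is a minimal sketch combining the two; the job and index names are hypothetical, and the // lines are Kibana Console annotations:

// Hypothetical names. The job may wait in the "starting" state if no
// ML node has capacity, and its analytical processing is capped at 500mb.
PUT _ml/data_frame/analytics/weblog-outliers
{
  "source": { "index": "weblog-metrics" },
  "dest": { "index": "weblog-metrics-outliers" },
  "analysis": { "outlier_detection": {} },
  "model_memory_limit": "500mb",
  "allow_lazy_start": true
}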
Examples

Preprocessing actions example

The following example shows how to limit the scope of the analysis to certain fields, specify excluded fields in the destination index, and use a query to filter your data before analysis.

PUT _ml/data_frame/analytics/model-flight-delays-pre
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ],
    "query": {
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    },
    "_source": {
      "includes": [],
      "excludes": [
        "FlightDelay",
        "FlightDelayType"
      ]
    }
  },
  "dest": {
    "index": "df-flight-delays",
    "results_field": "ml-results"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin",
      "training_percent": 90
    }
  },
  "analyzed_fields": {
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  },
  "model_memory_limit": "100mb"
}

In this request:

- source.index is the source index to analyze.
- The query filters out entire documents that will not be present in the destination index.
- The _source object defines which fields of the dataset will be included in or excluded from the destination index.
- dest defines the destination index that contains the results of the analysis and the fields of the source index specified in the _source object.
- analyzed_fields specifies fields to be included in or excluded from the analysis. This does not affect whether the fields will be present in the destination index; it only affects whether they are used in the analysis.
In this example, we can see that all the fields of the source index are included in the destination index except FlightDelay and FlightDelayType, because these are defined as excluded fields by the excludes parameter of the _source object. The FlightNum field is included in the destination index; however, it is not included in the analysis because it is explicitly specified as an excluded field by the excludes parameter of the analyzed_fields object.
Outlier detection example

The following example creates the loganalytics data frame analytics job; the analysis type is outlier_detection:

PUT _ml/data_frame/analytics/loganalytics
{
  "description": "Outlier detection on log data",
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  }
}

The API returns the following result:

{
  "id": "loganalytics",
  "description": "Outlier detection on log data",
  "source": {
    "index": ["logdata"],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  },
  "model_memory_limit": "1gb",
  "create_time": 1562265491319,
  "version": "7.6.0",
  "allow_lazy_start": false
}
Regression examples

The following example creates the house_price_regression_analysis data frame analytics job; the analysis type is regression:

PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}

The API returns the following result:

{
  "id" : "house_price_regression_analysis",
  "source" : {
    "index" : [ "houses_sold_last_10_yrs" ],
    "query" : {
      "match_all" : { }
    }
  },
  "dest" : {
    "index" : "house_price_predictions",
    "results_field" : "ml"
  },
  "analysis" : {
    "regression" : {
      "dependent_variable" : "price",
      "training_percent" : 100
    }
  },
  "model_memory_limit" : "1gb",
  "create_time" : 1567168659127,
  "version" : "8.0.0",
  "allow_lazy_start" : false
}

The following example creates a job and specifies a training percent:

PUT _ml/data_frame/analytics/student_performance_mathematics_0.3
{
  "source": {
    "index": "student_performance_mathematics"
  },
  "dest": {
    "index": "student_performance_mathematics_reg"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "G3",
      "training_percent": 70,
      "randomize_seed": 19673948271
    }
  }
}
Classification example

The following example creates the loan_classification data frame analytics job; the analysis type is classification:

PUT _ml/data_frame/analytics/loan_classification
{
  "source" : {
    "index": "loan-applicants"
  },
  "dest" : {
    "index": "loan-applicants-classified"
  },
  "analysis" : {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 75,
      "num_top_classes": 2
    }
  }
}
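Once a job is created, you can inspect its configuration and progress; a short usage sketch using the get data frame analytics jobs APIs with the loan_classification job from the example above (the // lines are Kibana Console annotations):

// Retrieve the job configuration.
GET _ml/data_frame/analytics/loan_classification

// Retrieve the job's current state and progress.
GET _ml/data_frame/analytics/loan_classification/_stats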