Create data frame analytics jobs API

Instantiates a data frame analytics job.

Request

PUT _ml/data_frame/analytics/<data_frame_analytics_id>
Prerequisites

Requires the following privileges:

- cluster: manage_ml (the machine_learning_admin built-in role grants this privilege)
- source indices: read, view_index_metadata
- destination index: read, create_index, manage, and index

The data frame analytics job remembers which roles the user who created it had at the time of creation. When you start the job, it performs the analysis using those same roles. If you provide secondary authorization headers, those credentials are used instead.
Description

This API creates a data frame analytics job that performs an analysis on the source indices and stores the outcome in a destination index.

If the destination index does not exist, it is created automatically when you start the job. See Start data frame analytics jobs.

If you supply only a subset of the regression or classification parameters, hyperparameter optimization occurs. It determines a value for each of the undefined parameters.
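A minimal request supplies only a source, a destination, and one analysis type. The following is an illustrative sketch (the job and index names are hypothetical; outlier detection is used here because all of its parameters are optional):

PUT _ml/data_frame/analytics/my-minimal-job
{
  "source": { "index": "my-source-index" },
  "dest": { "index": "my-dest-index" },
  "analysis": { "outlier_detection": {} }
}

Every omitted setting falls back to the documented default, for example a model_memory_limit of 1gb and a max_num_threads of 1.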
Path parameters

<data_frame_analytics_id>
(Required, string) Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Request body

allow_lazy_start
(Optional, Boolean) Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. The default is false; if a machine learning node with capacity to run the job cannot immediately be found, the API returns an error. However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes setting. See Advanced machine learning settings. If this option is set to true, the API does not return an error and the job waits in the starting state until sufficient machine learning node capacity is available.
analysis
(Required, object) The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression.

Properties of analysis:

classification
(Required*, object) The configuration information necessary to perform classification.

Advanced parameters are for fine-tuning classification analysis. They are set automatically by hyperparameter optimization to give the minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

Properties of classification:
alpha
(Optional, double) Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

class_assignment_objective
(Optional, string) Defines the objective to optimize when assigning class labels: maximize_accuracy or maximize_minimum_recall. When maximizing accuracy, class labels are chosen to maximize the number of correct predictions. When maximizing minimum recall, labels are chosen to maximize the minimum recall for any class. Defaults to maximize_minimum_recall.

dependent_variable
(Required, string) Defines which field of the document is to be predicted. This parameter is supplied by field name and must match one of the fields in the index being used to train. If this field is missing from a document, that document is not used for training, but a prediction with the trained model is generated for it. This field is also known as the target variable.

The data type of the field must be numeric (integer, short, long, byte), categorical (ip or keyword), or boolean. There must be no more than 100 different values in this field.

downsample_factor
(Optional, double) Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. For more information about shrinkage, refer to this wiki article. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

early_stopping_enabled
(Optional, Boolean) Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is small. By default, early stopping is enabled.

eta
(Optional, double) Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. For more information about shrinkage, refer to this wiki article. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

eta_growth_rate_per_tree
(Optional, double) Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

feature_bag_fraction
(Optional, double) Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.
feature_processors
(Optional, list) Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields. Refer to data frame analytics feature processors to learn more.

Properties of feature_processors:

frequency_encoding
(object) The configuration information necessary to perform frequency encoding.

Properties of frequency_encoding:

feature_name
(Required, string) The resulting feature name.

field
(Required, string) The name of the field to encode.

frequency_map
(Required, object) The resulting frequency map for the field value. If the field value is missing from the frequency_map, the resulting value is 0.

multi_encoding
(object) The configuration information necessary to perform multi encoding. It allows multiple processors to be chained together so that the output of one processor can be passed to another as an input.

Properties of multi_encoding:

processors
(Required, array) The ordered array of custom processors to execute. Must contain more than one processor.

n_gram_encoding
(object) The configuration information necessary to perform n-gram encoding. Features created by this encoder have the following name format: <feature_prefix>.<ngram><string position>. For example, if the feature_prefix is f, the feature name for the second unigram in a string is f.11.

Properties of n_gram_encoding:

feature_prefix
(Optional, string) The feature name prefix. Defaults to ngram_<start>_<length>.

field
(Required, string) The name of the text field to encode.

length
(Optional, integer) Specifies the length of the n-gram substring. Defaults to 50. Must be greater than 0.

n_grams
(Required, array) Specifies which n-grams to gather. It's an array of integer values where the minimum value is 1 and the maximum value is 5.

start
(Optional, integer) Specifies the zero-indexed start of the n-gram substring. Negative values are allowed for encoding n-grams of string suffixes. Defaults to 0.

one_hot_encoding
(object) The configuration information necessary to perform one hot encoding.

Properties of one_hot_encoding:

field
(Required, string) The name of the field to encode.

hot_map
(Required, string) The one hot map mapping the field value with the column name.

target_mean_encoding
(object) The configuration information necessary to perform target mean encoding.

Properties of target_mean_encoding:

default_value
(Required, integer) The default value if the field value is not found in the target_map.

feature_name
(Required, string) The resulting feature name.

field
(Required, string) The name of the field to encode.

target_map
(Required, object) The field value to target mean transition map.
gamma
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

lambda
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

max_optimization_rounds_per_hyperparameter
(Optional, integer) Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

max_trees
(Optional, integer) Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

num_top_classes
(Optional, integer) Defines the number of categories for which the predicted probabilities are reported. It must be non-negative or -1. If it is -1 or greater than the total number of categories, probabilities are reported for all categories; if you have a large number of categories, there could be a significant effect on the size of your destination index. Defaults to 2.

To use the AUC ROC evaluation method, num_top_classes must be set to -1 or a value greater than or equal to the total number of categories.
num_top_feature_importance_values
(Optional, integer) Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, it is zero and no feature importance calculation occurs.

prediction_field_name
(Optional, string) Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.

randomize_seed
(Optional, long) Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

soft_tree_depth_limit
(Optional, double) Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

soft_tree_depth_tolerance
(Optional, double) Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

training_percent
(Optional, integer) Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example, those that contain arrays with more than one value) won't be included in the calculation for used percentage. Defaults to 100.
outlier_detection
(Required*, object) The configuration information necessary to perform outlier detection.

Properties of outlier_detection:

compute_feature_influence
(Optional, Boolean) Specifies whether the feature influence calculation is enabled. Defaults to true.

feature_influence_threshold
(Optional, double) The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1 (0.1 by default).

method
(Optional, string) The method that outlier detection uses. Available methods are lof, ldof, distance_kth_nn, distance_knn, and ensemble. The default value is ensemble, which means that outlier detection uses an ensemble of different methods and normalises and combines their individual outlier scores to obtain the overall outlier score.

n_neighbors
(Optional, integer) Defines the value for how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members. This default behavior helps improve the diversity in the ensemble; only override it if you are confident that the value you choose is appropriate for the data set.

outlier_fraction
(Optional, double) The proportion of the data set that is assumed to be outlying prior to outlier detection. For example, 0.05 means it is assumed that 5% of values are real outliers and 95% are inliers.

standardization_enabled
(Optional, Boolean) If true, the following operation is performed on the columns before computing outlier scores: (x_i - mean(x_i)) / sd(x_i). Defaults to true. For more information about this concept, see Wikipedia.
regression
(Required*, object) The configuration information necessary to perform regression.

Advanced parameters are for fine-tuning regression analysis. They are set automatically by hyperparameter optimization to give the minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

Properties of regression:

alpha
(Optional, double) Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This parameter affects loss calculations by acting as a multiplier of the tree depth. Higher alpha values result in shallower trees and faster training times. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to zero.

dependent_variable
(Required, string) Defines which field of the document is to be predicted. This parameter is supplied by field name and must match one of the fields in the index being used to train. If this field is missing from a document, that document is not used for training, but a prediction with the trained model is generated for it. This field is also known as the continuous target variable.

The data type of the field must be numeric.

downsample_factor
(Optional, double) Advanced configuration option. Controls the fraction of data that is used to compute the derivatives of the loss function for tree training. A small value results in the use of a small fraction of the data. If this value is set to be less than 1, accuracy typically improves. However, too small a value may result in poor convergence for the ensemble and so require more trees. For more information about shrinkage, refer to this wiki article. By default, this value is calculated during hyperparameter optimization. It must be greater than zero and less than or equal to 1.

early_stopping_enabled
(Optional, Boolean) Advanced configuration option. Specifies whether the training process should finish if it is not finding any better performing models. If disabled, the training process can take significantly longer and the chance of finding a better performing model is small. By default, early stopping is enabled.

eta
(Optional, double) Advanced configuration option. The shrinkage applied to the weights. Smaller values result in larger forests which have a better generalization error. However, larger forests cause slower training. For more information about shrinkage, refer to this wiki article. By default, this value is calculated during hyperparameter optimization. It must be a value between 0.001 and 1.

eta_growth_rate_per_tree
(Optional, double) Advanced configuration option. Specifies the rate at which eta increases for each new tree that is added to the forest. For example, a rate of 1.05 increases eta by 5% for each extra tree. By default, this value is calculated during hyperparameter optimization. It must be between 0.5 and 2.

feature_bag_fraction
(Optional, double) Advanced configuration option. Defines the fraction of features that will be used when selecting a random bag for each candidate split. By default, this value is calculated during hyperparameter optimization.

feature_processors
(Optional, list) Advanced configuration option. A collection of feature preprocessors that modify one or more included fields. The analysis uses the resulting one or more features instead of the original document field. However, these features are ephemeral; they are not stored in the destination index. Multiple feature_processors entries can refer to the same document fields. Automatic categorical feature encoding still occurs for the fields that are unprocessed by a custom processor or that have categorical values. Use this property only if you want to override the automatic feature encoding of the specified fields. Refer to data frame analytics feature processors to learn more.

gamma
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. A high gamma value causes training to prefer small trees. A small gamma value results in larger individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

lambda
(Optional, double) Advanced configuration option. Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to leaf weights of the individual trees in the forest. A high lambda value causes training to favor small leaf weights. This behavior makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. A small lambda value results in large individual trees and slower training. By default, this value is calculated during hyperparameter optimization. It must be a nonnegative value.

loss_function
(Optional, string) The loss function used during regression. Available options are mse (mean squared error), msle (mean squared logarithmic error), and huber (Pseudo-Huber loss). Defaults to mse. Refer to Loss functions for regression analyses to learn more.

loss_function_parameter
(Optional, double) A positive number that is used as a parameter to the loss_function.

max_optimization_rounds_per_hyperparameter
(Optional, integer) Advanced configuration option. A multiplier responsible for determining the maximum number of hyperparameter optimization steps in the Bayesian optimization procedure. The maximum number of steps is determined based on the number of undefined hyperparameters times the maximum optimization rounds per hyperparameter. By default, this value is calculated during hyperparameter optimization.

max_trees
(Optional, integer) Advanced configuration option. Defines the maximum number of decision trees in the forest. The maximum value is 2000. By default, this value is calculated during hyperparameter optimization.

num_top_feature_importance_values
(Optional, integer) Advanced configuration option. Specifies the maximum number of feature importance values per document to return. By default, it is zero and no feature importance calculation occurs.

prediction_field_name
(Optional, string) Defines the name of the prediction field in the results. Defaults to <dependent_variable>_prediction.

randomize_seed
(Optional, long) Defines the seed for the random generator that is used to pick training data. By default, it is randomly generated. Set it to a specific value to use the same training data each time you start a job (assuming other related parameters such as source and analyzed_fields are the same).

soft_tree_depth_limit
(Optional, double) Advanced configuration option. Machine learning uses loss guided tree growing, which means that the decision trees grow where the regularized loss decreases most quickly. This soft limit combines with the soft_tree_depth_tolerance to penalize trees that exceed the specified depth; the regularized loss increases quickly beyond this depth. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.

soft_tree_depth_tolerance
(Optional, double) Advanced configuration option. This option controls how quickly the regularized loss increases when the tree depth exceeds soft_tree_depth_limit. By default, this value is calculated during hyperparameter optimization. It must be greater than or equal to 0.01.

training_percent
(Optional, integer) Defines what percentage of the eligible documents will be used for training. Documents that are ignored by the analysis (for example, those that contain arrays with more than one value) won't be included in the calculation for used percentage. Defaults to 100.
analyzed_fields
(Optional, object) Specify includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis.

The supported fields for each type of analysis are as follows:

- Outlier detection requires numeric or boolean data to analyze. The algorithms don't support missing values, therefore fields that have data types other than numeric or boolean are ignored. Documents where included fields contain missing values, null values, or an array are also ignored. Therefore the dest index may contain documents that don't have an outlier score.
- Regression supports fields that are numeric, boolean, text, keyword, and ip. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don't contain a results field are not included in the regression analysis.
- Classification supports fields that are numeric, boolean, text, keyword, and ip. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don't contain a results field are not included in the classification analysis. Classification analysis can be improved by mapping ordinal variable values to a single number. For example, in case of age ranges, you can model the values as "0-14" = 0, "15-24" = 1, "25-34" = 2, and so on.

If analyzed_fields is not set, only the relevant fields will be included. For example, all the numeric fields for outlier detection. For more information about field selection, see Explain data frame analytics.

Properties of analyzed_fields:

excludes
(Optional, array) An array of strings that defines the fields that will be excluded from the analysis. You do not need to add fields with unsupported data types to excludes; these fields are excluded from the analysis automatically.

includes
(Optional, array) An array of strings that defines the fields that will be included in the analysis.
description
(Optional, string) A description of the job.

dest
(Required, object) The destination configuration, consisting of index and optionally results_field (ml by default).

Properties of dest:

index
(Required, string) Defines the destination index to store the results of the data frame analytics job.

results_field
(Optional, string) Defines the name of the field in which to store the results of the analysis. Defaults to ml.

max_num_threads
(Optional, integer) The maximum number of threads to be used by the analysis. The default value is 1. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.

_meta
(Optional, object) Advanced configuration option. Contains custom metadata about the job. For example, it can contain custom URL information.

model_memory_limit
(Optional, string) The approximate maximum amount of memory resources that are permitted for analytical processing. The default value for data frame analytics jobs is 1gb. If you specify a value for the xpack.ml.max_model_memory_limit setting, an error occurs when you try to create jobs that have model_memory_limit values greater than that setting value. For more information, see Machine learning settings.

source
(object) The configuration of how to source the analysis data. It requires an index. Optionally, query, runtime_mappings, and _source may be specified.

Properties of source:

index
(Required, string or array) Index or indices on which to perform the analysis. It can be a single index or index pattern as well as an array of indices or patterns.

If your source indices contain documents with the same IDs, only the document that is indexed last appears in the destination index.

query
(Optional, object) The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {}}.

runtime_mappings
(Optional, object) Definitions of runtime fields that will become part of the mapping of the destination index.

_source
(Optional, object) Specify includes and/or excludes patterns to select which fields will be present in the destination. Fields that are excluded cannot be included in the analysis.

Properties of _source:

includes
(array) An array of strings that defines the fields that will be included in the destination.

excludes
(array) An array of strings that defines the fields that will be excluded from the destination.
Examples

Preprocessing actions example

The following example shows how to limit the scope of the analysis to certain fields, specify excluded fields in the destination index, and use a query to filter your data before analysis.

PUT _ml/data_frame/analytics/model-flight-delays-pre
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ],
    "query": {
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    },
    "_source": {
      "includes": [],
      "excludes": [
        "FlightDelay",
        "FlightDelayType"
      ]
    }
  },
  "dest": {
    "index": "df-flight-delays",
    "results_field": "ml-results"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin",
      "training_percent": 90
    }
  },
  "analyzed_fields": {
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  },
  "model_memory_limit": "100mb"
}

Notes on this request:

- source.index: the source index to analyze.
- source.query: this query filters out entire documents that will not be present in the destination index.
- source._source: excludes the listed fields from the destination index.
- dest: defines the destination index that contains the results of the analysis and the fields of the source index specified in the _source object.
- analyzed_fields: specifies fields to be included in or excluded from the analysis. This does not affect whether the fields will be present in the destination index; it only affects whether they are used in the analysis.
In this example, we can see that all the fields of the source index are included in the destination index except FlightDelay and FlightDelayType, because these are defined as excluded fields by the excludes parameter of the _source object. The FlightNum field is included in the destination index, however it is not included in the analysis because it is explicitly specified as an excluded field by the excludes parameter of the analyzed_fields object.
Outlier detection example

The following example creates the loganalytics data frame analytics job; the analysis type is outlier_detection:

PUT _ml/data_frame/analytics/loganalytics
{
  "description": "Outlier detection on log data",
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  }
}

The API returns the following result:

{
  "id": "loganalytics",
  "create_time": 1656364565517,
  "version": "8.4.0",
  "authorization": {
    "roles": [
      "superuser"
    ]
  },
  "description": "Outlier detection on log data",
  "source": {
    "index": [
      "logdata"
    ],
    "query": {
      "match_all": { }
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  },
  "model_memory_limit": "1gb",
  "allow_lazy_start": false,
  "max_num_threads": 1
}
Regression examples

The following example creates the house_price_regression_analysis data frame analytics job; the analysis type is regression:

PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}

The API returns the following result:

{
  "id": "house_price_regression_analysis",
  "create_time": 1656364845151,
  "version": "8.4.0",
  "authorization": {
    "roles": [
      "superuser"
    ]
  },
  "source": {
    "index": [
      "houses_sold_last_10_yrs"
    ],
    "query": {
      "match_all": { }
    }
  },
  "dest": {
    "index": "house_price_predictions",
    "results_field": "ml"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "prediction_field_name": "price_prediction",
      "training_percent": 100.0,
      "randomize_seed": -3578554885299300212,
      "loss_function": "mse",
      "early_stopping_enabled": true
    }
  },
  "model_memory_limit": "1gb",
  "allow_lazy_start": false,
  "max_num_threads": 1
}
The following example creates a job and specifies a training percent:

PUT _ml/data_frame/analytics/student_performance_mathematics_0.3
{
  "source": {
    "index": "student_performance_mathematics"
  },
  "dest": {
    "index": "student_performance_mathematics_reg"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "G3",
      "training_percent": 70,
      "randomize_seed": 19673948271
    }
  }
}

Notes on this request:

- training_percent: the percentage of the data set that is used for training the model.
- randomize_seed: the seed that is used to randomly pick which data is used for training.
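The source object can also carry runtime_mappings, as described in the request body reference. The following is a hedged sketch only (the job name, the sqft field, and the derived field are hypothetical) showing a runtime field computed from two indexed fields and then used as the prediction target:

PUT _ml/data_frame/analytics/house_price_per_sqft_regression
{
  "source": {
    "index": "houses_sold_last_10_yrs",
    "runtime_mappings": {
      "price_per_sqft": {
        "type": "double",
        "script": {
          "source": "emit(doc['price'].value / doc['sqft'].value)"
        }
      }
    }
  },
  "dest": {
    "index": "house_price_per_sqft_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price_per_sqft"
    }
  }
}

A real script would need to guard against documents where sqft is missing or zero; as stated above, the runtime field becomes part of the destination index mapping.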
The following example uses custom feature processors to transform the categorical values for DestWeather into numerical values using one-hot, target-mean, and frequency encoding techniques:

PUT _ml/data_frame/analytics/flight_prices
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ]
  },
  "dest": {
    "index": "kibana_sample_flight_prices"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "AvgTicketPrice",
      "num_top_feature_importance_values": 2,
      "feature_processors": [
        {
          "frequency_encoding": {
            "field": "DestWeather",
            "feature_name": "DestWeather_frequency",
            "frequency_map": {
              "Rain": 0.14604811155570188,
              "Heavy Fog": 0.14604811155570188,
              "Thunder & Lightning": 0.14604811155570188,
              "Cloudy": 0.14604811155570188,
              "Damaging Wind": 0.14604811155570188,
              "Hail": 0.14604811155570188,
              "Sunny": 0.14604811155570188,
              "Clear": 0.14604811155570188
            }
          }
        },
        {
          "target_mean_encoding": {
            "field": "DestWeather",
            "feature_name": "DestWeather_targetmean",
            "target_map": {
              "Rain": 626.5588814585794,
              "Heavy Fog": 626.5588814585794,
              "Thunder & Lightning": 626.5588814585794,
              "Hail": 626.5588814585794,
              "Damaging Wind": 626.5588814585794,
              "Cloudy": 626.5588814585794,
              "Clear": 626.5588814585794,
              "Sunny": 626.5588814585794
            },
            "default_value": 624.0249512020454
          }
        },
        {
          "one_hot_encoding": {
            "field": "DestWeather",
            "hot_map": {
              "Rain": "DestWeather_Rain",
              "Heavy Fog": "DestWeather_Heavy Fog",
              "Thunder & Lightning": "DestWeather_Thunder & Lightning",
              "Cloudy": "DestWeather_Cloudy",
              "Damaging Wind": "DestWeather_Damaging Wind",
              "Hail": "DestWeather_Hail",
              "Clear": "DestWeather_Clear",
              "Sunny": "DestWeather_Sunny"
            }
          }
        }
      ]
    }
  },
  "analyzed_fields": {
    "includes": [
      "AvgTicketPrice",
      "Cancelled",
      "DestWeather",
      "FlightDelayMin",
      "DistanceMiles"
    ]
  },
  "model_memory_limit": "30mb"
}

These custom feature processors are optional; automatic feature encoding still occurs for all categorical features.
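The example above covers three of the documented processors; n_gram_encoding follows the same pattern. The following is a hedged sketch (the job name, destination index, and parameter values are hypothetical, and it assumes OriginCityName is usable as a text field in the sample data) that encodes the leading unigrams and bigrams of the first five characters of the field:

PUT _ml/data_frame/analytics/flight_prices_ngram
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ]
  },
  "dest": {
    "index": "kibana_sample_flight_prices_ngram"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "AvgTicketPrice",
      "feature_processors": [
        {
          "n_gram_encoding": {
            "field": "OriginCityName",
            "feature_prefix": "origin",
            "n_grams": [ 1, 2 ],
            "start": 0,
            "length": 5
          }
        }
      ]
    }
  },
  "analyzed_fields": {
    "includes": [
      "AvgTicketPrice",
      "OriginCityName",
      "DistanceMiles"
    ]
  },
  "model_memory_limit": "30mb"
}

With feature_prefix set to origin, the generated features are named origin.<ngram><position>, as described under n_gram_encoding in the request body reference.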
Classification example

The following example creates the loan_classification data frame analytics job; the analysis type is classification:

PUT _ml/data_frame/analytics/loan_classification
{
  "source": {
    "index": "loan-applicants"
  },
  "dest": {
    "index": "loan-applicants-classified"
  },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 75,
      "num_top_classes": 2
    }
  }
}
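As noted under num_top_classes, the AUC ROC evaluation method needs probabilities reported for every class. The following is a hedged variant of the request above (the job name and destination index are hypothetical) that reports all class probabilities and optimizes for accuracy rather than minimum recall:

PUT _ml/data_frame/analytics/loan_classification_all_classes
{
  "source": {
    "index": "loan-applicants"
  },
  "dest": {
    "index": "loan-applicants-classified-all"
  },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 75,
      "num_top_classes": -1,
      "class_assignment_objective": "maximize_accuracy"
    }
  }
}

Keep in mind that reporting probabilities for all categories can significantly increase the size of the destination index when there are many classes.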