Data frame analytics job resources

Data frame analytics resources relate to APIs such as Create data frame analytics jobs and Get data frame analytics jobs.
Properties

- `analysis`: (object) The type of analysis that is performed on the `source`. For example: `outlier_detection` or `regression`. For more information, see Analysis objects.
- `analyzed_fields`: (object) Specify `includes` and/or `excludes` patterns. If `analyzed_fields` is not set, only the relevant fields are included; for example, all the numeric fields for outlier detection. For the supported field types, see Supported fields.
  - `includes`: (array) An array of strings that defines the fields that are included in the analysis.
  - `excludes`: (array) An array of strings that defines the fields that are excluded from the analysis.
PUT _ml/data_frame/analytics/loganalytics
{
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {}
  },
  "analyzed_fields": {
    "includes": [ "request.bytes", "response.counts.error" ],
    "excludes": [ "source.geo" ]
  }
}
- `description`: (Optional, string) A description of the job.
- `dest`: (object) The destination configuration of the analysis.
  - `index`: (Required, string) Defines the destination index that stores the results of the data frame analytics job.
  - `results_field`: (Optional, string) Defines the name of the field in which to store the results of the analysis. Defaults to `ml`.
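  As an illustration, a destination that writes results to a custom field might look like the following sketch (the index and field names here are made up):

  "dest": {
    "index": "logdata_out",
    "results_field": "ml_results"
  }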
- `id`: (string) The unique identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. This property is informational; you cannot change the identifier for existing jobs.
- `model_memory_limit`: (string) The approximate maximum amount of memory resources that are permitted for analytical processing. The default value for data frame analytics jobs is `1gb`. If your `elasticsearch.yml` file contains an `xpack.ml.max_model_memory_limit` setting, an error occurs when you try to create data frame analytics jobs that have `model_memory_limit` values greater than that setting. For more information, see Machine learning settings.
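  For example, given an illustrative cap of `xpack.ml.max_model_memory_limit: 512mb` in `elasticsearch.yml`, a create request containing the following fragment fails, because `1gb` exceeds the cap:

  "model_memory_limit": "1gb"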
- `source`: (object) The source configuration, consisting of an `index` and optionally a `query` object.
  - `index`: (Required, string or array) Index or indices on which to perform the analysis. It can be a single index or index pattern, or an array of indices or patterns.
  - `query`: (Optional, object) The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: `{"match_all": {}}`.
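For example, a source that restricts the analysis to recent documents from a set of indices might look like the following sketch (the index pattern and timestamp field are illustrative):

"source": {
  "index": [ "logdata-*" ],
  "query": {
    "range": {
      "timestamp": {
        "gte": "now-30d/d"
      }
    }
  }
}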
Analysis objects

Data frame analytics resources contain `analysis` objects. For example, when you create a data frame analytics job, you must define the type of analysis it performs.
Outlier detection configuration objects

An `outlier_detection` configuration object has the following properties:

- `feature_influence_threshold`: (double) The minimum outlier score that a document needs to have in order to calculate its feature influence score. Value range: 0-1 (`0.1` by default).
- `method`: (string) Sets the method that outlier detection uses. If the method is not set, outlier detection uses an ensemble of different methods and normalises and combines their individual outlier scores to obtain the overall outlier score. We recommend using the ensemble method. Available methods are `lof`, `ldof`, `distance_kth_nn`, and `distance_knn`.
- `n_neighbors`: (integer) Defines how many nearest neighbors each method of outlier detection uses to calculate its outlier score. When the value is not set, different values are used for different ensemble members, which helps improve diversity in the ensemble. Therefore, only override this if you are confident that the value you choose is appropriate for the data set.
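As a sketch, a create request that overrides these defaults might look like this (the job and index names are hypothetical):

PUT _ml/data_frame/analytics/weblog_outliers
{
  "source": { "index": "weblogs" },
  "dest": { "index": "weblogs_outliers" },
  "analysis": {
    "outlier_detection": {
      "method": "distance_knn",
      "n_neighbors": 5,
      "feature_influence_threshold": 0.2
    }
  }
}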
Regression configuration objects

PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}

1. Training data is taken from the source index `houses_sold_last_10_yrs`.
2. Analysis results will be output to the destination index `house_price_predictions`.
3. The regression analysis configuration object.
4. Regression analysis will use the field `price` to train on.
Standard parameters

- `dependent_variable`: (Required, string) Defines which field of the document is to be predicted. This parameter is supplied by field name and must match one of the fields in the index being used to train. If this field is missing from a document, that document is not used for training, but a prediction with the trained model is generated for it. The data type of the field must be numeric. It is also known as the continuous target variable.
- `prediction_field_name`: (Optional, string) Defines the name of the prediction field in the results. Defaults to `<dependent_variable>_prediction`.
- `training_percent`: (Optional, integer) Defines what percentage of the eligible documents is used for training. Documents that are ignored by the analysis (for example, those that contain arrays) are not included in the calculation of the used percentage. Defaults to `100`.
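For instance, the following fragment (a sketch building on the house price example above) trains on half of the eligible documents and writes predictions to a custom field:

"analysis": {
  "regression": {
    "dependent_variable": "price",
    "prediction_field_name": "predicted_price",
    "training_percent": 50
  }
}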
Advanced parameters

Advanced parameters are for fine-tuning regression analysis. If they are not supplied, their values are automatically tuned by hyperparameter optimization to give the minimum validation error. It is highly recommended to use the default values unless you fully understand the function of these parameters.

- `eta`: (Optional, double) The shrinkage applied to the weights. Smaller values result in larger forests, which have better generalization error. However, the smaller the value, the longer the training takes. For more information, see this wiki article about shrinkage.
- `feature_bag_fraction`: (Optional, double) Defines the fraction of features that is used when selecting a random bag for each candidate split.
- `maximum_number_trees`: (Optional, integer) Defines the maximum number of trees the forest is allowed to contain. The maximum value is 2000.
- `gamma`: (Optional, double) Regularization parameter to prevent overfitting on the training data set. Multiplies a linear penalty associated with the size of individual trees in the forest. The higher the value, the more training prefers smaller trees. The smaller this parameter, the larger individual trees will be and the longer training will take.
- `lambda`: (Optional, double) Regularization parameter to prevent overfitting on the training data set. Multiplies an L2 regularization term which applies to the leaf weights of the individual trees in the forest. The higher the value, the more training attempts to keep leaf weights small. This makes the prediction function smoother at the expense of potentially not being able to capture relevant relationships between the features and the dependent variable. The smaller this parameter, the larger individual trees will be and the longer training will take.
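Purely as a sketch (the values below are arbitrary, not recommendations), overriding a subset of the advanced parameters looks like this; any parameters you omit are still tuned automatically:

"analysis": {
  "regression": {
    "dependent_variable": "price",
    "eta": 0.05,
    "feature_bag_fraction": 0.7,
    "maximum_number_trees": 500
  }
}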
Hyperparameter optimization

If you don't supply regression parameters, hyperparameter optimization is performed by default to set a value for the undefined parameters. The starting point is calculated for data-dependent parameters by examining the loss on the training data. Subject to the size constraint, this operation provides an upper bound on the improvement in validation loss.

A fixed number of rounds is used for optimization, which depends on the number of parameters being optimized. The optimization starts with a random search, then Bayesian optimization is performed, targeting maximum expected improvement. If you override any parameters, the optimization calculates the value of the remaining parameters accordingly and uses the value you provided for the overridden parameter; the number of rounds is reduced respectively. The validation error is estimated in each round by using 4-fold cross-validation.