Transform examples
This functionality is in beta and is subject to change. The design and code are less mature than official GA features and are being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
These examples demonstrate how to use transforms to derive useful insights from your data. All the examples use one of the Kibana sample datasets. For a more detailed, step-by-step example, see Tutorial: Transforming the eCommerce sample data.
Finding your best customers
In this example, we use the eCommerce orders sample dataset to find the customers who spent the most in our hypothetical webshop. Let’s transform the data such that the destination index contains, for each customer, the number of orders, the total price of the orders, the average price per order, the average number of unique products per order, and the total number of unique products ordered.
POST _data_frame/transforms/_preview
{
  "source": {
    "index": "kibana_sample_data_ecommerce"
  },
  "dest": {
    "index": "sample_ecommerce_orders_by_customer"
  },
  "pivot": {
    "group_by": {
      "user": { "terms": { "field": "user" }},
      "customer_id": { "terms": { "field": "customer_id" }}
    },
    "aggregations": {
      "order_count": { "value_count": { "field": "order_id" }},
      "total_order_amt": { "sum": { "field": "taxful_total_price" }},
      "avg_amt_per_order": { "avg": { "field": "taxful_total_price" }},
      "avg_unique_products_per_order": { "avg": { "field": "total_unique_products" }},
      "total_unique_products": { "cardinality": { "field": "products.product_id" }}
    }
  }
}
In this request:
- The dest index, sample_ecommerce_orders_by_customer, is the destination index for the data frame. It is ignored by _preview.
- Two group_by fields are selected, so the data frame contains a unique row for each user and customer_id combination.
In the example above, condensed JSON formatting has been used for easier readability of the pivot object.
The preview transforms API enables you to see the layout of the data frame in advance, populated with some sample values. For example:
{ "preview" : [ { "total_order_amt" : 3946.9765625, "order_count" : 59.0, "total_unique_products" : 116.0, "avg_unique_products_per_order" : 2.0, "customer_id" : "10", "user" : "recip", "avg_amt_per_order" : 66.89790783898304 }, ... ] }
This data frame makes it easier to answer questions such as:
- Which customers spend the most?
- Which customers spend the most per order?
- Which customers order most often?
- Which customers ordered the least number of different products?
It’s possible to answer these questions using aggregations alone; however, data frames allow us to persist this data as a customer-centric index. This enables us to analyze data at scale and gives more flexibility to explore and navigate data from a customer-centric perspective. In some cases, it can even make creating visualizations much simpler.
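The _preview request does not index anything. As a minimal sketch, the same pivot could be persisted and run with the create and start data frame transform APIs; the transform ID ecommerce_orders_by_customer is a made-up name, and the body below is abbreviated to the first two aggregations.
# Hypothetical transform ID; reuse the full pivot body from the _preview request above.
PUT _data_frame/transforms/ecommerce_orders_by_customer
{
  "source": { "index": "kibana_sample_data_ecommerce" },
  "dest": { "index": "sample_ecommerce_orders_by_customer" },
  "pivot": {
    "group_by": {
      "user": { "terms": { "field": "user" }},
      "customer_id": { "terms": { "field": "customer_id" }}
    },
    "aggregations": {
      "order_count": { "value_count": { "field": "order_id" }},
      "total_order_amt": { "sum": { "field": "taxful_total_price" }}
    }
  }
}

# Run the transform once it has been created.
POST _data_frame/transforms/ecommerce_orders_by_customer/_start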
Finding air carriers with the most delays
In this example, we use the Flights sample dataset to find out which air carrier had the most delays. First, we filter the source data such that it excludes all the cancelled flights by using a query filter. Then we transform the data to contain the distinct number of flights, the sum of delayed minutes, and the sum of the flight minutes by air carrier. Finally, we use a bucket_script to determine what percentage of the flight time was spent delayed.
POST _data_frame/transforms/_preview
{
  "source": {
    "index": "kibana_sample_data_flights",
    "query": {
      "bool": {
        "filter": [
          { "term": { "Cancelled": false } }
        ]
      }
    }
  },
  "dest": {
    "index": "sample_flight_delays_by_carrier"
  },
  "pivot": {
    "group_by": {
      "carrier": { "terms": { "field": "Carrier" }}
    },
    "aggregations": {
      "flights_count": { "value_count": { "field": "FlightNum" }},
      "delay_mins_total": { "sum": { "field": "FlightDelayMin" }},
      "flight_mins_total": { "sum": { "field": "FlightTimeMin" }},
      "delay_time_percentage": {
        "bucket_script": {
          "buckets_path": {
            "delay_time": "delay_mins_total.value",
            "flight_time": "flight_mins_total.value"
          },
          "script": "(params.delay_time / params.flight_time) * 100"
        }
      }
    }
  }
}
In this request:
- The query filters the source data to select only flights that were not cancelled.
- The dest index, sample_flight_delays_by_carrier, is the destination index for the data frame. It is ignored by _preview.
- The data is grouped by the Carrier field.
- The bucket_script uses the results of the other aggregations to calculate what percentage of the flight time was taken up by delays.
The preview shows you that the new index would contain data like this for each carrier:
{ "preview" : [ { "carrier" : "ES-Air", "flights_count" : 2802.0, "flight_mins_total" : 1436927.5130677223, "delay_time_percentage" : 9.335543983955839, "delay_mins_total" : 134145.0 }, ... ] }
This data frame makes it easier to answer questions such as:
- Which air carrier has the most delays as a percentage of flight time?
This data is fictional and does not reflect actual delays or flight stats for any of the featured destination or origin airports.
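Once such a transform has been created and started (rather than only previewed), a search like the following sketch returns the carrier with the highest delay percentage. It assumes the destination index sample_flight_delays_by_carrier exists and uses the aggregation names above as field names.
# Assumes the transform has been created and started, so the destination index exists.
GET sample_flight_delays_by_carrier/_search
{
  "size": 1,
  "sort": [
    { "delay_time_percentage": "desc" }
  ]
}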
Finding suspicious client IPs by using scripted metrics
With transforms, you can use scripted metric aggregations on your data. These aggregations are flexible and make it possible to perform very complex processing. Let’s use scripted metrics to identify suspicious client IPs in the web log sample dataset.
We transform the data such that the new index contains the sum of bytes and the number of distinct URLs, agents, incoming requests by location, and geographic destinations for each client IP. We also use a scripted metric aggregation to count the specific types of HTTP responses that each client IP receives. Ultimately, the example below transforms web log data into an entity-centric index where the entity is clientip.
POST _data_frame/transforms/_preview
{
  "source": {
    "index": "kibana_sample_data_logs",
    "query": {
      "range" : {
        "timestamp" : {
          "gte" : "now-30d/d"
        }
      }
    }
  },
  "dest": {
    "index": "sample_weblogs_by_clientip"
  },
  "pivot": {
    "group_by": {
      "clientip": { "terms": { "field": "clientip" } }
    },
    "aggregations": {
      "url_dc": { "cardinality": { "field": "url.keyword" }},
      "bytes_sum": { "sum": { "field": "bytes" }},
      "geo.src_dc": { "cardinality": { "field": "geo.src" }},
      "agent_dc": { "cardinality": { "field": "agent.keyword" }},
      "geo.dest_dc": { "cardinality": { "field": "geo.dest" }},
      "responses.total": { "value_count": { "field": "timestamp" }},
      "responses.counts": {
        "scripted_metric": {
          "init_script": "state.responses = ['error':0L,'success':0L,'other':0L]",
          "map_script": """
            def code = doc['response.keyword'].value;
            if (code.startsWith('5') || code.startsWith('4')) {
              state.responses.error += 1 ;
            } else if(code.startsWith('2')) {
              state.responses.success += 1;
            } else {
              state.responses.other += 1;
            }
            """,
          "combine_script": "state.responses",
          "reduce_script": """
            def counts = ['error': 0L, 'success': 0L, 'other': 0L];
            for (responses in states) {
              counts.error += responses['error'];
              counts.success += responses['success'];
              counts.other += responses['other'];
            }
            return counts;
            """
        }
      },
      "timestamp.min": { "min": { "field": "timestamp" }},
      "timestamp.max": { "max": { "field": "timestamp" }},
      "timestamp.duration_ms": {
        "bucket_script": {
          "buckets_path": {
            "min_time": "timestamp.min.value",
            "max_time": "timestamp.max.value"
          },
          "script": "(params.max_time - params.min_time)"
        }
      }
    }
  }
}
In this request:
- The range query limits the transform to documents that are within the last 30 days at the point in time the transform checkpoint is processed. For batch data frames this occurs once.
- The dest index, sample_weblogs_by_clientip, is the destination index for the data frame. It is ignored by _preview.
- The data is grouped by the clientip field.
- The scripted_metric aggregation counts the error, success, and other HTTP responses that each client IP receives.
- The bucket_script calculates the duration of each client IP’s activity, in milliseconds, from the minimum and maximum timestamp values.
The preview shows you that the new index would contain data like this for each client IP:
{ "preview" : [ { "geo" : { "src_dc" : 12.0, "dest_dc" : 9.0 }, "clientip" : "0.72.176.46", "agent_dc" : 3.0, "responses" : { "total" : 14.0, "counts" : { "other" : 0, "success" : 14, "error" : 0 } }, "bytes_sum" : 74808.0, "timestamp" : { "duration_ms" : 4.919943239E9, "min" : "2019-06-17T07:51:57.333Z", "max" : "2019-08-13T06:31:00.572Z" }, "url_dc" : 11.0 }, ... }
This data frame makes it easier to answer questions such as:
- Which client IPs are transferring the most data?
- Which client IPs are interacting with a high number of different URLs?
- Which client IPs have high error rates?
- Which client IPs are interacting with a high number of destination countries?
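For example, assuming the transform has been created and started so that sample_weblogs_by_clientip exists with the field names shown in the preview, a sketch like the following surfaces the client IPs with the most error responses:
# Assumes the destination index and its dynamically mapped numeric fields exist.
GET sample_weblogs_by_clientip/_search
{
  "size": 5,
  "query": {
    "range": { "responses.counts.error": { "gt": 0 } }
  },
  "sort": [
    { "responses.counts.error": "desc" }
  ]
}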