Profiling Queries
The details provided by the Profile API directly expose Lucene class names and concepts, which means that complete interpretation of the results requires fairly advanced knowledge of Lucene. This page attempts to give a crash course in how Lucene executes queries so that you can use the Profile API to successfully diagnose and debug queries, but it is only an overview. For a complete understanding, please refer to Lucene's documentation and, in places, the code.
With that said, a complete understanding is often not required to fix a slow query. It is usually sufficient to see that a particular component of a query is slow, without necessarily understanding why the advance phase of that query is the cause, for example.
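For reference, profiling is enabled by setting profile: true at the top level of a search request. A request along the following lines (the twitter index and the exact match query text are assumptions reconstructed from the example output below) would produce the profile analyzed in the next section:

GET /twitter/_search
{
  "profile": true,
  "query": {
    "match": { "message": "some number" }
  }
}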
query Section

The query section contains detailed timing of the query tree executed by Lucene on a particular shard. The overall structure of this query tree will resemble your original Elasticsearch query, but may be slightly (or sometimes very) different. It will also use similar but not always identical naming. Using our previous match query example, let's analyze the query section:
"query": [ { "type": "BooleanQuery", "description": "message:some message:number", "time_in_nanos": "1873811", "breakdown": {...}, "children": [ { "type": "TermQuery", "description": "message:some", "time_in_nanos": "391943", "breakdown": {...} }, { "type": "TermQuery", "description": "message:number", "time_in_nanos": "210682", "breakdown": {...} } ] } ]
Based on the profile structure, we can see that our match query was rewritten by Lucene into a BooleanQuery with two clauses (both holding a TermQuery). The type field displays the Lucene class name, and often aligns with the equivalent name in Elasticsearch. The description field displays the Lucene explanation text for the query, and is made available to help differentiate between parts of your query (e.g. both message:some and message:number are TermQuerys and would appear identical otherwise).
The time_in_nanos field shows that this query took ~1.8ms for the entire BooleanQuery to execute. The recorded time is inclusive of all children. The breakdown field gives detailed stats about how the time was spent; we'll look at that in a moment. Finally, the children array lists any sub-queries that may be present. Because we searched for two values ("some number"), our BooleanQuery holds two child TermQueries. They have identical information (type, time, breakdown, etc). Children are allowed to have their own children.
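Because recorded times are inclusive, a little arithmetic shows how much time a parent spent in its own machinery. In the example above, the two TermQuery children account for 391,943 + 210,682 = 602,625 ns, so roughly 1,271,186 ns of the BooleanQuery's 1,873,811 ns total was spent in the BooleanQuery itself, coordinating its two clauses.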
Timing Breakdown

The breakdown component lists detailed timing statistics about low-level Lucene execution:
"breakdown": { "score": 51306, "score_count": 4, "build_scorer": 2935582, "build_scorer_count": 1, "match": 0, "match_count": 0, "create_weight": 919297, "create_weight_count": 1, "next_doc": 53876, "next_doc_count": 5, "advance": 0, "advance_count": 0 }
Timings are listed in wall-clock nanoseconds and are not normalized at all. All caveats about the overall time_in_nanos apply here. The intention of the breakdown is to give you a feel for A) what machinery in Lucene is actually eating time, and B) the magnitude of differences in times between the various components. Like the overall time, the breakdown is inclusive of all child times.
The meaning of the stats is as follows:

All parameters:

create_weight
    A Query in Lucene must be capable of reuse across multiple IndexSearchers (think of an IndexSearcher as the engine that executes a search against a specific Lucene index). This puts Lucene in a tricky spot, since many queries need to accumulate temporary state/statistics associated with the index they are being used against, but the Query contract mandates that it must be immutable. To work around this, Lucene asks each query to generate a Weight object to hold that temporary, per-search state; this statistic records how long that process takes.

build_scorer
    This parameter shows how long it takes to build a Scorer for the query. A Scorer is the mechanism that iterates over matching documents and generates a score per document (e.g. how well does "foo" match the document?). Note, this records the time required to generate the Scorer object, not actually score the documents. Some queries have faster or slower initialization of the Scorer, depending on optimizations, complexity, etc.

next_doc
    The Lucene method next_doc returns the doc ID of the next document matching the query. This statistic shows the time it takes to determine which document is the next match, a process that varies considerably depending on the nature of the query.

advance
    advance is a "lower level" version of next_doc: it serves the same purpose of finding the next matching document, but requires the calling query to perform extra tasks, such as identifying and moving past skips. Some queries, such as conjunctions (must clauses in Booleans), cannot use next_doc; for those queries, advance is timed instead.

match
    Some queries, such as phrase queries, match documents using a "Two Phase" process. First, the document is "approximately" matched, and if it matches approximately, it is checked a second time with a more rigorous (and expensive) process. The second phase verification is what the match statistic measures.

score
    This records the time taken to score a particular document via its Scorer.

*_count
    Records the number of invocations of the particular method. For example, "next_doc_count": 5 means the next_doc() method was called on five different documents. Comparing counts between different query components helps judge how selective each component is.
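As a quick worked reading of the breakdown above: build_scorer dominates at ~2.9ms, while next_doc took 53,876 ns across 5 invocations (its next_doc_count), or roughly 10,775 ns per call. Comparing per-invocation costs like this is often the fastest way to see which piece of Lucene machinery is actually eating time.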
collectors Section

The Collectors portion of the response shows high-level execution details. Lucene works by defining a "Collector" which is responsible for coordinating the traversal, scoring, and collection of matching documents. Collectors are also how a single query can record aggregation results, execute unscoped "global" queries, execute post-query filters, etc.
Looking at the previous example:
"collector": [ { "name": "CancellableCollector", "reason": "search_cancelled", "time_in_nanos": "304311", "children": [ { "name": "SimpleTopScoreDocCollector", "reason": "search_top_hits", "time_in_nanos": "32273" } ] } ]
We see a single collector named SimpleTopScoreDocCollector wrapped inside CancellableCollector. SimpleTopScoreDocCollector is the default "scoring and sorting" Collector used by Elasticsearch. The reason field attempts to give a plain English description of the class name. The time_in_nanos is similar to the time in the Query tree: a wall-clock time inclusive of all children. Similarly, children lists all sub-collectors. The CancellableCollector that wraps SimpleTopScoreDocCollector is used by Elasticsearch to detect whether the current search has been cancelled, and to stop collecting documents as soon as that occurs.
It should be noted that Collector times are independent of the Query times. They are calculated, combined, and normalized independently! Due to the nature of Lucene's execution, it is impossible to "merge" the times from the Collectors into the Query section, so they are displayed in separate portions.
For reference, the various collector reasons are:
search_top_hits
    A collector that scores and sorts documents. This is the most common collector and will be seen in most simple searches.

search_count
    A collector that only counts the number of documents that match the query, but does not fetch the source. This is seen when size: 0 is specified.

search_terminate_after_count
    A collector that terminates search execution after n matching documents have been found. This is seen when the terminate_after_count query parameter has been specified.

search_min_score
    A collector that only returns matching documents that have a score greater than n. This is seen when the top-level parameter min_score has been specified.

search_multi
    A collector that wraps several other collectors. This is seen when combinations of search, aggregations, global aggregations, and post_filters are combined in a single search.

search_timeout
    A collector that halts execution after a specified period of time. This is seen when a timeout top-level parameter has been specified.

aggregation
    A collector that Elasticsearch uses to run aggregations against the query scope. A single aggregation collector is used to collect documents for all aggregations, so you will see a list of aggregations in the name.

global_aggregation
    A collector that executes an aggregation against the global query scope, rather than the specified query. Because the global scope is necessarily different from the executed query, it must execute its own match_all query (which you will see added to the Query section) to collect your entire dataset.
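As an illustration, a count-only request such as the following (reusing the twitter index from the earlier examples, so treat those names as assumptions) would typically surface a collector with reason search_count in the profile output, since no documents need to be scored or fetched:

GET /twitter/_search
{
  "profile": true,
  "size": 0,
  "query": {
    "term": { "user": { "value": "test" } }
  }
}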
rewrite Section

All queries in Lucene undergo a "rewriting" process. A query (and its sub-queries) may be rewritten one or more times, and the process continues until the query stops changing. This process allows Lucene to perform optimizations, such as removing redundant clauses or replacing one query with a more efficient execution path. For example, a Boolean → Boolean → TermQuery can be rewritten to a TermQuery, because all the Booleans are unnecessary in this case.
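To see this in action, you can profile a needlessly nested query yourself. A request along the following lines (again assuming the twitter index and user field from the other examples) wraps a term query in two redundant bool layers; in the profiled query tree you would typically see it collapsed down to a single TermQuery:

GET /twitter/_search
{
  "profile": true,
  "query": {
    "bool": {
      "must": [
        {
          "bool": {
            "must": [
              { "term": { "user": "test" } }
            ]
          }
        }
      ]
    }
  }
}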
The rewriting process is complex and difficult to display, since queries can change drastically. Rather than showing the intermediate results, the total rewrite time is simply displayed as a value (in nanoseconds). This value is cumulative and contains the total time for all queries being rewritten.
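In the response, this cumulative value appears as a single rewrite_time field next to the query array for each shard's search, as in the larger example below:

"searches": [
  {
    "query": [...],
    "rewrite_time": 7208,
    "collector": [...]
  }
]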
A more complex example

To demonstrate a slightly more complex query and the associated results, we can profile the following query:
GET /twitter/_search
{
  "profile": true,
  "query": {
    "term": {
      "user": {
        "value": "test"
      }
    }
  },
  "aggs": {
    "my_scoped_agg": {
      "terms": {
        "field": "likes"
      }
    },
    "my_global_agg": {
      "global": {},
      "aggs": {
        "my_level_agg": {
          "terms": {
            "field": "likes"
          }
        }
      }
    }
  },
  "post_filter": {
    "match": {
      "message": "some"
    }
  }
}
This example has:
- A query
- A scoped aggregation
- A global aggregation
- A post_filter
And the response:
{
  ...
  "profile": {
    "shards": [
      {
        "id": "[P6-vulHtQRWuD4YnubWb7A][test][0]",
        "searches": [
          {
            "query": [
              {
                "type": "TermQuery",
                "description": "message:some",
                "time_in_nanos": "409456",
                "breakdown": {
                  "score": 0,
                  "build_scorer_count": 1,
                  "match_count": 0,
                  "create_weight": 31584,
                  "next_doc": 0,
                  "match": 0,
                  "create_weight_count": 1,
                  "next_doc_count": 2,
                  "score_count": 1,
                  "build_scorer": 377872,
                  "advance": 0,
                  "advance_count": 0
                }
              },
              {
                "type": "TermQuery",
                "description": "user:test",
                "time_in_nanos": "303702",
                "breakdown": {
                  "score": 0,
                  "build_scorer_count": 1,
                  "match_count": 0,
                  "create_weight": 185215,
                  "next_doc": 5936,
                  "match": 0,
                  "create_weight_count": 1,
                  "next_doc_count": 2,
                  "score_count": 1,
                  "build_scorer": 112551,
                  "advance": 0,
                  "advance_count": 0
                }
              }
            ],
            "rewrite_time": 7208,
            "collector": [
              {
                "name": "CancellableCollector",
                "reason": "search_cancelled",
                "time_in_nanos": 2390,
                "children": [
                  {
                    "name": "MultiCollector",
                    "reason": "search_multi",
                    "time_in_nanos": 1820,
                    "children": [
                      {
                        "name": "FilteredCollector",
                        "reason": "search_post_filter",
                        "time_in_nanos": 7735,
                        "children": [
                          {
                            "name": "SimpleTopScoreDocCollector",
                            "reason": "search_top_hits",
                            "time_in_nanos": 1328
                          }
                        ]
                      },
                      {
                        "name": "BucketCollector: [[my_scoped_agg, my_global_agg]]",
                        "reason": "aggregation",
                        "time_in_nanos": 8273
                      }
                    ]
                  }
                ]
              }
            ]
          }
        ],
        "aggregations": [...]
      }
    ]
  }
}
As you can see, the output is significantly more verbose than before. All the major portions of the query are represented:
- One TermQuery (user:test) represents the main term query
- The other TermQuery (message:some) represents the post_filter query
The Collector tree is fairly straightforward, showing how a single CancellableCollector wraps a MultiCollector, which in turn wraps both a FilteredCollector to execute the post_filter (itself wrapping the normal scoring SimpleTopScoreDocCollector) and a BucketCollector to run all scoped aggregations.
Understanding MultiTermQuery output

A special note needs to be made about the MultiTermQuery class of queries. This includes wildcard, regexp, and fuzzy queries. These queries emit very verbose responses, and are not overly structured.
Essentially, these queries rewrite themselves on a per-segment basis. If you imagine the wildcard query b*, it technically can match any token that begins with the letter "b". It would be impossible to enumerate all possible combinations, so Lucene rewrites the query in the context of the segment being evaluated. E.g. one segment may contain the tokens [bar, baz], so the query rewrites to a BooleanQuery combination of "bar" and "baz". Another segment may only have the token [bakery], so the query rewrites to a single TermQuery for "bakery".
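If you want to see this behavior for yourself, profiling a wildcard query directly is the easiest way. A request like the following (again assuming the twitter index and user field from the earlier examples, so adjust for your own data) will produce the verbose, loosely structured output described next:

GET /twitter/_search
{
  "profile": true,
  "query": {
    "wildcard": {
      "user": {
        "value": "b*"
      }
    }
  }
}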
Due to this dynamic, per-segment rewriting, the clean tree structure becomes distorted and no longer follows a clean "lineage" showing how one query rewrites into the next. At present, all we can do is apologize and suggest you collapse the details for that query's children if they are too confusing. Luckily, all the timing statistics are correct, just not the physical layout in the response, so it is sufficient to analyze the top-level MultiTermQuery and ignore its children if you find the details too tricky to interpret.
Hopefully this will be fixed in future iterations, but it is a tricky problem to solve and still in progress :)