Percolate query

The percolate query can be used to match queries stored in an index. The percolate query itself contains the document that will be used as a query to match against the stored queries.
Sample Usage

Create an index with two fields:
PUT /my-index-00001
{
  "mappings": {
    "properties": {
      "message": {
        "type": "text"
      },
      "query": {
        "type": "percolator"
      }
    }
  }
}
The message field is used to preprocess the document defined in the percolate query before it gets indexed into a temporary index.

The query field is used for indexing the query documents. It holds a JSON object that represents an actual Elasticsearch query. The query field has been configured to use the percolator field type. This field type understands the query DSL and stores the query in such a way that it can be used later on to match documents defined in the percolate query.
Register a query in the percolator:
PUT /my-index-00001/_doc/1?refresh
{
  "query": {
    "match": {
      "message": "bonsai tree"
    }
  }
}
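The stored query can be any query from the query DSL, not only a match query. As a minimal sketch, registering a bool query looks like the following; the index my-queries-sketch is hypothetical and used here only so the running example and the responses shown on this page are unaffected:

PUT /my-queries-sketch
{
  "mappings": {
    "properties": {
      "message": { "type": "text" },
      "query": { "type": "percolator" }
    }
  }
}

PUT /my-queries-sketch/_doc/1?refresh
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "bonsai" } },
        { "match": { "message": "office" } }
      ]
    }
  }
}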
Match a document to the registered percolator queries:
GET /my-index-00001/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "document": {
        "message": "A new bonsai tree in the office"
      }
    }
  }
}
The above request will yield the following response:
{ "took": 13, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped" : 0, "failed": 0 }, "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 0.26152915, "hits": [ { "_index": "my-index-00001", "_type": "_doc", "_id": "1", "_score": 0.26152915, "_source": { "query": { "match": { "message": "bonsai tree" } } }, "fields" : { "_percolator_document_slot" : [0] } } ] } }
The query with id 1 matches our document.

The _percolator_document_slot field indicates which document matched this query. This is useful when percolating multiple documents.
To keep the example simple, this documentation uses a single index, my-index-00001, for both the percolate queries and the documents. This setup can work well when there are just a few percolate queries registered. However, with heavier usage it is recommended to store queries and documents in separate indices. See How it Works Under the Hood for more details.
Parameters

The following parameters are required when percolating a document:

field
The field of type percolator that holds the indexed queries. This is a required parameter.

name
The suffix to be used for the _percolator_document_slot field in case multiple percolate queries have been specified. This is an optional parameter.

document
The source of the document being percolated.

documents
Like the document parameter, but accepts multiple documents via a JSON array.

document_type
The type / mapping of the document being percolated. This parameter is deprecated and will be removed in Elasticsearch 8.0.
Instead of specifying the source of the document being percolated, the source can also be retrieved from an already stored document. The percolate query will then internally execute a get request to fetch that document. In that case the document parameter can be substituted with the following parameters:
index
The index the document resides in. This is a required parameter.

type
The type of the document to fetch. This parameter is deprecated and will be removed in Elasticsearch 8.0.

id
The id of the document to fetch. This is a required parameter.

routing
Optionally, the routing to be used to fetch the document to percolate.

preference
Optionally, the preference to be used to fetch the document to percolate.

version
Optionally, the expected version of the document to be fetched.
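For illustration, a minimal sketch of this fetch form, assuming a stored document with id 2 already exists in my-index-00001 (the preference value is hypothetical); a complete worked example appears in Percolating an Existing Document below:

GET /my-index-00001/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "index": "my-index-00001",
      "id": "2",
      "preference": "_local"
    }
  }
}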
Percolating in a filter context

If you are not interested in the score, better performance can be expected by wrapping the percolator query in a bool query’s filter clause or in a constant_score query:
GET /my-index-00001/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "percolate": {
          "field": "query",
          "document": {
            "message": "A new bonsai tree in the office"
          }
        }
      }
    }
  }
}
At index time, terms are extracted from the percolator query, and the percolator can often determine whether a query matches just by looking at those extracted terms. However, computing scores requires deserializing each matching query and running it against the percolated document, which is a much more expensive operation. Hence, if computing scores is not required, the percolate query should be wrapped in a constant_score query or a bool query’s filter clause.
Note that the percolate query never gets cached by the query cache.
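For completeness, a minimal sketch of the bool filter-clause variant mentioned above; like the constant_score form, it avoids computing scores for the percolator matches:

GET /my-index-00001/_search
{
  "query": {
    "bool": {
      "filter": {
        "percolate": {
          "field": "query",
          "document": {
            "message": "A new bonsai tree in the office"
          }
        }
      }
    }
  }
}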
Percolating multiple documents

The percolate query can match multiple documents simultaneously against the indexed percolator queries. Percolating multiple documents in a single request can improve performance, as queries only need to be parsed and matched once instead of multiple times.
The _percolator_document_slot field that is returned with each matched percolator query is important when percolating multiple documents simultaneously. It indicates which documents matched a particular percolator query. The numbers correlate with the slots in the documents array specified in the percolate query.
GET /my-index-00001/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "documents": [
        { "message": "bonsai tree" },
        { "message": "new tree" },
        { "message": "the office" },
        { "message": "office tree" }
      ]
    }
  }
}
{ "took": 13, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped" : 0, "failed": 0 }, "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 0.7093853, "hits": [ { "_index": "my-index-00001", "_type": "_doc", "_id": "1", "_score": 0.7093853, "_source": { "query": { "match": { "message": "bonsai tree" } } }, "fields" : { "_percolator_document_slot" : [0, 1, 3] } } ] } }
The _percolator_document_slot field indicates that the first, second, and last documents specified in the percolate query match this stored query.
Percolating an Existing Document

In order to percolate a newly indexed document, the percolate query can be used. Based on the response from an index request, the _id and other meta information can be used to immediately percolate the newly added document.
Example

Based on the previous example.
Index the document we want to percolate:
PUT /my-index-00001/_doc/2
{
  "message": "A new bonsai tree in the office"
}
Index response:
{ "_index": "my-index-00001", "_type": "_doc", "_id": "2", "_version": 1, "_shards": { "total": 2, "successful": 1, "failed": 0 }, "result": "created", "_seq_no" : 1, "_primary_term" : 1 }
Percolating an existing document, using the index response as the basis to build a new search request:
GET /my-index-00001/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "index": "my-index-00001",
      "id": "2",
      "version": 1
    }
  }
}
The version is optional, but useful in certain cases: it ensures that we are percolating the document we just indexed. If the document was changed after we indexed it, the search request fails with a version conflict error.
The search response returned is identical to the one in the previous example.
Percolate query and highlighting

The percolate query is handled in a special way when it comes to highlighting. The query hits are used to highlight the document that is provided in the percolate query, whereas with regular highlighting the query in the search request is used to highlight the hits.
Example

This example is based on the mapping of the first example.
Save a query:
PUT /my-index-00001/_doc/3?refresh
{
  "query": {
    "match": {
      "message": "brown fox"
    }
  }
}
Save another query:
PUT /my-index-00001/_doc/4?refresh
{
  "query": {
    "match": {
      "message": "lazy dog"
    }
  }
}
Execute a search request with the percolate query and highlighting enabled:
GET /my-index-00001/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "document": {
        "message": "The quick brown fox jumps over the lazy dog"
      }
    }
  },
  "highlight": {
    "fields": {
      "message": {}
    }
  }
}
This will yield the following response.
{ "took": 7, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped" : 0, "failed": 0 }, "hits": { "total" : { "value": 2, "relation": "eq" }, "max_score": 0.26152915, "hits": [ { "_index": "my-index-00001", "_type": "_doc", "_id": "3", "_score": 0.26152915, "_source": { "query": { "match": { "message": "brown fox" } } }, "highlight": { "message": [ "The quick <em>brown</em> <em>fox</em> jumps over the lazy dog" ] }, "fields" : { "_percolator_document_slot" : [0] } }, { "_index": "my-index-00001", "_type": "_doc", "_id": "4", "_score": 0.26152915, "_source": { "query": { "match": { "message": "lazy dog" } } }, "highlight": { "message": [ "The quick brown fox jumps over the <em>lazy</em> <em>dog</em>" ] }, "fields" : { "_percolator_document_slot" : [0] } } ] } }
Instead of the query in the search request highlighting the percolator hits, the percolator queries are highlighting the document defined in the percolate query.
When percolating multiple documents at the same time, as in the request below, the highlight response is different:
GET /my-index-00001/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "documents": [
        { "message": "bonsai tree" },
        { "message": "new tree" },
        { "message": "the office" },
        { "message": "office tree" }
      ]
    }
  },
  "highlight": {
    "fields": {
      "message": {}
    }
  }
}
The slightly different response:
{ "took": 13, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped" : 0, "failed": 0 }, "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 0.7093853, "hits": [ { "_index": "my-index-00001", "_type": "_doc", "_id": "1", "_score": 0.7093853, "_source": { "query": { "match": { "message": "bonsai tree" } } }, "fields" : { "_percolator_document_slot" : [0, 1, 3] }, "highlight" : { "0_message" : [ "<em>bonsai</em> <em>tree</em>" ], "3_message" : [ "office <em>tree</em>" ], "1_message" : [ "new <em>tree</em>" ] } } ] } }
The highlight fields are prefixed with the document slot they belong to, so it is clear which highlight field belongs to which document.
Specifying multiple percolate queries

It is possible to specify multiple percolate queries in a single search request:
GET /my-index-00001/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "percolate": {
            "field": "query",
            "document": {
              "message": "bonsai tree"
            },
            "name": "query1"
          }
        },
        {
          "percolate": {
            "field": "query",
            "document": {
              "message": "tulip flower"
            },
            "name": "query2"
          }
        }
      ]
    }
  }
}
The name parameter is used to identify which percolator document slots belong to which percolate query.
The _percolator_document_slot field name will be suffixed with what is specified in the name parameter. If that isn’t specified, the field parameter will be used, which in this case would result in ambiguity.
The above search request returns a response similar to this:
{ "took": 13, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped" : 0, "failed": 0 }, "hits": { "total" : { "value": 1, "relation": "eq" }, "max_score": 0.26152915, "hits": [ { "_index": "my-index-00001", "_type": "_doc", "_id": "1", "_score": 0.26152915, "_source": { "query": { "match": { "message": "bonsai tree" } } }, "fields" : { "_percolator_document_slot_query1" : [0] } } ] } }
The _percolator_document_slot_query1 field indicates which document slots matched the percolate query named query1.
How it Works Under the Hood

When indexing a document into an index that has the percolator field type mapping configured, the query part of the document gets parsed into a Lucene query and stored in the Lucene index. A binary representation of the query is stored, and the query’s terms are also analyzed and stored in an indexed field.

At search time, the document specified in the request gets parsed into a Lucene document and stored in an in-memory, temporary Lucene index. This in-memory index holds just this one document and is optimized for that. After this, a special query is built from the terms in the in-memory index to select candidate percolator queries based on their indexed query terms. These candidates are then evaluated against the in-memory index to check whether they actually match.
The selection of candidate percolator queries is an important performance optimization during the execution of the percolate query, as it can significantly reduce the number of candidate matches the in-memory index needs to evaluate. The reason the percolate query can do this is that the query terms are extracted and indexed together with each percolator query at indexing time. Unfortunately, the percolator cannot extract terms from all queries (for example the wildcard or geo_shape query), and as a result, in certain cases the percolator can’t perform this selection optimization (for example, if an unsupported query is defined in a required clause of a boolean query, or if the unsupported query is the only query in the percolator document). These queries are marked by the percolator and can be found by running the following search:
GET /_search
{
  "query": {
    "term": {
      "query.extraction_result": "failed"
    }
  }
}
The above example assumes that there is a query field of type percolator in the mappings.
Given the design of percolation, it often makes sense to use separate indices for the percolate queries and the documents being percolated, as opposed to the single index used in these examples. There are a few benefits to this approach:

- Because percolate queries contain a different set of fields from the percolated documents, using two separate indices allows fields to be stored in a denser, more efficient way.
- Percolate queries do not scale in the same way as other queries, so percolation performance may benefit from a different index configuration, such as the number of primary shards; a sketch of such a split follows this list.
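A minimal sketch of such a split, assuming hypothetical index names queries-only-index and documents-index. Note that the queries index still needs mappings for the document fields (here, message) so that the stored queries can be parsed; the one-shard setting is only an example of tuning the queries index independently:

PUT /queries-only-index
{
  "settings": {
    "index.number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "message": { "type": "text" },
      "query": { "type": "percolator" }
    }
  }
}

PUT /documents-index/_doc/1
{
  "message": "A new bonsai tree in the office"
}

GET /queries-only-index/_search
{
  "query": {
    "percolate": {
      "field": "query",
      "index": "documents-index",
      "id": "1"
    }
  }
}

Here the percolate query fetches the document from documents-index and matches it against the queries stored in queries-only-index.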
Notes

Allow expensive queries

Percolate queries will not be executed if search.allow_expensive_queries is set to false.