Phrase Suggester

In order to understand the format of suggestions, please read the Suggesters page first.

The term suggester provides a very convenient API to access word alternatives on a per-token basis within a certain string distance. The API allows accessing each token in the stream individually, while suggest selection is left to the API consumer. Yet, often pre-selected suggestions are required in order to present them to the end-user. The phrase suggester adds additional logic on top of the term suggester to select entire corrected phrases instead of individual tokens, weighted based on ngram language models. In practice this suggester is able to make better decisions about which tokens to pick based on co-occurrence and frequencies.
API Example

In general the phrase suggester requires special mapping up front to work. The phrase suggester examples on this page need the following mapping to work. The reverse analyzer is used only in the last example.

PUT test
{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "analysis": {
        "analyzer": {
          "trigram": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["standard", "shingle"]
          },
          "reverse": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["standard", "reverse"]
          }
        },
        "filter": {
          "shingle": {
            "type": "shingle",
            "min_shingle_size": 2,
            "max_shingle_size": 3
          }
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "title": {
          "type": "text",
          "fields": {
            "trigram": {
              "type": "text",
              "analyzer": "trigram"
            },
            "reverse": {
              "type": "text",
              "analyzer": "reverse"
            }
          }
        }
      }
    }
  }
}

POST test/test?refresh=true
{"title": "noble warriors"}

POST test/test?refresh=true
{"title": "nobel prize"}
Once you have the analyzers and mappings set up, you can use the phrase suggester in the same spot you'd use the term suggester:

POST test/_search
{
  "suggest": {
    "text": "noble prize",
    "simple_phrase": {
      "phrase": {
        "field": "title.trigram",
        "size": 1,
        "gram_size": 3,
        "direct_generator": [ {
          "field": "title.trigram",
          "suggest_mode": "always"
        } ],
        "highlight": {
          "pre_tag": "<em>",
          "post_tag": "</em>"
        }
      }
    }
  }
}
The response contains suggestions scored by the most likely spelling correction first. In this case we received the expected correction "nobel prize".

{
  "_shards": ...
  "hits": ...
  "timed_out": false,
  "took": 3,
  "suggest": {
    "simple_phrase" : [
      {
        "text" : "noble prize",
        "offset" : 0,
        "length" : 11,
        "options" : [ {
          "text" : "nobel prize",
          "highlighted": "<em>nobel</em> prize",
          "score" : 0.5962314
        }]
      }
    ]
  }
}
Basic Phrase suggest API parameters

field
    The name of the field used to do n-gram lookups for the language model; the suggester will use this field to gain statistics to score corrections. This field is mandatory.

gram_size
    Sets the max size of the n-grams (shingles) in the field. If the field doesn't contain n-grams (shingles), this should be omitted or set to 1. Note that Elasticsearch tries to detect the gram size based on the specified field. If the field uses a shingle filter, the gram_size is set to the max_shingle_size if not explicitly set.

real_word_error_likelihood
    The likelihood of a term being misspelled even if the term exists in the dictionary. The default is 0.95, meaning 5% of the real words are misspelled.

confidence
    The confidence level defines a factor applied to the input phrase's score which is used as a threshold for other suggest candidates. Only candidates that score higher than the threshold will be included in the result. For instance, a confidence level of 1.0 will only return suggestions that score higher than the input phrase. If set to 0.0 the top N candidates are returned. The default is 1.0.

max_errors
    The maximum percentage of the terms considered to be misspellings in order to form a correction. This method accepts a float value in the range [0..1) as a fraction of the actual query terms, or a number >= 1 as an absolute number of query terms. The default is set to 1.0, meaning only corrections with at most one misspelled term are returned. Note that setting this too high can negatively impact performance. Low values like 1 or 2 are recommended; otherwise the time spent in suggest calls might exceed the time spent in query execution.

separator
    The separator that is used to separate terms in the bigram field. If not set, the whitespace character is used as a separator.

size
    The number of candidates that are generated for each individual query term. Low numbers like 3 or 5 typically produce good results. Raising this can bring up terms with higher edit distances. The default is 5.

analyzer
    Sets the analyzer to analyze the suggest text with. Defaults to the search analyzer of the suggest field passed via field.

shard_size
    Sets the maximum number of suggested terms to be retrieved from each individual shard. During the reduce phase, only the top N suggestions are returned based on the size option. Defaults to 5.

text
    Sets the text / query to provide suggestions for.

highlight
    Sets up suggestion highlighting. If not provided, no highlighted field is returned. If provided, it must contain exactly pre_tag and post_tag, which are wrapped around the changed tokens. If multiple tokens in a row are changed, the entire phrase of changed tokens is wrapped rather than each token.

collate
    Checks each suggestion against the specified query to prune suggestions for which no matching docs exist in the index. The collate query for a suggestion is run only on the local shard from which the suggestion has been generated. The query must be specified, and it can be templated. The current suggestion is automatically made available as the {{suggestion}} variable, which should be used in your query. You can still specify your own template params; the suggestion value will be added to the variables you specify. Additionally, you can specify a prune option to control whether all phrase suggestions will be returned: when set to true, the suggestions will have an additional option collate_match, which will be true if matching documents for the phrase were found, and false otherwise. The default value for prune is false.
POST _search
{
  "suggest": {
    "text" : "noble prize",
    "simple_phrase" : {
      "phrase" : {
        "field" : "title.trigram",
        "size" : 1,
        "direct_generator" : [ {
          "field" : "title.trigram",
          "suggest_mode" : "always",
          "min_word_length" : 1
        } ],
        "collate": {
          "query": {
            "source" : {
              "match": {
                "{{field_name}}" : "{{suggestion}}"
              }
            }
          },
          "params": {"field_name" : "title"},
          "prune": true
        }
      }
    }
  }
}
In this request:

- The collate query will be run once for every suggestion.
- The {{suggestion}} variable will be replaced by the text of each suggestion.
- An additional field_name variable is provided in params and is used by the match query.
- Because prune is true, all suggestions will be returned with an extra collate_match option indicating whether the generated phrase matched any document.
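To see how the scoring parameters above interact, here is a sketch against the same test index; the suggestion name relaxed_phrase and the parameter values are illustrative choices, not recommendations. Setting confidence to 0.0 returns candidates even when they score below the input phrase, and max_errors of 2 allows corrections with up to two misspelled terms:

POST test/_search
{
  "suggest": {
    "text": "noble prize",
    "relaxed_phrase": {
      "phrase": {
        "field": "title.trigram",
        "gram_size": 3,
        "confidence": 0.0,
        "max_errors": 2,
        "direct_generator": [ {
          "field": "title.trigram",
          "suggest_mode": "always"
        } ]
      }
    }
  }
}

Loosening both parameters surfaces more candidates at the cost of precision and, as noted above for max_errors, performance.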
Smoothing Models

The phrase suggester supports multiple smoothing models to balance weight between infrequent grams (grams (shingles) that do not exist in the index) and frequent grams (that appear at least once in the index).

stupid_backoff
    A simple backoff model that backs off to lower order n-gram models if the higher order count is 0 and discounts the lower order n-gram model by a constant factor. The default discount is 0.4. Stupid Backoff is the default model.

laplace
    A smoothing model that uses an additive smoothing where a constant (typically 1.0 or smaller) is added to all counts to balance weights. The default alpha is 0.5.

linear_interpolation
    A smoothing model that takes the weighted mean of the unigrams, bigrams and trigrams based on user supplied weights (lambdas). Linear Interpolation doesn't have any default values. All parameters (trigram_lambda, bigram_lambda, unigram_lambda) must be supplied.
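A smoothing model is selected via a smoothing object inside the phrase suggester. The following sketch reuses the test index from above; the alpha value of 0.7 is illustrative. It selects the laplace model with a custom constant:

POST test/_search
{
  "suggest": {
    "text" : "obel prize",
    "simple_phrase" : {
      "phrase" : {
        "field" : "title.trigram",
        "size" : 1,
        "smoothing" : {
          "laplace" : {
            "alpha" : 0.7
          }
        }
      }
    }
  }
}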
Candidate Generators

The phrase suggester uses candidate generators to produce a list of possible terms per term in the given text. A single candidate generator is similar to a term suggester called for each individual term in the text. The output of the generators is subsequently scored in combination with the candidates from the other terms to form suggestion candidates.

Currently only one type of candidate generator is supported, the direct_generator. The Phrase suggest API accepts a list of generators under the key direct_generator; each of the generators in the list is called per term in the original text.
Direct Generators

The direct generators support the following parameters:

field
    The field to fetch the candidate suggestions from. This is a required option that either needs to be set globally or per suggestion.

size
    The maximum corrections to be returned per suggest text token.

suggest_mode
    The suggest mode controls what suggestions are included in the suggestions generated on each shard. All values other than always can be thought of as an optimization to generate fewer suggestions to test on each shard; they are not rechecked when combining the suggestions generated on each shard. Thus missing will generate suggestions for terms on shards that do not contain them even if other shards do contain them. Those should be filtered out using confidence. Three possible values can be specified: missing (only generate suggestions for terms that are not in the shard; this is the default), popular (only suggest terms that occur in more docs on the shard than the original term), and always (suggest any matching suggestions based on terms in the suggest text).

max_edits
    The maximum edit distance candidate suggestions can have in order to be considered as a suggestion. Can only be a value between 1 and 2. Any other value results in a bad request error being thrown. Defaults to 2.

prefix_length
    The number of minimal prefix characters that must match in order to be a candidate suggestion. Defaults to 1. Increasing this number improves spellcheck performance; usually misspellings don't occur at the beginning of terms. (The old name "prefix_len" is deprecated.)

min_word_length
    The minimum length a suggest text term must have in order to be included. Defaults to 4. (The old name "min_word_len" is deprecated.)

max_inspections
    A factor that is multiplied with the shard_size in order to inspect more candidate spell corrections on the shard level. Can improve accuracy at the cost of performance. Defaults to 5.

min_doc_freq
    The minimal threshold in number of documents a suggestion should appear in. This can be specified as an absolute number or as a relative percentage of the number of documents. This can improve quality by only suggesting high frequency terms. Defaults to 0f and is not enabled. If a value higher than 1 is specified, the number cannot be fractional. The shard level document frequencies are used for this option.

max_term_freq
    The maximum threshold in number of documents in which a suggest text token can exist in order to be included. Can be a relative percentage number (e.g. 0.4) or an absolute number to represent document frequencies. If a value higher than 1 is specified, it cannot be fractional. Defaults to 0.01f. This can be used to exclude high frequency terms from being spellchecked; high frequency terms are usually spelled correctly, and excluding them also improves spellcheck performance. The shard level document frequencies are used for this option.

pre_filter
    A filter (analyzer) that is applied to each of the tokens passed to this candidate generator. This filter is applied to the original token before candidates are generated.

post_filter
    A filter (analyzer) that is applied to each of the generated tokens before they are passed to the actual phrase scorer.
The following example shows a phrase suggest call with two generators: the first one uses a field containing ordinary indexed terms, and the second one uses a field whose terms are indexed with a reverse filter (tokens are indexed in reverse order). This is used to overcome the limitation of the direct generators to require a constant prefix in order to provide high-performance suggestions. The pre_filter and post_filter options accept ordinary analyzer names.

POST _search
{
  "suggest": {
    "text" : "obel prize",
    "simple_phrase" : {
      "phrase" : {
        "field" : "title.trigram",
        "size" : 1,
        "direct_generator" : [ {
          "field" : "title.trigram",
          "suggest_mode" : "always"
        }, {
          "field" : "title.reverse",
          "suggest_mode" : "always",
          "pre_filter" : "reverse",
          "post_filter" : "reverse"
        } ]
      }
    }
  }
}
pre_filter and post_filter can also be used to inject synonyms after candidates are generated. For instance, for the query captain usq we might generate the candidate usa for the term usq, which is a synonym for america. This allows us to present captain america to the user if this phrase scores high enough.
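As a minimal sketch of that synonym-injection idea: the index, filter, and analyzer names below (test_synonyms, candidate_synonyms, synonym_injector) are hypothetical and not part of the example index defined above. One could define a synonym analyzer in the index settings and reference it as a post_filter on a direct generator:

PUT test_synonyms
{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "analysis": {
        "filter": {
          "shingle": {
            "type": "shingle",
            "min_shingle_size": 2,
            "max_shingle_size": 3
          },
          "candidate_synonyms": {
            "type": "synonym",
            "synonyms": ["usa, america"]
          }
        },
        "analyzer": {
          "trigram": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["standard", "shingle"]
          },
          "synonym_injector": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["lowercase", "candidate_synonyms"]
          }
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "title": {
          "type": "text",
          "fields": {
            "trigram": { "type": "text", "analyzer": "trigram" }
          }
        }
      }
    }
  }
}

POST test_synonyms/_search
{
  "suggest": {
    "text" : "captain usq",
    "simple_phrase" : {
      "phrase" : {
        "field" : "title.trigram",
        "size" : 1,
        "direct_generator" : [ {
          "field" : "title.trigram",
          "suggest_mode" : "always",
          "post_filter" : "synonym_injector"
        } ]
      }
    }
  }
}

Because post_filter runs on the generated candidate tokens rather than on the stored terms, the synonym expansion influences which candidate phrases can be scored without changing how the field itself is indexed.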