analyzer
The values of analyzed string fields are passed through an analyzer to convert the string into a stream of tokens or terms. For instance, the string "The quick Brown Foxes." may, depending on which analyzer is used, be analyzed to the tokens: quick, brown, fox. These are the actual terms that are indexed for the field, which makes it possible to search efficiently for individual words within big blobs of text.
This analysis process needs to happen not just at index time, but also at query time: the query string needs to be passed through the same (or a similar) analyzer so that the terms that it tries to find are in the same format as those that exist in the index.
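To see which terms a given analyzer produces, the _analyze API can be called directly. A minimal sketch, using the built-in standard analyzer (any other analyzer name could be substituted):

GET _analyze
{
  "analyzer": "standard",
  "text": "The quick Brown Foxes."
}

This returns the tokens [ the, quick, brown, foxes ]; running the same request with the english analyzer would instead return [ quick, brown, fox ].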
Elasticsearch ships with a number of pre-defined analyzers, which can be used without further configuration. It also ships with many character filters, tokenizers, and token filters which can be combined to configure custom analyzers per index.
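As an illustration of how these building blocks combine, a custom analyzer could be declared in the index settings along these lines (a sketch only; the index name another_index and the analyzer name my_custom_analyzer are made up for the example):

PUT another_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": [ "html_strip" ],
          "tokenizer": "standard",
          "filter": [ "lowercase", "asciifolding" ]
        }
      }
    }
  }
}

Here a character filter (html_strip), a tokenizer (standard), and two token filters (lowercase, asciifolding) are chained into one analyzer that can then be referenced by name in field mappings.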
Analyzers can be specified per-query, per-field or per-index. At index time, Elasticsearch will look for an analyzer in this order:
- The analyzer defined in the field mapping.
- An analyzer named default in the index settings (see the example following this list).
- The standard analyzer.
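For instance, an index-wide default can be set by registering an analyzer under the name default in the index settings. A minimal sketch (the index name and the choice of the whitespace analyzer are illustrative):

PUT my_default_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "default": {
          "type": "whitespace"
        }
      }
    }
  }
}

Any text field in this index that does not specify its own analyzer in the mapping would then be analyzed with the whitespace analyzer at index time.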
At query time, there are a few more layers:
- The analyzer defined in a full-text query (see the example following this list).
- The search_analyzer defined in the field mapping.
- The analyzer defined in the field mapping.
- An analyzer named default_search in the index settings.
- An analyzer named default in the index settings.
- The standard analyzer.
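As an example of the first layer, a full-text query can name its own analyzer, overriding whatever the field mapping specifies. A sketch, assuming an index like the my_index mapping shown in the next example:

GET my_index/_search
{
  "query": {
    "match": {
      "text": {
        "query": "a quick brown foxes",
        "analyzer": "english"
      }
    }
  }
}

Here the query string is analyzed with the english analyzer even though the text field itself is indexed with the standard analyzer.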
The easiest way to specify an analyzer for a particular field is to define it in the field mapping, as follows:
PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "text": {
          "type": "text",
          "fields": {
            "english": {
              "type": "text",
              "analyzer": "english"
            }
          }
        }
      }
    }
  }
}

GET my_index/_analyze
{
  "field": "text",
  "text": "The quick Brown Foxes."
}

GET my_index/_analyze
{
  "field": "text.english",
  "text": "The quick Brown Foxes."
}
The first _analyze request targets the text field, which uses the default standard analyzer, and returns the tokens: [ the, quick, brown, foxes ]. The second request targets the text.english field, which uses the english analyzer, and returns the tokens: [ quick, brown, fox ].
search_quote_analyzer
The search_quote_analyzer setting allows you to specify an analyzer for phrases. This is particularly useful when you want to disable stop word removal for phrase queries.
To disable stop word removal for phrase queries, a field requires three analyzer settings:
- An analyzer setting for indexing all terms including stop words
- A search_analyzer setting for non-phrase queries that will remove stop words
- A search_quote_analyzer setting for phrase queries that will not remove stop words
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase"
          ]
        },
        "my_stop_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "english_stop"
          ]
        }
      },
      "filter": {
        "english_stop": {
          "type": "stop",
          "stopwords": "_english_"
        }
      }
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "my_analyzer",
          "search_analyzer": "my_stop_analyzer",
          "search_quote_analyzer": "my_analyzer"
        }
      }
    }
  }
}

PUT my_index/my_type/1
{
  "title": "The Quick Brown Fox"
}

PUT my_index/my_type/2
{
  "title": "A Quick Brown Fox"
}

GET my_index/my_type/_search
{
  "query": {
    "query_string": {
      "query": "\"the quick brown fox\""
    }
  }
}
Since the query is wrapped in quotes it is detected as a phrase query, therefore the search_quote_analyzer kicks in and ensures that stop words are not removed from the query. The my_analyzer analyzer then produces the tokens [ the, quick, brown, fox ], which match one of the documents.
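For contrast, the same words without quotes are treated as a non-phrase query, so the search_analyzer (my_stop_analyzer) is applied and the stop word the is removed. A sketch of such a request against the index defined above:

GET my_index/my_type/_search
{
  "query": {
    "query_string": {
      "query": "the quick brown fox"
    }
  }
}

With the stop word stripped, the remaining tokens [ quick, brown, fox ] occur in both documents, so both would be expected to match.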