WARNING: Version 2.0 of Elasticsearch has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Explain API
The explain API computes a score explanation for a query and a specific document. This can give useful feedback on whether or not a document matches a specific query, and why.
The index and type parameters expect a single index and a single type respectively.
Usage
Full query example:
curl -XGET 'localhost:9200/twitter/tweet/1/_explain' -d '{
  "query" : {
    "term" : { "message" : "search" }
  }
}'
This will yield the following result:
{ "matches" : true, "explanation" : { "value" : 0.15342641, "description" : "fieldWeight(message:search in 0), product of:", "details" : [ { "value" : 1.0, "description" : "tf(termFreq(message:search)=1)" }, { "value" : 0.30685282, "description" : "idf(docFreq=1, maxDocs=1)" }, { "value" : 0.5, "description" : "fieldNorm(field=message, doc=0)" } ] } }
There is also a simpler way of specifying the query via the q parameter. The specified q parameter value is then parsed as if the query_string query was used. Example usage of the q parameter in the explain API:
curl -XGET 'localhost:9200/twitter/tweet/1/_explain?q=message:search'
This will yield the same result as the previous request.
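Because q is parsed as a query_string query, the request above is equivalent to spelling the query out in the request body. A sketch of that equivalent form, assuming the same index and document:
curl -XGET 'localhost:9200/twitter/tweet/1/_explain' -d '{
  "query" : {
    "query_string" : { "query" : "message:search" }
  }
}'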
All parameters:
_source | Set to true to retrieve the _source of the document explained.
fields | Controls which stored fields are returned as part of the explained document.
routing | Controls the routing in the case routing was used during indexing.
parent | Same effect as setting the routing parameter.
preference | Controls on which shard the explain is executed.
source | Allows the body of the request to be passed in the query string of the URL.
q | The query string (maps to the query_string query).
df | The default field to use when no field prefix is defined within the query. Defaults to the _all field.
analyzer | The analyzer name to be used when analyzing the query string. Defaults to the analyzer of the _all field.
analyze_wildcard | Should wildcard and prefix queries be analyzed or not. Defaults to false.
lowercase_expanded_terms | Should terms be automatically lowercased or not. Defaults to true.
lenient | If set to true, format-based failures (like providing text to a numeric field) are ignored. Defaults to false.
default_operator | The default operator to be used, either AND or OR. Defaults to OR.
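Several of these parameters can be combined in a single request. The sketch below, again assuming the twitter index from the Usage section, parses the q value as a query_string query against the message field (df), requires all terms to match (default_operator=AND), and returns the document's _source alongside the explanation. Whether matches comes back true depends on the document's actual content; the point is how the parameters combine:
curl -XGET 'localhost:9200/twitter/tweet/1/_explain?q=search+engine&df=message&default_operator=AND&_source=true'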