Highlighting
editHighlighting
editHighlighters enable you to get highlighted snippets from one or more fields
in your search results so you can show users where the query matches are.
When you request highlights, the response contains an additional highlight
element for each search hit that includes the highlighted fields and the
highlighted fragments.
Highlighters don’t reflect the boolean logic of a query when extracting terms to highlight. Thus, for some complex boolean queries (e.g. nested boolean queries, queries using `minimum_should_match`, etc.), parts of documents may be highlighted that don’t correspond to query matches.
Highlighting requires the actual content of a field. If the field is not stored (the mapping does not set `store` to `true`), the actual `_source` is loaded and the relevant field is extracted from `_source`.
For example, to get highlights for the `content` field in each search hit using the default highlighter, include a `highlight` object in the request body that specifies the `content` field:
```console
GET /_search
{
  "query": {
    "match": { "content": "kimchy" }
  },
  "highlight": {
    "fields": {
      "content": {}
    }
  }
}
```
Elasticsearch supports three highlighters for `text` and `keyword` fields: `unified`, `plain`, and `fvh` (fast vector highlighter). It also provides the `semantic` highlighter for `semantic_text` fields. You can specify the highlighter `type` you want to use for each field or rely on the field type’s default highlighter.
Unified highlighter

The `unified` highlighter uses the Lucene Unified Highlighter. This highlighter breaks the text into sentences and uses the BM25 algorithm to score individual sentences as if they were documents in the corpus. It also supports accurate phrase and multi-term (fuzzy, prefix, regex) highlighting. The `unified` highlighter can combine matches from multiple fields into one result (see `matched_fields`). This is the default highlighter for all `text` and `keyword` fields.
Semantic highlighter

The `semantic` highlighter is specifically designed for use with the `semantic_text` field. It identifies and extracts the most relevant fragments from the field based on semantic similarity between the query and each fragment. By default, `semantic_text` fields use the semantic highlighter.
Plain highlighter

The `plain` highlighter uses the standard Lucene highlighter. It attempts to reflect the query matching logic in terms of understanding word importance and any word positioning criteria in phrase queries.

The `plain` highlighter works best for highlighting simple query matches in a single field. To accurately reflect query logic, it creates a tiny in-memory index and re-runs the original query criteria through Lucene’s query execution planner to get access to low-level match information for the current document. This is repeated for every field and every document that needs to be highlighted. If you want to highlight many fields in many documents with complex queries, we recommend using the `unified` highlighter on `postings` or `term_vector` fields.
Fast vector highlighter

The `fvh` highlighter uses the Lucene Fast Vector highlighter. This highlighter can be used on fields with `term_vector` set to `with_positions_offsets` in the mapping. The fast vector highlighter:

- Can be customized with a `boundary_scanner`.
- Requires setting `term_vector` to `with_positions_offsets`, which increases the size of the index.
- Can combine matches from multiple fields into one result. See `matched_fields`.
- Can assign different weights to matches at different positions, allowing for things like phrase matches being sorted above term matches when highlighting a boosting query that boosts phrase matches over term matches.

The `fvh` highlighter does not support span queries. If you need support for span queries, try an alternative highlighter, such as the `unified` highlighter.
Offsets strategy

To create meaningful search snippets from the terms being queried, the highlighter needs to know the start and end character offsets of each word in the original text. These offsets can be obtained from:

- The postings list. If `index_options` is set to `offsets` in the mapping, the `unified` highlighter uses this information to highlight documents without re-analyzing the text. It re-runs the original query directly on the postings and extracts the matching offsets from the index, limiting the collection to the highlighted documents. This is important if you have large fields because it doesn’t require reanalyzing the text to be highlighted. It also requires less disk space than using `term_vectors`.
- Term vectors. If `term_vector` information is provided by setting `term_vector` to `with_positions_offsets` in the mapping, the `unified` highlighter automatically uses the `term_vector` to highlight the field. It’s fast, especially for large fields (> `1MB`) and for highlighting multi-term queries like `prefix` or `wildcard`, because it can access the dictionary of terms for each document. The `fvh` highlighter always uses term vectors.
- Plain highlighting. This mode is used by the `unified` highlighter when there is no other alternative. It creates a tiny in-memory index and re-runs the original query criteria through Lucene’s query execution planner to get access to low-level match information on the current document. This is repeated for every field and every document that needs highlighting. The `plain` highlighter always uses plain highlighting.

Plain highlighting for large texts may require a substantial amount of time and memory. To protect against this, the maximum number of text characters that will be analyzed is limited to 1000000 by default. This limit can be changed for a particular index with the index setting `index.highlight.max_analyzed_offset`.
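For example, here is a sketch of raising that limit on an index. The setting name is real; the index name and the value `2000000` are illustrative:

```console
PUT /my-index-000001/_settings
{
  "index": {
    "highlight.max_analyzed_offset": 2000000
  }
}
```

Raising the limit trades memory and highlighting latency for coverage of larger fields, so prefer the postings or term vector strategies described above when fields are routinely large.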
Highlighting settings

Highlighting settings can be set on a global level and overridden at the field level. A request that combines several of these settings follows the list.

- `boundary_chars`: A string that contains each boundary character. Defaults to `.,!? \t\n`.
- `boundary_max_scan`: How far to scan for boundary characters. Defaults to `20`.
- `boundary_scanner`: Specifies how to break the highlighted fragments: `chars`, `sentence`, or `word`. Only valid for the `unified` and `fvh` highlighters. Defaults to `sentence` for the `unified` highlighter and `chars` for the `fvh` highlighter.
  - `chars`: Use the characters specified by `boundary_chars` as highlighting boundaries. The `boundary_max_scan` setting controls how far to scan for boundary characters. Only valid for the `fvh` highlighter.
  - `sentence`: Break highlighted fragments at the next sentence boundary, as determined by Java’s BreakIterator. You can specify the locale to use with `boundary_scanner_locale`. When used with the `unified` highlighter, the `sentence` scanner splits sentences bigger than `fragment_size` at the first word boundary next to `fragment_size`. You can set `fragment_size` to 0 to never split any sentence.
  - `word`: Break highlighted fragments at the next word boundary, as determined by Java’s BreakIterator. You can specify the locale to use with `boundary_scanner_locale`.
- `boundary_scanner_locale`: Controls which locale is used to search for sentence and word boundaries. This parameter takes the form of a language tag, e.g. `"en-US"`, `"fr-FR"`, `"ja-JP"`. More info can be found in the Locale Language Tag documentation. The default value is Locale.ROOT.
- `encoder`: Indicates if the snippet should be HTML encoded: `default` (no encoding) or `html` (HTML-escape the snippet text and then insert the highlighting tags).
- `fields`: Specifies the fields to retrieve highlights for. You can use wildcards to specify fields. For example, you could specify `comment_*` to get highlights for all text, match_only_text, and keyword fields that start with `comment_`. Only text, match_only_text, and keyword fields are highlighted when you use wildcards. If you use a custom mapper and want to highlight on a field anyway, you must explicitly specify that field name.
- `fragmenter`: Specifies how text should be broken up in highlight snippets: `simple` or `span`. Only valid for the `plain` highlighter. Defaults to `span`.
  - `simple`: Breaks up text into same-sized fragments.
  - `span`: Breaks up text into same-sized fragments, but tries to avoid breaking up text between highlighted terms. This is helpful when you’re querying for phrases. Default.
- `fragment_offset`: Controls the margin from which you want to start highlighting. Only valid when using the `fvh` highlighter.
- `fragment_size`: The size of the highlighted fragment in characters. Defaults to 100.
- `highlight_query`: Highlight matches for a query other than the search query. This is especially useful if you use a rescore query, because those are not taken into account by highlighting by default. Elasticsearch does not validate that `highlight_query` contains the search query in any way, so it is possible to define it so that legitimate query results are not highlighted. Generally, you should include the search query as part of the `highlight_query`.
- `matched_fields`: Combine matches on multiple fields to highlight a single field. This is most intuitive for multifields that analyze the same string in different ways. Valid for the `unified` and `fvh` highlighters, but the behavior of this option is different for each highlighter.
  For the `unified` highlighter:
  - The `matched_fields` array should not contain the original field that you want to highlight. The original field is automatically added to the `matched_fields`, and there is no way to exclude its matches when highlighting.
  - `matched_fields` and the original field can be indexed with different strategies (with or without `offsets`, with or without `term_vectors`).
  - Only the original field to which the matches are combined is loaded, so only that field benefits from having `store` set to `yes`.
  For the `fvh` highlighter:
  - The `matched_fields` array may or may not contain the original field, depending on your needs. If you want to include the original field’s matches in highlighting, add it to the `matched_fields` array.
  - All `matched_fields` must have `term_vector` set to `with_positions_offsets`.
  - Only the original field to which the matches are combined is loaded, so only that field benefits from having `store` set to `yes`.
- `no_match_size`: The amount of text you want to return from the beginning of the field if there are no matching fragments to highlight. Defaults to 0 (nothing is returned).
- `number_of_fragments`: The maximum number of fragments to return. If the number of fragments is set to 0, no fragments are returned. Instead, the entire field contents are highlighted and returned. This can be handy when you need to highlight short texts such as a title or address, but fragmentation is not required. If `number_of_fragments` is 0, `fragment_size` is ignored. Defaults to 5.
- `order`: Sorts highlighted fragments by score when set to `score`. By default, fragments are output in the order they appear in the field (`order: none`). Setting this option to `score` outputs the most relevant fragments first. Each highlighter applies its own logic to compute relevancy scores. See the document How highlighters work internally for more details on how different highlighters find the best fragments.
- `phrase_limit`: Controls the number of matching phrases in a document that are considered. Prevents the `fvh` highlighter from analyzing too many phrases and consuming too much memory. When using `matched_fields`, `phrase_limit` phrases per matched field are considered. Raising the limit increases query time and consumes more memory. Only supported by the `fvh` highlighter. Defaults to 256.
- `pre_tags`: Use in conjunction with `post_tags` to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in `<em>` and `</em>` tags. Specify as an array of strings.
- `post_tags`: Use in conjunction with `pre_tags` to define the HTML tags to use for the highlighted text. By default, highlighted text is wrapped in `<em>` and `</em>` tags. Specify as an array of strings.
- `require_field_match`: By default, only fields that contain a query match are highlighted. Set `require_field_match` to `false` to highlight all fields. Defaults to `true`.
- `max_analyzed_offset`: By default, the maximum number of characters analyzed for a highlight request is bounded by the value defined in the `index.highlight.max_analyzed_offset` setting, and an error is returned when the number of characters exceeds this limit. If this setting is set to a positive value, highlighting stops at that maximum, the rest of the text is not processed (and thus not highlighted), and no error is returned. If it is specifically set to -1, the value of `index.highlight.max_analyzed_offset` is used instead. For values < -1 or 0, an error is returned. The `max_analyzed_offset` query setting does not override `index.highlight.max_analyzed_offset`, which prevails when it is set to a lower value than the query setting.
- `tags_schema`: Set to `styled` to use the built-in tag schema. The `styled` schema defines `pre_tags` as `<em class="hlt1">` through `<em class="hlt10">` and defines `post_tags` as `</em>`.
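The following request combines several of these per-field settings in one sketch; the field name `comment` and the query text are illustrative:

```console
GET /_search
{
  "query": {
    "match": { "comment": "kimchy" }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "type": "unified",
        "boundary_scanner": "sentence",
        "boundary_scanner_locale": "en-US",
        "fragment_size": 150,
        "number_of_fragments": 3,
        "no_match_size": 150
      }
    }
  }
}
```

Here `order` is set at the global level of the `highlight` object, while the remaining settings are applied per field.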
Highlighting examples

- Override global settings
- Specify a highlight query
- Set highlighter type
- Configure highlighting tags
- Highlight in all fields
- Combine matches on multiple fields
- Explicitly order highlighted fields
- Control highlighted fragments
- Highlight using the postings list
- Specify a fragmenter for the plain highlighter
Override global settings

You can specify highlighter settings globally and selectively override them for individual fields.
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "number_of_fragments": 3,
    "fragment_size": 150,
    "fields": {
      "body": { "pre_tags": ["<em>"], "post_tags": ["</em>"] },
      "blog.title": { "number_of_fragments": 0 },
      "blog.author": { "number_of_fragments": 0 },
      "blog.comment": { "number_of_fragments": 5, "order": "score" }
    }
  }
}
```
Specify a highlight query

You can specify a `highlight_query` to take additional information into account when highlighting. For example, the following query includes both the search query and rescore query in the `highlight_query`. Without the `highlight_query`, highlighting would only take the search query into account.
```console
GET /_search
{
  "query": {
    "match": { "comment": { "query": "foo bar" } }
  },
  "rescore": {
    "window_size": 50,
    "query": {
      "rescore_query": {
        "match_phrase": { "comment": { "query": "foo bar", "slop": 1 } }
      },
      "rescore_query_weight": 10
    }
  },
  "_source": false,
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "fragment_size": 150,
        "number_of_fragments": 3,
        "highlight_query": {
          "bool": {
            "must": { "match": { "comment": { "query": "foo bar" } } },
            "should": {
              "match_phrase": {
                "comment": { "query": "foo bar", "slop": 1, "boost": 10.0 }
              }
            },
            "minimum_should_match": 0
          }
        }
      }
    }
  }
}
```
Set highlighter type

The `type` field allows you to force a specific highlighter type. The allowed values are `unified`, `plain`, and `fvh`. The following is an example that forces the use of the plain highlighter:
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": { "type": "plain" }
    }
  }
}
```
Configure highlighting tags

By default, the highlighting will wrap highlighted text in `<em>` and `</em>`. This can be controlled by setting `pre_tags` and `post_tags`, for example:
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "pre_tags": ["<tag1>"],
    "post_tags": ["</tag1>"],
    "fields": {
      "body": {}
    }
  }
}
```
When using the fast vector highlighter, you can specify additional tags, and their "importance" is ordered.
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "pre_tags": ["<tag1>", "<tag2>"],
    "post_tags": ["</tag1>", "</tag2>"],
    "fields": {
      "body": {}
    }
  }
}
```
You can also use the built-in `styled` tag schema:
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "tags_schema": "styled",
    "fields": {
      "comment": {}
    }
  }
}
```
Highlight in all fields

By default, only fields that contain a query match are highlighted. Set `require_field_match` to `false` to highlight all fields.
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "require_field_match": false,
    "fields": {
      "body": { "pre_tags": ["<em>"], "post_tags": ["</em>"] }
    }
  }
}
```
Combine matches on multiple fields

Supported by the `unified` and `fvh` highlighters.

The unified and fast vector highlighters can combine matches on multiple fields to highlight a single field. This is most intuitive for multifields that analyze the same string in different ways.
In the following examples, `comment` is analyzed by the `standard` analyzer and `comment.english` is analyzed by the `english` analyzer.
```console
PUT index1
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "analyzer": "standard",
        "fields": {
          "english": { "type": "text", "analyzer": "english" }
        }
      }
    }
  }
}
```
```console
PUT index1/_bulk?refresh=true
{ "index": { "_id": "doc1" } }
{ "comment": "run with scissors" }
{ "index": { "_id": "doc2" } }
{ "comment": "running with scissors" }
```
```console
GET index1/_search
{
  "query": {
    "query_string": {
      "query": "running with scissors",
      "fields": ["comment", "comment.english"]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {}
    }
  }
}
```
The above request matches both "run with scissors" and "running with scissors" and would highlight "running" and "scissors" but not "run". If both phrases appear in a large document then "running with scissors" is sorted above "run with scissors" in the fragments list because there are more matches in that fragment.
{ ... "hits" : { "total" : { "value" : 2, "relation" : "eq" }, "max_score": 1.0577903, "hits" : [ { "_index" : "index1", "_id" : "doc2", "_score" : 1.0577903, "_source" : { "comment" : "running with scissors" }, "highlight" : { "comment" : [ "<em>running</em> <em>with</em> <em>scissors</em>" ] } }, { "_index" : "index1", "_id" : "doc1", "_score" : 0.36464313, "_source" : { "comment" : "run with scissors" }, "highlight" : { "comment" : [ "run <em>with</em> <em>scissors</em>" ] } } ] } }
The below request highlights "run" as well as "running" and "scissors", because the `matched_fields` parameter instructs that, for highlighting, we need to combine matches from the `comment.english` field with the matches from the original `comment` field.
```console
GET index1/_search
{
  "query": {
    "query_string": {
      "query": "running with scissors",
      "fields": ["comment", "comment.english"]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "matched_fields": ["comment.english"]
      }
    }
  }
}
```
{ ... "hits" : { "total" : { "value" : 2, "relation" : "eq" }, "max_score": 1.0577903, "hits" : [ { "_index" : "index1", "_id" : "doc2", "_score" : 1.0577903, "_source" : { "comment" : "running with scissors" }, "highlight" : { "comment" : [ "<em>running</em> <em>with</em> <em>scissors</em>" ] } }, { "_index" : "index1", "_id" : "doc1", "_score" : 0.36464313, "_source" : { "comment" : "run with scissors" }, "highlight" : { "comment" : [ "<em>run</em> <em>with</em> <em>scissors</em>" ] } } ] } }
In the following examples, `comment` is analyzed by the `standard` analyzer and `comment.english` is analyzed by the `english` analyzer, and both are indexed with term vectors so they can be used with the `fvh` highlighter.
```console
PUT index2
{
  "mappings": {
    "properties": {
      "comment": {
        "type": "text",
        "analyzer": "standard",
        "term_vector": "with_positions_offsets",
        "fields": {
          "english": {
            "type": "text",
            "analyzer": "english",
            "term_vector": "with_positions_offsets"
          }
        }
      }
    }
  }
}
```
```console
PUT index2/_bulk?refresh=true
{ "index": { "_id": "doc1" } }
{ "comment": "run with scissors" }
{ "index": { "_id": "doc2" } }
{ "comment": "running with scissors" }
```
```console
GET index2/_search
{
  "query": {
    "query_string": {
      "query": "running with scissors",
      "fields": ["comment", "comment.english"]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": { "type": "fvh" }
    }
  }
}
```
The above request matches both "run with scissors" and "running with scissors" and would highlight "running" and "scissors" but not "run". If both phrases appear in a large document then "running with scissors" is sorted above "run with scissors" in the fragments list because there are more matches in that fragment.
{ ... "hits" : { "total" : { "value" : 2, "relation" : "eq" }, "max_score": 1.0577903, "hits" : [ { "_index" : "index2", "_id" : "doc2", "_score" : 1.0577903, "_source" : { "comment" : "running with scissors" }, "highlight" : { "comment" : [ "<em>running</em> <em>with</em> <em>scissors</em>" ] } }, { "_index" : "index2", "_id" : "doc1", "_score" : 0.36464313, "_source" : { "comment" : "run with scissors" }, "highlight" : { "comment" : [ "run <em>with</em> <em>scissors</em>" ] } } ] } }
The below request highlights "run" as well as "running" and "scissors", because the `matched_fields` parameter instructs that, for highlighting, we need to combine matches from the `comment` and `comment.english` fields.
```console
GET index2/_search
{
  "query": {
    "query_string": {
      "query": "running with scissors",
      "fields": ["comment", "comment.english"]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": {
        "type": "fvh",
        "matched_fields": ["comment", "comment.english"]
      }
    }
  }
}
```
{ ... "hits" : { "total" : { "value" : 2, "relation" : "eq" }, "max_score": 1.0577903, "hits" : [ { "_index" : "index2", "_id" : "doc2", "_score" : 1.0577903, "_source" : { "comment" : "running with scissors" }, "highlight" : { "comment" : [ "<em>running</em> <em>with</em> <em>scissors</em>" ] } }, { "_index" : "index2", "_id" : "doc1", "_score" : 0.36464313, "_source" : { "comment" : "run with scissors" }, "highlight" : { "comment" : [ "<em>run</em> <em>with</em> <em>scissors</em>" ] } } ] } }
The below request wouldn’t highlight "run" or "scissor", but shows that it is just fine not to list the field to which the matches are combined (`comment.english`) in the matched fields.
```console
GET index2/_search
{
  "query": {
    "query_string": {
      "query": "running with scissors",
      "fields": ["comment", "comment.english"]
    }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment.english": {
        "type": "fvh",
        "matched_fields": ["comment"]
      }
    }
  }
}
```
{ ... "hits" : { "total" : { "value" : 2, "relation" : "eq" }, "max_score": 1.0577903, "hits" : [ { "_index" : "index2", "_id" : "doc2", "_score" : 1.0577903, "_source" : { "comment" : "running with scissors" }, "highlight" : { "comment.english" : [ "<em>running</em> <em>with</em> <em>scissors</em>" ] } }, { "_index" : "index2", "_id" : "doc1", "_score" : 0.36464313, "_source" : { "comment" : "run with scissors" }, "highlight" : { "comment.english" : [ "run <em>with</em> <em>scissors</em>" ] } } ] } }
There is a small amount of overhead involved with setting `matched_fields` to a non-empty array, so always prefer `"highlight": { "fields": { "comment": {} } }` to `"highlight": { "fields": { "comment": { "matched_fields": ["comment"], "type": "fvh" } } }`.
Technically it is also fine to add fields to `matched_fields` that don’t share the same underlying string as the field to which the matches are combined. The results might not make much sense, and if one of the matches is off the end of the text then the whole query will fail.
Explicitly order highlighted fields

Elasticsearch highlights the fields in the order that they are sent, but per the JSON spec, objects are unordered. If you need to be explicit about the order in which fields are highlighted, specify the `fields` as an array:
```console
GET /_search
{
  "highlight": {
    "fields": [
      { "title": {} },
      { "text": {} }
    ]
  }
}
```
None of the highlighters built into Elasticsearch care about the order in which the fields are highlighted, but a plugin might.
Control highlighted fragments

Each field highlighted can control the size of the highlighted fragment in characters (defaults to `100`) and the maximum number of fragments to return (defaults to `5`). For example:
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": { "fragment_size": 150, "number_of_fragments": 3 }
    }
  }
}
```
On top of this, it is possible to specify that highlighted fragments need to be sorted by score:
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "order": "score",
    "fields": {
      "comment": { "fragment_size": 150, "number_of_fragments": 3 }
    }
  }
}
```
If the `number_of_fragments` value is set to `0`, then no fragments are produced; instead the whole content of the field is returned, and of course it is highlighted. This can be very handy if short texts (like a document title or address) need to be highlighted but no fragmentation is required. Note that `fragment_size` is ignored in this case.
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "fields": {
      "body": {},
      "blog.title": { "number_of_fragments": 0 }
    }
  }
}
```
When using the `fvh` highlighter, you can use the `fragment_offset` parameter to control the margin from which to start highlighting.
In the case where there is no matching fragment to highlight, the default is to not return anything. Instead, we can return a snippet of text from the beginning of the field by setting `no_match_size` (default `0`) to the length of the text that you want returned. The actual length may be shorter or longer than specified, as it tries to break on a word boundary.
```console
GET /_search
{
  "query": {
    "match": { "user.id": "kimchy" }
  },
  "highlight": {
    "fields": {
      "comment": {
        "fragment_size": 150,
        "number_of_fragments": 3,
        "no_match_size": 150
      }
    }
  }
}
```
Highlight using the postings list
Here is an example of setting the comment field in the index mapping to allow for highlighting using the postings:
Python:

resp = client.indices.create(
    index="example",
    mappings={
        "properties": {
            "comment": {"type": "text", "index_options": "offsets"}
        }
    },
)
print(resp)

Ruby:

response = client.indices.create(
  index: 'example',
  body: {
    mappings: {
      properties: {
        comment: { type: 'text', index_options: 'offsets' }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.indices.create({
  index: "example",
  mappings: {
    properties: {
      comment: { type: "text", index_options: "offsets" },
    },
  },
});
console.log(response);

Console:

PUT /example
{
  "mappings": {
    "properties": {
      "comment": { "type": "text", "index_options": "offsets" }
    }
  }
}
Here is an example of setting the comment field to allow for highlighting using term_vectors (this will cause the index to be bigger):
Python:

resp = client.indices.create(
    index="example",
    mappings={
        "properties": {
            "comment": {"type": "text", "term_vector": "with_positions_offsets"}
        }
    },
)
print(resp)

Ruby:

response = client.indices.create(
  index: 'example',
  body: {
    mappings: {
      properties: {
        comment: { type: 'text', term_vector: 'with_positions_offsets' }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.indices.create({
  index: "example",
  mappings: {
    properties: {
      comment: { type: "text", term_vector: "with_positions_offsets" },
    },
  },
});
console.log(response);

Console:

PUT /example
{
  "mappings": {
    "properties": {
      "comment": { "type": "text", "term_vector": "with_positions_offsets" }
    }
  }
}
Specify a fragmenter for the plain highlighter
When using the plain highlighter, you can choose between the simple and span fragmenters:
Python:

resp = client.search(
    index="my-index-000001",
    query={"match_phrase": {"message": "number 1"}},
    highlight={
        "fields": {
            "message": {
                "type": "plain",
                "fragment_size": 15,
                "number_of_fragments": 3,
                "fragmenter": "simple"
            }
        }
    },
)
print(resp)

Ruby:

response = client.search(
  index: 'my-index-000001',
  body: {
    query: { match_phrase: { message: 'number 1' } },
    highlight: {
      fields: {
        message: {
          type: 'plain',
          fragment_size: 15,
          number_of_fragments: 3,
          fragmenter: 'simple'
        }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.search({
  index: "my-index-000001",
  query: { match_phrase: { message: "number 1" } },
  highlight: {
    fields: {
      message: {
        type: "plain",
        fragment_size: 15,
        number_of_fragments: 3,
        fragmenter: "simple",
      },
    },
  },
});
console.log(response);

Console:

GET my-index-000001/_search
{
  "query": {
    "match_phrase": { "message": "number 1" }
  },
  "highlight": {
    "fields": {
      "message": {
        "type": "plain",
        "fragment_size": 15,
        "number_of_fragments": 3,
        "fragmenter": "simple"
      }
    }
  }
}
Response:
{ ... "hits": { "total": { "value": 1, "relation": "eq" }, "max_score": 1.6011951, "hits": [ { "_index": "my-index-000001", "_id": "1", "_score": 1.6011951, "_source": { "message": "some message with the number 1", "context": "bar" }, "highlight": { "message": [ " with the <em>number</em>", " <em>1</em>" ] } } ] } }
Python:

resp = client.search(
    index="my-index-000001",
    query={"match_phrase": {"message": "number 1"}},
    highlight={
        "fields": {
            "message": {
                "type": "plain",
                "fragment_size": 15,
                "number_of_fragments": 3,
                "fragmenter": "span"
            }
        }
    },
)
print(resp)

Ruby:

response = client.search(
  index: 'my-index-000001',
  body: {
    query: { match_phrase: { message: 'number 1' } },
    highlight: {
      fields: {
        message: {
          type: 'plain',
          fragment_size: 15,
          number_of_fragments: 3,
          fragmenter: 'span'
        }
      }
    }
  }
)
puts response

JavaScript:

const response = await client.search({
  index: "my-index-000001",
  query: { match_phrase: { message: "number 1" } },
  highlight: {
    fields: {
      message: {
        type: "plain",
        fragment_size: 15,
        number_of_fragments: 3,
        fragmenter: "span",
      },
    },
  },
});
console.log(response);

Console:

GET my-index-000001/_search
{
  "query": {
    "match_phrase": { "message": "number 1" }
  },
  "highlight": {
    "fields": {
      "message": {
        "type": "plain",
        "fragment_size": 15,
        "number_of_fragments": 3,
        "fragmenter": "span"
      }
    }
  }
}
Response:
{ ... "hits": { "total": { "value": 1, "relation": "eq" }, "max_score": 1.6011951, "hits": [ { "_index": "my-index-000001", "_id": "1", "_score": 1.6011951, "_source": { "message": "some message with the number 1", "context": "bar" }, "highlight": { "message": [ " with the <em>number</em> <em>1</em>" ] } } ] } }
If the number_of_fragments option is set to 0, the NullFragmenter is used, which does not fragment the text at all. This is useful for highlighting the entire contents of a document or field.
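As a concrete illustration, here is a minimal Python-client sketch in the style of the examples above, reusing my-index-000001 and the message field:

resp = client.search(
    index="my-index-000001",
    query={"match_phrase": {"message": "number 1"}},
    highlight={
        "fields": {
            # number_of_fragments: 0 selects the NullFragmenter, so the
            # whole field content is returned as one highlighted string.
            "message": {"type": "plain", "number_of_fragments": 0}
        }
    },
)
print(resp)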
How highlighters work internally
Given a query and a text (the content of a document field), the goal of a highlighter is to find the best text fragments for the query, and highlight the query terms in the found fragments. For this, a highlighter needs to address several questions:
- How to break a text into fragments?
- How to find the best fragments among all fragments?
- How to highlight the query terms in a fragment?
How to break a text into fragments?
Relevant settings: fragment_size, fragmenter, type of highlighter, boundary_chars, boundary_max_scan, boundary_scanner, boundary_scanner_locale.
The plain highlighter begins by analyzing the text with the given analyzer and creating a token stream from it. It uses a very simple algorithm to break the token stream into fragments: it loops through the terms in the token stream, and every time the current term's end_offset exceeds fragment_size multiplied by the number of fragments created so far, a new fragment is created. A little more computation is done by the span fragmenter to avoid breaking up text between highlighted terms. But overall, since the breaking is done only by fragment_size, some fragments can be quite odd, e.g. beginning with a punctuation mark.
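To make that rule concrete, here is a minimal Python sketch of the simple fragmenter's loop. This is an illustration of the algorithm as described above, not Lucene's actual implementation, and it assumes the analyzer's token stream is given as (term, start_offset, end_offset) tuples:

def simple_fragments(tokens, fragment_size=100):
    """Break a token stream into fragments by offset alone."""
    fragments = []  # completed fragments, each a list of tokens
    current = []    # fragment currently being built
    for term, start, end in tokens:
        # Start a new fragment once the current term's end_offset passes
        # fragment_size times the number of fragments created so far
        # (counting the one being built).
        if current and end > fragment_size * (len(fragments) + 1):
            fragments.append(current)
            current = []
        current.append((term, start, end))
    if current:
        fragments.append(current)
    return fragments

Because the cut points depend only on offsets, a fragment boundary can land anywhere in the original text, which is exactly why some fragments begin with punctuation.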
The unified and fvh highlighters do a better job of breaking up a text into fragments by utilizing Java's BreakIterator. This ensures that a fragment is a valid sentence as long as fragment_size allows for it.
How to find the best fragments?
Relevant settings: number_of_fragments.
To find the best, most relevant fragments, a highlighter needs to score each fragment with respect to the given query. The goal is to score only those terms that participated in generating the hit on the document. For some complex queries, this is still a work in progress.
The plain highlighter creates an in-memory index from the current token stream and re-runs the original query criteria through Lucene's query execution planner to get access to low-level match information for the current text. For more complex queries, the original query may be converted to a span query, as span queries can handle phrases more accurately. The obtained low-level match information is then used to score each individual fragment. The scoring method of the plain highlighter is quite simple: each fragment is scored by the number of unique query terms found in it. The score of an individual term is equal to its boost, which is 1 by default. Thus, by default, a fragment that contains one unique query term gets a score of 1, a fragment that contains two unique query terms gets a score of 2, and so on. The fragments are then sorted by their scores, so the highest-scoring fragments are output first.
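As a rough Python illustration of that scoring rule (again a sketch rather than Lucene's code; query_boosts is an assumed mapping from query term to boost, and fragment_tokens uses the same tuples as the fragmenting sketch above):

def plain_fragment_score(fragment_tokens, query_boosts):
    # A fragment is scored by the unique query terms it contains;
    # each unique term contributes its boost (1 by default).
    unique_terms = {term for term, _, _ in fragment_tokens if term in query_boosts}
    return sum(query_boosts[t] for t in unique_terms)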
The fvh highlighter doesn't need to analyze the text and build an in-memory index, as it uses pre-indexed document term vectors and finds among them the terms that correspond to the query. It scores each fragment by the number of query terms found in it. Similarly to the plain highlighter, the score of an individual term is equal to its boost value. In contrast to the plain highlighter, all query terms are counted, not only unique terms.
The unified highlighter can use pre-indexed term vectors or pre-indexed term offsets, if they are available. Otherwise, similar to the plain highlighter, it has to create an in-memory index from the text. The unified highlighter uses the BM25 scoring model to score fragments.
How to highlight the query terms in a fragment?
Relevant settings: pre_tags, post_tags.
The goal is to highlight only those terms that participated in generating the hit on the document. For some complex boolean queries, this is still a work in progress, as highlighters don't reflect the boolean logic of a query and only extract leaf queries (term, phrase, prefix, etc.).
Given the token stream and the original text, the plain highlighter recomposes the original text to highlight only those terms from the token stream that are contained in the low-level match information structure from the previous step.
The fvh and unified highlighters use intermediate data structures to represent fragments in some raw form, and then populate them with the actual text. A highlighter uses pre_tags and post_tags to encode the highlighted terms.
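For example, a minimal Python-client sketch in the style of the earlier examples, wrapping matches in custom tags instead of the default <em> and </em>:

resp = client.search(
    query={"match": {"user.id": "kimchy"}},
    highlight={
        "pre_tags": ["<mark>"],    # tag placed before each highlighted term
        "post_tags": ["</mark>"],  # tag placed after each highlighted term
        "fields": {"comment": {}},
    },
)
print(resp)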
An example of the work of the unified highlighter
Let's look in more detail at how the unified highlighter works. First, we create an index with a text field content, which will be indexed with the english analyzer, without offsets or term vectors:
PUT test_index
{
  "mappings": {
    "properties": {
      "content": { "type": "text", "analyzer": "english" }
    }
  }
}
We put the following document into the index:
PUT test_index/_doc/doc1
{
  "content": "For you I'm only a fox like a hundred thousand other foxes. But if you tame me, we'll need each other. You'll be the only boy in the world for me. I'll be the only fox in the world for you."
}
Then we run the following query with a highlight request:
GET test_index/_search
{
  "query": {
    "match_phrase": { "content": "only fox" }
  },
  "highlight": {
    "type": "unified",
    "number_of_fragments": 3,
    "fields": { "content": {} }
  }
}
After doc1 is found as a hit for this query, the hit will be passed to the unified highlighter for highlighting the field content of the document. Since content was indexed with neither offsets nor term vectors, its raw field value will be analyzed, and an in-memory index will be built from the terms that match the query:
{"token":"onli","start_offset":12,"end_offset":16,"position":3}, {"token":"fox","start_offset":19,"end_offset":22,"position":5}, {"token":"fox","start_offset":53,"end_offset":58,"position":11}, {"token":"onli","start_offset":117,"end_offset":121,"position":24}, {"token":"onli","start_offset":159,"end_offset":163,"position":34}, {"token":"fox","start_offset":164,"end_offset":167,"position":35}
Our complex phrase query will be converted to the span query spanNear([text:onli, text:fox], 0, true), meaning that we are looking for the terms "onli" and "fox" within 0 distance from each other, and in the given order. The span query will then be run against the in-memory index created before, to find the following match:
{"term":"onli", "start_offset":159, "end_offset":163}, {"term":"fox", "start_offset":164, "end_offset":167}
In our example, we got a single match, but there could be several matches. Given the matches, the unified highlighter breaks the text of the field into so-called "passages". Each passage must contain at least one match. Using Java's BreakIterator, the unified highlighter ensures that each passage represents a full sentence, as long as it doesn't exceed fragment_size. For our example, we got a single passage with the following properties (showing only a subset of them here):
Passage:
    startOffset: 147
    endOffset: 189
    score: 3.7158387
    matchStarts: [159, 164]
    matchEnds: [163, 167]
    numMatches: 2
Notice that the passage has a score, calculated using the BM25 scoring formula adapted for passages. Scores allow us to choose the best-scoring passages when more passages are available than the number_of_fragments requested by the user. Scores also let us sort passages with "order": "score" if requested by the user.
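For reference, a hedged sketch of the classic BM25 formula that this passage score adapts (Lucene's passage scorer tunes the details, so treat this as the general shape rather than the exact implementation). Here $f(t, p)$ is the frequency of term $t$ in passage $p$, $|p|$ is the passage length, $\mathrm{avgdl}$ is the average length, and $k_1$, $b$ are the usual free parameters:

$$\mathrm{score}(p, q) = \sum_{t \in q} \mathrm{IDF}(t) \cdot \frac{f(t, p)\,(k_1 + 1)}{f(t, p) + k_1 \left(1 - b + b \cdot \frac{|p|}{\mathrm{avgdl}}\right)}$$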
As the final step, the unified highlighter will extract from the field’s text a string corresponding to each passage:
"I'll be the only fox in the world for you."
and will format all matches in this string with the tags <em> and </em>, using the passage's matchStarts and matchEnds information:
I'll be the <em>only</em> <em>fox</em> in the world for you.
Such formatted strings are the final result of the highlighter, returned to the user.