Tune for search speed
Give memory to the filesystem cache
Elasticsearch relies heavily on the filesystem cache to make search fast. In general, you should make sure that at least half the available memory goes to the filesystem cache so that Elasticsearch can keep hot regions of the index in physical memory.
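In practice this means capping the JVM heap well below the machine's total RAM. As a sketch, on a machine with 64 GB of RAM the heap could be pinned in jvm.options (the sizes here are illustrative, not a value prescribed by this page):

# jvm.options (illustrative sizes for a 64 GB machine): keep the heap
# at roughly half of RAM or less, leaving the remainder to the
# operating system's filesystem cache
-Xms26g
-Xmx26g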
Use faster hardware
If your search is I/O-bound, you should investigate giving more memory to the filesystem cache (see above) or buying faster drives. In particular, SSD drives are known to perform better than spinning disks. Always use local storage; remote filesystems such as NFS or SMB should be avoided. Also beware of virtualized storage such as Amazon’s Elastic Block Storage (EBS). Virtualized storage works very well with Elasticsearch, and it is appealing since it is so fast and simple to set up, but it is also unfortunately inherently slower on an ongoing basis than dedicated local storage. If you put an index on EBS, be sure to use provisioned IOPS, otherwise operations could quickly be throttled.
If your search is CPU-bound, you should investigate buying faster CPUs.
Document modeling
Documents should be modeled so that search-time operations are as cheap as possible. In particular, joins should be avoided: nested can make queries several times slower, and parent-child relations can make queries hundreds of times slower. So if the same questions can be answered without joins by denormalizing documents, significant speedups can be expected.
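As a hypothetical sketch of denormalization (the index and field names are invented for illustration), instead of modeling employees as children of a company document and joining at search time, each employee document can carry copies of the company fields it needs:

PUT employees/_doc/1
{
  "name": "Jane Doe",
  "company_name": "Acme",
  "company_country": "US"
}

Queries that filter employees on company attributes then become plain term or match queries against a single document, with no join involved.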
Search as few fields as possible
The more fields a query_string or multi_match query targets, the slower it is. A common technique to improve search speed over multiple fields is to copy their values into a single field at index time, and then use this field at search time. This can be automated with the copy_to directive of mappings without having to change the source of documents. Here is an example of an index containing movies that optimizes queries that search over both the name and the plot of the movie by indexing both values into the name_and_plot field.
PUT movies
{
  "mappings": {
    "_doc": {
      "properties": {
        "name_and_plot": {
          "type": "text"
        },
        "name": {
          "type": "text",
          "copy_to": "name_and_plot"
        },
        "plot": {
          "type": "text",
          "copy_to": "name_and_plot"
        }
      }
    }
  }
}
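At search time, queries can then target the single combined field rather than both originals. A minimal sketch (the query text is invented for illustration):

GET movies/_search
{
  "query": {
    "match": {
      "name_and_plot": "space opera"
    }
  }
}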
Pre-index data
You should leverage patterns in your queries to optimize the way data is indexed. For instance, if all your documents have a price field and most queries run range aggregations on a fixed list of ranges, you could make this aggregation faster by pre-indexing the ranges into the index and using a terms aggregation.

For instance, if documents look like:
PUT index/_doc/1
{
  "designation": "spoon",
  "price": 13
}
and search requests look like:
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "range": {
        "field": "price",
        "ranges": [
          { "to": 10 },
          { "from": 10, "to": 100 },
          { "from": 100 }
        ]
      }
    }
  }
}
Then documents could be enriched with a price_range field at index time, which should be mapped as a keyword:
PUT index
{
  "mappings": {
    "_doc": {
      "properties": {
        "price_range": {
          "type": "keyword"
        }
      }
    }
  }
}

PUT index/_doc/1
{
  "designation": "spoon",
  "price": 13,
  "price_range": "10-100"
}
And then search requests could aggregate this new field rather than running a range aggregation on the price field.
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "terms": {
        "field": "price_range"
      }
    }
  }
}
Consider mapping identifiers as keyword
The fact that some data is numeric does not mean it should always be mapped as a numeric field. The way that Elasticsearch indexes numbers optimizes for range queries, while keyword fields are better at term queries. Typically, fields storing identifiers such as an ISBN or any number identifying a record from another database are rarely used in range queries or aggregations. This is why they might benefit from being mapped as keyword rather than as integer or long.
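For instance, a hypothetical catalog index (the index and field names are invented for illustration) could map its ISBN as keyword even though the raw values are digits:

PUT books
{
  "mappings": {
    "_doc": {
      "properties": {
        "isbn": {
          "type": "keyword"
        }
      }
    }
  }
}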
Avoid scripts
In general, scripts should be avoided. If they are absolutely needed, you should prefer the painless and expressions engines.
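As a hypothetical illustration (the field and factor are invented), a sort that runs a Painless script on every hit, such as the one below, is a good candidate for pre-computing the value into its own field at index time, following the pre-index pattern above:

GET index/_search
{
  "sort": {
    "_script": {
      "type": "number",
      "order": "asc",
      "script": {
        "lang": "painless",
        "source": "doc['price'].value * 0.9"
      }
    }
  }
}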
Search rounded dates
Queries on date fields that use now are typically not cacheable since the range that is being matched changes all the time. However, switching to a rounded date is often acceptable in terms of user experience, and has the benefit of making better use of the query cache.

For instance, the query below:
PUT index/_doc/1
{
  "my_date": "2016-05-11T16:30:55.328Z"
}

GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h",
            "lte": "now"
          }
        }
      }
    }
  }
}
could be replaced with the following query:
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h/m",
            "lte": "now/m"
          }
        }
      }
    }
  }
}
In that case we rounded to the minute, so if the current time is 16:31:29, the range query will match everything whose value of the my_date field is between 15:31:00 and 16:31:59. And if several users run a query that contains this range in the same minute, the query cache could help speed things up a bit. The longer the interval that is used for rounding, the more the query cache can help, but beware that too aggressive rounding might also hurt user experience.
It might be tempting to split ranges into a large cacheable part and smaller non-cacheable parts in order to leverage the query cache, as shown below:
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            {
              "range": {
                "my_date": {
                  "gte": "now-1h",
                  "lte": "now-1h/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gt": "now-1h/m",
                  "lt": "now/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gte": "now/m",
                  "lte": "now"
                }
              }
            }
          ]
        }
      }
    }
  }
}
However, such a practice might make the query run slower in some cases, since the overhead introduced by the bool query may defeat the savings from better leveraging the query cache.
Force-merge read-only indices
Indices that are read-only would benefit from being merged down to a single segment. This is typically the case with time-based indices: only the index for the current time frame is getting new documents, while older indices are read-only.

Don’t force-merge indices that are still being written to; leave merging to the background merge process.
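A minimal sketch, assuming a read-only daily index named logs-2018-12-01 (the index name is invented for illustration):

POST logs-2018-12-01/_forcemerge?max_num_segments=1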
Warm up global ordinals
Global ordinals are a data structure that is used to run terms aggregations on keyword fields. They are loaded lazily in memory because Elasticsearch does not know which fields will be used in terms aggregations and which fields won't. You can tell Elasticsearch to load global ordinals eagerly at refresh time by configuring mappings as described below:
PUT index
{
  "mappings": {
    "_doc": {
      "properties": {
        "foo": {
          "type": "keyword",
          "eager_global_ordinals": true
        }
      }
    }
  }
}
Warm up the filesystem cache
If the machine running Elasticsearch is restarted, the filesystem cache will be empty, so it will take some time before the operating system loads hot regions of the index into memory so that search operations are fast. You can explicitly tell the operating system which files should be loaded into memory eagerly, depending on the file extension, using the index.store.preload setting.
Loading data into the filesystem cache eagerly on too many indices or too many files will make search slower if the filesystem cache is not large enough to hold all the data. Use with caution.
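With that caveat in mind, here is a minimal sketch that preloads the norms (nvd) and doc values (dvd) files at index creation time:

PUT index
{
  "settings": {
    "index.store.preload": ["nvd", "dvd"]
  }
}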
Use index sorting to speed up conjunctions
Index sorting can be useful in order to make conjunctions faster at the cost of slightly slower indexing. Read more about it in the index sorting documentation.
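As a brief sketch (the field name is invented for illustration), index sorting is configured when the index is created:

PUT index
{
  "settings": {
    "index.sort.field": "username",
    "index.sort.order": "asc"
  },
  "mappings": {
    "_doc": {
      "properties": {
        "username": {
          "type": "keyword"
        }
      }
    }
  }
}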
Use preference to optimize cache utilization
There are multiple caches that can help with search performance, such as the filesystem cache, the request cache, or the query cache. Yet all these caches are maintained at the node level, meaning that if you run the same request twice in a row, have 1 replica or more, and use round-robin, the default routing algorithm, then those two requests will go to different shard copies, preventing node-level caches from helping.

Since it is common for users of a search application to run similar requests one after another, for instance in order to analyze a narrower subset of the index, using a preference value that identifies the current user or session could help optimize usage of the caches.
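For example, a session identifier (the parameter value and field are invented for illustration) can be passed as the preference parameter so that consecutive requests from the same session are routed to the same shard copies:

GET index/_search?preference=user_12345
{
  "query": {
    "match": {
      "my_field": "some query text"
    }
  }
}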
Replicas might help with throughput, but not always
In addition to improving resiliency, replicas can help improve throughput. For instance, if you have a single-shard index and three nodes, you will need to set the number of replicas to 2 in order to have 3 copies of your shard in total so that all nodes are utilized.
Now imagine that you have an index with two shards and two nodes. In one case, the number of replicas is 0, meaning that each node holds a single shard. In the second case, the number of replicas is 1, meaning that each node holds two shards. Which setup is going to perform best in terms of search performance? Usually, the setup that has fewer shards per node in total will perform better. The reason is that it gives a greater share of the available filesystem cache to each shard, and the filesystem cache is probably Elasticsearch’s number-one performance factor. At the same time, beware that a setup without replicas is subject to failure in case of a single node failure, so there is a trade-off between throughput and availability.
So what is the right number of replicas? If you have a cluster that has num_nodes nodes, num_primaries primary shards in total, and if you want to be able to cope with max_failures node failures at once at most, then the right number of replicas for you is max(max_failures, ceil(num_nodes / num_primaries) - 1).
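A quick worked example: with num_nodes = 3, num_primaries = 1, and a tolerance of max_failures = 1, the formula gives max(1, ceil(3 / 1) - 1) = max(1, 2) = 2 replicas, which matches the single-shard, three-node scenario above.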
Turn on adaptive replica selection
When multiple copies of data are present, Elasticsearch can use a set of criteria called adaptive replica selection to select the best copy of the data based on the response time, service time, and queue size of the node containing each copy of the shard. This can improve query throughput and reduce latency for search-heavy applications.
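Adaptive replica selection can be enabled dynamically through the cluster settings API:

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.use_adaptive_replica_selection": true
  }
}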