Size your shards
Each index in Elasticsearch is divided into one or more shards, each of which may be replicated across multiple nodes to protect against hardware failures. If you are using data streams, then each data stream is backed by a sequence of indices. There is a limit to the amount of data you can store on a single node, so you can increase the capacity of your cluster by adding nodes and increasing the number of indices and shards to match. However, each index and shard has some overhead, and if you divide your data across too many shards then the overhead can become overwhelming. A cluster with too many indices or shards is said to suffer from oversharding. An oversharded cluster will be less efficient at responding to searches and in extreme cases it may even become unstable.
Create a sharding strategy
The best way to prevent oversharding and other shard-related issues is to create a sharding strategy. A sharding strategy helps you determine and maintain the optimal number of shards for your cluster while limiting the size of those shards.
Unfortunately, there is no one-size-fits-all sharding strategy. A strategy that works in one environment may not scale in another. A good sharding strategy must account for your infrastructure, use case, and performance expectations.
The best way to create a sharding strategy is to benchmark your production data on production hardware using the same queries and indexing loads you’d see in production. For our recommended methodology, watch the quantitative cluster sizing video. As you test different shard configurations, use Kibana’s Elasticsearch monitoring tools to track your cluster’s stability and performance.
The performance of an Elasticsearch node is often limited by the performance of the underlying storage. Review our recommendations for optimizing your storage for indexing and search.
The following sections provide some reminders and guidelines you should consider when designing your sharding strategy. If your cluster is already oversharded, see Reduce a cluster’s shard count.
Sizing considerations
Keep the following things in mind when building your sharding strategy.
Searches run on a single thread per shard
Most searches hit multiple shards. Each shard runs the search on a single CPU thread. While a shard can run multiple concurrent searches, searches across a large number of shards can deplete a node’s search thread pool. This can result in low throughput and slow search speeds.
Each index, shard, segment and field has overhead
Every index and every shard requires some memory and CPU resources. In most cases, a small set of large shards uses fewer resources than many small shards.
Segments play a big role in a shard’s resource usage. Most shards contain several segments, which store its index data. Elasticsearch keeps some segment metadata in heap memory so it can be quickly retrieved for searches. As a shard grows, its segments are merged into fewer, larger segments. This decreases the number of segments, which means less metadata is kept in heap memory.
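If you want to see how many segments a shard currently has and how large they are, the cat segments API reports this per shard. The following is a minimal sketch in the same Python client style used by the examples later on this page; the index name and the column list are assumptions you should adjust for your cluster.

# Sketch: list the segments of each shard of a (hypothetical) index, largest first.
resp = client.cat.segments(
    index="my-index-000001",                      # hypothetical index name
    v=True,
    h="index,shard,prirep,segment,docs.count,size",
    s="size:desc",
    bytes="mb",
)
print(resp)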
Every mapped field also carries some overhead in terms of memory usage and disk space. By default Elasticsearch will automatically create a mapping for every field in every document it indexes, but you can switch off this behaviour to take control of your mappings.
Moreover every segment requires a small amount of heap memory for each mapped field. This per-segment-per-field heap overhead includes a copy of the field name, encoded using ISO-8859-1 if applicable or UTF-16 otherwise. Usually this is not noticeable, but you may need to account for this overhead if your shards have high segment counts and the corresponding mappings contain high field counts and/or very long field names.
Elasticsearch automatically balances shards within a data tier
A cluster’s nodes are grouped into data tiers. Within each tier, Elasticsearch attempts to spread an index’s shards across as many nodes as possible. When you add a new node or a node fails, Elasticsearch automatically rebalances the index’s shards across the tier’s remaining nodes.
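To see how shards are currently spread across your nodes, the cat allocation API gives a per-node shard count and disk usage. A minimal sketch in the same Python client style; the column list is an assumption, not part of the original example.

# Sketch: show how many shards each node holds and how much disk they use.
resp = client.cat.allocation(
    v=True,
    h="node,shards,disk.indices,disk.used,disk.avail",
)
print(resp)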
Best practices
Where applicable, use the following best practices as starting points for your sharding strategy.
Delete indices, not documents
Deleted documents aren’t immediately removed from Elasticsearch’s file system. Instead, Elasticsearch marks the document as deleted on each related shard. The marked document will continue to use resources until it’s removed during a periodic segment merge.
When possible, delete entire indices instead. Elasticsearch can immediately remove deleted indices directly from the file system and free up resources.
Use data streams and ILM for time series data
Data streams let you store time series data across multiple, time-based backing indices. You can use index lifecycle management (ILM) to automatically manage these backing indices.
One advantage of this setup is automatic rollover, which creates a new write index when the current one meets a defined max_primary_shard_size, max_age, max_docs, or max_size threshold. When an index is no longer needed, you can use ILM to automatically delete it and free up resources.
ILM also makes it easy to change your sharding strategy over time:
- Want to decrease the shard count for new indices? Change the index.number_of_shards setting in the data stream’s matching index template, as shown in the sketch below.
- Want larger shards or fewer backing indices? Increase your ILM policy’s rollover threshold.
- Need indices that span shorter intervals? Offset the increased shard count by deleting older indices sooner. You can do this by lowering the min_age threshold for your policy’s delete phase.
Every new backing index is an opportunity to further tune your strategy.
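As a sketch of the first option, the shard count for future backing indices of a data stream can be lowered by updating the matching composable index template. The template name, pattern, and priority below are hypothetical; in practice you would retrieve the existing template first and re-submit it with only the settings changed, since putting a template replaces the previous definition.

# Sketch: set the number of primary shards for future backing indices of a data stream.
resp = client.indices.put_index_template(
    name="my-data-stream-template",            # hypothetical template name
    index_patterns=["my-data-stream*"],        # hypothetical pattern
    data_stream={},
    priority=500,
    template={"settings": {"index.number_of_shards": 1}},
)
print(resp)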
Aim for shards of up to 200M documents, or with sizes between 10GB and 50GB
There is some overhead associated with each shard, both in terms of cluster management and search performance. Searching a thousand 50MB shards will be substantially more expensive than searching a single 50GB shard containing the same data. However, very large shards can also cause slower searches and will take longer to recover after a failure.
There is no hard limit on the physical size of a shard, and each shard can in theory contain up to just over two billion documents. However, experience shows that shards between 10GB and 50GB typically work well for many use cases, as long as the per-shard document count is kept below 200 million.
You may be able to use larger shards depending on your network and use case, and smaller shards may be appropriate for Enterprise Search and similar use cases.
If you use ILM, set the rollover action's max_primary_shard_size threshold to 50gb to avoid shards larger than 50GB, and its min_primary_shard_size threshold to 10gb to avoid shards smaller than 10GB.
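As a sketch of what such a policy could look like, the example below rolls over on primary shard size and deletes old indices. The policy name and the retention period are assumptions, not part of the original example.

# Sketch: an ILM policy that rolls over on primary shard size and deletes old indices.
resp = client.ilm.put_lifecycle(
    name="my-timeseries-policy",                   # hypothetical policy name
    policy={
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {
                        "max_primary_shard_size": "50gb",  # avoid shards larger than 50GB
                        "min_primary_shard_size": "10gb",  # don't roll over until shards reach at least 10GB
                    }
                }
            },
            "delete": {
                "min_age": "30d",                  # hypothetical retention period
                "actions": {"delete": {}},
            },
        }
    },
)
print(resp)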
To see the current size of your shards, use the cat shards API.
resp = client.cat.shards(v=True, h="index,prirep,shard,store", s="prirep,store", bytes="gb")
print(resp)

response = client.cat.shards(v: true, h: 'index,prirep,shard,store', s: 'prirep,store', bytes: 'gb')
puts response

const response = await client.cat.shards({ v: "true", h: "index,prirep,shard,store", s: "prirep,store", bytes: "gb" });
console.log(response);

GET _cat/shards?v=true&h=index,prirep,shard,store&s=prirep,store&bytes=gb
The pri.store.size value shows the combined size of all primary shards for the index.

index                                 prirep shard store
.ds-my-data-stream-2099.05.06-000001  p      0     50gb
...
If an index’s shards are experiencing degraded performance from surpassing the recommended 50GB size, you may consider fixing the shards' sizing. An index’s number of primary shards is fixed when the index is created, so the data must be copied into a new index with corrected settings. This requires first ensuring sufficient disk space to copy the data. Afterwards, you can copy the index’s data with corrected settings via one of the following options:
- running Split Index to increase the number of primary shards (see the sketch below)
- creating a destination index with corrected settings and then running Reindex

Note that performing a Restore Snapshot and/or Clone Index would be insufficient to resolve the shards' sizing, because both preserve the source index’s primary shard count.

Once a source index’s data is copied into its destination index, the source index can be removed. You may then consider using Create Alias to point the source index’s name at the destination index for continuity.
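For the Split Index option, a minimal sketch could look like the following. The index and target names are hypothetical, the source must be write-blocked before splitting, and the target’s shard count must be a multiple of the source’s.

# Sketch: block writes on the source index, then split it into more primary shards.
client.indices.put_settings(
    index="my-index-000001",                   # hypothetical source index
    settings={"index.blocks.write": True},     # split requires a write-blocked source
)
resp = client.indices.split(
    index="my-index-000001",
    target="my-index-000001-split",            # hypothetical destination index
    settings={"index.number_of_shards": 6},    # must be a multiple of the source's shard count
)
print(resp)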
Master-eligible nodes should have at least 1GB of heap per 3000 indices
The number of indices a master node can manage is proportional to its heap size. The exact amount of heap memory needed for each index depends on various factors such as the size of the mapping and the number of shards per index.
As a general rule of thumb, you should have fewer than 3000 indices per GB of heap on master nodes. For example, if your cluster has dedicated master nodes with 4GB of heap each then you should have fewer than 12000 indices. If your master nodes are not dedicated master nodes then the same sizing guidance applies: you should reserve at least 1GB of heap on each master-eligible node for every 3000 indices in your cluster.
Note that this rule defines the absolute maximum number of indices that a master node can manage, but does not guarantee the performance of searches or indexing involving this many indices. You must also ensure that your data nodes have adequate resources for your workload and that your overall sharding strategy meets all your performance requirements. See also Searches run on a single thread per shard and Each index, shard, segment and field has overhead.
To check the configured size of each node’s heap, use the cat nodes API.
resp = client.cat.nodes(v=True, h="heap.max")
print(resp)

response = client.cat.nodes(v: true, h: 'heap.max')
puts response

const response = await client.cat.nodes({ v: "true", h: "heap.max" });
console.log(response);

GET _cat/nodes?v=true&h=heap.max
You can use the cat shards API to check the number of shards per node.
resp = client.cat.shards(v=True)
print(resp)

response = client.cat.shards(v: true)
puts response

const response = await client.cat.shards({ v: "true" });
console.log(response);

GET _cat/shards?v=true
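You can also combine the cluster’s index count with each node’s heap size to check the 3000-indices-per-GB guidance directly. The following is a rough sketch that, for brevity, makes the assumption of applying the check to every node rather than only master-eligible ones.

# Sketch: compare the cluster's index count against each node's heap size.
GB = 1024 ** 3
index_count = client.cluster.stats(filter_path="indices.count")["indices"]["count"]
nodes = client.nodes.stats(filter_path="nodes.*.name,nodes.*.jvm.mem.heap_max_in_bytes")
for node in nodes["nodes"].values():
    heap_gb = node["jvm"]["mem"]["heap_max_in_bytes"] / GB
    # Guidance above: fewer than 3000 indices per GB of heap on master-eligible nodes.
    print(node["name"], round(index_count / heap_gb), "indices per GB of heap")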
Add enough nodes to stay within the cluster shard limits
Cluster shard limits prevent creation of more than 1000 non-frozen shards per node, and 3000 frozen shards per dedicated frozen node. Make sure you have enough nodes of each type in your cluster to handle the number of shards you need.
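If your deployment includes the health API’s shards_capacity indicator, it can report whether you are approaching these limits. A minimal sketch, assuming the indicator name, the Python client’s feature parameter, and the response shape are as shown below:

# Sketch: ask the health API whether the cluster is close to its shard capacity limits.
resp = client.health_report(feature="shards_capacity")   # assumed parameter name
print(resp["indicators"]["shards_capacity"]["status"])   # assumed response path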
Allow enough heap for field mappers and overheads
Mapped fields consume some heap memory on each node, and require extra heap on data nodes. Ensure each node has enough heap for mappings, and also allow extra space for overheads associated with its workload. The following sections show how to determine these heap requirements.
Mapping metadata in the cluster state
Each node in the cluster has a copy of the cluster state. The cluster state includes information about the field mappings for each index. This information has heap overhead. You can use the Cluster stats API to get the heap overhead of the total size of all mappings after deduplication and compression.
resp = client.cluster.stats(
    human=True,
    filter_path="indices.mappings.total_deduplicated_mapping_size*",
)
print(resp)

response = client.cluster.stats(
  human: true,
  filter_path: 'indices.mappings.total_deduplicated_mapping_size*'
)
puts response

const response = await client.cluster.stats({
  human: "true",
  filter_path: "indices.mappings.total_deduplicated_mapping_size*",
});
console.log(response);

GET _cluster/stats?human&filter_path=indices.mappings.total_deduplicated_mapping_size*
This will show you information like the following example output:

{
  "indices": {
    "mappings": {
      "total_deduplicated_mapping_size": "1gb",
      "total_deduplicated_mapping_size_in_bytes": 1073741824
    }
  }
}
Retrieving heap size and field mapper overheads
You can use the Nodes stats API to get two relevant metrics for each node:
- The size of the heap on each node.
- Any additional estimated heap overhead for the fields per node. This is specific to data nodes, where apart from the cluster state field information mentioned above, there is additional heap overhead for each mapped field of an index held by the data node. For nodes which are not data nodes, this field may be zero.
resp = client.nodes.stats(
    human=True,
    filter_path="nodes.*.name,nodes.*.indices.mappings.total_estimated_overhead*,nodes.*.jvm.mem.heap_max*",
)
print(resp)

response = client.nodes.stats(
  human: true,
  filter_path: 'nodes.*.name,nodes.*.indices.mappings.total_estimated_overhead*,nodes.*.jvm.mem.heap_max*'
)
puts response

const response = await client.nodes.stats({
  human: "true",
  filter_path: "nodes.*.name,nodes.*.indices.mappings.total_estimated_overhead*,nodes.*.jvm.mem.heap_max*",
});
console.log(response);

GET _nodes/stats?human&filter_path=nodes.*.name,nodes.*.indices.mappings.total_estimated_overhead*,nodes.*.jvm.mem.heap_max*
For each node, this will show you information like the following example output:

{
  "nodes": {
    "USpTGYaBSIKbgSUJR2Z9lg": {
      "name": "node-0",
      "indices": {
        "mappings": {
          "total_estimated_overhead": "1gb",
          "total_estimated_overhead_in_bytes": 1073741824
        }
      },
      "jvm": {
        "mem": {
          "heap_max": "4gb",
          "heap_max_in_bytes": 4294967296
        }
      }
    }
  }
}
Consider additional heap overheads
Apart from the two field overhead metrics above, you must additionally allow enough heap for Elasticsearch’s baseline usage as well as your workload, such as indexing, searches and aggregations. 0.5GB of extra heap will suffice for many reasonable workloads; you may need even less if your workload is very light, while heavy workloads may require more.
Example
As an example, consider the outputs above for a data node. The heap of the node will need at least:
- 1 GB for the cluster state field information.
- 1 GB for the additional estimated heap overhead for the fields of the data node.
- 0.5 GB of extra heap for other overheads.
Since the node in the example has a 4GB maximum heap size, it is sufficient for the total required heap of 2.5GB.
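The same check can be scripted against the two APIs above. The following is a minimal sketch that assumes the fixed 0.5GB allowance fits your workload and, for simplicity, treats every node in the response as a data node.

# Sketch: compute each node's required heap from the two API responses above.
GB = 1024 ** 3
cluster = client.cluster.stats(
    filter_path="indices.mappings.total_deduplicated_mapping_size_in_bytes",
)
mapping_state = cluster["indices"]["mappings"]["total_deduplicated_mapping_size_in_bytes"]
nodes = client.nodes.stats(
    filter_path="nodes.*.name,nodes.*.indices.mappings.total_estimated_overhead_in_bytes,nodes.*.jvm.mem.heap_max_in_bytes",
)
for node in nodes["nodes"].values():
    overhead = node.get("indices", {}).get("mappings", {}).get("total_estimated_overhead_in_bytes", 0)
    required = mapping_state + overhead + 0.5 * GB   # cluster state + field overhead + baseline allowance
    heap_max = node["jvm"]["mem"]["heap_max_in_bytes"]
    print(node["name"], "requires about", round(required / GB, 1), "GB of its", heap_max // GB, "GB heap")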
If the heap max size for a node is not sufficient, consider avoiding unnecessary fields, scaling up the cluster, or redistributing index shards.
Note that the above rules do not necessarily guarantee the performance of searches or indexing involving a very high number of indices. You must also ensure that your data nodes have adequate resources for your workload and that your overall sharding strategy meets all your performance requirements. See also Searches run on a single thread per shard and Each index, shard, segment and field has overhead.
Avoid node hotspots
If too many shards are allocated to a specific node, the node can become a hotspot. For example, if a single node contains too many shards for an index with a high indexing volume, the node is likely to have issues.
To prevent hotspots, use the index.routing.allocation.total_shards_per_node index setting to explicitly limit the number of shards on a single node. You can configure index.routing.allocation.total_shards_per_node using the update index settings API.
resp = client.indices.put_settings(
    index="my-index-000001",
    settings={"index": {"routing.allocation.total_shards_per_node": 5}},
)
print(resp)

response = client.indices.put_settings(
  index: 'my-index-000001',
  body: {
    index: { 'routing.allocation.total_shards_per_node' => 5 }
  }
)
puts response

const response = await client.indices.putSettings({
  index: "my-index-000001",
  settings: {
    index: { "routing.allocation.total_shards_per_node": 5 },
  },
});
console.log(response);

PUT my-index-000001/_settings
{
  "index": {
    "routing.allocation.total_shards_per_node": 5
  }
}
Avoid unnecessary mapped fields
By default Elasticsearch automatically creates a mapping for every field in every document it indexes. Every mapped field corresponds to some data structures on disk which are needed for efficient search, retrieval, and aggregations on this field. Details about each mapped field are also held in memory. In many cases this overhead is unnecessary because a field is not used in any searches or aggregations. Use Explicit mapping instead of dynamic mapping to avoid creating fields that are never used. If a collection of fields are typically used together, consider using copy_to to consolidate them at index time. If a field is only rarely used, it may be better to make it a Runtime field instead.
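As a sketch of that approach, the index below uses an explicit mapping and disables dynamic field creation so that unexpected fields are not added automatically. The index name and fields are hypothetical.

# Sketch: create an index with an explicit mapping and no dynamic field creation.
resp = client.indices.create(
    index="my-explicit-index",                 # hypothetical index name
    mappings={
        "dynamic": "strict",                   # reject documents that contain unmapped fields
        "properties": {
            "@timestamp": {"type": "date"},
            "message": {"type": "text"},
            "host": {"properties": {"name": {"type": "keyword"}}},
        },
    },
)
print(resp)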
You can get information about which fields are being used with the Field usage stats API, and you can analyze the disk usage of mapped fields using the Analyze index disk usage API. Note however that unnecessary mapped fields also carry some memory overhead as well as their disk usage.
Reduce a cluster’s shard count
If your cluster is already oversharded, you can use one or more of the following methods to reduce its shard count.
Create indices that cover longer time periods
If you use ILM and your retention policy allows it, avoid using a max_age threshold for the rollover action. Instead, use max_primary_shard_size to avoid creating empty indices or many small shards.

If your retention policy requires a max_age threshold, increase it to create indices that cover longer time intervals. For example, instead of creating daily indices, you can create indices on a weekly or monthly basis.
Delete empty or unneeded indices
If you’re using ILM and roll over indices based on a max_age threshold, you can inadvertently create indices with no documents. These empty indices provide no benefit but still consume resources.
You can find these empty indices using the cat count API.
resp = client.cat.count(index="my-index-000001", v=True)
print(resp)

response = client.cat.count(index: 'my-index-000001', v: true)
puts response

const response = await client.cat.count({ index: "my-index-000001", v: "true" });
console.log(response);

GET _cat/count/my-index-000001?v=true
Once you have a list of empty indices, you can delete them using the delete index API. You can also delete any other unneeded indices.
resp = client.indices.delete(index="my-index-000001")
print(resp)

response = client.indices.delete(index: 'my-index-000001')
puts response

const response = await client.indices.delete({ index: "my-index-000001" });
console.log(response);

DELETE my-index-000001
Force merge during off-peak hours
If you no longer write to an index, you can use the force merge API to merge smaller segments into larger ones. This can reduce shard overhead and improve search speeds. However, force merges are resource-intensive. If possible, run the force merge during off-peak hours.
resp = client.indices.forcemerge(index="my-index-000001")
print(resp)

response = client.indices.forcemerge(index: 'my-index-000001')
puts response

const response = await client.indices.forcemerge({ index: "my-index-000001" });
console.log(response);

POST my-index-000001/_forcemerge
Shrink an existing index to fewer shards
If you no longer write to an index, you can use the shrink index API to reduce its shard count.
ILM also has a shrink action for indices in the warm phase.
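A minimal sketch of the shrink API follows. The index, target, and node name are hypothetical; before shrinking, a copy of every shard must be collected onto one node and writes must be blocked.

# Sketch: prepare the source index, then shrink it to a single primary shard.
client.indices.put_settings(
    index="my-index-000001",                                        # hypothetical source index
    settings={
        "index.routing.allocation.require._name": "shrink-node-1",  # hypothetical node name
        "index.blocks.write": True,
    },
)
resp = client.indices.shrink(
    index="my-index-000001",
    target="my-index-000001-shrunk",                                # hypothetical destination index
    settings={
        "index.number_of_shards": 1,
        "index.routing.allocation.require._name": None,             # clear the temporary settings on the target
        "index.blocks.write": None,
    },
)
print(resp)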
Combine smaller indices
You can also use the reindex API to combine indices with similar mappings into a single large index. For time series data, you could reindex indices for short time periods into a new index covering a longer period. For example, you could reindex daily indices from October with a shared index pattern, such as my-index-2099.10.11, into a monthly my-index-2099.10 index. After the reindex, delete the smaller indices.
resp = client.reindex(
    source={"index": "my-index-2099.10.*"},
    dest={"index": "my-index-2099.10"},
)
print(resp)

response = client.reindex(
  body: {
    source: { index: 'my-index-2099.10.*' },
    dest: { index: 'my-index-2099.10' }
  }
)
puts response

const response = await client.reindex({
  source: { index: "my-index-2099.10.*" },
  dest: { index: "my-index-2099.10" },
});
console.log(response);

POST _reindex
{
  "source": { "index": "my-index-2099.10.*" },
  "dest": { "index": "my-index-2099.10" }
}
Troubleshoot shard-related errors
Here’s how to resolve common shard-related errors.
this action would add [x] total shards, but this cluster currently has [y]/[z] maximum shards open;
The cluster.max_shards_per_node cluster setting limits the maximum number of open shards for a cluster. This error indicates an action would exceed this limit.
If you’re confident your changes won’t destabilize the cluster, you can temporarily increase the limit using the cluster update settings API and retry the action.
resp = client.cluster.put_settings(
    persistent={"cluster.max_shards_per_node": 1200},
)
print(resp)

response = client.cluster.put_settings(
  body: {
    persistent: { 'cluster.max_shards_per_node' => 1200 }
  }
)
puts response

const response = await client.cluster.putSettings({
  persistent: { "cluster.max_shards_per_node": 1200 },
});
console.log(response);

PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1200
  }
}
This increase should only be temporary. As a long-term solution, we recommend you add nodes to the oversharded data tier or reduce your cluster’s shard count. To get a cluster’s current shard count after making changes, use the cluster stats API.
resp = client.cluster.stats(filter_path="indices.shards.total")
print(resp)

response = client.cluster.stats(filter_path: 'indices.shards.total')
puts response

const response = await client.cluster.stats({ filter_path: "indices.shards.total" });
console.log(response);

GET _cluster/stats?filter_path=indices.shards.total
When a long-term solution is in place, we recommend you reset the cluster.max_shards_per_node limit.
resp = client.cluster.put_settings(
    persistent={"cluster.max_shards_per_node": None},
)
print(resp)

response = client.cluster.put_settings(
  body: {
    persistent: { 'cluster.max_shards_per_node' => nil }
  }
)
puts response

const response = await client.cluster.putSettings({
  persistent: { "cluster.max_shards_per_node": null },
});
console.log(response);

PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": null
  }
}
Number of documents in the shard cannot exceed [2147483519]
Each Elasticsearch shard is a separate Lucene index, so it shares Lucene’s MAX_DOC limit of having at most 2,147,483,519 ((2^31)-129) documents. This per-shard limit applies to the sum of docs.count plus docs.deleted as reported by the Index stats API. Exceeding this limit will result in errors like the following:
Elasticsearch exception [type=illegal_argument_exception, reason=Number of documents in the shard cannot exceed [2147483519]]
This calculation may differ from the Count API’s calculation, because the Count API does not include nested documents and does not count deleted documents.
This limit is much higher than the recommended maximum document count of approximately 200M documents per shard.
If you encounter this problem, try to mitigate it by using the Force Merge API to merge away some deleted docs. For example:
resp = client.indices.forcemerge(index="my-index-000001", only_expunge_deletes=True)
print(resp)
const response = await client.indices.forcemerge({ index: "my-index-000001", only_expunge_deletes: "true", }); console.log(response);
POST my-index-000001/_forcemerge?only_expunge_deletes=true
This will launch an asynchronous task which can be monitored via the Task Management API.
It may also be helpful to delete unneeded documents, or to split or reindex the index into one with a larger number of shards.