WARNING: Version 2.3 of Elasticsearch has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Rolling upgrades
A rolling upgrade allows the Elasticsearch cluster to be upgraded one node at a time, with no downtime for end users. Running multiple versions of Elasticsearch in the same cluster for any length of time beyond that required for an upgrade is not supported, as shards will not be replicated from the more recent version to the older version.
Consult this table to verify that rolling upgrades are supported for your version of Elasticsearch.
To perform a rolling upgrade:
Step 1: Disable shard allocation
When you shut down a node, the allocation process will immediately try to replicate the shards that were on that node to other nodes in the cluster, causing a lot of wasted I/O. This can be avoided by disabling allocation before shutting down a node:
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
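For example, the same settings change can be applied with curl from the command line (the localhost:9200 endpoint is an assumption; adjust it for your node):
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}'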
Step 2: Stop non-essential indexing and perform a synced flush (Optional)
You may happily continue indexing during the upgrade. However, shard recovery will be much faster if you temporarily stop non-essential indexing and issue a synced-flush request:
POST /_flush/synced
A synced flush request is a “best effort” operation. It will fail if there are any pending indexing operations, but it is safe to reissue the request multiple times if necessary.
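As a usage sketch with curl (again assuming the default localhost:9200 endpoint):
curl -XPOST 'localhost:9200/_flush/synced'
The response reports, per index, how many shards were successfully synced, so the request can simply be reissued until the counts look acceptable.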
Step 3: Stop and upgrade a single node
Shut down one of the nodes in the cluster before starting the upgrade.
When using the zip or tarball packages, the config, data, logs and plugins directories are placed within the Elasticsearch home directory by default. It is a good idea to place these directories in a different location so that there is no chance of deleting them when upgrading Elasticsearch. These custom paths can be configured with the path.conf and path.data settings.
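As a sketch, the data and log locations might be set in config/elasticsearch.yml as follows (the directories shown are hypothetical):
path.data: /var/data/elasticsearch
path.logs: /var/log/elasticsearch
Note that path.conf cannot usefully live in the file it locates; it is passed on the command line with --path.conf, as shown in the tarball instructions below.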
The Debian and RPM packages place these directories in the appropriate place for each operating system.
To upgrade using a Debian or RPM package:
- Use rpm or dpkg to install the new package. All files should be placed in their proper locations, and config files should not be overwritten.
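For instance (the package file names and version are hypothetical):
sudo rpm -U elasticsearch-2.3.5.rpm   # RPM-based distributions
sudo dpkg -i elasticsearch-2.3.5.deb  # Debian-based distributions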
To upgrade using a zip or compressed tarball:
- Extract the zip or tarball to a new directory, to be sure that you don't overwrite the config or data directories.
- Either copy the files in the config directory from your old installation to your new installation, or use the --path.conf option on the command line to point to an external config directory.
- Either copy the files in the data directory from your old installation to your new installation, or configure the location of the data directory in the config/elasticsearch.yml file, with the path.data setting. (A sketch of these steps follows below.)
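A minimal sketch of the tarball route, assuming hypothetical paths and version numbers:
# Extract into a new directory so the old config and data
# directories are not overwritten
tar -xzf elasticsearch-2.3.5.tar.gz -C /opt

# Either copy config and data across from the old installation...
cp -r /opt/elasticsearch-2.3.4/config/* /opt/elasticsearch-2.3.5/config/
cp -r /opt/elasticsearch-2.3.4/data /opt/elasticsearch-2.3.5/

# ...or start the new node pointing at an external config directory
/opt/elasticsearch-2.3.5/bin/elasticsearch --path.conf=/etc/elasticsearch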
Step 4: Start the upgraded node
Start the newly upgraded node and confirm that it joins the cluster by checking the log file or by checking the output of this request:
GET _cat/nodes
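For example, with curl (the h parameter selects columns and v adds headers; localhost:9200 is assumed):
curl -XGET 'localhost:9200/_cat/nodes?v&h=name,version'
The upgraded node should appear in the list with its new version number.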
Step 5: Reenable shard allocation
Once the node has joined the cluster, reenable shard allocation to start using the node:
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
Step 6: Wait for the node to recover
You should wait for the cluster to finish shard allocation before upgrading the next node. You can check on progress with the _cat/health request:
GET _cat/health
Wait for the status column to move from yellow to green. Status green means that all primary and replica shards have been allocated.
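A minimal polling sketch with curl (the endpoint and the ten-second interval are assumptions):
# Poll until the status column reports green
until curl -s 'localhost:9200/_cat/health?h=status' | grep -q green; do
  sleep 10
done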
During a rolling upgrade, primary shards assigned to a node with the higher version will never have their replicas assigned to a node with the lower version, because the newer version may have a different data format which is not understood by the older version.
If it is not possible to assign the replica shards to another node with the higher version (for example, if there is only one node with the higher version in the cluster), then the replica shards will remain unassigned and the cluster health will remain status yellow.
In this case, check that there are no initializing or relocating shards (the init and relo columns) before proceeding.
As soon as another node is upgraded, the replicas should be assigned and the cluster health will reach status green.
Shards that have not been sync-flushed may take some time to recover. The recovery status of individual shards can be monitored with the _cat/recovery request:
GET _cat/recovery
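For example, with curl (the selected columns are one reasonable choice; localhost:9200 is assumed):
curl -XGET 'localhost:9200/_cat/recovery?v&h=index,shard,stage'
Shards that have finished recovering report the done stage.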
If you stopped indexing, then it is safe to resume indexing as soon as recovery has completed.
Step 7: Repeat
When the cluster is stable and the node has recovered, repeat the above steps for all remaining nodes.