WARNING: Version 2.3 of Elasticsearch has passed its EOL date.
This documentation is no longer being maintained and may be removed. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Full cluster restart upgrade
Elasticsearch requires a full cluster restart when upgrading across major versions: from 0.x to 1.x or from 1.x to 2.x. Rolling upgrades are not supported across major versions.
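Before you begin, it can help to confirm which version each node is currently running. One way to do this is with the _cat/nodes API; the request below is purely illustrative, and the column selection is an assumption about the output you want rather than a required step:
GET _cat/nodes?v&h=name,version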
The process to perform an upgrade with a full cluster restart is as follows:
Step 1: Disable shard allocation
When you shut down a node, the allocation process will immediately try to replicate the shards that were on that node to other nodes in the cluster, causing a lot of wasted I/O. This can be avoided by disabling allocation before shutting down a node:
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
If upgrading from 0.90.x to 1.x, then use these settings instead:
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disable_allocation": true,
    "cluster.routing.allocation.enable": "none"
  }
}
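If you want to double-check that allocation is disabled before shutting anything down, you can read the cluster settings back. This is an optional sanity check rather than part of the procedure itself:
GET /_cluster/settings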
Step 2: Perform a synced flush
Shard recovery will be much faster if you stop indexing and issue a synced-flush request:
POST /_flush/synced
A synced flush request is a “best effort” operation. It will fail if there are any pending indexing operations, but it is safe to reissue the request multiple times if necessary.
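The response reports, per index, how many shards were sync-flushed successfully. The shape below is only a sketch with a made-up index name and counts; the important part is a failed count of zero, and if it is non-zero you can simply reissue the request:
{
  "_shards": { "total": 10, "successful": 10, "failed": 0 },
  "my_index": { "total": 10, "successful": 10, "failed": 0 }
}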
Step 3: Shutdown and upgrade all nodes
Stop all Elasticsearch services on all nodes in the cluster. Each node can be upgraded following the same procedure described in Stop and upgrade a single node.
Step 4: Start the cluster
If you have dedicated master nodes (nodes with node.master set to true, the default, and node.data set to false), then it is a good idea to start them first. Wait for them to form a cluster and to elect a master before proceeding with the data nodes. You can check progress by looking at the logs.
As soon as the minimum number of master-eligible nodes have discovered each other, they will form a cluster and elect a master. From that point on, the _cat/health and _cat/nodes APIs can be used to monitor nodes joining the cluster:
GET _cat/health
GET _cat/nodes
Use these APIs to check that all nodes have successfully joined the cluster.
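Rather than polling, you can also make a single call that blocks until an expected number of nodes has joined, using the wait_for_nodes parameter of the cluster health API. The node count of 10 below is only an example; substitute the size of your own cluster:
GET /_cluster/health?wait_for_nodes=10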
Step 5: Wait for yellow
As soon as each node has joined the cluster, it will start to recover any primary shards that are stored locally. Initially, the _cat/health request will report a status of red, meaning that not all primary shards have been allocated.
Once each node has recovered its local shards, the status will become yellow, meaning that all primary shards have been recovered but not all replica shards are allocated. This is to be expected because allocation is still disabled.
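If you would rather wait than poll, the cluster health API can block until the cluster reaches yellow. The timeout value below is just an example:
GET /_cluster/health?wait_for_status=yellow&timeout=60s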
Step 6: Reenable allocation
Delaying the allocation of replicas until all nodes have joined the cluster allows the master to allocate replicas to nodes which already have local shard copies. At this point, with all the nodes in the cluster, it is safe to reenable shard allocation:
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
If upgrading from 0.90.x to 1.x, then use these settings instead:
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disable_allocation": false,
    "cluster.routing.allocation.enable": "all"
  }
}
The cluster will now start allocating replica shards to all data nodes. At this point it is safe to resume indexing and searching, but your cluster will recover more quickly if you can delay indexing and searching until all shards have recovered.
You can monitor progress with the _cat/health and _cat/recovery APIs:
GET _cat/health
GET _cat/recovery
Once the status column in the _cat/health output has reached green, all primary and replica shards have been successfully allocated.
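The same blocking style works for the final green state, and the verbose form of _cat/recovery gives a more detailed per-shard view while replicas are still copying. Both requests below are illustrative:
GET /_cluster/health?wait_for_status=green
GET _cat/recovery?v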