elasticsearch-node

The elasticsearch-node command enables you to perform certain unsafe operations on a node that are only possible while it is shut down. This command allows you to adjust the role of a node, unsafely edit cluster settings, and may be able to recover some data after a disaster or start a node even if it is incompatible with the data on disk.
Synopsis

bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster|override-version|remove-settings|remove-customs
  [-E <KeyValuePair>]
  [-h, --help] ([-s, --silent] | [-v, --verbose])
Description

This tool has a number of modes:

- elasticsearch-node repurpose can be used to delete unwanted data from a node if it used to be a data node or a master-eligible node but has been repurposed not to have one or other of these roles.
- elasticsearch-node remove-settings can be used to remove persistent settings from the cluster state in cases where it contains incompatible settings that prevent the cluster from forming.
- elasticsearch-node remove-customs can be used to remove custom metadata from the cluster state in cases where it contains broken metadata that prevents the cluster state from being loaded.
- elasticsearch-node unsafe-bootstrap can be used to perform unsafe cluster bootstrapping. It forces one of the nodes to form a brand-new cluster on its own, using its local copy of the cluster metadata.
- elasticsearch-node detach-cluster enables you to move nodes from one cluster to another. This can be used to move nodes into a new cluster created with the elasticsearch-node unsafe-bootstrap command. If unsafe cluster bootstrapping was not possible, it also enables you to move nodes into a brand-new cluster.
- elasticsearch-node override-version enables you to start up a node even if the data in the data path was written by an incompatible version of Elasticsearch. This may sometimes allow you to downgrade to an earlier version of Elasticsearch.
Changing the role of a node

There may be situations where you want to repurpose a node without following the proper repurposing processes. The elasticsearch-node repurpose tool allows you to delete any excess on-disk data and start a node after repurposing it.
The intended use is:

- Stop the node.
- Update elasticsearch.yml by setting node.roles as desired.
- Run elasticsearch-node repurpose on the node.
- Start the node.
If you run elasticsearch-node repurpose on a node without the data role and with the master role then it will delete any remaining shard data on that node, but it will leave the index and cluster metadata alone. If you run elasticsearch-node repurpose on a node without the data and master roles then it will delete any remaining shard data and index metadata, but it will leave the cluster metadata alone.
Running this command can lead to data loss for the indices mentioned if the data contained is not available on other nodes in the cluster. Only run this tool if you understand and accept the possible consequences, and only after determining that the node cannot be repurposed cleanly.
The tool provides a summary of the data to be deleted and asks for confirmation before making any changes. You can get detailed information about the affected indices and shards by passing the verbose (-v) option.
Removing persistent cluster settings

There may be situations where a node contains persistent cluster settings that prevent the cluster from forming. Since the cluster cannot form, it is not possible to remove these settings using the Cluster update settings API.

The elasticsearch-node remove-settings tool allows you to forcefully remove those persistent settings from the on-disk cluster state. The tool takes as parameters a list of the settings to be removed, and also supports wildcard patterns.
The intended use is:

- Stop the node.
- Run elasticsearch-node remove-settings name-of-setting-to-remove on the node.
- Repeat for all other master-eligible nodes.
- Start the nodes.
Removing custom metadata from the cluster state

There may be situations where a node contains custom metadata, typically provided by plugins, that prevents the node from starting up and loading the cluster state from disk.

The elasticsearch-node remove-customs tool allows you to forcefully remove the problematic custom metadata. The tool takes as parameters a list of the custom metadata names to be removed, and also supports wildcard patterns.
The intended use is:

- Stop the node.
- Run elasticsearch-node remove-customs name-of-custom-to-remove on the node.
- Repeat for all other master-eligible nodes.
- Start the nodes.
Recovering data after a disaster

Sometimes Elasticsearch nodes are temporarily stopped, perhaps because of the need to perform some maintenance activity or perhaps because of a hardware failure. After you resolve the temporary condition and restart the node, it will rejoin the cluster and continue normally. Depending on your configuration, your cluster may be able to remain completely available even while one or more of its nodes are stopped.
Sometimes it might not be possible to restart a node after it has stopped. For example, the node’s host may suffer from a hardware problem that cannot be repaired. If the cluster is still available then you can start up a fresh node on another host and Elasticsearch will bring this node into the cluster in place of the failed node.
Each node stores its data in the data directories defined by the path.data setting. This means that in a disaster you can also restart a node by moving its data directories to another host, presuming that those data directories can be recovered from the faulty host.
Elasticsearch requires a response from a majority of the master-eligible nodes in order to elect a master and to update the cluster state. This means that if you have three master-eligible nodes then the cluster will remain available even if one of them has failed. However if two of the three master-eligible nodes fail then the cluster will be unavailable until at least one of them is restarted.
In very rare circumstances it may not be possible to restart enough nodes to restore the cluster’s availability. If such a disaster occurs, you should build a new cluster from a recent snapshot and re-import any data that was ingested since that snapshot was taken.
However, if the disaster is serious enough then it may not be possible to recover from a recent snapshot either. Unfortunately in this case there is no way forward that does not risk data loss, but it may be possible to use the elasticsearch-node tool to construct a new cluster that contains some of the data from the failed cluster.
Bypassing version checks

The data that Elasticsearch writes to disk is designed to be read by the current version and a limited set of future versions. It cannot generally be read by older versions, nor by versions that are more than one major version newer. The data stored on disk includes the version of the node that wrote it, and Elasticsearch checks that it is compatible with this version when starting up.
In rare circumstances it may be desirable to bypass this check and start up an Elasticsearch node using data that was written by an incompatible version. This may not work if the format of the stored data has changed, and it is a risky process because it is possible for the format to change in ways that Elasticsearch may misinterpret, silently leading to data loss.
To bypass this check, you can use the elasticsearch-node override-version tool to overwrite the version number stored in the data path with the current version, causing Elasticsearch to believe that it is compatible with the on-disk data.
Unsafe cluster bootstrapping

If there is at least one remaining master-eligible node, but it is not possible to restart a majority of them, then the elasticsearch-node unsafe-bootstrap command will unsafely override the cluster's voting configuration as if performing another cluster bootstrapping process. The target node can then form a new cluster on its own by using the cluster metadata held locally on the target node.
These steps can lead to arbitrary data loss since the target node may not hold the latest cluster metadata, and this out-of-date metadata may make it impossible to use some or all of the indices in the cluster.
Since unsafe bootstrapping forms a new cluster containing a single node, once you have run it you must use the elasticsearch-node detach-cluster tool to migrate any other surviving nodes from the failed cluster into this new cluster.
When you run the elasticsearch-node unsafe-bootstrap tool it will analyse the state of the node and ask for confirmation before taking any action. Before asking for confirmation it reports the term and version of the cluster state on the node on which it runs as follows:

Current node cluster state (term, version) pair is (4, 12)
If you have a choice of nodes on which to run this tool then you should choose one with a term that is as large as possible. If there is more than one node with the same term, pick the one with the largest version. This information identifies the node with the freshest cluster state, which minimizes the quantity of data that might be lost. For example, if the first node reports (4, 12) and a second node reports (5, 3), then the second node is preferred since its term is larger. However if the second node reports (3, 17) then the first node is preferred since its term is larger. If the second node reports (4, 10) then it has the same term as the first node, but has a smaller version, so the first node is preferred.
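This term-then-version ordering is simply a lexicographic comparison of the (term, version) pairs, which can be sketched as a small script; the node names and pairs below are the examples from the text, not output from the tool:

```shell
# Choose the node with the freshest cluster state: compare terms first,
# and break ties on version (lexicographic order on (term, version)).
best="" best_term=-1 best_version=-1
while read -r node term version; do
  if [ "$term" -gt "$best_term" ] ||
     { [ "$term" -eq "$best_term" ] && [ "$version" -gt "$best_version" ]; }; then
    best=$node best_term=$term best_version=$version
  fi
done <<'EOF'
node_1 4 12
node_2 5 3
node_3 4 10
EOF
echo "$best"   # node_2: its term (5) beats the others' term (4)
```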
Running this command can lead to arbitrary data loss. Only run this tool if you understand and accept the possible consequences and have exhausted all other possibilities for recovery of your cluster.
The sequence of operations for using this tool is as follows:

- Make sure you have really lost access to at least half of the master-eligible nodes in the cluster, and they cannot be repaired or recovered by moving their data paths to healthy hardware.
- Stop all remaining nodes.
- Choose one of the remaining master-eligible nodes to become the new elected master as described above.
- On this node, run the elasticsearch-node unsafe-bootstrap command as shown below. Verify that the tool reported Master node was successfully bootstrapped.
- Start this node and verify that it is elected as the master node.
- Run the elasticsearch-node detach-cluster tool, described below, on every other node in the cluster.
- Start all other nodes and verify that each one joins the cluster.
- Investigate the data in the cluster to discover if any was lost during this process.
When you run the tool it will make sure that the node that is being used to bootstrap the cluster is not running. It is important that all other master-eligible nodes are also stopped while this tool is running, but the tool does not check this.
The message Master node was successfully bootstrapped does not mean that there has been no data loss; it just means that the tool was able to complete its job.
Detaching nodes from their cluster

It is unsafe for nodes to move between clusters, because different clusters have completely different cluster metadata. There is no way to safely merge the metadata from two clusters together.
To protect against inadvertently joining the wrong cluster, each cluster creates a unique identifier, known as the cluster UUID, when it first starts up. Every node records the UUID of its cluster and refuses to join a cluster with a different UUID.
However, if a node’s cluster has permanently failed then it may be desirable to try and move it into a new cluster. The elasticsearch-node detach-cluster command lets you detach a node from its cluster by resetting its cluster UUID. It can then join another cluster with a different UUID.
For example, after unsafe cluster bootstrapping you will need to detach all the other surviving nodes from their old cluster so they can join the new, unsafely-bootstrapped cluster.
Unsafe cluster bootstrapping is only possible if there is at least one surviving master-eligible node. If there are no remaining master-eligible nodes then the cluster metadata is completely lost. However, the individual data nodes also contain a copy of the index metadata corresponding with their shards. This sometimes allows a new cluster to import these shards as dangling indices. You can sometimes recover some indices after the loss of all master-eligible nodes in a cluster by creating a new cluster and then using the elasticsearch-node detach-cluster command to move any surviving nodes into this new cluster.
There is a risk of data loss when importing a dangling index because data nodes may not have the most recent copy of the index metadata and do not have any information about which shard copies are in-sync. This means that a stale shard copy may be selected to be the primary, and some of the shards may be incompatible with the imported mapping.
Execution of this command can lead to arbitrary data loss. Only run this tool if you understand and accept the possible consequences and have exhausted all other possibilities for recovery of your cluster.
The sequence of operations for using this tool is as follows:

- Make sure you have really lost access to every one of the master-eligible nodes in the cluster, and they cannot be repaired or recovered by moving their data paths to healthy hardware.
- Start a new cluster and verify that it is healthy. This cluster may comprise one or more brand-new master-eligible nodes, or may be an unsafely-bootstrapped cluster formed as described above.
- Stop all remaining data nodes.
- On each data node, run the elasticsearch-node detach-cluster tool as shown below. Verify that the tool reported Node was successfully detached from the cluster.
- If necessary, configure each data node to discover the new cluster.
- Start each data node and verify that it has joined the new cluster.
- Wait for all recoveries to have completed, and investigate the data in the cluster to discover if any was lost during this process.
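On each stopped data node, the detach-and-rejoin steps above can be sketched as follows; the relative install path and the seed-host address are placeholders for illustration, not values from the tool:

```shell
# Run while the node is stopped. The tool asks for confirmation before
# resetting the node's cluster UUID.
./bin/elasticsearch-node detach-cluster

# If necessary, point discovery at the new cluster's master-eligible
# nodes in elasticsearch.yml before restarting, e.g.:
#   discovery.seed_hosts: ["192.168.1.10:9300"]
```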
The message Node was successfully detached from the cluster does not mean that there has been no data loss; it just means that the tool was able to complete its job.
Parameters

repurpose
  Delete excess data when a node's roles are changed.
unsafe-bootstrap
  Specifies to unsafely bootstrap this node as a new one-node cluster.
detach-cluster
  Specifies to unsafely detach this node from its cluster so it can join a different cluster.
override-version
  Overwrites the version number stored in the data path so that a node can start despite being incompatible with the on-disk data.
remove-settings
  Forcefully removes the provided persistent cluster settings from the on-disk cluster state.
remove-customs
  Forcefully removes the provided custom metadata from the on-disk cluster state.
-E <KeyValuePair>
  Configures a setting.
-h, --help
  Returns all of the command parameters.
-s, --silent
  Shows minimal output.
-v, --verbose
  Shows verbose output.
Examples

Repurposing a node as a dedicated master node

In this example, a former data node is repurposed as a dedicated master node. First update the node’s settings to node.roles: [ "master" ] in its elasticsearch.yml config file. Then run the elasticsearch-node repurpose command to find and remove excess shard data:
node$ ./bin/elasticsearch-node repurpose

    WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 shards in 2 indices to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as master and no-data. Clean-up of shard data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to master and no-data.
Repurposing a node as a coordinating-only node

In this example, a node that previously held data is repurposed as a coordinating-only node. First update the node’s settings to node.roles: [] in its elasticsearch.yml config file. Then run the elasticsearch-node repurpose command to find and remove excess shard data and index metadata:
node$ ./bin/elasticsearch-node repurpose

    WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 indices (2 shards and 2 index meta data) to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as no-master and no-data. Clean-up of index data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to no-master and no-data.
Removing persistent cluster settings

If your nodes contain persistent cluster settings that prevent the cluster from forming, and which therefore cannot be removed using the Cluster update settings API, you can run the following commands to remove one or more cluster settings.
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.exporters.my_exporter.host

    WARNING: Elasticsearch MUST be stopped before running this tool.

The following settings will be removed:
xpack.monitoring.exporters.my_exporter.host: "10.1.2.3"

You should only run this tool if you have incompatible settings in the
cluster state that prevent the cluster from forming.
This tool can cause data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Settings were successfully removed from the cluster state
You can also use wildcards to remove multiple settings at once, for example:
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.*
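Once the node has been restarted and the cluster has formed again, a corrected value can be reapplied through the Cluster update settings API instead of the on-disk tool. A sketch, assuming a hypothetical corrected exporter host:

```shell
# Reapply the setting via the Cluster update settings API on a healthy
# cluster; the host value below is a hypothetical replacement.
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "xpack.monitoring.exporters.my_exporter.host": "10.1.2.4"
  }
}'
```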
Removing custom metadata from the cluster state
If the on-disk cluster state contains custom metadata that prevents the node from starting up and loading the cluster state, you can run the following commands to remove this custom metadata.
node$ ./bin/elasticsearch-node remove-customs snapshot_lifecycle

    WARNING: Elasticsearch MUST be stopped before running this tool.

The following customs will be removed:
snapshot_lifecycle

You should only run this tool if you have broken custom metadata in the
cluster state that prevents the cluster state from being loaded.
This tool can cause data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Customs were successfully removed from the cluster state
Unsafe cluster bootstrapping
Suppose your cluster had five master-eligible nodes and you have permanently lost three of them, leaving two nodes remaining.
- Run the tool on the first remaining node, but answer n at the confirmation step.

node_1$ ./bin/elasticsearch-node unsafe-bootstrap

    WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (4, 12)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
- Run the tool on the second remaining node, and again answer n at the confirmation step.

node_2$ ./bin/elasticsearch-node unsafe-bootstrap

    WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
- Since the second node has a greater term, it has a fresher cluster state, so it is better to unsafely bootstrap the cluster using this node:

node_2$ ./bin/elasticsearch-node unsafe-bootstrap

    WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] y

Master node was successfully bootstrapped
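The choice between the two survivors comes down to comparing (term, version) pairs lexicographically: terms are compared first, and the version only breaks ties within the same term. A quick sketch of that comparison, using the node names and pairs from the example output above:

```shell
# Pick the node with the highest (term, version) pair: sort numerically
# by term (field 2), then by version (field 3), and take the last line.
printf '%s\n' 'node_1 4 12' 'node_2 5 3' |
  sort -k2,2n -k3,3n | tail -n1
# prints: node_2 5 3 — term 5 wins even though its version is lower
```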
Detaching nodes from their cluster
After unsafely bootstrapping a new cluster, run the elasticsearch-node
detach-cluster
command to detach all remaining nodes from the failed cluster
so they can join the new cluster:
node_3$ ./bin/elasticsearch-node detach-cluster

    WARNING: Elasticsearch MUST be stopped before running this tool.

You should only run this tool if you have permanently lost all of the
master-eligible nodes in this cluster and you cannot restore the cluster
from a snapshot, or you have already unsafely bootstrapped a new cluster
by running `elasticsearch-node unsafe-bootstrap` on a master-eligible
node that belonged to the same cluster as this node. This tool can cause
arbitrary data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Node was successfully detached from the cluster
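A detached node no longer remembers the cluster it belonged to, so before restarting it, its discovery configuration should point at the newly bootstrapped cluster. A minimal sketch of the detached node's elasticsearch.yml, assuming a hypothetical address for the bootstrapped node:

```yaml
# elasticsearch.yml on the detached node — the address below is a
# hypothetical transport address for the unsafely bootstrapped node.
discovery.seed_hosts: ["10.0.0.2:9300"]
```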
Bypassing version checks
Run the elasticsearch-node override-version
command to overwrite the version
stored in the data path so that a node can start despite being incompatible
with the data stored in the data path:
node$ ./bin/elasticsearch-node override-version

    WARNING: Elasticsearch MUST be stopped before running this tool.

This data path was last written by Elasticsearch version [x.x.x] and may no
longer be compatible with Elasticsearch version [y.y.y]. This tool will bypass
this compatibility check, allowing a version [y.y.y] node to start on this data
path, but a version [y.y.y] node may not be able to read this data or may read
it incorrectly leading to data loss.

You should not use this tool. Instead, continue to use a version [x.x.x] node
on this data path. If necessary, you can use reindex-from-remote to copy the
data from here into an older cluster.

Do you want to proceed?

Confirm [y/N] y

Successfully overwrote this node's metadata to bypass its version compatibility checks.
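The reindex-from-remote escape hatch mentioned in that output can be sketched as follows, assuming hypothetical index names and a hypothetical address for the node holding the data; the remote host must also be allowed via the reindex.remote.whitelist setting on the destination cluster:

```shell
# Run against the destination cluster; source host and index names here
# are placeholders for this example.
curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d'
{
  "source": {
    "remote": { "host": "http://10.0.0.5:9200" },
    "index": "my-index"
  },
  "dest": { "index": "my-index-copy" }
}'
```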