Restore a snapshot
This guide shows you how to restore a snapshot. Snapshots are a convenient way to store a copy of your data outside of a cluster. You can restore a snapshot to recover indices and data streams after deletion or a hardware failure. You can also use snapshots to transfer data between clusters.
In this guide, you’ll learn how to:
- Get a list of available snapshots
- Restore an index or data stream from a snapshot
- Restore a feature state
- Restore an entire cluster
- Monitor the restore operation
- Cancel an ongoing restore
This guide also provides tips for restoring to another cluster and troubleshooting common restore errors.
Prerequisites

- To use Kibana’s Snapshot and Restore feature, you must have the following permissions:
  - Cluster privileges: monitor, manage_slm, cluster:admin/snapshot, and cluster:admin/repository
  - Index privilege: all on the monitor index
- You can only restore a snapshot to a running cluster with an elected master node. The snapshot’s repository must be registered and available to the cluster.
- The snapshot and cluster versions must be compatible. See Snapshot compatibility.
- To restore a snapshot, the cluster’s global metadata must be writable. Ensure there aren’t any cluster blocks that prevent writes. The restore operation ignores index blocks.
- Before you restore a data stream, ensure the cluster contains a matching index template with data stream enabled. To check, use Kibana’s Index Management feature or the get index template API:
response = client.indices.get_index_template( name: '*', filter_path: 'index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream' ) puts response
GET _index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream
If no such template exists, you can create one or restore a cluster state that contains one. Without a matching index template, a data stream can’t roll over or create backing indices.
- If your snapshot contains data from App Search or Workplace Search, ensure you’ve restored the Enterprise Search encryption key before restoring the snapshot.
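The template check above can be sketched in code. The following is a minimal Python sketch, assuming an illustrative get index template response shape; the logs-template name and its pattern are hypothetical, and real template resolution also honors priority:

```python
import fnmatch

templates = {  # illustrative subset of a get index template API response
    "index_templates": [
        {"name": "logs-template",
         "index_template": {"index_patterns": ["logs-*-*"],
                            "data_stream": {}}},
    ]
}

def matching_data_stream_template(name, response):
    """Return the first template whose patterns match `name` and that has
    data_stream enabled, or None if no such template exists."""
    for entry in response["index_templates"]:
        template = entry["index_template"]
        if "data_stream" not in template:
            continue  # matches, but wouldn't create a data stream
        if any(fnmatch.fnmatch(name, p) for p in template["index_patterns"]):
            return entry["name"]
    return None

print(matching_data_stream_template("logs-my_app-default", templates))  # logs-template
print(matching_data_stream_template("my-index", templates))             # None
```

If this returns None for a data stream you plan to restore, create a matching template first, as described above.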
Considerations

When restoring data from a snapshot, keep the following in mind:
- If you restore a data stream, you also restore its backing indices.
- You can only restore an existing index if it’s closed and the index in the snapshot has the same number of primary shards.
- You can’t restore an existing open index. This includes backing indices for a data stream.
- The restore operation automatically opens restored indices, including backing indices.
- You can restore only a specific backing index from a data stream. However, the restore operation doesn’t add the restored backing index to any existing data stream.
Get a list of available snapshots

To view a list of available snapshots in Kibana, go to the main menu and click Stack Management > Snapshot and Restore.
You can also use the get repository API and the get snapshot API to find snapshots that are available to restore. First, use the get repository API to fetch a list of registered snapshot repositories.
response = client.snapshot.get_repository puts response
GET _snapshot
Then use the get snapshot API to get a list of snapshots in a specific repository. This also returns each snapshot’s contents.
response = client.snapshot.get( repository: 'my_repository', snapshot: '*', verbose: false ) puts response
GET _snapshot/my_repository/*?verbose=false
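When scripting against this API, you typically want only snapshots that completed successfully. A small Python sketch; the payload below is illustrative, with field names (snapshots, snapshot, state) following the documented response format:

```python
sample_response = {  # illustrative get snapshot API response
    "snapshots": [
        {"snapshot": "my_snapshot_2099.05.06", "state": "SUCCESS"},
        {"snapshot": "my_snapshot_2099.05.05", "state": "PARTIAL"},
    ]
}

def successful_snapshots(response):
    """Return the names of snapshots that completed successfully."""
    return [s["snapshot"] for s in response["snapshots"]
            if s["state"] == "SUCCESS"]

print(successful_snapshots(sample_response))  # ['my_snapshot_2099.05.06']
```

A PARTIAL snapshot can still be restored, but some shards may be missing, so filtering on SUCCESS is the safe default for automation.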
Restore an index or data stream

You can restore a snapshot using Kibana’s Snapshot and Restore feature or the restore snapshot API.
By default, a restore request attempts to restore all regular indices and regular data streams in a snapshot. In most cases, you only need to restore a specific index or data stream from a snapshot. However, you can’t restore an existing open index.
If you’re restoring data to a pre-existing cluster, use one of the following methods to avoid conflicts with existing indices and data streams:
Delete and restore

The simplest way to avoid conflicts is to delete an existing index or data stream before restoring it. To prevent the accidental re-creation of the index or data stream, we recommend you temporarily stop all indexing until the restore operation is complete.
If the action.destructive_requires_name cluster setting is false, don’t use the delete index API to target the * or .* wildcard pattern. If you use Elasticsearch’s security features, this will delete system indices required for authentication. Instead, target the *,-.* wildcard pattern to exclude these system indices and other index names that begin with a dot (.).
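The effect of the *,-.* pattern can be illustrated with a Python sketch that mimics, in simplified form, Elasticsearch’s comma-separated multi-target expansion, where a leading - excludes matches (the real resolver has additional rules for hidden and closed indices):

```python
import fnmatch

def expand_targets(expression, names):
    """Simplified multi-target expansion: plain patterns add matching
    names, "-"-prefixed patterns remove them from the selection."""
    selected = set()
    for pattern in expression.split(","):
        if pattern.startswith("-"):
            selected -= set(fnmatch.filter(selected, pattern[1:]))
        else:
            selected |= set(fnmatch.filter(names, pattern))
    return sorted(selected)

indices = ["my-index", "logs-my_app-default", ".security-7", ".kibana_1"]
print(expand_targets("*,-.*", indices))  # ['logs-my_app-default', 'my-index']
```

Note how the .security-7 and .kibana_1 names are selected by * but then excluded by -.* , which is exactly why the docs recommend this pattern over a bare wildcard.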
response = client.indices.delete(
  index: 'my-index'
)
puts response

response = client.indices.delete_data_stream(
  name: 'logs-my_app-default'
)
puts response

# Delete an index
DELETE my-index

# Delete a data stream
DELETE _data_stream/logs-my_app-default
In the restore request, explicitly specify any indices and data streams to restore.
response = client.snapshot.restore( repository: 'my_repository', snapshot: 'my_snapshot_2099.05.06', body: { indices: 'my-index,logs-my_app-default' } ) puts response
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore { "indices": "my-index,logs-my_app-default" }
Rename on restore

If you want to avoid deleting existing data, you can instead rename the indices and data streams you restore. You typically use this method to compare existing data to historical data from a snapshot. For example, you can use this method to review documents after an accidental update or deletion.
Before you start, ensure the cluster has enough capacity for both the existing and restored data.
The following restore snapshot API request prepends restored- to the name of any restored index or data stream.
response = client.snapshot.restore( repository: 'my_repository', snapshot: 'my_snapshot_2099.05.06', body: { indices: 'my-index,logs-my_app-default', rename_pattern: '(.+)', rename_replacement: 'restored-$1' } ) puts response
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore { "indices": "my-index,logs-my_app-default", "rename_pattern": "(.+)", "rename_replacement": "restored-$1" }
If the rename options produce two or more indices or data streams with the same name, the restore operation fails.
If you rename a data stream, its backing indices are also renamed. For example, if you rename the logs-my_app-default data stream to restored-logs-my_app-default, the backing index .ds-logs-my_app-default-2099.03.09-000005 is renamed to .ds-restored-logs-my_app-default-2099.03.09-000005.
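The rename behaves like a regular-expression substitution over each restored name. A Python sketch: Elasticsearch uses Java-style $1 group references in rename_replacement, so the sketch translates them to Python’s \1 syntax before substituting (plain index names only; backing indices of a renamed data stream are handled by Elasticsearch itself, as described above):

```python
import re

def rename_on_restore(name, rename_pattern="(.+)",
                      rename_replacement="restored-$1"):
    """Apply a restore request's rename_pattern/rename_replacement
    to an index or data stream name."""
    # Translate Java-style $1 backreferences to Python's \1 syntax.
    replacement = re.sub(r"\$(\d+)", r"\\\1", rename_replacement)
    return re.sub(rename_pattern, replacement, name)

print(rename_on_restore("my-index"))             # restored-my-index
print(rename_on_restore("logs-my_app-default"))  # restored-logs-my_app-default
```

The default pattern (.+) captures the whole name, so the replacement simply prefixes it, matching the request shown earlier.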
When the restore operation is complete, you can compare the original and restored data. If you no longer need an original index or data stream, you can delete it and use a reindex to rename the restored one.
response = client.indices.delete(
  index: 'my-index'
)
puts response

response = client.reindex(
  body: {
    source: { index: 'restored-my-index' },
    dest: { index: 'my-index' }
  }
)
puts response

response = client.indices.delete_data_stream(
  name: 'logs-my_app-default'
)
puts response

response = client.reindex(
  body: {
    source: { index: 'restored-logs-my_app-default' },
    dest: { index: 'logs-my_app-default', op_type: 'create' }
  }
)
puts response

# Delete the original index
DELETE my-index

# Reindex the restored index to rename it
POST _reindex
{
  "source": { "index": "restored-my-index" },
  "dest": { "index": "my-index" }
}

# Delete the original data stream
DELETE _data_stream/logs-my_app-default

# Reindex the restored data stream to rename it
POST _reindex
{
  "source": { "index": "restored-logs-my_app-default" },
  "dest": { "index": "logs-my_app-default", "op_type": "create" }
}
Restore a feature state

You can restore a feature state to recover system indices, system data streams, and other configuration data for a feature from a snapshot.
If you restore a snapshot’s cluster state, the operation restores all feature states in the snapshot by default. Similarly, if you don’t restore a snapshot’s cluster state, the operation doesn’t restore any feature states by default. You can also choose to restore only specific feature states from a snapshot, regardless of the cluster state.
To view a snapshot’s feature states, use the get snapshot API.
response = client.snapshot.get( repository: 'my_repository', snapshot: 'my_snapshot_2099.05.06' ) puts response
GET _snapshot/my_repository/my_snapshot_2099.05.06
The response’s feature_states property contains a list of features in the snapshot as well as each feature’s indices.
To restore a specific feature state from the snapshot, specify the feature_name from the response in the restore snapshot API’s feature_states parameter.
When you restore a feature state, Elasticsearch closes and overwrites the feature’s existing indices.
Restoring the security feature state overwrites system indices used for authentication. If you use Elasticsearch Service, ensure you have access to the Elasticsearch Service Console before restoring the security feature state. If you run Elasticsearch on your own hardware, create a superuser in the file realm to ensure you’ll still be able to access your cluster.
response = client.snapshot.restore( repository: 'my_repository', snapshot: 'my_snapshot_2099.05.06', body: { feature_states: [ 'geoip' ], include_global_state: false, indices: '-*' } ) puts response
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore { "feature_states": [ "geoip" ], "include_global_state": false, "indices": "-*" }
In this request, "include_global_state": false excludes the cluster state from the restore operation, and "indices": "-*" excludes the other indices and data streams in the snapshot.
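The selection can be scripted: read the feature_states property from the get snapshot response, then build the restore body shown above. A Python sketch; the snapshot_info payload and its index names are illustrative:

```python
snapshot_info = {  # illustrative entry from a get snapshot API response
    "snapshot": "my_snapshot_2099.05.06",
    "feature_states": [
        {"feature_name": "geoip", "indices": [".geoip_databases"]},
        {"feature_name": "security", "indices": [".security-7"]},
    ],
}

def restore_body_for(feature_name, info):
    """Build a restore request body for one feature state, excluding
    the cluster state and all regular indices (as in the example above)."""
    available = {f["feature_name"] for f in info["feature_states"]}
    if feature_name not in available:
        raise ValueError(f"snapshot has no {feature_name!r} feature state")
    return {
        "feature_states": [feature_name],
        "include_global_state": False,
        "indices": "-*",
    }

print(restore_body_for("geoip", snapshot_info))
```

Validating the feature name against the snapshot first gives a clear error instead of a failed restore request.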
Restore an entire cluster

In some cases, you need to restore an entire cluster from a snapshot, including the cluster state and all feature states. These cases should be rare, such as in the event of a catastrophic failure.
Restoring an entire cluster involves deleting important system indices, including those used for authentication. Consider whether you can restore specific indices or data streams instead.
If you’re restoring to a different cluster, see Restore to a different cluster before you start.
- If you backed up the cluster’s configuration files, you can restore them to each node. This step is optional and requires a full cluster restart.

  After you shut down a node, copy the backed-up configuration files over to the node’s $ES_PATH_CONF directory. Before restarting the node, ensure elasticsearch.yml contains the appropriate node roles, node name, and other node-specific settings. If you choose to perform this step, you must repeat this process on each node in the cluster.
- Temporarily stop indexing and turn off the following features:
  - GeoIP database downloader and ILM history store
response = client.cluster.put_settings( body: { persistent: { "ingest.geoip.downloader.enabled": false, "indices.lifecycle.history_index_enabled": false } } ) puts response
PUT _cluster/settings { "persistent": { "ingest.geoip.downloader.enabled": false, "indices.lifecycle.history_index_enabled": false } }
  - ILM
response = client.ilm.stop puts response
POST _ilm/stop
  - Machine Learning, Monitoring, and Watcher (the restart step later in this procedure shows the corresponding commands to turn these back on)
  - Universal Profiling
Check if Universal Profiling index template management is enabled:
GET /_cluster/settings?filter_path=**.xpack.profiling.templates.enabled&include_defaults=true
If the value is true, disable Universal Profiling index template management:

PUT _cluster/settings
{
  "persistent": {
    "xpack.profiling.templates.enabled": false
  }
}
- If you use Elasticsearch security features, log in to a node host, navigate to the Elasticsearch installation directory, and add a user with the superuser role to the file realm using the elasticsearch-users tool. For example, the following command creates a user named restore_user:

  ./bin/elasticsearch-users useradd restore_user -p my_password -r superuser

  Use this file realm user to authenticate requests until the restore operation is complete.
- Use the cluster update settings API to set action.destructive_requires_name to false. This lets you delete data streams and indices using wildcards.

  response = client.cluster.put_settings(
    body: {
      persistent: {
        "action.destructive_requires_name": false
      }
    }
  )
  puts response

  PUT _cluster/settings
  {
    "persistent": {
      "action.destructive_requires_name": false
    }
  }
- Delete all existing data streams on the cluster.
response = client.indices.delete_data_stream( name: '*', expand_wildcards: 'all' ) puts response
DELETE _data_stream/*?expand_wildcards=all
- Delete all existing indices on the cluster.
response = client.indices.delete( index: '*', expand_wildcards: 'all' ) puts response
DELETE *?expand_wildcards=all
- Restore the entire snapshot, including the cluster state. By default, restoring the cluster state also restores any feature states in the snapshot.
response = client.snapshot.restore( repository: 'my_repository', snapshot: 'my_snapshot_2099.05.06', body: { indices: '*', include_global_state: true } ) puts response
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore { "indices": "*", "include_global_state": true }
- When the restore operation is complete, resume indexing and restart any features you stopped:
  - GeoIP database downloader and ILM history store
response = client.cluster.put_settings( body: { persistent: { "ingest.geoip.downloader.enabled": true, "indices.lifecycle.history_index_enabled": true } } ) puts response
PUT _cluster/settings { "persistent": { "ingest.geoip.downloader.enabled": true, "indices.lifecycle.history_index_enabled": true } }
  - ILM
response = client.ilm.start puts response
POST _ilm/start
  - Machine Learning
response = client.ml.set_upgrade_mode( enabled: false ) puts response
POST _ml/set_upgrade_mode?enabled=false
  - Monitoring
response = client.cluster.put_settings( body: { persistent: { "xpack.monitoring.collection.enabled": true } } ) puts response
PUT _cluster/settings { "persistent": { "xpack.monitoring.collection.enabled": true } }
  - Watcher
response = client.watcher.start puts response
POST _watcher/_start
- If desired, reset the action.destructive_requires_name cluster setting.

  response = client.cluster.put_settings(
    body: {
      persistent: {
        "action.destructive_requires_name": nil
      }
    }
  )
  puts response

  PUT _cluster/settings
  {
    "persistent": {
      "action.destructive_requires_name": null
    }
  }
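The stop and restart steps above toggle the same persistent cluster settings. A Python convenience sketch (not an official tool) that derives both request bodies from one list, so the disable and re-enable requests can’t drift apart:

```python
# Cluster settings toggled off before a full-cluster restore and back
# on afterwards, taken from the steps in this procedure.
TOGGLED_SETTINGS = [
    "ingest.geoip.downloader.enabled",          # GeoIP database downloader
    "indices.lifecycle.history_index_enabled",  # ILM history store
    "xpack.monitoring.collection.enabled",      # Monitoring collection
]

def settings_body(enabled):
    """Build a PUT _cluster/settings body enabling or disabling them all."""
    return {"persistent": {name: enabled for name in TOGGLED_SETTINGS}}

print(settings_body(False))  # body for the "turn off" step
print(settings_body(True))   # body for the "turn back on" step
```

Features controlled by their own APIs (ILM, Watcher, machine learning upgrade mode) still need their dedicated start/stop calls shown above.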
Monitor a restore

The restore operation uses the shard recovery process to restore an index’s primary shards from a snapshot. While the restore operation recovers primary shards, the cluster will have a yellow health status.

After all primary shards are recovered, the replication process creates and distributes replicas across eligible data nodes. When replication is complete, the cluster health status typically becomes green.
Once you start a restore in Kibana, you’re navigated to the Restore Status page. You can use this page to track the current state for each shard in the snapshot.
You can also monitor snapshot recovery using Elasticsearch APIs. To monitor the cluster health status, use the cluster health API.
response = client.cluster.health
puts response
GET _cluster/health
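If you prefer a single request that blocks until recovery finishes, the cluster health API also accepts wait_for_status and timeout parameters; the timeout value below is illustrative:

```console
GET _cluster/health?wait_for_status=green&timeout=60s
```

If the cluster doesn’t reach the requested status before the timeout elapses, the response sets timed_out to true rather than failing.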
To get detailed information about ongoing shard recoveries, use the index recovery API.
response = client.indices.recovery(
  index: 'my-index'
)
puts response
GET my-index/_recovery
To view any unassigned shards, use the cat shards API.
response = client.cat.shards(
  v: true,
  h: 'index,shard,prirep,state,node,unassigned.reason',
  s: 'state'
)
puts response
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
Unassigned shards have a state of UNASSIGNED. The prirep value is p for primary shards and r for replicas. The unassigned.reason value describes why the shard remains unassigned.
To get a more in-depth explanation of an unassigned shard’s allocation status, use the cluster allocation explanation API.
response = client.cluster.allocation_explain(
  body: {
    index: 'my-index',
    shard: 0,
    primary: false,
    current_node: 'my-node'
  }
)
puts response

GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
Cancel a restore
You can delete an index or data stream to cancel its ongoing restore. This also deletes any existing data in the cluster for the index or data stream. Deleting an index or data stream doesn’t affect the snapshot or its data.
response = client.indices.delete(
  index: 'my-index'
)
puts response

response = client.indices.delete_data_stream(
  name: 'logs-my_app-default'
)
puts response

# Delete an index
DELETE my-index

# Delete a data stream
DELETE _data_stream/logs-my_app-default
Restore to a different cluster
Elasticsearch Service can help you restore snapshots from other deployments. See Work with snapshots.
Snapshots aren’t tied to a particular cluster or a cluster name. You can create a snapshot in one cluster and restore it in another compatible cluster. Any data stream or index you restore from a snapshot must also be compatible with the current cluster’s version. The topology of the clusters doesn’t need to match.
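Before restoring across clusters, you can check which Elasticsearch version a snapshot was created with by retrieving its metadata with the get snapshot API. The repository and snapshot names below are illustrative:

```console
GET _snapshot/my_repository/my_snapshot_2099.05.06
```

The response includes a version field for the snapshot, which you can compare against the target cluster’s version.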
To restore a snapshot, its repository must be registered and available to the new cluster. If the original cluster still has write access to the repository, register the repository as read-only. This prevents multiple clusters from writing to the repository at the same time and corrupting the repository’s contents. It also prevents Elasticsearch from caching the repository’s contents, which means that changes made by other clusters will become visible straight away.
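As a sketch of registering a read-only repository on the new cluster, a shared file system repository can be registered with readonly set to true; the repository name and location here are illustrative, and other repository types support the same readonly setting:

```console
PUT _snapshot/my_read_only_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location",
    "readonly": true
  }
}
```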
Before you start a restore operation, ensure the new cluster has enough capacity for any data streams or indices you want to restore. If the new cluster has a smaller capacity, you can:
- Add nodes or upgrade your hardware to increase capacity.
- Restore fewer indices and data streams.
- Reduce the number of replicas for restored indices.

  For example, the following restore snapshot API request uses the index_settings option to set index.number_of_replicas to 1.

  response = client.snapshot.restore(
    repository: 'my_repository',
    snapshot: 'my_snapshot_2099.05.06',
    body: {
      indices: 'my-index,logs-my_app-default',
      index_settings: {
        "index.number_of_replicas": 1
      }
    }
  )
  puts response

  POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
  {
    "indices": "my-index,logs-my_app-default",
    "index_settings": {
      "index.number_of_replicas": 1
    }
  }
If indices or backing indices in the original cluster were assigned to particular nodes using shard allocation filtering, the same rules will be enforced in the new cluster. If the new cluster does not contain nodes with appropriate attributes that a restored index can be allocated on, the index will not be successfully restored unless these index allocation settings are changed during the restore operation.
The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally restoring incompatible settings. If you need to restore a snapshot with incompatible persistent settings, try restoring it without the global cluster state.
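To restore a snapshot without the global cluster state, set include_global_state to false in the restore request. A minimal sketch, with illustrative repository, snapshot, and index names:

```console
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index",
  "include_global_state": false
}
```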
Troubleshoot restore errors
Here’s how to resolve common errors returned by restore requests.
Cannot restore index [<index>] because an open index with same name already exists in the cluster
You can’t restore an open index that already exists. To resolve this error, try one of the methods in Restore an index or data stream.
Cannot restore index [<index>] with [x] shards from a snapshot of index [<snapshot-index>] with [y] shards
You can only restore an existing index if it’s closed and the index in the snapshot has the same number of primary shards. This error indicates the index in the snapshot has a different number of primary shards.
To resolve this error, try one of the methods in Restore an index or data stream.