Index recovery API
Returns information about ongoing and completed shard recoveries.
GET /twitter/_recovery
Description
Use the index recovery API to get information about ongoing and completed shard recoveries.
Shard recovery is the process of syncing a replica shard from a primary shard. Upon completion, the replica shard is available for search.
Recovery automatically occurs during the following processes:
- Node startup or failure. This type of recovery is called a local store recovery.
- Primary shard replication.
- Relocation of a shard to a different node in the same cluster.
- Snapshot restoration.
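To make the request flow concrete, here is a minimal sketch that calls the API over HTTP with the Python requests library and prints the type and stage of each shard recovery. The node address and the use of the twitter index from the example above are assumptions, not part of the reference.
# Minimal sketch: print the recovery type and stage of every shard.
# Assumes an Elasticsearch node at localhost:9200 and that the "twitter"
# index from the example above exists; adjust both as needed.
import requests

resp = requests.get("http://localhost:9200/twitter/_recovery")
resp.raise_for_status()

for index_name, data in resp.json().items():
    for shard in data["shards"]:
        print(
            f"{index_name}[{shard['id']}] "
            f"type={shard['type']} stage={shard['stage']} primary={shard['primary']}"
        )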
Path parameters
<index>
(Optional, string) Comma-separated list or wildcard expression of index names used to limit the request. Use a value of _all to retrieve information for all indices in the cluster.
Query parameters
active_only
(Optional, boolean) If true, the response only includes ongoing shard recoveries. Defaults to false.
detailed
(Optional, boolean) If true, the response includes detailed information about shard recoveries. Defaults to false.
index
(Optional, string) Comma-separated list or wildcard expression of index names used to limit the request.
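As a rough usage sketch, not taken from the reference, active_only and detailed can be combined to watch only in-flight recoveries along with their per-file progress; the host and the presence of a details array for each file listing are assumptions.
# Sketch: show only ongoing recoveries, including per-file detail.
# active_only and detailed are the query parameters described above;
# the node address is assumed.
import requests

params = {"active_only": "true", "detailed": "true"}
resp = requests.get("http://localhost:9200/_recovery", params=params)
resp.raise_for_status()

for index_name, data in resp.json().items():
    for shard in data["shards"]:
        print(f"{index_name}[{shard['id']}] stage={shard['stage']}")
        for f in shard["index"]["files"].get("details", []):
            print(f"  {f['name']}: {f['recovered']}/{f['length']} bytes")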
Response body
id
(Integer) ID of the shard.
type
(String) Recovery type. Returned values include:
- STORE: The recovery is related to a node startup or failure. This type of recovery is called a local store recovery.
- SNAPSHOT: The recovery is related to a snapshot restoration.
- REPLICA: The recovery is related to a primary shard replication.
- RELOCATING: The recovery is related to the relocation of a shard to a different node in the same cluster.
stage
(String) Recovery stage. Returned values include:
- DONE: Complete.
- FINALIZE: Cleanup.
- INDEX: Reading index metadata and copying bytes from source to destination.
- INIT: Recovery has not started.
- START: Starting the recovery process; opening the index for use.
- TRANSLOG: Replaying transaction log.
primary
(Boolean) If true, the shard is a primary shard.
start_time
(String) Timestamp of recovery start.
stop_time
(String) Timestamp of recovery finish.
total_time_in_millis
(String) Total time to recover the shard, in milliseconds.
source
(Object) Recovery source. This can include:
- A repository description if recovery is from a snapshot
- A description of the source node
target
(Object) Destination node.
index
(Object) Statistics about physical index recovery.
translog
(Object) Statistics about translog recovery.
start
(Object) Statistics about time to open and start the index.
Examples
Get recovery information for several indices
GET index1,index2/_recovery?human
Get recovery information for all indices
GET /_recovery?human
The API returns the following response:
{ "index1" : { "shards" : [ { "id" : 0, "type" : "SNAPSHOT", "stage" : "INDEX", "primary" : true, "start_time" : "2014-02-24T12:15:59.716", "start_time_in_millis": 1393244159716, "stop_time" : "0s", "stop_time_in_millis" : 0, "total_time" : "2.9m", "total_time_in_millis" : 175576, "source" : { "repository" : "my_repository", "snapshot" : "my_snapshot", "index" : "index1", "version" : "{version}", "restoreUUID": "PDh1ZAOaRbiGIVtCvZOMww" }, "target" : { "id" : "ryqJ5lO5S4-lSFbGntkEkg", "host" : "my.fqdn", "transport_address" : "my.fqdn", "ip" : "10.0.1.7", "name" : "my_es_node" }, "index" : { "size" : { "total" : "75.4mb", "total_in_bytes" : 79063092, "reused" : "0b", "reused_in_bytes" : 0, "recovered" : "65.7mb", "recovered_in_bytes" : 68891939, "percent" : "87.1%" }, "files" : { "total" : 73, "reused" : 0, "recovered" : 69, "percent" : "94.5%" }, "total_time" : "0s", "total_time_in_millis" : 0, "source_throttle_time" : "0s", "source_throttle_time_in_millis" : 0, "target_throttle_time" : "0s", "target_throttle_time_in_millis" : 0 }, "translog" : { "recovered" : 0, "total" : 0, "percent" : "100.0%", "total_on_start" : 0, "total_time" : "0s", "total_time_in_millis" : 0, }, "verify_index" : { "check_index_time" : "0s", "check_index_time_in_millis" : 0, "total_time" : "0s", "total_time_in_millis" : 0 } } ] } }
This response includes information about a single index recovering a single shard. The source of the recovery is a snapshot repository and the target of the recovery is the my_es_node node. The response also includes the number and percentage of files and bytes recovered.
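As an informal illustration, not part of the reference, the snippet below pulls those progress figures out of a response shaped like the example above; the field names follow that example, while the host and index name are assumptions.
# Sketch: report byte and file progress for each recovering shard.
# Field names follow the example response above; the node address is assumed.
import requests

resp = requests.get("http://localhost:9200/index1/_recovery")
resp.raise_for_status()

for index_name, data in resp.json().items():
    for shard in data["shards"]:
        size = shard["index"]["size"]
        files = shard["index"]["files"]
        print(
            f"{index_name}[{shard['id']}]: "
            f"{size['recovered_in_bytes']}/{size['total_in_bytes']} bytes "
            f"({size['percent']}), "
            f"{files['recovered']}/{files['total']} files ({files['percent']})"
        )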
Get detailed recovery information
To get a list of physical files in recovery, set the detailed query parameter to true.
GET _recovery?human&detailed=true
The API returns the following response:
{ "index1" : { "shards" : [ { "id" : 0, "type" : "STORE", "stage" : "DONE", "primary" : true, "start_time" : "2014-02-24T12:38:06.349", "start_time_in_millis" : "1393245486349", "stop_time" : "2014-02-24T12:38:08.464", "stop_time_in_millis" : "1393245488464", "total_time" : "2.1s", "total_time_in_millis" : 2115, "source" : { "id" : "RGMdRc-yQWWKIBM4DGvwqQ", "host" : "my.fqdn", "transport_address" : "my.fqdn", "ip" : "10.0.1.7", "name" : "my_es_node" }, "target" : { "id" : "RGMdRc-yQWWKIBM4DGvwqQ", "host" : "my.fqdn", "transport_address" : "my.fqdn", "ip" : "10.0.1.7", "name" : "my_es_node" }, "index" : { "size" : { "total" : "24.7mb", "total_in_bytes" : 26001617, "reused" : "24.7mb", "reused_in_bytes" : 26001617, "recovered" : "0b", "recovered_in_bytes" : 0, "percent" : "100.0%" }, "files" : { "total" : 26, "reused" : 26, "recovered" : 0, "percent" : "100.0%", "details" : [ { "name" : "segments.gen", "length" : 20, "recovered" : 20 }, { "name" : "_0.cfs", "length" : 135306, "recovered" : 135306 }, { "name" : "segments_2", "length" : 251, "recovered" : 251 } ] }, "total_time" : "2ms", "total_time_in_millis" : 2, "source_throttle_time" : "0s", "source_throttle_time_in_millis" : 0, "target_throttle_time" : "0s", "target_throttle_time_in_millis" : 0 }, "translog" : { "recovered" : 71, "total" : 0, "percent" : "100.0%", "total_on_start" : 0, "total_time" : "2.0s", "total_time_in_millis" : 2025 }, "verify_index" : { "check_index_time" : 0, "check_index_time_in_millis" : 0, "total_time" : "88ms", "total_time_in_millis" : 88 } } ] } }
The response includes a listing of any physical files recovered and their sizes.
The response also includes timings in milliseconds of the various stages of recovery:
- Index retrieval
- Translog replay
- Index start time
This response indicates the recovery is done. All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
To only return information about ongoing recoveries, set the active_only query parameter to true.
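Building on that, here is a speculative monitoring loop, not taken from the Elastic documentation, that polls with active_only=true until no shard recoveries remain in flight; the host, timeout handling, and polling interval are arbitrary assumptions.
# Sketch: poll the recovery API until no ongoing recoveries remain.
# Host and polling interval are assumptions, not documented defaults.
import time

import requests

while True:
    resp = requests.get(
        "http://localhost:9200/_recovery", params={"active_only": "true"}
    )
    resp.raise_for_status()
    ongoing = {
        index: [shard["id"] for shard in data["shards"]]
        for index, data in resp.json().items()
        if data["shards"]
    }
    if not ongoing:
        print("No ongoing shard recoveries.")
        break
    print(f"Still recovering: {ongoing}")
    time.sleep(5)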