Translog
Changes to Lucene are only persisted to disk during a Lucene commit, which is a relatively heavy operation and so cannot be performed after every index or delete operation. Changes that happen after one commit and before another will be lost in the event of process exit or hardware failure.
To prevent this data loss, each shard has a transaction log or write ahead log associated with it. Any index or delete operation is written to the translog after being processed by the internal Lucene index.
In the event of a crash, recent transactions can be replayed from the transaction log when the shard recovers.
An Elasticsearch flush is the process of performing a Lucene commit and starting a new translog. Flushes are performed automatically in the background in order to make sure the transaction log doesn’t grow too large, which would make replaying its operations take a considerable amount of time during recovery. Flushing is also exposed through an API, though it rarely needs to be performed manually.
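If you do need to trigger a flush yourself, the flush API can be called per index. A minimal curl sketch, assuming a node listening on the default localhost:9200 and a hypothetical index named my_index:

# Flush a single index; POST /_flush with no index name flushes all indices.
$ curl -XPOST 'localhost:9200/my_index/_flush?pretty'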
Flush settings
The following dynamically updatable settings control how often the in-memory buffer is flushed to disk (an example of changing them at runtime follows the list):

- index.translog.flush_threshold_size
  Once the translog hits this size, a flush will happen. Defaults to 512mb.
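Because the setting is dynamic, it can be changed on a live index through the update index settings API. A minimal curl sketch, assuming localhost:9200 and a hypothetical index named my_index; the 1gb value is only illustrative:

# Raise the flush threshold so flushes happen less often (a larger translog
# means more operations to replay during recovery).
$ curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d'
{
  "index.translog.flush_threshold_size": "1gb"
}'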
Translog settings
The data in the transaction log is only persisted to disk when the translog is fsynced and committed. In the event of hardware failure, any data written since the previous translog commit will be lost.
By default, Elasticsearch fsyncs and commits the translog at the end of every index, delete, update, or bulk request (index.translog.durability set to request, the default), or every 5 seconds if index.translog.durability is set to async. In fact, Elasticsearch will only report success of an index, delete, update, or bulk request to the client after the translog has been successfully fsynced and committed on the primary and on every allocated replica.
The following dynamically updatable per-index settings control the behaviour of the transaction log; an example of updating them follows the list:
- index.translog.sync_interval
  How often the translog is fsynced to disk and committed, regardless of write operations. Defaults to 5s. Values less than 100ms are not allowed.
- index.translog.durability
  Whether or not to fsync and commit the translog after every index, delete, update, or bulk request. This setting accepts the following parameters:
  - request
    (default) fsync and commit after every request. In the event of hardware failure, all acknowledged writes will already have been committed to disk.
  - async
    fsync and commit in the background every sync_interval. In the event of hardware failure, all acknowledged writes since the last automatic commit will be discarded.
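As with the flush threshold, these settings can be changed on a live index through the update index settings API. A minimal curl sketch, assuming localhost:9200 and a hypothetical index named my_index; the values are only illustrative, and switching to async trades the per-request durability guarantee described above for fewer fsyncs:

# Fsync and commit in the background every 10 seconds instead of on every request.
$ curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d'
{
  "index.translog.durability": "async",
  "index.translog.sync_interval": "10s"
}'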
What to do if the translog becomes corrupted?
In some cases (a bad drive, user error) the translog can become corrupted. When Elasticsearch detects this corruption through mismatching checksums, it will fail the shard and refuse to allocate that copy of the data to the node, recovering from a replica if available.
If there is no copy of the data from which Elasticsearch can recover
successfully, a user may want to recover the data that is part of the shard at
the cost of losing the data that is currently contained in the translog. We
provide a command-line tool for this, elasticsearch-translog.
The elasticsearch-translog
tool should not be run while Elasticsearch is
running, and you will permanently lose the documents that were contained only in
the translog!
In order to run the elasticsearch-translog
tool, specify the truncate
subcommand as well as the directory for the corrupted translog with the -d
option:
$ bin/elasticsearch-translog truncate -d /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
Checking existing translog files
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!   WARNING: Elasticsearch MUST be stopped before running this tool   !
!                                                                     !
!   WARNING:    Documents inside of translog files will be lost       !
!                                                                     !
!   WARNING:  The following files will be DELETED!                    !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-41.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-6.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-37.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-24.ckp
--> data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-11.ckp

Continue and DELETE files? [y/N] y

Reading translog UUID information from Lucene commit from shard at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/index]
Translog Generation: 3
Translog UUID      : AxqC4rocTC6e0fwsljAh-Q

Removing existing translog files
Creating new empty checkpoint at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog.ckp]
Creating new empty translog at [data/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/translog-3.tlog]
Done.
You can also use the -h
option to get a list of all options and parameters
that the elasticsearch-translog
tool supports.
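For example:

$ bin/elasticsearch-translog -h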