Flush data streams or indices
Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed, it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch deletes generation files once they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more data streams or indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents, then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
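For illustration, flushing a single data stream or index might look like the following sketch; the index name my-index-000001 and the example host are placeholders:
curl \
  -X POST "http://api.example.com/my-index-000001/_flush"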
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
- force (boolean): If true, the request forces a flush even if there are no changes to commit to the index.
- wait_if_ongoing (boolean): If true, the flush operation blocks until execution when another flush operation is running. If false, Elasticsearch returns an error if you request a flush when another flush operation is already running.
curl \
  -X POST "http://api.example.com/_flush"
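The request above flushes all data streams and indices in the cluster. As a sketch of how the query parameters combine with a specific target (the index name and host are again placeholders):
curl \
  -X POST "http://api.example.com/my-index-000001/_flush?force=true&wait_if_ongoing=true"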
{
  "_shards": {
    "failed": 42.0,
    "successful": 42.0,
    "total": 42.0,
    "failures": [
      {
        "index": "string",
        "node": "string",
        "reason": {
          "type": "string",
          "reason": "string",
          "stack_trace": "string",
          "caused_by": {},
          "root_cause": [
            {}
          ],
          "suppressed": [
            {}
          ]
        },
        "shard": 42.0,
        "status": "string"
      }
    ],
    "skipped": 42.0
  }
}
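The body above shows every field that can appear in the response; the 42.0 and "string" values are placeholders for the field types. In practice, a successful flush of an index with one primary and one replica shard typically returns only the shard counts, for example (values are illustrative):
{
  "_shards": {
    "total": 2,
    "successful": 2,
    "failed": 0
  }
}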