API Reference

bulk
Bulk index or delete documents.
Perform multiple `index`, `create`, `delete`, and `update` actions in a single request. This reduces overhead and can greatly increase indexing speed.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

- To use the `create` action, you must have the `create_doc`, `create`, `index`, or `write` index privilege. Data streams support only the `create` action.
- To use the `index` action, you must have the `create`, `index`, or `write` index privilege.
- To use the `delete` action, you must have the `delete` or `write` index privilege.
- To use the `update` action, you must have the `index` or `write` index privilege.
- To automatically create a data stream or index with a bulk API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege.
- To make the result of a bulk operation visible to search using the `refresh` parameter, you must have the `maintenance` or `manage` index privilege.
Automatic data stream creation requires a matching index template with data stream enabled.
The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:

```
action_and_meta_data\n
optional_source\n
action_and_meta_data\n
optional_source\n
...
action_and_meta_data\n
optional_source\n
```
The `index` and `create` actions expect a source on the next line and have the same semantics as the `op_type` parameter in the standard index API: a `create` action fails if a document with the same ID already exists in the target, while an `index` action adds or replaces a document as necessary. Data streams support only the `create` action. To update or delete a document in a data stream, you must target the backing index containing the document.
An `update` action expects that the partial doc, upsert, and script and its options are specified on the next line. A `delete` action does not expect a source on the next line and has the same semantics as the standard delete API.

The final line of data must end with a newline character (`\n`). Each newline character may be preceded by a carriage return (`\r`).
When sending NDJSON data to the `_bulk` endpoint, use a `Content-Type` header of `application/json` or `application/x-ndjson`. Because this format uses literal newline characters (`\n`) as delimiters, make sure that the JSON actions and sources are not pretty printed.

If you provide a target in the request path, it is used for any actions that don’t explicitly specify an `_index` argument.
A note on the format: the idea here is to make processing as fast as possible. Because some of the actions are redirected to other shards on other nodes, only `action_meta_data` is parsed on the receiving node side. Client libraries using this protocol should strive to do something similar on the client side and reduce buffering as much as possible.
There is no "correct" number of actions to perform in a single bulk request. Experiment with different settings to find the optimal size for your particular workload. Note that Elasticsearch limits the maximum size of an HTTP request to 100mb by default, so clients must ensure that no request exceeds this size. It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch. For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.
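For example, here is a minimal sketch of a bulk call with the JavaScript client (the index name and document IDs are illustrative). Each action object is followed by its source object, mirroring the NDJSON pairs described above; the client serializes the pairs for you:

```js
// Illustrative bulk call: one pair of entries per action, as in the NDJSON format.
const response = await client.bulk({
  refresh: true,
  operations: [
    { index: { _index: 'my-index', _id: '1' } },
    { title: 'first doc' },
    { create: { _index: 'my-index', _id: '2' } },
    { title: 'second doc' },
    { update: { _index: 'my-index', _id: '1' } },
    { doc: { title: 'first doc, updated' } },
    { delete: { _index: 'my-index', _id: '3' } },
  ],
})

// A bulk request can partially succeed, so check per-item results.
if (response.errors) {
  const failed = response.items.filter((item) => Object.values(item)[0].error)
  console.log(failed)
}
```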
Client support for bulk requests
Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:
- Go: Check out `esutil.BulkIndexer`
- Perl: Check out `Search::Elasticsearch::Client::5_0::Bulk` and `Search::Elasticsearch::Client::5_0::Scroll`
- Python: Check out `elasticsearch.helpers.*`
- JavaScript: Check out `client.helpers.*`
- .NET: Check out `BulkAllObservable`
- PHP: Check out bulk indexing.
Submitting bulk requests with cURL
If you’re providing text file input to `curl`, you must use the `--data-binary` flag instead of plain `-d`. The latter doesn’t preserve newlines. For example:
```
$ cat requests
{ "index" : { "_index" : "test", "_id" : "1" } }
{ "field1" : "value1" }
$ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
{"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}
```
Optimistic concurrency control
Each `index` and `delete` action within a bulk API call may include the `if_seq_no` and `if_primary_term` parameters in their respective action and meta data lines. The `if_seq_no` and `if_primary_term` parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
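A minimal sketch of a conditional bulk action; the `if_seq_no` and `if_primary_term` values are hypothetical and would normally come from a previous read of the document:

```js
// Apply the index action only if the document has not been modified since
// it was read at seq_no 5 / primary_term 1 (illustrative values).
await client.bulk({
  operations: [
    { index: { _index: 'my-index', _id: '1', if_seq_no: 5, if_primary_term: 1 } },
    { title: 'only applied if the document is unchanged' },
  ],
})
```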
Versioning
Each bulk item can include the version value using the `version` field. It automatically follows the behavior of the index or delete operation based on the `_version` mapping. It also supports the `version_type`.
Routing
Each bulk item can include the routing value using the `routing` field. It automatically follows the behavior of the index or delete operation based on the `_routing` mapping. Data streams do not support custom routing unless they were created with the `allow_custom_routing` setting enabled in the template.
Wait for active shards
When making bulk calls, you can set the `wait_for_active_shards` parameter to require a minimum number of shard copies to be active before starting to process the bulk request.
Refresh
Control when the changes made by this request are visible to search. Only the shards that receive the bulk request will be affected by refresh. Imagine a `_bulk?refresh=wait_for` request with three documents in it that happen to be routed to different shards in an index with five shards. The request will only wait for those three shards to refresh. The other two shards that make up the index do not participate in the `_bulk` request at all.
client.bulk({ ... })
Arguments

- Request (object):
  - `index` (Optional, string): The name of the data stream, index, or index alias to perform bulk actions on.
  - `operations` (Optional, { index, create, update, delete } | { detect_noop, doc, doc_as_upsert, script, scripted_upsert, _source, upsert } | object[])
  - `include_source_on_error` (Optional, boolean): If `true`, the document source is included in the error message in case of parsing errors.
  - `list_executed_pipelines` (Optional, boolean): If `true`, the response will include the ingest pipelines that were run for each index or create.
  - `pipeline` (Optional, string): The pipeline identifier to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to `_none` turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.
  - `refresh` (Optional, Enum(true | false | "wait_for")): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search. If `wait_for`, wait for a refresh to make this operation visible to search. If `false`, do nothing with refreshes. Valid values: `true`, `false`, `wait_for`.
  - `routing` (Optional, string): A custom value that is used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): Indicates whether to return the `_source` field (`true` or `false`) or contains a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the `_source_includes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the `_source_excludes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `timeout` (Optional, string | -1 | 0): The period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. The default is `1m` (one minute), which guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas+1`). The default is `1`, which waits for each primary shard to be active.
  - `require_alias` (Optional, boolean): If `true`, the request’s actions must target an index alias.
  - `require_data_stream` (Optional, boolean): If `true`, the request’s actions must target a data stream (existing or to be created).
clear_scroll
Clear a scrolling search. Clear the search context and results for a scrolling search.
client.clearScroll({ ... })
Arguments

- Request (object):
  - `scroll_id` (Optional, string | string[]): A list of scroll IDs to clear. To clear all scroll IDs, use `_all`. IMPORTANT: Scroll IDs can be long. It is recommended to specify scroll IDs in the request body parameter.
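A usage sketch; the scroll ID shown is a truncated placeholder:

```js
// Clear one scroll context, or pass '_all' to clear every open context.
await client.clearScroll({ scroll_id: 'FGluY2x1ZGVfY29udGV4dF91dWlk...' })
```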
close_point_in_time
Close a point in time.
A point in time must be opened explicitly before being used in search requests. The `keep_alive` parameter tells Elasticsearch how long it should persist. A point in time is automatically closed when the `keep_alive` period has elapsed. However, keeping points in time has a cost; close them as soon as they are no longer required for search requests.
client.closePointInTime({ id })
Arguments

- Request (object):
  - `id` (string): The ID of the point-in-time.
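A sketch of the full lifecycle, assuming a hypothetical index name:

```js
// Open a point in time, use it for searches, then close it promptly.
const pit = await client.openPointInTime({ index: 'my-index', keep_alive: '1m' })
// ... run searches that pass { pit: { id: pit.id, keep_alive: '1m' } } ...
await client.closePointInTime({ id: pit.id })
```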
count
Count search results. Get the number of documents matching a query.
The query can be provided either by using a simple query string as a parameter, or by defining Query DSL within the request body. The query is optional. When no query is provided, the API uses `match_all` to count all the documents.
The count API supports multi-target syntax. You can run a single count API search across multiple data streams and indices.
The operation is broadcast across all shards. For each shard ID group, a replica is chosen and the search is run against it. This means that replicas increase the scalability of the count.
client.count({ ... })
Arguments

- Request (object):
  - `index` (Optional, string | string[]): A list of data streams, indices, and aliases to search. It supports wildcards (`*`). To search all data streams and indices, omit this parameter or use `*` or `_all`.
  - `query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search query using Query DSL. A request body query cannot be used with the `q` query string parameter.
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
  - `analyzer` (Optional, string): The analyzer to use for the query string. This parameter can be used only when the `q` query string parameter is specified.
  - `analyze_wildcard` (Optional, boolean): If `true`, wildcard and prefix queries are analyzed. This parameter can be used only when the `q` query string parameter is specified.
  - `default_operator` (Optional, Enum("and" | "or")): The default operator for query string query: `AND` or `OR`. This parameter can be used only when the `q` query string parameter is specified.
  - `df` (Optional, string): The field to use as a default when no field prefix is given in the query string. This parameter can be used only when the `q` query string parameter is specified.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as `open,hidden`.
  - `ignore_throttled` (Optional, boolean): If `true`, concrete, expanded, or aliased indices are ignored when frozen.
  - `ignore_unavailable` (Optional, boolean): If `false`, the request returns an error if it targets a missing or closed index.
  - `lenient` (Optional, boolean): If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the `q` query string parameter is specified.
  - `min_score` (Optional, number): The minimum `_score` value that documents must have to be included in the result.
  - `preference` (Optional, string): The node or shard the operation should be performed on. By default, it is random.
  - `routing` (Optional, string): A custom value used to route operations to a specific shard.
  - `terminate_after` (Optional, number): The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
  - `q` (Optional, string): The query in Lucene query string syntax. This parameter cannot be used with a request body.
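A usage sketch; the index and field names are illustrative:

```js
// Count documents matching a term query; omit `query` to count everything.
const result = await client.count({
  index: 'my-index',
  query: { term: { 'user.id': 'kimchy' } },
})
console.log(result.count)
```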
create
Create a new document in the index.
You can index a new JSON document with the `/<target>/_doc/` or `/<target>/_create/<_id>` APIs. Using `_create` guarantees that the document is indexed only if it does not already exist. It returns a 409 response when a document with the same ID already exists in the index. To update an existing document, you must use the `/<target>/_doc/` API.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:
- To add a document using the `PUT /<target>/_create/<_id>` or `POST /<target>/_create/<_id>` request formats, you must have the `create_doc`, `create`, `index`, or `write` index privilege.
- To automatically create a data stream or index with this API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege.
Automatic data stream creation requires a matching index template with data stream enabled.
Automatically create data streams and indices
If the request’s target doesn’t exist and matches an index template with a `data_stream` definition, the index operation automatically creates the data stream. If the target doesn’t exist and doesn’t match a data stream template, the operation automatically creates the index and applies any matching index templates.
Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.
If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.
Automatic index creation is controlled by the `action.auto_create_index` setting. If it is `true`, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns or set it to `false` to turn off automatic index creation entirely. Specify a list of patterns you want to allow, or prefix each pattern with `+` or `-` to indicate whether it should be allowed or blocked. When a list is specified, the default behaviour is to disallow.

The `action.auto_create_index` setting affects the automatic creation of indices only. It does not affect the creation of data streams.
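For example, a sketch of updating the setting with the JavaScript client; the patterns are illustrative:

```js
// Allow automatic creation only for indices matching "my-index-*"; block the rest.
await client.cluster.putSettings({
  persistent: { 'action.auto_create_index': '+my-index-*,-*' },
})
```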
Routing
By default, shard placement (routing) is controlled by using a hash of the document’s ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the `routing` parameter. When setting up explicit mapping, you can also use the `_routing` field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the `_routing` mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.

Data streams do not support custom routing unless they were created with the `allow_custom_routing` setting enabled in the template.
Distributed
The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.
Active shards
To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation.
If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs.
By default, write operations only wait for the primary shards to be active before proceeding (that is to say `wait_for_active_shards` is `1`). This default can be overridden in the index settings dynamically by setting `index.write.wait_for_active_shards`. To alter this behavior per operation, use the `wait_for_active_shards` request parameter.

Valid values are `all` or any positive integer up to the total number of configured copies per shard in the index (which is `number_of_replicas+1`). Specifying a negative value or a number greater than the number of shard copies will throw an error.
For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If `wait_for_active_shards` is set on the request to `3` (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set `wait_for_active_shards` to `all` (or to `4`, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.
It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts.
After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary.
The `_shards` section of the API response reveals the number of shard copies on which replication succeeded and failed.
client.create({ id, index })
Arguments

- Request (object):
  - `id` (string): A unique identifier for the document. To automatically generate a document ID, use the `POST /<target>/_doc/` request format.
  - `index` (string): The name of the data stream or index to target. If the target doesn’t exist and matches the name or wildcard (`*`) pattern of an index template with a `data_stream` definition, this request creates the data stream. If the target doesn’t exist and doesn’t match a data stream template, this request creates the index.
  - `document` (Optional, object): A document.
  - `if_primary_term` (Optional, number): Only perform the operation if the document has this primary term.
  - `if_seq_no` (Optional, number): Only perform the operation if the document has this sequence number.
  - `include_source_on_error` (Optional, boolean): If `true`, the document source is included in the error message in case of parsing errors.
  - `op_type` (Optional, Enum("index" | "create")): Set to `create` to only index the document if it does not already exist (put if absent). If a document with the specified `_id` already exists, the indexing operation will fail. The behavior is the same as using the `<index>/_create` endpoint. If a document ID is specified, this parameter defaults to `index`. Otherwise, it defaults to `create`. If the request targets a data stream, an `op_type` of `create` is required.
  - `pipeline` (Optional, string): The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to `_none` turns off the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.
  - `refresh` (Optional, Enum(true | false | "wait_for")): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search. If `wait_for`, it waits for a refresh to make this operation visible to search. If `false`, it does nothing with refreshes.
  - `require_alias` (Optional, boolean): If `true`, the destination must be an index alias.
  - `require_data_stream` (Optional, boolean): If `true`, the request’s actions must target a data stream (existing or to be created).
  - `routing` (Optional, string): A custom value that is used to route operations to a specific shard.
  - `timeout` (Optional, string | -1 | 0): The period the request waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards. Elasticsearch waits for at least the specified timeout period before failing. The actual wait time could be longer, particularly when multiple waits occur. This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs, for example because the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error.
  - `version` (Optional, number): The explicit version number for concurrency control. It must be a non-negative long number.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. You can set it to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas+1`). The default value of `1` means it waits for each primary shard to be active.
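A usage sketch; the index name and ID are illustrative:

```js
// Create the document only if ID "1" is not already taken; a 409 error
// is returned if a document with this ID exists.
await client.create({
  index: 'my-index',
  id: '1',
  document: { title: 'brand new doc' },
})
```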
delete
Delete a document.
Remove a JSON document from the specified index.
You cannot send deletion requests directly to a data stream. To delete a document in a data stream, you must target the backing index containing the document.
Optimistic concurrency control
Delete operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the `if_seq_no` and `if_primary_term` parameters. If a mismatch is detected, the operation will result in a `VersionConflictException` and a status code of `409`.
Versioning
Each document indexed is versioned.
When deleting a document, the version can be specified to make sure the relevant document you are trying to delete is actually being deleted and it has not changed in the meantime.
Every write operation run on a document, deletes included, causes its version to be incremented.
The version number of a deleted document remains available for a short time after deletion to allow for control of concurrent operations.
The length of time for which a deleted document’s version remains available is determined by the `index.gc_deletes` index setting.
Routing
If routing is used during indexing, the routing value also needs to be specified to delete a document.
If the `_routing` mapping is set to `required` and no routing value is specified, the delete API throws a `RoutingMissingException` and rejects the request.
For example:
DELETE /my-index-000001/_doc/1?routing=shard-1
This request deletes the document with ID 1, but it is routed based on the user. The document is not deleted if the correct routing is not specified.
Distributed
The delete operation gets hashed into a specific shard ID. It then gets redirected into the primary shard within that ID group and replicated (if needed) to shard replicas within that ID group.
client.delete({ id, index })
Arguments

- Request (object):
  - `id` (string): A unique identifier for the document.
  - `index` (string): The name of the target index.
  - `if_primary_term` (Optional, number): Only perform the operation if the document has this primary term.
  - `if_seq_no` (Optional, number): Only perform the operation if the document has this sequence number.
  - `refresh` (Optional, Enum(true | false | "wait_for")): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search. If `wait_for`, it waits for a refresh to make this operation visible to search. If `false`, it does nothing with refreshes.
  - `routing` (Optional, string): A custom value used to route operations to a specific shard.
  - `timeout` (Optional, string | -1 | 0): The period to wait for active shards. This parameter is useful for situations where the primary shard assigned to perform the delete operation might not be available when the delete operation runs. Some reasons for this might be that the primary shard is currently recovering from a store or undergoing relocation. By default, the delete operation will wait on the primary shard to become available for up to 1 minute before failing and responding with an error.
  - `version` (Optional, number): An explicit version number for concurrency control. It must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The minimum number of shard copies that must be active before proceeding with the operation. You can set it to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas+1`). The default value of `1` means it waits for each primary shard to be active.
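A usage sketch with optimistic concurrency control; the `if_seq_no` and `if_primary_term` values are illustrative:

```js
// Delete document "1" only if it has not changed since it was read.
await client.delete({
  index: 'my-index',
  id: '1',
  if_seq_no: 5,
  if_primary_term: 1,
})
```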
delete_by_query
Delete documents.
Deletes documents that match the specified query.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:
- `read`
- `delete` or `write`
You can specify the query criteria in the request URI or the request body using the same syntax as the search API. When you submit a delete by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and deletes matching documents using internal versioning. If a document changes between the time that the snapshot is taken and the delete operation is processed, it results in a version conflict and the delete operation fails.
Documents with a version equal to 0 cannot be deleted using delete by query because internal versioning does not support 0 as a valid version number.
While processing a delete by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents to delete. A bulk delete request is performed for each batch of matching documents. If a search or bulk request is rejected, the requests are retried up to 10 times, with exponential back off. If the maximum retry limit is reached, processing halts and all failed requests are returned in the response. Any delete requests that completed successfully still stick, they are not rolled back.
You can opt to count version conflicts instead of halting and returning by setting `conflicts` to `proceed`. Note that if you opt to count version conflicts, the operation could attempt to delete more documents from the source than `max_docs` until it has successfully deleted `max_docs` documents, or it has gone through every document in the source query.
Throttling delete requests
To control the rate at which delete by query issues batches of delete operations, you can set `requests_per_second` to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set `requests_per_second` to `-1` to disable throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the `requests_per_second` and the time spent writing. By default the batch size is `1000`, so if `requests_per_second` is set to `500`:
```
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
```
Since the batch is issued as a single `_bulk` request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
Slicing
Delete by query supports sliced scroll to parallelize the delete process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.
Setting `slices` to `auto` lets Elasticsearch choose the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.
Adding slices to the delete by query operation creates sub-requests, which means it has some quirks:

- You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
- Fetching the status of the task for the request with slices only contains the status of completed slices.
- These sub-requests are individually addressable for things like cancellation and rethrottling.
- Rethrottling the request with `slices` will rethrottle the unfinished sub-requests proportionally.
- Canceling the request with `slices` will cancel each sub-request.
- Due to the nature of `slices`, each sub-request won’t get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
- Parameters like `requests_per_second` and `max_docs` on a request with `slices` are distributed proportionally to each sub-request. Combine that with the earlier point about distribution being uneven and you should conclude that using `max_docs` with `slices` might not result in exactly `max_docs` documents being deleted.
- Each sub-request gets a slightly different snapshot of the source data stream or index, though these are all taken at approximately the same time.
If you’re slicing manually or otherwise tuning automatic slicing, keep in mind that:
- Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many `slices` hurts performance. Setting `slices` higher than the number of shards generally does not improve efficiency and adds overhead.
- Delete performance scales linearly across available resources with the number of slices.
Whether query or delete performance dominates the runtime depends on the documents being reindexed and cluster resources.
Cancel a delete by query operation
Any delete by query can be canceled using the task cancel API. For example:
POST _tasks/r1A2WoRbTwKZ516z6NEs5A:36619/_cancel
The task ID can be found by using the get tasks API.
Cancellation should happen quickly but might take a few seconds. The get task status API will continue to list the delete by query task until this task checks that it has been cancelled and terminates itself.
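A sketch of the same cancellation with the JavaScript client; the task ID is the same placeholder as above:

```js
// Cancel a running delete-by-query task; find task IDs with client.tasks.list().
await client.tasks.cancel({ task_id: 'r1A2WoRbTwKZ516z6NEs5A:36619' })
```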
client.deleteByQuery({ index })
Arguments

- Request (object):
  - `index` (string | string[]): A list of data streams, indices, and aliases to search. It supports wildcards (`*`). To search all data streams or indices, omit this parameter or use `*` or `_all`.
  - `max_docs` (Optional, number): The maximum number of documents to delete.
  - `query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The documents to delete specified with Query DSL.
  - `slice` (Optional, { field, id, max }): Slice the request manually using the provided slice ID and total number of slices.
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
  - `analyzer` (Optional, string): Analyzer to use for the query string. This parameter can be used only when the `q` query string parameter is specified.
  - `analyze_wildcard` (Optional, boolean): If `true`, wildcard and prefix queries are analyzed. This parameter can be used only when the `q` query string parameter is specified.
  - `conflicts` (Optional, Enum("abort" | "proceed")): What to do if delete by query hits version conflicts: `abort` or `proceed`.
  - `default_operator` (Optional, Enum("and" | "or")): The default operator for query string query: `AND` or `OR`. This parameter can be used only when the `q` query string parameter is specified.
  - `df` (Optional, string): The field to use as default where no field prefix is given in the query string. This parameter can be used only when the `q` query string parameter is specified.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as `open,hidden`.
  - `from` (Optional, number): Skips the specified number of documents.
  - `ignore_unavailable` (Optional, boolean): If `false`, the request returns an error if it targets a missing or closed index.
  - `lenient` (Optional, boolean): If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the `q` query string parameter is specified.
  - `preference` (Optional, string): The node or shard the operation should be performed on. It is random by default.
  - `refresh` (Optional, boolean): If `true`, Elasticsearch refreshes all shards involved in the delete by query after the request completes. This is different than the delete API’s `refresh` parameter, which causes just the shard that received the delete request to be refreshed. Unlike the delete API, it does not support `wait_for`.
  - `request_cache` (Optional, boolean): If `true`, the request cache is used for this request. Defaults to the index-level setting.
  - `requests_per_second` (Optional, float): The throttle for this request in sub-requests per second.
  - `routing` (Optional, string): A custom value used to route operations to a specific shard.
  - `q` (Optional, string): A query in the Lucene query string syntax.
  - `scroll` (Optional, string | -1 | 0): The period to retain the search context for scrolling.
  - `scroll_size` (Optional, number): The size of the scroll request that powers the operation.
  - `search_timeout` (Optional, string | -1 | 0): The explicit timeout for each search request. It defaults to no timeout.
  - `search_type` (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation. Available options include `query_then_fetch` and `dfs_query_then_fetch`.
  - `slices` (Optional, number | Enum("auto")): The number of slices this task should be divided into.
  - `sort` (Optional, string[]): A list of `<field>:<direction>` pairs.
  - `stats` (Optional, string[]): The specific `tag` of the request for logging and statistical purposes.
  - `terminate_after` (Optional, number): The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
  - `timeout` (Optional, string | -1 | 0): The period each deletion request waits for active shards.
  - `version` (Optional, boolean): If `true`, returns the document version as part of a hit.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas+1`). The `timeout` value controls how long each write request waits for unavailable shards to become available.
  - `wait_for_completion` (Optional, boolean): If `true`, the request blocks until the operation is complete. If `false`, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at `.tasks/task/${taskId}`. When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
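A usage sketch; the index, field, and throttle values are illustrative:

```js
// Start an asynchronous delete-by-query that tolerates version conflicts.
const response = await client.deleteByQuery({
  index: 'my-index',
  conflicts: 'proceed',
  requests_per_second: 100,
  wait_for_completion: false,
  query: { match: { 'user.id': 'kimchy' } },
})
console.log(response.task) // task ID, usable with the tasks API
```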
delete_by_query_rethrottle
Throttle a delete by query operation.
Change the number of requests per second for a particular delete by query operation. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
client.deleteByQueryRethrottle({ task_id })
Arguments

- Request (object):
  - `task_id` (string | number): The ID for the task.
  - `requests_per_second` (Optional, float): The throttle for this request in sub-requests per second. To disable throttling, set it to `-1`.
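A usage sketch; the task ID is a placeholder:

```js
// Slow an in-flight delete-by-query down to 10 sub-requests per second.
await client.deleteByQueryRethrottle({
  task_id: 'r1A2WoRbTwKZ516z6NEs5A:36619',
  requests_per_second: 10,
})
```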
delete_script
Delete a script or search template. Deletes a stored script or search template.
client.deleteScript({ id })
Arguments

- Request (object):
  - `id` (string): The identifier for the stored script or search template.
  - `master_timeout` (Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to `-1` to indicate that the request should never timeout.
  - `timeout` (Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to `-1` to indicate that the request should never timeout.
exists
Check a document.
Verify that a document exists.
For example, check to see if a document with the `_id` 0 exists:
HEAD my-index-000001/_doc/0
If the document exists, the API returns a status code of `200 - OK`. If the document doesn’t exist, the API returns `404 - Not Found`.
Versioning support
You can use the `version` parameter to check the document only if its current version is equal to the specified one.
Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn’t disappear immediately, although you won’t be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
client.exists({ id, index })
Arguments

- Request (object):
  - `id` (string): A unique document identifier.
  - `index` (string): A list of data streams, indices, and aliases. It supports wildcards (`*`).
  - `preference` (Optional, string): The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas. If it is set to `_local`, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, the request refreshes the relevant shards before retrieving the document. Setting it to `true` should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
  - `routing` (Optional, string): A custom value used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): Indicates whether to return the `_source` field (`true` or `false`) or lists the fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the `_source_includes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the `_source_excludes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `stored_fields` (Optional, string | string[]): A list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the `_source` parameter defaults to `false`.
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
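A usage sketch; the index name and ID are illustrative:

```js
// Returns true if the document is found, false otherwise.
const exists = await client.exists({ index: 'my-index-000001', id: '0' })
```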
exists_source
Check for a document source.
Check whether a document source exists in an index. For example:
HEAD my-index-000001/_source/1
A document’s source is not available if it is disabled in the mapping.
client.existsSource({ id, index })
Arguments

- Request (object):
  - `id` (string): A unique identifier for the document.
  - `index` (string): A list of data streams, indices, and aliases. It supports wildcards (`*`).
  - `preference` (Optional, string): The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, the request refreshes the relevant shards before retrieving the document. Setting it to `true` should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
  - `routing` (Optional, string): A custom value used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): Indicates whether to return the `_source` field (`true` or `false`) or lists the fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude in the response.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response.
  - `version` (Optional, number): The version number for concurrency control. It must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
explain
Explain a document match result. Get information about why a specific document matches, or doesn’t match, a query. It computes a score explanation for a query and a specific document.
client.explain({ id, index })
Arguments

- Request (object):
  - `id` (string): The document identifier.
  - `index` (string): Index names that are used to limit the request. Only a single index name can be provided to this parameter.
  - `query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL.
  - `analyzer` (Optional, string): The analyzer to use for the query string. This parameter can be used only when the `q` query string parameter is specified.
  - `analyze_wildcard` (Optional, boolean): If `true`, wildcard and prefix queries are analyzed. This parameter can be used only when the `q` query string parameter is specified.
  - `default_operator` (Optional, Enum("and" | "or")): The default operator for query string query: `AND` or `OR`. This parameter can be used only when the `q` query string parameter is specified.
  - `df` (Optional, string): The field to use as default where no field prefix is given in the query string. This parameter can be used only when the `q` query string parameter is specified.
  - `lenient` (Optional, boolean): If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the `q` query string parameter is specified.
  - `preference` (Optional, string): The node or shard the operation should be performed on. It is random by default.
  - `routing` (Optional, string): A custom value used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the `_source_includes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the `_source_excludes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `stored_fields` (Optional, string | string[]): A list of stored fields to return in the response.
  - `q` (Optional, string): The query in the Lucene query string syntax.
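A usage sketch; the index, ID, and query are illustrative:

```js
// Ask why document "1" does or does not match a match query.
const response = await client.explain({
  index: 'my-index-000001',
  id: '1',
  query: { match: { message: 'elasticsearch' } },
})
console.log(response.matched, response.explanation)
```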
field_caps
Get the field capabilities.
Get information about the capabilities of fields among multiple indices.
For data streams, the API returns field capabilities among the stream’s backing indices. It returns runtime fields like any other field. For example, a runtime field with a type of keyword is returned the same as any other field that belongs to the `keyword` family.
client.fieldCaps({ ... })
Arguments

- Request (object):
  - `index` (Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. Supports wildcards (`*`). To target all data streams and indices, omit this parameter or use `*` or `_all`.
  - `fields` (Optional, string | string[]): A list of fields to retrieve capabilities for. Wildcard (`*`) expressions are supported.
  - `index_filter` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Filter indices if the provided query rewrites to `match_none` on every shard. IMPORTANT: The filtering is done on a best-effort basis; it uses index statistics and mappings to rewrite queries to `match_none` instead of fully running the request. For instance, a range query over a date field can rewrite to `match_none` if all documents within a shard (including deleted documents) are outside of the provided range. However, not all queries can rewrite to `match_none`, so this API may return an index even if the provided filter matches no document.
  - `runtime_mappings` (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Define ad-hoc runtime fields in the request similar to the way it is done in search requests. These fields exist only as part of the query and take precedence over fields defined with the same name in the index mappings.
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`.
  - `ignore_unavailable` (Optional, boolean): If `true`, missing or closed indices are not included in the response.
  - `include_unmapped` (Optional, boolean): If `true`, unmapped fields are included in the response.
  - `filters` (Optional, string): A list of filters to apply to the response.
  - `types` (Optional, string[]): A list of field types to include. Any fields that do not match one of these types will be excluded from the results. It defaults to empty, meaning that all field types are returned.
  - `include_empty_fields` (Optional, boolean): If `false`, empty fields are not included in the response.
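A usage sketch; the index pattern and field names are illustrative:

```js
// Fetch type capabilities for two fields across all matching indices.
const caps = await client.fieldCaps({
  index: 'my-index-*',
  fields: ['rating', 'title'],
})
console.log(caps.fields)
```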
get
Get a document by its ID.
Get a document and its source or stored fields from an index.
Get a document and its source or stored fields from an index.
By default, this API is realtime and is not affected by the refresh rate of the index (when data will become visible for search). In the case where stored fields are requested with the `stored_fields` parameter and the document has been updated but is not yet refreshed, the API will have to parse and analyze the source to extract the stored fields. To turn off realtime behavior, set the `realtime` parameter to `false`.
Source filtering
By default, the API returns the contents of the `_source` field unless you have used the `stored_fields` parameter or the `_source` field is turned off. You can turn off `_source` retrieval by using the `_source` parameter:
GET my-index-000001/_doc/0?_source=false
If you only need one or two fields from the `_source`, use the `_source_includes` or `_source_excludes` parameters to include or filter out particular fields. This can be helpful with large documents where partial retrieval can save on network overhead. Both parameters take a comma separated list of fields or wildcard expressions. For example:
GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities
If you only want to specify includes, you can use a shorter notation:
GET my-index-000001/_doc/0?_source=*.id
Routing
If routing is used during indexing, the routing value also needs to be specified to retrieve a document. For example:
GET my-index-000001/_doc/2?routing=user1
This request gets the document with ID 2, but it is routed based on the user. The document is not fetched if the correct routing is not specified.
Distributed
The GET operation is hashed into a specific shard ID. It is then redirected to one of the replicas within that shard ID and returns the result. The replicas are the primary shard and its replicas within that shard ID group. This means that the more replicas you have, the better your GET scaling will be.
Versioning support
You can use the `version` parameter to retrieve the document only if its current version is equal to the specified one.
Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn’t disappear immediately, although you won’t be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
client.get({ id, index })
Arguments

- Request (object):
  - `id` (string): A unique document identifier.
  - `index` (string): The name of the index that contains the document.
  - `force_synthetic_source` (Optional, boolean): Indicates whether the request forces synthetic `_source`. Use this parameter to test if the mapping supports synthetic `_source` and to get a sense of the worst case performance. Fetches with this parameter enabled will be slower than enabling synthetic source natively in the index.
  - `preference` (Optional, string): The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas. If it is set to `_local`, the operation will prefer to be run on a local allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, the request refreshes the relevant shards before retrieving the document. Setting it to `true` should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
  - `routing` (Optional, string): A custom value used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): Indicates whether to return the `_source` field (`true` or `false`) or lists the fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the `_source_includes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the `_source_excludes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `stored_fields` (Optional, string | string[]): A list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the `_source` parameter defaults to `false`. Only leaf fields can be retrieved with the `stored_fields` option. Object fields can’t be returned; if specified, the request fails.
  - `version` (Optional, number): The version number for concurrency control. It must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
-
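For example, a minimal sketch of the routed, source-filtered lookups shown above using the JavaScript client (the index, ID, routing value, and field patterns are illustrative):

const response = await client.get({
  index: 'my-index-000001',
  id: '2',
  routing: 'user1',             // required if the document was indexed with routing
  _source_includes: '*.id',     // keep only *.id fields from _source
  _source_excludes: 'entities'  // and drop the entities field
})
console.log(response._source)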
get_script
Get a script or search template. Retrieves a stored script or search template.
client.getScript({ id })
Arguments
- Request (object):
  - id (string): The identifier for the stored script or search template.
  - master_timeout (Optional, string | -1 | 0): The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.
get_script_context
Get script contexts.
Get a list of supported script contexts and their methods.
client.getScriptContext()
get_script_languages
Get script languages.
Get a list of available script types, languages, and contexts.
client.getScriptLanguages()
get_source
Get a document’s source.
Get the source of a document. For example:
GET my-index-000001/_source/1
You can use the source filtering parameters to control which parts of the _source are returned:
GET my-index-000001/_source/1/?_source_includes=*.id&_source_excludes=entities
client.getSource({ id, index })
Arguments
- Request (object):
  - id (string): A unique document identifier.
  - index (string): The name of the index that contains the document.
  - preference (Optional, string): The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.
  - realtime (Optional, boolean): If true, the request is real-time as opposed to near-real-time.
  - refresh (Optional, boolean): If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
  - routing (Optional, string): A custom value used to route operations to a specific shard.
  - _source (Optional, boolean | string | string[]): Indicates whether to return the _source field (true or false) or lists the fields to return.
  - _source_excludes (Optional, string | string[]): A list of source fields to exclude from the response.
  - _source_includes (Optional, string | string[]): A list of source fields to include in the response.
  - stored_fields (Optional, string | string[]): A list of stored fields to return as part of a hit.
  - version (Optional, number): The version number for concurrency control. It must match the current version of the document for the request to succeed.
  - version_type (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
health_report
Get the cluster health. Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.
Each indicator has a health status of: green, unknown, yellow or red. The indicator will provide an explanation and metadata describing the reason for its current health status.
The cluster’s status is controlled by the worst indicator status.
In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.
Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.
The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
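For example, a minimal polling sketch with the JavaScript client, with the expensive analysis disabled as recommended above (the logging is illustrative):

const report = await client.healthReport({ verbose: false })
console.log(report.status)  // "green" | "yellow" | "red" | "unknown"
for (const [name, indicator] of Object.entries(report.indicators)) {
  console.log(name, indicator.status)
}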
client.healthReport({ ... })
Arguments
- Request (object):
  - feature (Optional, string | string[]): A feature of the cluster, as returned by the top-level health report API.
  - timeout (Optional, string | -1 | 0): Explicit operation timeout.
  - verbose (Optional, boolean): Opt in to more information about the health of the system.
  - size (Optional, number): Limit the number of affected resources the health report API returns.
index
Create or update a document in an index.
Add a JSON document to the specified data stream or index and make it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.
You cannot use this API to send update requests for existing documents in a data stream.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:
- To add or overwrite a document using the PUT /<target>/_doc/<_id> request format, you must have the create, index, or write index privilege.
- To add a document using the POST /<target>/_doc/ request format, you must have the create_doc, create, index, or write index privilege.
- To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.
Automatic data stream creation requires a matching index template with data stream enabled.
Replica shards might not all be started when an indexing operation returns successfully.
By default, only the primary is required. Set wait_for_active_shards to change this default behavior.
Automatically create data streams and indices
If the request’s target doesn’t exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.
If the target doesn’t exist and doesn’t match a data stream template, the operation automatically creates the index and applies any matching index templates.
Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.
If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.
Automatic index creation is controlled by the action.auto_create_index setting.
If it is true, any index can be created automatically.
You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns, or set it to false to turn off automatic index creation entirely.
Specify a list of patterns you want to allow, or prefix each pattern with + or - to indicate whether it should be allowed or blocked.
When a list is specified, the default behaviour is to disallow.
The action.auto_create_index setting affects the automatic creation of indices only.
It does not affect the creation of data streams.
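For example, a hedged sketch of restricting this setting with the cluster settings API through the JavaScript client (the patterns are illustrative; anything not matched is then disallowed):

await client.cluster.putSettings({
  persistent: {
    // Allow auto-creation for logs-*, block restricted-*
    'action.auto_create_index': '+logs-*,-restricted-*'
  }
})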
Optimistic concurrency control
Index operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters.
If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
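For example, a minimal sketch of a conditional update with the JavaScript client (the index, ID, and document body are illustrative):

// Read the document, then reindex it only if it hasn't changed in between.
const doc = await client.get({ index: 'my-index-000001', id: '1' })
await client.index({
  index: 'my-index-000001',
  id: '1',
  if_seq_no: doc._seq_no,              // from the previous read
  if_primary_term: doc._primary_term,  // from the previous read
  document: { user: { id: 'elkbee' } }
})
// Throws a 409 VersionConflictException error if another write happened first.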
Routing
By default, shard placement — or routing — is controlled by using a hash of the document’s ID value.
For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.
When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself.
This does come at the (very minimal) cost of an additional document parsing pass.
If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.
Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
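For example, a sketch of per-operation routing with the JavaScript client, matching the routed GET example shown earlier (the routing value is illustrative):

await client.index({
  index: 'my-index-000001',
  id: '2',
  routing: 'user1',  // shard placement is hashed from this value, not the ID
  document: { user: { id: 'user1' } }
})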
Distributed
The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.
Active shards
To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation.
If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs.
By default, write operations only wait for the primary shards to be active before proceeding (that is to say, wait_for_active_shards is 1).
This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards.
To alter this behavior per operation, use the wait_for_active_shards request parameter.
Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1).
Specifying a negative value or a number greater than the number of shard copies will throw an error.
For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes).
If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding.
This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data.
If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding.
This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard.
However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index.
The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.
It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts.
After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary.
The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
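For example, a sketch of the per-request override with the JavaScript client (the index and document are illustrative):

await client.index({
  index: 'my-index-000001',
  id: '1',
  wait_for_active_shards: 3,  // require the primary plus two replicas
  document: { user: { id: 'elkbee' } }
})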
No operation (noop) updates
When updating a document by using this API, a new version of the document is always created even if the document hasn’t changed.
If this isn’t acceptable, use the _update API with detect_noop set to true.
The detect_noop option isn’t available on this API because it doesn’t fetch the old source and isn’t able to compare it against the new source.
There isn’t a definitive rule for when noop updates aren’t acceptable. It’s a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates.
Versioning
Each indexed document is given a version number.
By default, internal versioning is used that starts at 1 and increments with each update, deletes included.
Optionally, the version number can be set to an external value (for example, if maintained in a database).
To enable this functionality, version_type should be set to external.
The value provided must be a numeric, long value greater than or equal to 0, and less than around 9.2e+18.
Versioning is completely real time, and is not affected by the near real time aspects of search operations. If no version is provided, the operation runs without any version checks.
When using the external version type, the system checks to see if the version number passed to the index request is greater than the version of the currently stored document. If true, the document will be indexed and the new version number used. If the value provided is less than or equal to the stored document’s version number, a version conflict will occur and the index operation will fail. For example:
PUT my-index-000001/_doc/1?version=2&version_type=external
{
  "user": {
    "id": "elkbee"
  }
}
In this example, the operation will succeed since the supplied version of 2 is higher than the current document version of 1. If the document was already updated and its version was set to 2 or higher, the indexing command will fail and result in a conflict (409 HTTP status code).
A nice side effect is that there is no need to maintain strict ordering of async indexing operations run as a result of changes to a source database, as long as version numbers from the source database are used. Even the simple case of updating the Elasticsearch index using data from a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order.
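The same external-versioning request through the JavaScript client looks like this (a sketch; index, ID, and version are from the console example above):

await client.index({
  index: 'my-index-000001',
  id: '1',
  version: 2,
  version_type: 'external',  // succeeds only if 2 > the stored version
  document: { user: { id: 'elkbee' } }
})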
client.index({ index })
Arguments
- Request (object):
  - index (string): The name of the data stream or index to target. If the target doesn’t exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn’t exist and doesn’t match a data stream template, this request creates the index. You can check for existing targets with the resolve index API.
  - id (Optional, string): A unique identifier for the document. To automatically generate a document ID, use the POST /<target>/_doc/ request format and omit this parameter.
  - document (Optional, object): A document.
  - if_primary_term (Optional, number): Only perform the operation if the document has this primary term.
  - if_seq_no (Optional, number): Only perform the operation if the document has this sequence number.
  - include_source_on_error (Optional, boolean): If true, the document source is included in the error message in case of parsing errors.
  - op_type (Optional, Enum("index" | "create")): Set to create to only index the document if it does not already exist (put if absent). If a document with the specified _id already exists, the indexing operation will fail. The behavior is the same as using the <index>/_create endpoint. If a document ID is specified, this parameter defaults to index. Otherwise, it defaults to create. If the request targets a data stream, an op_type of create is required.
  - pipeline (Optional, string): The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured, it will always run regardless of the value of this parameter.
  - refresh (Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.
  - routing (Optional, string): A custom value that is used to route operations to a specific shard.
  - timeout (Optional, string | -1 | 0): The period the request waits for the following operations: automatic index creation, dynamic mapping updates, waiting for active shards. This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs. Some reasons for this might be that the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. The actual wait time could be longer, particularly when multiple waits occur.
  - version (Optional, number): An explicit version number for concurrency control. It must be a non-negative long number.
  - version_type (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.
  - require_alias (Optional, boolean): If true, the destination must be an index alias.
info
Get cluster info. Get basic build, version, and cluster information.
client.info()
knn_search
Run a knn search.
The kNN search API has been replaced by the knn option in the search API.
Perform a k-nearest neighbor (kNN) search on a dense_vector field and return the matching documents. Given a query vector, the API finds the k closest vectors and returns those documents as search hits.
Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. This means the results returned are not always the true k closest neighbors.
The kNN search API supports restricting the search using a filter. The search will return the top k documents that also match the filter query.
A kNN search response has the exact same structure as a search API response. However, certain sections have a meaning specific to kNN search:
- The document _score is determined by the similarity between the query and document vector.
- The hits.total object contains the total number of nearest neighbor candidates considered, which is num_candidates * num_shards. The hits.total.relation will always be eq, indicating an exact value.
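For example, a minimal sketch with the JavaScript client (assumes a hypothetical index my-index with a dense_vector field named image_vector whose dimension matches the query vector):

const result = await client.knnSearch({
  index: 'my-index',
  knn: {
    field: 'image_vector',
    query_vector: [0.12, -0.53, 0.91],
    k: 5,               // number of neighbors to return
    num_candidates: 50  // candidates considered per shard
  }
})
console.log(result.hits.hits)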
client.knnSearch({ index, knn })
Arguments
- Request (object):
  - index (string | string[]): A list of index names to search; use _all or empty string to perform the operation on all indices.
  - knn ({ field, query_vector, k, num_candidates }): The kNN query to run.
  - _source (Optional, boolean | { excludes, includes }): Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response.
  - docvalue_fields (Optional, { field, format, include_unmapped }[]): The request returns doc values for field names matching these patterns in the hits.fields property of the response. It accepts wildcard (*) patterns.
  - stored_fields (Optional, string | string[]): A list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.
  - fields (Optional, string | string[]): The request returns values for field names matching these patterns in the hits.fields property of the response. It accepts wildcard (*) patterns.
  - filter (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type } | { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }[]): A query to filter the documents that can match. The kNN search will return the top k documents that also match this filter. The value can be a single query or a list of queries. If filter isn’t provided, all documents are allowed to match.
  - routing (Optional, string): A list of specific routing values.
mget
Get multiple documents.
Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.
Filter source fields
By default, the _source field is returned for every document (if stored).
Use the _source and _source_include or _source_exclude attributes to filter what fields are returned for a particular document.
You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.
Get stored fields
Use the stored_fields attribute to specify the set of stored fields you want to retrieve.
Any requested fields that are not stored are ignored.
You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.
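For example, a sketch of a multi-document fetch with the JavaScript client, including per-document source filtering (the index, IDs, and field list are illustrative):

const { docs } = await client.mget({
  index: 'my-index-000001',
  docs: [
    { _id: '1' },                        // full _source
    { _id: '2', _source: ['user.id'] }   // only user.id from _source
  ]
})
for (const doc of docs) console.log(doc._id, doc._source)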
client.mget({ ... })
Arguments
- Request (object):
  - index (Optional, string): Name of the index to retrieve documents from when ids are specified, or when a document in the docs array does not specify an index.
  - docs (Optional, { _id, _index, routing, _source, stored_fields, version, version_type }[]): The documents you want to retrieve. Required if no index is specified in the request URI.
  - ids (Optional, string | string[]): The IDs of the documents you want to retrieve. Allowed when the index is specified in the request URI.
  - force_synthetic_source (Optional, boolean): Should this request force synthetic _source? Use this to test if the mapping supports synthetic _source and to get a sense of the worst case performance. Fetches with this enabled will be slower than enabling synthetic source natively in the index.
  - preference (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - realtime (Optional, boolean): If true, the request is real-time as opposed to near-real-time.
  - refresh (Optional, boolean): If true, the request refreshes relevant shards before retrieving documents.
  - routing (Optional, string): Custom value used to route operations to a specific shard.
  - _source (Optional, boolean | string | string[]): True or false to return the _source field or not, or a list of fields to return.
  - _source_excludes (Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter.
  - _source_includes (Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.
  - stored_fields (Optional, string | string[]): If true, retrieves the document fields stored in the index rather than the document _source.
msearch
Run multiple searches.
The format of the request is similar to the bulk API format and makes use of the newline delimited JSON (NDJSON) format. The structure is as follows:
header\n body\n header\n body\n
This structure is specifically optimized to reduce parsing if a specific search ends up redirected to another node.
The final line of data must end with a newline character \n.
Each newline character may be preceded by a carriage return \r.
When sending requests to this endpoint the Content-Type header should be set to application/x-ndjson.
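For example, a sketch of two searches in one request with the JavaScript client, which builds the NDJSON body from alternating header and body objects (the indices and queries are illustrative):

const { responses } = await client.msearch({
  searches: [
    { index: 'my-index' },                          // header for search 1
    { query: { match: { message: 'hello' } } },     // body for search 1
    { index: 'my-other-index' },                    // header for search 2
    { query: { match_all: {} } }                    // body for search 2
  ]
})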
client.msearch({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and index aliases to search.
  - searches (Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, index, preference, request_cache, routing, search_type, ccs_minimize_roundtrips, allow_partial_search_results, ignore_throttled } | { aggregations, collapse, query, explain, ext, stored_fields, docvalue_fields, knn, from, highlight, indices_boost, min_score, post_filter, profile, rescore, script_fields, search_after, size, sort, _source, fields, terminate_after, stats, timeout, track_scores, track_total_hits, version, runtime_mappings, seq_no_primary_term, pit, suggest }[])
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - ccs_minimize_roundtrips (Optional, boolean): If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.
  - ignore_throttled (Optional, boolean): If true, concrete, expanded or aliased indices are ignored when frozen.
  - ignore_unavailable (Optional, boolean): If true, missing or closed indices are not included in the response.
  - include_named_queries_score (Optional, boolean): Indicates whether hit.matched_queries should be rendered as a map that includes the name of the matched query associated with its score (true) or as an array containing the name of the matched queries (false). This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead.
  - max_concurrent_searches (Optional, number): Maximum number of concurrent searches the multi search API can execute.
  - max_concurrent_shard_requests (Optional, number): Maximum number of concurrent shard requests that each sub-search request executes per node.
  - pre_filter_shard_size (Optional, number): Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if, for instance, a shard cannot match any documents based on its rewrite method (that is, if date filters are mandatory to match but the shard bounds and the query are disjoint).
  - rest_total_hits_as_int (Optional, boolean): If true, hits.total are returned as an integer in the response. Defaults to false, which returns an object.
  - routing (Optional, string): Custom routing value used to route search operations to a specific shard.
  - search_type (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): Indicates whether global term and document frequencies should be used when scoring returned documents.
  - typed_keys (Optional, boolean): Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.
msearch_template
Run multiple templated searches.
Run multiple templated searches with a single request.
If you are providing a text file or text input to curl, use the --data-binary flag instead of -d to preserve newlines.
For example:
$ cat requests
{ "index": "my-index" }
{ "id": "my-search-template", "params": { "query_string": "hello world", "from": 0, "size": 10 }}
{ "index": "my-other-index" }
{ "id": "my-other-search-template", "params": { "query_type": "match_all" }}

$ curl -H "Content-Type: application/x-ndjson" -XGET localhost:9200/_msearch/template --data-binary "@requests"; echo
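A sketch of the same two templated searches through the JavaScript client (the template IDs and params mirror the curl example above):

const { responses } = await client.msearchTemplate({
  search_templates: [
    { index: 'my-index' },
    { id: 'my-search-template', params: { query_string: 'hello world', from: 0, size: 10 } },
    { index: 'my-other-index' },
    { id: 'my-other-search-template', params: { query_type: 'match_all' } }
  ]
})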
client.msearchTemplate({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): A list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams and indices, omit this parameter or use *.
  - search_templates (Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, index, preference, request_cache, routing, search_type, ccs_minimize_roundtrips, allow_partial_search_results, ignore_throttled } | { aggregations, collapse, query, explain, ext, stored_fields, docvalue_fields, knn, from, highlight, indices_boost, min_score, post_filter, profile, rescore, script_fields, search_after, size, sort, _source, fields, terminate_after, stats, timeout, track_scores, track_total_hits, version, runtime_mappings, seq_no_primary_term, pit, suggest }[])
  - ccs_minimize_roundtrips (Optional, boolean): If true, network round-trips are minimized for cross-cluster search requests.
  - max_concurrent_searches (Optional, number): The maximum number of concurrent searches the API can run.
  - search_type (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation.
  - rest_total_hits_as_int (Optional, boolean): If true, the response returns hits.total as an integer. If false, it returns hits.total as an object.
  - typed_keys (Optional, boolean): If true, the response prefixes aggregation and suggester names with their respective types.
mtermvectors
Get multiple term vectors.
Get multiple term vectors with a single request.
You can specify existing documents by index and ID or provide artificial documents in the body of the request.
You can specify the index in the request body or request URI.
The response contains a docs array with all the fetched termvectors.
Each element has the structure provided by the termvectors API.
Artificial documents
You can also use mtermvectors to generate term vectors for artificial documents provided in the body of the request.
The mapping used is determined by the specified _index.
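For example, a minimal sketch of the simplified ids syntax with the JavaScript client (the index and IDs are illustrative):

const { docs } = await client.mtermvectors({
  index: 'my-index-000001',
  ids: ['1', '2'],         // both documents live in the same index
  term_statistics: true    // include term and document frequencies
})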
client.mtermvectors({ ... })
Arguments
- Request (object):
  - index (Optional, string): The name of the index that contains the documents.
  - docs (Optional, { _id, _index, routing, _source, stored_fields, version, version_type }[]): An array of existing or artificial documents.
  - ids (Optional, string[]): A simplified syntax to specify documents by their ID if they’re in the same index.
  - fields (Optional, string | string[]): A list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.
  - field_statistics (Optional, boolean): If true, the response includes the document count, sum of document frequencies, and sum of total term frequencies.
  - offsets (Optional, boolean): If true, the response includes term offsets.
  - payloads (Optional, boolean): If true, the response includes term payloads.
  - positions (Optional, boolean): If true, the response includes term positions.
  - preference (Optional, string): The node or shard the operation should be performed on. It is random by default.
  - realtime (Optional, boolean): If true, the request is real-time as opposed to near-real-time.
  - routing (Optional, string): A custom value used to route operations to a specific shard.
  - term_statistics (Optional, boolean): If true, the response includes term frequency and document frequency.
  - version (Optional, number): If true, returns the document version as part of a hit.
  - version_type (Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type.
open_point_in_time
Open a point in time.
A search request by default runs against the most recent visible data of the target indices, which is called point in time. An Elasticsearch PIT (point in time) is a lightweight view into the state of the data as it existed when initiated. In some cases, it’s preferred to perform multiple search requests using the same point in time. For example, if refreshes happen between search_after requests, then the results of those requests might not be consistent as changes happening between searches are only visible to the more recent point in time.
A point in time must be opened explicitly before being used in search requests.
A subsequent search request with the pit parameter must not specify index, routing, or preference values as these parameters are copied from the point in time.
Just like regular searches, you can use from and size to page through point in time search results, up to the first 10,000 hits.
If you want to retrieve more hits, use PIT with search_after.
The open point in time request and each subsequent search request can return different identifiers; always use the most recently received ID for the next search request.
When a PIT that contains shard failures is used in a search request, the missing shards are always reported in the search response as a NoShardAvailableActionException exception.
To get rid of these exceptions, a new PIT needs to be created so that shards missing from the previous PIT can be handled, assuming they become available in the meantime.
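For example, a sketch of the open, search, close cycle with the JavaScript client (the index, keep_alive, and query are illustrative):

const pit = await client.openPointInTime({
  index: 'my-index-000001',
  keep_alive: '1m'
})
const result = await client.search({
  // note: no index here; it is carried by the PIT
  size: 100,
  query: { match_all: {} },
  pit: { id: pit.id, keep_alive: '1m' },
  sort: [{ _shard_doc: 'asc' }]  // a tiebreaker sort for search_after paging
})
await client.closePointInTime({ id: pit.id })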
Keeping point in time alive
The keep_alive parameter, which is passed to an open point in time request and search request, extends the time to live of the corresponding point in time.
The value does not need to be long enough to process all data — it just needs to be long enough for the next request.
Normally, the background merge process optimizes the index by merging together smaller segments to create new, bigger segments. Once the smaller segments are no longer needed they are deleted. However, open point-in-times prevent the old segments from being deleted since they are still in use.
Keeping older segments alive means that more disk space and file handles are needed. Ensure that you have configured your nodes to have ample free file handles.
Additionally, if a segment contains deleted or updated documents then the point in time must keep track of whether each document in the segment was live at the time of the initial search request. Ensure that your nodes have sufficient heap space if you have many open point-in-times on an index that is subject to ongoing deletes or updates. Note that a point-in-time doesn’t prevent its associated indices from being deleted. You can check how many point-in-times (that is, search contexts) are open with the nodes stats API.
client.openPointInTime({ index, keep_alive })
Arguments
- Request (object):
  - index (string | string[]): A list of index names to open point in time; use _all or empty string to perform the operation on all indices.
  - keep_alive (string | -1 | 0): Extend the length of time that the point in time persists.
  - index_filter (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Filter indices if the provided query rewrites to match_none on every shard.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - preference (Optional, string): The node or shard the operation should be performed on. By default, it is random.
  - routing (Optional, string): A custom value that is used to route operations to a specific shard.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - allow_partial_search_results (Optional, boolean): Indicates whether the point in time tolerates unavailable shards or shard failures when initially creating the PIT. If false, creating a point in time request when a shard is missing or unavailable will throw an exception. If true, the point in time will contain all the shards that are available at the time of the request.
  - max_concurrent_shard_requests (Optional, number): Maximum number of concurrent shard requests that each sub-search request executes per node.
ping
Ping the cluster. Get information about whether the cluster is running.
client.ping()
put_script
Create or update a script or search template. Creates or updates a stored script or search template.
client.putScript({ id, script })
Arguments
- Request (object):
  - id (string): The identifier for the stored script or search template. It must be unique within the cluster.
  - script ({ lang, options, source }): The script or search template, its parameters, and its language.
  - context (Optional, string): The context in which the script or search template should run. To prevent errors, the API immediately compiles the script or template in this context.
  - master_timeout (Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.
  - timeout (Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.
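For example, a sketch that stores a Mustache search template and reads it back (the template ID and body are illustrative):

await client.putScript({
  id: 'my-search-template',
  script: {
    lang: 'mustache',
    // the template body is passed as a JSON string
    source: '{"query":{"match":{"message":"{{query_string}}"}}}'
  }
})
const stored = await client.getScript({ id: 'my-search-template' })
console.log(stored.script)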
rank_eval
Evaluate ranked search results.
Evaluate the quality of ranked search results over a set of typical search queries.
client.rankEval({ requests })
Arguments
- Request (object):
  - requests ({ id, request, ratings, template_id, params }[]): A set of typical search requests, together with their provided ratings.
  - index (Optional, string | string[]): A list of data streams, indices, and index aliases used to limit the request. Wildcard (*) expressions are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or *.
  - metric (Optional, { precision, recall, mean_reciprocal_rank, dcg, expected_reciprocal_rank }): Definition of the evaluation metric to calculate.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expressions to concrete indices that are open, closed, or both.
  - ignore_unavailable (Optional, boolean): If true, missing or closed indices are not included in the response.
  - search_type (Optional, string): The search operation type.
reindex
Reindex documents.
Copy documents from a source to a destination. You can copy all documents to the destination index or reindex a subset of the documents. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself.
Reindex requires _source to be enabled for all documents in the source.
The destination should be configured as wanted before calling the reindex API.
Reindex does not copy the settings from the source or its associated template.
Mappings, shard counts, and replicas, for example, must be configured ahead of time.
If the Elasticsearch security features are enabled, you must have the following security privileges:
- The read index privilege for the source data stream, index, or alias.
- The write index privilege for the destination data stream, index, or index alias.
- To automatically create a data stream or index with a reindex API request, you must have the auto_configure, create_index, or manage index privilege for the destination data stream, index, or alias.
- If reindexing from a remote cluster, the source.remote.user must have the monitor cluster privilege and the read index privilege for the source data stream, index, or alias.
If reindexing from a remote cluster, you must explicitly allow the remote host in the reindex.remote.whitelist setting.
Automatic data stream creation requires a matching index template with data stream enabled.
The dest element can be configured like the index API to control optimistic concurrency control.
Omitting version_type or setting it to internal causes Elasticsearch to blindly dump documents into the destination, overwriting any that happen to have the same ID.
Setting version_type to external causes Elasticsearch to preserve the version from the source, create any documents that are missing, and update any documents that have an older version in the destination than they do in the source.
Setting op_type to create causes the reindex API to create only missing documents in the destination.
All existing documents will cause a version conflict.
Because data streams are append-only, any reindex request to a destination data stream must have an op_type of create.
A reindex can only add new documents to a destination data stream.
It cannot update existing documents in a destination data stream.
By default, version conflicts abort the reindex process.
To continue reindexing if there are conflicts, set the conflicts request body property to proceed.
In this case, the response includes a count of the version conflicts that were encountered.
Note that the handling of other error types is unaffected by the conflicts property.
Additionally, if you opt to count version conflicts, the operation could attempt to reindex more documents from the source than max_docs until it has successfully indexed max_docs documents into the target or it has gone through every document in the source query.
The reindex API makes no effort to handle ID collisions. The last document written will "win" but the order isn’t usually predictable so it is not a good idea to rely on this behavior. Instead, make sure that IDs are unique by using a script.
Running reindex asynchronously
If the request contains wait_for_completion=false, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task.
Elasticsearch creates a record of this task as a document at _tasks/<task_id>.
Reindex from multiple sources
If you have many sources to reindex it is generally better to reindex them one at a time rather than using a glob pattern to pick up multiple sources. That way you can resume the process if there are any errors by removing the partially completed source and starting over. It also makes parallelizing the process fairly simple: split the list of sources to reindex and run each list in parallel.
For example, you can use a bash script like this:
for index in i1 i2 i3 i4 i5; do
  curl -HContent-Type:application/json -XPOST localhost:9200/_reindex?pretty -d'{
    "source": {
      "index": "'$index'"
    },
    "dest": {
      "index": "'$index'-reindexed"
    }
  }'
done
Throttling
Set requests_per_second to any positive decimal number (1.4, 6, 1000, for example) to throttle the rate at which reindex issues batches of index operations.
Requests are throttled by padding each batch with a wait time.
To turn off throttling, set requests_per_second to -1.
The throttling is done by waiting between batches so that the scroll that reindex uses internally can be given a timeout that takes into account the padding.
The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing.
By default the batch size is 1000, so if requests_per_second is set to 500:
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
Since the batch is issued as a single bulk request, large batch sizes cause Elasticsearch to create many requests and then wait for a while before starting the next set. This is "bursty" instead of "smooth".
Slicing
Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts.
Reindexing from remote clusters does not support manual or automatic slicing.
You can slice a reindex request manually by providing a slice ID and total number of slices to each request.
You can also let reindex automatically parallelize by using sliced scroll to slice on _id.
The slices parameter specifies the number of slices to use.
Adding slices to the reindex request just automates the manual process, creating sub-requests, which means it has some quirks:
- You can see these requests in the tasks API. These sub-requests are "child" tasks of the task for the request with slices.
- Fetching the status of the task for the request with slices only contains the status of completed slices.
- These sub-requests are individually addressable for things like cancellation and rethrottling.
- Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
- Canceling the request with slices will cancel each sub-request.
- Due to the nature of slices, each sub-request won’t get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
- Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the previous point about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being reindexed.
- Each sub-request gets a slightly different snapshot of the source, though these are all taken at approximately the same time.
If slicing automatically, setting slices to auto will choose a reasonable number for most indices.
If slicing manually or otherwise tuning automatic slicing, use the following guidelines.
Query performance is most efficient when the number of slices is equal to the number of shards in the index.
If that number is large (for example, 500), choose a lower number as too many slices will hurt performance.
Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
Indexing performance scales linearly across available resources with the number of slices.
Whether query or indexing performance dominates the runtime depends on the documents being reindexed and cluster resources.
Modify documents during reindexing
Like _update_by_query, reindex operations support a script that modifies the document.
Unlike _update_by_query, the script is allowed to modify the document’s metadata.
Just as in _update_by_query, you can set ctx.op to change the operation that is run on the destination.
For example, set ctx.op to noop if your script decides that the document doesn’t have to be indexed in the destination. This "no operation" will be reported in the noop counter in the response body.
Set ctx.op to delete if your script decides that the document must be deleted from the destination.
The deletion will be reported in the deleted counter in the response body.
Setting ctx.op to anything else will return an error, as will setting any other field in ctx.
Think of the possibilities! Just be careful; you are able to change:
- _id
- _index
- _version
- _routing
Setting _version to null or clearing it from the ctx map is just like not sending the version in an indexing request.
It will cause the document to be overwritten in the destination regardless of the version on the target or the version type you use in the reindex API.
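For example, a sketch of a metadata-modifying reindex with the JavaScript client (the index names, the hypothetical deleted flag, and the routing logic are illustrative):

await client.reindex({
  source: { index: 'my-index-000001' },
  dest: { index: 'my-new-index-000001' },
  script: {
    lang: 'painless',
    // Delete flagged documents from the destination, reroute the rest by user.
    source: "if (ctx._source.deleted == true) { ctx.op = 'delete' } else { ctx._routing = ctx._source.user.id }"
  }
})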
Reindex from remote
Reindex supports reindexing from a remote Elasticsearch cluster.
The host parameter must contain a scheme, host, port, and optional path.
The username and password parameters are optional and when they are present the reindex operation will connect to the remote Elasticsearch node using basic authentication.
Be sure to use HTTPS when using basic authentication or the password will be sent in plain text.
There are a range of settings available to configure the behavior of the HTTPS connection.
When using Elastic Cloud, it is also possible to authenticate against the remote cluster through the use of a valid API key.
Remote hosts must be explicitly allowed with the reindex.remote.whitelist setting.
It can be set to a comma-delimited list of allowed remote host and port combinations.
Scheme is ignored; only the host and port are used.
For example:
reindex.remote.whitelist: [otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*]
The list of allowed hosts must be configured on any nodes that will coordinate the reindex. This feature should work with remote clusters of any version of Elasticsearch. This should enable you to upgrade from any version of Elasticsearch to the current version by reindexing from a cluster of the old version.
Elasticsearch does not support forward compatibility across major versions. For example, you cannot reindex from a 7.x cluster into a 6.x cluster.
To enable queries sent to older versions of Elasticsearch, the query parameter is sent directly to the remote host without validation or modification.
Reindexing from remote clusters does not support manual or automatic slicing.
Reindexing from a remote server uses an on-heap buffer that defaults to a maximum size of 100mb.
If the remote index includes very large documents you’ll need to use a smaller batch size.
It is also possible to set the socket read timeout on the remote connection with the socket_timeout field and the connection timeout with the connect_timeout field.
Both default to 30 seconds.
Configuring SSL parameters
Reindex from remote supports configurable SSL settings.
These must be specified in the elasticsearch.yml file, with the exception of the secure settings, which you add in the Elasticsearch keystore.
It is not possible to configure SSL in the body of the reindex request.
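For example, a sketch of a remote reindex with the JavaScript client (the host, credentials, and index names are illustrative; the host must already appear in reindex.remote.whitelist on the coordinating nodes):

await client.reindex({
  source: {
    remote: {
      host: 'http://otherhost:9200',
      username: 'user',   // optional; triggers basic authentication
      password: 'pass'    // use HTTPS so this is not sent in plain text
    },
    index: 'my-remote-index',
    query: { match_all: {} }
  },
  dest: { index: 'my-local-index' }
})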
client.reindex({ dest, source })
Arguments
- Request (object):
  - dest ({ index, op_type, pipeline, routing, version_type }): The destination you are copying to.
  - source ({ index, query, remote, size, slice, sort, _source, runtime_mappings }): The source you are copying from.
  - conflicts (Optional, Enum("abort" | "proceed")): Indicates whether to continue reindexing even when there are conflicts.
  - max_docs (Optional, number): The maximum number of documents to reindex. By default, all documents are reindexed. If it is a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation. If conflicts is set to proceed, the reindex operation could attempt to reindex more documents from the source than max_docs until it has successfully indexed max_docs documents into the target or it has gone through every document in the source query.
  - script (Optional, { source, id, params, lang, options }): The script to run to update the document source or metadata when reindexing.
  - size (Optional, number)
  - refresh (Optional, boolean): If true, the request refreshes affected shards to make this operation visible to search.
  - requests_per_second (Optional, float): The throttle for this request in sub-requests per second. By default, there is no throttle.
  - scroll (Optional, string | -1 | 0): The period of time that a consistent view of the index should be maintained for scrolled search.
  - slices (Optional, number | Enum("auto")): The number of slices this task should be divided into. It defaults to one slice, which means the task isn’t sliced into subtasks. Reindex supports sliced scroll to parallelize the reindexing process. This parallelization can improve efficiency and provide a convenient way to break the request down into smaller parts. NOTE: Reindexing from remote clusters does not support manual or automatic slicing. If set to auto, Elasticsearch chooses the number of slices to use. This setting will use one slice per shard, up to a certain limit. If there are multiple sources, it will choose the number of slices based on the index or backing index with the smallest number of shards.
  - timeout (Optional, string | -1 | 0): The period each indexing waits for automatic index creation, dynamic mapping updates, and waiting for active shards. By default, Elasticsearch waits for at least one minute before failing. The actual wait time could be longer, particularly when multiple waits occur.
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value is one, which means it waits for each primary shard to be active.
  - wait_for_completion (Optional, boolean): If true, the request blocks until the operation is complete.
  - require_alias (Optional, boolean): If true, the destination must be an index alias.
reindex_rethrottle
Throttle a reindex operation.
Change the number of requests per second for a particular reindex operation. For example:
POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1
Rethrottling that speeds up the query takes effect immediately. Rethrottling that slows down the query will take effect after completing the current batch. This behavior prevents scroll timeouts.
client.reindexRethrottle({ task_id })
Arguments
- Request (object):
  - task_id (string): The task identifier, which can be found by using the tasks API.
  - requests_per_second (Optional, float): The throttle for this request in sub-requests per second. It can be either -1 to turn off throttling or any decimal number like 1.7 or 12 to throttle to that level.
render_search_template
Render a search template.
Render a search template as a search request body.
client.renderSearchTemplate({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): The ID of the search template to render. If no source is specified, this or the id request body parameter is required. -
file
(Optional, string) -
params
(Optional, Record<string, User-defined value>): Key-value pairs used to replace Mustache variables in the template. The key is the variable name. The value is the variable value. -
source
(Optional, string): An inline search template. It supports the same parameters as the search API’s request body. These parameters also support Mustache variables. If no id or <templated-id> is specified, this parameter is required.
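For example, a sketch that renders an inline template (the template and parameters are hypothetical):
const response = await client.renderSearchTemplate({
  source: '{ "query": { "match": { "message": "{{query_string}}" } } }',
  params: { query_string: 'hello world' }
})
// response.template_output contains the rendered search request body.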
-
scripts_painless_execute
editRun a script.
Runs a script and returns a result. Use this API to build and test scripts, such as when defining a script for a runtime field. This API requires very few dependencies and is especially useful if you don’t have permissions to write documents on a cluster.
The API uses several contexts, which control how scripts are run, what variables are available at runtime, and what the return type is.
Each context requires a script, but additional parameters depend on the context you’re using for that script.
client.scriptsPainlessExecute({ ... })
Arguments
edit-
Request (object):
-
context
(Optional, Enum("painless_test" | "filter" | "score" | "boolean_field" | "date_field" | "double_field" | "geo_point_field" | "ip_field" | "keyword_field" | "long_field" | "composite_field")): The context that the script should run in. NOTE: Result ordering in the field contexts is not guaranteed. -
context_setup
(Optional, { document, index, query }): Additional parameters for the context. NOTE: This parameter is required for all contexts except painless_test, which is the default if no value is provided for context. -
script
(Optional, { source, id, params, lang, options }): The Painless script to run.
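For example, a minimal sketch using the default painless_test context, which only requires a script:
const response = await client.scriptsPainlessExecute({
  script: {
    source: 'params.count / params.total',
    params: { count: 100.0, total: 1000.0 }
  }
})
// response.result is the string "0.1"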
-
scroll
editRun a scrolling search.
The scroll API is no longer recommended for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the search_after
parameter with a point in time (PIT).
The scroll API gets large sets of results from a single scrolling search request.
To get the necessary scroll ID, submit a search API request that includes an argument for the scroll
query parameter.
The scroll
parameter indicates how long Elasticsearch should retain the search context for the request.
The search response returns a scroll ID in the _scroll_id
response body parameter.
You can then use the scroll ID with the scroll API to retrieve the next batch of results for the request.
If the Elasticsearch security features are enabled, the access to the results of a specific scroll ID is restricted to the user or API key that submitted the search.
You can also use the scroll API to specify a new scroll parameter that extends or shortens the retention period for the search context.
Results from a scrolling search reflect the state of the index at the time of the initial search request. Subsequent indexing or document changes only affect later search and scroll requests.
client.scroll({ scroll_id })
Arguments
edit-
Request (object):
-
scroll_id
(string): The scroll ID of the search. -
scroll
(Optional, string | -1 | 0): The period to retain the search context for scrolling. -
rest_total_hits_as_int
(Optional, boolean): If true, the API response’s hits.total property is returned as an integer. If false, the API response’s hits.total property is returned as an object.
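For example, a sketch of the full scroll loop (the index name and keep-alive period are hypothetical):
// The initial search opens the scroll context via the scroll query parameter.
let response = await client.search({
  index: 'my-index',
  scroll: '1m',
  query: { match_all: {} }
})
while (response.hits.hits.length > 0) {
  // ... process response.hits.hits ...
  response = await client.scroll({
    scroll_id: response._scroll_id,
    scroll: '1m' // extends the retention period for the search context
  })
}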
-
search
editRun a search.
Get search hits that match the query defined in the request.
You can provide search queries using the q
query string parameter or the request body.
If both are specified, only the query parameter is used.
If the Elasticsearch security features are enabled, you must have the read index privilege for the target data stream, index, or alias. For cross-cluster search, refer to the documentation about configuring CCS privileges.
To search a point in time (PIT) for an alias, you must have the read
index privilege for the alias’s data streams or indices.
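For example, a minimal request-body search (the index and field names are hypothetical):
const response = await client.search({
  index: 'my-index',
  query: { match: { message: 'hello world' } },
  size: 10
})
// Matching documents are in response.hits.hits.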
Search slicing
When paging through a large number of documents, it can be helpful to split the search into multiple slices to consume them independently with the slice
and pit
properties.
By default the splitting is done first on the shards, then locally on each shard.
The local splitting partitions the shard into contiguous ranges based on Lucene document IDs.
For instance if the number of shards is equal to 2 and you request 4 slices, the slices 0 and 2 are assigned to the first shard and the slices 1 and 3 are assigned to the second shard.
The same point-in-time ID should be used for all slices. If different PIT IDs are used, slices can overlap and miss documents. This situation can occur because the splitting criterion is based on Lucene document IDs, which are not stable across changes to the index.
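As a sketch, two slices of the same PIT-backed search could be consumed in parallel (the index name and keep-alive period are hypothetical):
const pit = await client.openPointInTime({ index: 'my-index', keep_alive: '1m' })
const slices = await Promise.all([0, 1].map((id) =>
  client.search({
    pit: { id: pit.id, keep_alive: '1m' }, // the same PIT ID for every slice
    slice: { id, max: 2 }, // this slice's ID and the total number of slices
    query: { match_all: {} }
  })
))
await client.closePointInTime({ id: pit.id })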
client.search({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams and indices, omit this parameter or use * or _all. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): Defines the aggregations that are run as part of the search request. -
collapse
(Optional, { field, inner_hits, max_concurrent_group_searches, collapse }): Collapses search results by the values of the specified field. -
explain
(Optional, boolean): If true, the request returns detailed information about score computation as part of a hit. -
ext
(Optional, Record<string, User-defined value>): Configuration of search extensions defined by Elasticsearch plugins. -
from
(Optional, number): The starting document offset, which must be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter. -
highlight
(Optional, { encoder, fields }): Specifies the highlighter to use for retrieving highlighted snippets from one or more fields in your search results. -
track_total_hits
(Optional, boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. -
indices_boost
(Optional, Record<string, number>[]): Boost the _score of documents from specified indices. The boost value is the factor by which scores are multiplied. A boost value greater than 1.0 increases the score. A boost value between 0 and 1.0 decreases the score. -
docvalue_fields
(Optional, { field, format, include_unmapped }[]): An array of wildcard (*) field patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response. -
knn
(Optional, { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits, rescore_vector } | { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits, rescore_vector }[]): The approximate kNN search to run. -
rank
(Optional, { rrf }): The Reciprocal Rank Fusion (RRF) to use. -
min_score
(Optional, number): The minimum _score for matching documents. Documents with a lower _score are not included in the search results. -
post_filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Use the post_filter parameter to filter search results. The search hits are filtered after the aggregations are calculated. A post filter has no impact on the aggregation results. -
profile
(Optional, boolean): Set to true to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The search definition using the Query DSL. -
rescore
(Optional, { window_size, query, learning_to_rank } | { window_size, query, learning_to_rank }[]): Can be used to improve precision by reordering just the top documents (for example, the top 100 to 500) returned by the query and post_filter phases. -
retriever
(Optional, { standard, knn, rrf, text_similarity_reranker, rule }): A retriever is a specification to describe top documents returned from a search. A retriever replaces other elements of the search API that also return top documents, such as query and knn. -
script_fields
(Optional, Record<string, { script, ignore_failure }>): Retrieve a script evaluation (based on different fields) for each hit. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): Used to retrieve the next page of hits using a set of sort values from the previous page. -
size
(Optional, number): The number of hits to return, which must not be negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after property. -
slice
(Optional, { field, id, max }): Split a scrolled search into multiple slices that can be consumed independently. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): A list of <field>:<direction> pairs. -
_source
(Optional, boolean | { excludes, includes }): The source fields that are returned for matching documents. These fields are returned in the hits._source property of the search response. If the stored_fields property is specified, the _source property defaults to false. Otherwise, it defaults to true. -
fields
(Optional, { field, format, include_unmapped }[]): An array of wildcard (*) field patterns. The request returns values for field names matching these patterns in the hits.fields property of the response. -
suggest
(Optional, { text }): Defines a suggester that provides similar looking terms based on a provided text. -
terminate_after
(Optional, number): The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. IMPORTANT: Use with caution. Elasticsearch applies this property to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this property for requests that target data streams with backing indices across multiple data tiers. If set to 0 (default), the query does not terminate early. -
timeout
(Optional, string): The period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout. -
track_scores
(Optional, boolean): If true, calculate and return document scores, even if the scores are not used for sorting. -
version
(Optional, boolean): If true, the request returns the document version as part of a hit. -
seq_no_primary_term
(Optional, boolean): If true, the request returns the sequence number and primary term of the last modification of each hit. -
stored_fields
(Optional, string | string[]): A list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source property defaults to false. You can pass _source: true to return both source fields and stored fields in the search response. -
pit
(Optional, { id, keep_alive }): Limit the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): One or more runtime fields in the search request. These fields take precedence over mapped fields with the same name. -
stats
(Optional, string[]): The stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
allow_partial_search_results
(Optional, boolean): If true and there are shard request timeouts or shard failures, the request returns partial results. If false, it returns an error with no partial results. To override the default behavior, you can set the search.default_allow_partial_results cluster setting to false. -
analyzer
(Optional, string): The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified. -
analyze_wildcard
(Optional, boolean): If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified. -
batched_reduce_size
(Optional, number): The number of shard results that should be reduced at once on the coordinating node. If the potential number of shards in the request can be large, this value should be used as a protection mechanism to reduce the memory overhead per search request. -
ccs_minimize_roundtrips
(Optional, boolean): If true, network round-trips between the coordinating node and the remote clusters are minimized when running cross-cluster search (CCS) requests. -
default_operator
(Optional, Enum("and" | "or")): The default operator for the query string query:AND
orOR
. This parameter can be used only when theq
query string parameter is specified. -
df
(Optional, string): The field to use as a default when no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values such asopen,hidden
. -
ignore_throttled
(Optional, boolean): If true, concrete, expanded, or aliased indices will be ignored when frozen. -
ignore_unavailable
(Optional, boolean): If false, the request returns an error if it targets a missing or closed index. -
include_named_queries_score
(Optional, boolean): If true, the response includes the score contribution from any named queries. This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead. -
lenient
(Optional, boolean): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified. -
max_concurrent_shard_requests
(Optional, number): The number of concurrent shard requests per node that the search runs. Use this value to limit the impact of the search on the cluster by limiting the number of concurrent shard requests. -
min_compatible_shard_node
(Optional, string): The minimum version of the node that can handle the request. Any handling node with a lower version will fail the request. -
preference
(Optional, string): The nodes and shards used for the search. By default, Elasticsearch selects from eligible nodes and shards using adaptive replica selection, accounting for allocation awareness. Valid values are: * _only_local to run the search only on shards on the local node; * _local to run the search on shards on the local node if possible, or otherwise select shards using the default method; * _only_nodes:<node-id>,<node-id> to run the search only on the specified node IDs, where, if suitable shards exist on more than one selected node, shards on those nodes are used according to the default method, or, if none of the specified nodes are available, shards are selected from any available node using the default method; * _prefer_nodes:<node-id>,<node-id> to run the search on the specified node IDs if possible, or otherwise select shards using the default method; * _shards:<shard>,<shard> to run the search only on the specified shards; * <custom-string> (any string that does not start with _) to route searches with the same <custom-string> to the same shards in the same order. -
pre_filter_shard_size
(Optional, number): A threshold that enforces a pre-filter round-trip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter round-trip can limit the number of shards significantly if, for instance, a shard cannot match any documents based on its rewrite method (for example, if date filters are mandatory to match but the shard bounds and the query are disjoint). When unspecified, the pre-filter phase is executed if any of these conditions is met: * The request targets more than 128 shards. * The request targets one or more read-only indices. * The primary sort of the query targets an indexed field. -
request_cache
(Optional, boolean): If true, the caching of search results is enabled for requests where size is 0. It defaults to index-level settings. -
routing
(Optional, string): A custom value that is used to route operations to a specific shard. -
scroll
(Optional, string | -1 | 0): The period to retain the search context for scrolling. By default, this value cannot exceed 1d (24 hours). You can change this limit by using the search.max_keep_alive cluster-level setting. -
search_type
(Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): Indicates how distributed term frequencies are calculated for relevance scoring. -
suggest_field
(Optional, string): The field to use for suggestions. -
suggest_mode
(Optional, Enum("missing" | "popular" | "always")): The suggest mode. This parameter can be used only when thesuggest_field
andsuggest_text
query string parameters are specified. -
suggest_size
(Optional, number): The number of suggestions to return. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified. -
suggest_text
(Optional, string): The source text for which the suggestions should be returned. This parameter can be used only when the suggest_field and suggest_text query string parameters are specified. -
typed_keys
(Optional, boolean): If true, aggregation and suggester names are prefixed by their respective types in the response. -
rest_total_hits_as_int
(Optional, boolean): Indicates whether hits.total should be rendered as an integer or an object in the REST search response. -
_source_excludes
(Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter. If the _source parameter is false, this parameter is ignored. -
_source_includes
(Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored. -
q
(Optional, string): A query in the Lucene query string syntax. Query parameter searches do not support the full Elasticsearch Query DSL but are handy for testing. IMPORTANT: This parameter overrides the query parameter in the request body. If both parameters are specified, documents matching the query request body parameter are not returned. -
force_synthetic_source
(Optional, boolean): Should this request force synthetic _source? Use this to test if the mapping supports synthetic _source and to get a sense of the worst-case performance. Fetches with this enabled will be slower than enabling synthetic source natively in the index.
-
search_mvt
editSearch a vector tile.
Search a vector tile for geospatial values. Before using this API, you should be familiar with the Mapbox vector tile specification. The API returns results as a binary mapbox vector tile.
Internally, Elasticsearch translates a vector tile search API request into a search containing:
-
A geo_bounding_box query on the <field>. The query uses the <zoom>/<x>/<y> tile as a bounding box. -
A geotile_grid or geohex_grid aggregation on the <field>. The grid_agg parameter determines the aggregation type. The aggregation uses the <zoom>/<x>/<y> tile as a bounding box. -
Optionally, a geo_bounds aggregation on the <field>. The search only includes this aggregation if the exact_bounds parameter is true. -
If the optional parameter with_labels is true, the internal search will include a dynamic runtime field that calls the getLabelPosition function of the geometry doc value. This enables the generation of new point features containing suggested geometry labels, so that, for example, multi-polygons will have only one label.
For example, Elasticsearch may translate a vector tile search API request with a grid_agg
argument of geotile
and an exact_bounds
argument of true
into the following search:
GET my-index/_search
{
  "size": 10000,
  "query": {
    "geo_bounding_box": {
      "my-geo-field": {
        "top_left": { "lat": -40.979898069620134, "lon": -45 },
        "bottom_right": { "lat": -66.51326044311186, "lon": 0 }
      }
    }
  },
  "aggregations": {
    "grid": {
      "geotile_grid": {
        "field": "my-geo-field",
        "precision": 11,
        "size": 65536,
        "bounds": {
          "top_left": { "lat": -40.979898069620134, "lon": -45 },
          "bottom_right": { "lat": -66.51326044311186, "lon": 0 }
        }
      }
    },
    "bounds": {
      "geo_bounds": { "field": "my-geo-field", "wrap_longitude": false }
    }
  }
}
The API returns results as a binary Mapbox vector tile. Mapbox vector tiles are encoded as Google Protobufs (PBF). By default, the tile contains three layers:
-
A hits layer containing a feature for each <field> value matching the geo_bounding_box query. -
An aggs layer containing a feature for each cell of the geotile_grid or geohex_grid. The layer only contains features for cells with matching data. - A meta layer containing:
- A feature containing a bounding box. By default, this is the bounding box of the tile.
-
Value ranges for any sub-aggregations on the geotile_grid or geohex_grid. - Metadata for the search.
The API only returns features that can be displayed at its zoom level. For example, if a polygon feature has no area at its zoom level, the API omits it. The API returns errors as UTF-8 encoded JSON.
You can specify several options for this API as either a query parameter or request body parameter. If you specify both parameters, the query parameter takes precedence.
Grid precision for geotile
For a grid_agg of geotile, you can use cells in the aggs layer as tiles for lower zoom levels. grid_precision represents the additional zoom levels available through these cells. The final precision is computed as follows: <zoom> + grid_precision.
For example, if <zoom>
is 7 and grid_precision
is 8, then the geotile_grid
aggregation will use a precision of 15.
The maximum final precision is 29.
The grid_precision
also determines the number of cells for the grid as follows: (2^grid_precision) x (2^grid_precision)
.
For example, a value of 8 divides the tile into a grid of 256 x 256 cells.
The aggs
layer only contains features for cells with matching data.
Grid precision for geohex
For a grid_agg
of geohex
, Elasticsearch uses <zoom>
and grid_precision
to calculate a final precision as follows: <zoom> + grid_precision
.
This precision determines the H3 resolution of the hexagonal cells produced by the geohex
aggregation.
The following table maps the H3 resolution for each precision.
For example, if <zoom>
is 3 and grid_precision
is 3, the precision is 6.
At a precision of 6, hexagonal cells have an H3 resolution of 2.
If <zoom>
is 3 and grid_precision
is 4, the precision is 7.
At a precision of 7, hexagonal cells have an H3 resolution of 3.
| Precision | Unique tile bins | H3 resolution | Unique hex bins | Ratio |
| --------- | ---------------- | ------------- | --------------- | ----- |
| 1 | 4 | 0 | 122 | 30.5 |
| 2 | 16 | 0 | 122 | 7.625 |
| 3 | 64 | 1 | 842 | 13.15625 |
| 4 | 256 | 1 | 842 | 3.2890625 |
| 5 | 1024 | 2 | 5882 | 5.744140625 |
| 6 | 4096 | 2 | 5882 | 1.436035156 |
| 7 | 16384 | 3 | 41162 | 2.512329102 |
| 8 | 65536 | 3 | 41162 | 0.6280822754 |
| 9 | 262144 | 4 | 288122 | 1.099098206 |
| 10 | 1048576 | 4 | 288122 | 0.2747745514 |
| 11 | 4194304 | 5 | 2016842 | 0.4808526039 |
| 12 | 16777216 | 6 | 14117882 | 0.8414913416 |
| 13 | 67108864 | 6 | 14117882 | 0.2103728354 |
| 14 | 268435456 | 7 | 98825162 | 0.3681524172 |
| 15 | 1073741824 | 8 | 691776122 | 0.644266719 |
| 16 | 4294967296 | 8 | 691776122 | 0.1610666797 |
| 17 | 17179869184 | 9 | 4842432842 | 0.2818666889 |
| 18 | 68719476736 | 10 | 33897029882 | 0.4932667053 |
| 19 | 274877906944 | 11 | 237279209162 | 0.8632167343 |
| 20 | 1099511627776 | 11 | 237279209162 | 0.2158041836 |
| 21 | 4398046511104 | 12 | 1660954464122 | 0.3776573213 |
| 22 | 17592186044416 | 13 | 11626681248842 | 0.6609003122 |
| 23 | 70368744177664 | 13 | 11626681248842 | 0.165225078 |
| 24 | 281474976710656 | 14 | 81386768741882 | 0.2891438866 |
| 25 | 1125899906842620 | 15 | 569707381193162 | 0.5060018015 |
| 26 | 4503599627370500 | 15 | 569707381193162 | 0.1265004504 |
| 27 | 18014398509482000 | 15 | 569707381193162 | 0.03162511259 |
| 28 | 72057594037927900 | 15 | 569707381193162 | 0.007906278149 |
| 29 | 288230376151712000 | 15 | 569707381193162 | 0.001976569537 |
Hexagonal cells don’t align perfectly on a vector tile. Some cells may intersect more than one vector tile. To compute the H3 resolution for each precision, Elasticsearch compares the average density of hexagonal bins at each resolution with the average density of tile bins at each zoom level. Elasticsearch uses the H3 resolution that is closest to the corresponding geotile density.
client.searchMvt({ index, field, zoom, x, y })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, or aliases to search -
field
(string): Field containing geospatial data to return -
zoom
(number): Zoom level for the vector tile to search -
x
(number): X coordinate for the vector tile to search -
y
(number): Y coordinate for the vector tile to search -
aggs
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): Sub-aggregations for the geotile_grid. It supports the following aggregation types:
- avg
- boxplot
- cardinality
- extended stats
- max
- median absolute deviation
- min
- percentile
- percentile-rank
- stats
- sum
- value count
The aggregation names can’t start with _mvt_. The _mvt_ prefix is reserved for internal aggregations. -
buffer
(Optional, number): The size, in pixels, of a clipping buffer outside the tile. This allows renderers to avoid outline artifacts from geometries that extend past the extent of the tile. -
exact_bounds
(Optional, boolean): If false, the meta layer’s feature is the bounding box of the tile. If true, the meta layer’s feature is a bounding box resulting from a geo_bounds aggregation. The aggregation runs on <field> values that intersect the <zoom>/<x>/<y> tile with wrap_longitude set to false. The resulting bounding box may be larger than the vector tile. -
extent
(Optional, number): The size, in pixels, of a side of the tile. Vector tiles are square with equal sides. -
fields
(Optional, string | string[]): The fields to return in the hits layer. It supports wildcards (*). This parameter does not support fields with array values. Fields with array values may return inconsistent results. -
grid_agg
(Optional, Enum("geotile" | "geohex")): The aggregation used to create a grid for thefield
. -
grid_precision
(Optional, number): Additional zoom levels available through the aggs layer. For example, if <zoom> is 7 and grid_precision is 8, you can zoom in up to level 15. Accepts 0-8. If 0, results don’t include the aggs layer. -
grid_type
(Optional, Enum("grid" | "point" | "centroid")): Determines the geometry type for features in the aggs layer. In the aggs layer, each feature represents ageotile_grid
cell. Ifgrid, each feature is a polygon of the cells bounding box. If `point
, each feature is a Point that is the centroid of the cell. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The query DSL used to filter documents for the search. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name. -
size
(Optional, number): The maximum number of features to return in the hits layer. Accepts 0-10000. If 0, results don’t include the hits layer. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): Sort the features in the hits layer. By default, the API calculates a bounding box for each feature. It sorts features based on this box’s diagonal length, from longest to shortest. -
track_total_hits
(Optional, boolean | number): The number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. -
with_labels
(Optional, boolean): If true, the hits and aggs layers will contain additional point features representing suggested label positions for the original features. * Point and MultiPoint features will have one of the points selected. * Polygon and MultiPolygon features will have a single point generated, either the centroid, if it is within the polygon, or another point within the polygon selected from the sorted triangle-tree. * LineString features will likewise provide a roughly central point selected from the triangle-tree. * The aggregation results will provide one central point for each aggregation bucket. All attributes from the original features will also be copied to the new label features. In addition, the new features will be distinguishable using the tag _mvt_label_position.
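For example, a sketch that fetches one tile (the index, field, and tile coordinates are hypothetical); the response body is a binary Mapbox vector tile rather than JSON:
const tile = await client.searchMvt({
  index: 'my-index',
  field: 'my-geo-field',
  zoom: 7,
  x: 37,
  y: 48,
  grid_agg: 'geotile',
  grid_precision: 8, // final geotile precision: 7 + 8 = 15
  exact_bounds: false
})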
-
search_shards
editGet the search shards.
Get the indices and shards that a search request would be run against.
This information can be useful for working out issues or planning optimizations with routing and shard preferences.
When filtered aliases are used, the filter is returned as part of the indices
section.
If the Elasticsearch security features are enabled, you must have the view_index_metadata
or manage
index privilege for the target data stream, index, or alias.
client.searchShards({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams and indices, omit this parameter or use * or _all. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): If false, the request returns an error if it targets a missing or closed index. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out. -
preference
(Optional, string): The node or shard the operation should be performed on. It is random by default. -
routing
(Optional, string): A custom value used to route operations to a specific shard.
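For example, a sketch that checks which shards a routed search would hit (the index name and routing value are hypothetical):
const response = await client.searchShards({
  index: 'my-index',
  routing: 'user-1'
})
// response.shards lists the shard copies; response.indices includes any alias filters.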
-
search_template
editRun a search with a search template.
client.searchTemplate({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases to search. It supports wildcards (*). -
explain
(Optional, boolean): If true, returns detailed information about score calculation as part of each hit. If you specify both this and the explain query parameter, the API uses only the query parameter. -
id
(Optional, string): The ID of the search template to use. If no source is specified, this parameter is required. -
params
(Optional, Record<string, User-defined value>): Key-value pairs used to replace Mustache variables in the template. The key is the variable name. The value is the variable value. -
profile
(Optional, boolean): If true, the query execution is profiled. -
source
(Optional, string): An inline search template. Supports the same parameters as the search API’s request body. It also supports Mustache variables. If no id is specified, this parameter is required. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
ccs_minimize_roundtrips
(Optional, boolean): If true, network round-trips are minimized for cross-cluster search requests. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_throttled
(Optional, boolean): If true, specified concrete, expanded, or aliased indices are not included in the response when throttled. -
ignore_unavailable
(Optional, boolean): If false, the request returns an error if it targets a missing or closed index. -
preference
(Optional, string): The node or shard the operation should be performed on. It is random by default. -
routing
(Optional, string): A custom value used to route operations to a specific shard. -
scroll
(Optional, string | -1 | 0): Specifies how long a consistent view of the index should be maintained for scrolled search. -
search_type
(Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation. -
rest_total_hits_as_int
(Optional, boolean): If true, hits.total is rendered as an integer in the response. If false, it is rendered as an object. -
typed_keys
(Optional, boolean): If true, the response prefixes aggregation and suggester names with their respective types.
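For example, a sketch that runs a stored template (the template ID and parameters are hypothetical):
const response = await client.searchTemplate({
  index: 'my-index',
  id: 'my-search-template', // a template stored with the put script API
  params: { query_string: 'hello world', from: 0, size: 10 }
})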
-
terms_enum
editGet terms in an index.
Discover terms that match a partial string in an index. This API is designed for low-latency look-ups used in auto-complete scenarios.
NOTE: The terms enum API may return terms from deleted documents. Deleted documents are initially only marked as deleted. It is not until their segments are merged that documents are actually deleted. Until that happens, the terms enum API will return terms from these documents.
client.termsEnum({ index, field })
Arguments
edit-
Request (object):
-
index
(string): A list of data streams, indices, and index aliases to search. Wildcard (*) expressions are supported. To search all data streams or indices, omit this parameter or use * or _all. -
field
(string): The field in the index from which to fetch matching terms. -
size
(Optional, number): The number of matching terms to return. -
timeout
(Optional, string | -1 | 0): The maximum length of time to spend collecting results. If the timeout is exceeded, the complete flag is set to false in the response, and the results may be partial or empty. -
case_insensitive
(Optional, boolean): When true, the provided search string is matched against index terms without case sensitivity. -
index_filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Filter an index shard if the provided query rewrites to match_none. -
string
(Optional, string): The string to match at the start of indexed terms. If it is not provided, all terms in the field are considered. NOTE: The prefix string cannot be larger than the largest possible keyword value, which is Lucene’s term byte-length limit of 32766. -
search_after
(Optional, string): The string after which terms in the index should be returned. It allows for a form of pagination if the last result from one request is passed as the search_after parameter for a subsequent request.
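For example, a sketch of an auto-complete style lookup (the index and field names are hypothetical):
const response = await client.termsEnum({
  index: 'stackoverflow',
  field: 'tags',
  string: 'kiba', // matches indexed terms starting with "kiba"
  case_insensitive: true
})
// response.terms is an array of matching terms, for example ["kibana"].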
-
termvectors
editGet term vector information.
Get information and statistics about terms in the fields of a particular document.
You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request.
You can specify the fields you are interested in through the fields
parameter or by adding the fields to the request body.
For example:
GET /my-index-000001/_termvectors/1?fields=message
Fields can be specified using wildcards, similar to the multi match query.
Term vectors are real-time by default, not near real-time.
This can be changed by setting the realtime parameter to false.
You can request three types of values: term information, term statistics, and field statistics. By default, all term information and field statistics are returned for all fields but term statistics are excluded.
Term information
- term frequency in the field (always returned)
-
term positions (positions: true) -
start and end offsets (offsets: true) -
term payloads (payloads: true), as base64 encoded bytes
If the requested information wasn’t stored in the index, it will be computed on the fly if possible. Additionally, term vectors could be computed for documents not even existing in the index, but instead provided by the user.
WARNING: Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16.
Behavior
The term and field statistics are not accurate.
Deleted documents are not taken into account.
The information is only retrieved for the shard the requested document resides in.
The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context.
By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected.
Use routing
only to hit a particular shard.
client.termvectors({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the index that contains the document. -
id
(Optional, string): A unique identifier for the document. -
doc
(Optional, object): An artificial document (a document not present in the index) for which you want to retrieve term vectors. -
filter
(Optional, { max_doc_freq, max_num_terms, max_term_freq, max_word_length, min_doc_freq, min_term_freq, min_word_length }): Filter terms based on their tf-idf scores. This could be useful in order to find out a good characteristic vector of a document. This feature works in a similar manner to the second phase of the More Like This Query. -
per_field_analyzer
(Optional, Record<string, string>): Override the default per-field analyzer. This is useful in order to generate term vectors in any fashion, especially when using artificial documents. When providing an analyzer for a field that already stores term vectors, the term vectors will be regenerated. -
fields
(Optional, string | string[]): A list of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters. -
field_statistics
(Optional, boolean): If true, the response includes: * The document count (how many documents contain this field). * The sum of document frequencies (the sum of document frequencies for all terms in this field). * The sum of total term frequencies (the sum of total term frequencies of each term in this field). -
offsets
(Optional, boolean): If true, the response includes term offsets. -
payloads
(Optional, boolean): If true, the response includes term payloads. -
positions
(Optional, boolean): If true, the response includes term positions. -
term_statistics
(Optional, boolean): If true, the response includes: * The total term frequency (how often a term occurs in all documents). * The document frequency (the number of documents containing the current term). By default these values are not returned since term statistics can have a serious performance impact. -
routing
(Optional, string): A custom value that is used to route operations to a specific shard. -
version
(Optional, number): If true, returns the document version as part of a hit. -
version_type
(Optional, Enum("internal" | "external" | "external_gte" | "force")): The version type. -
preference
(Optional, string): The node or shard the operation should be performed on. It is random by default. -
realtime
(Optional, boolean): If true, the request is real-time as opposed to near-real-time.
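For example, a sketch that computes term vectors for an artificial document (the field name is hypothetical):
const response = await client.termvectors({
  index: 'my-index-000001',
  doc: { message: 'hello hello world' }, // an artificial document
  fields: ['message'],
  positions: true,
  offsets: true,
  term_statistics: true
})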
-
update
editUpdate a document.
Update a document by running a script or passing a partial document.
If the Elasticsearch security features are enabled, you must have the index
or write
index privilege for the target index or index alias.
The script can update, delete, or skip modifying the document. The API also supports passing a partial document, which is merged into the existing document. To fully replace an existing document, use the index API. This operation:
- Gets the document (collocated with the shard) from the index.
- Runs the specified script.
- Indexes the result.
The document must still be reindexed, but using this API removes some network roundtrips and reduces chances of version conflicts between the GET and the index operation.
The _source
field must be enabled to use this API.
In addition to _source
, you can access the following variables through the ctx
map: _index
, _type
, _id
, _version
, _routing
, and _now
(the current timestamp).
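For example, a sketch of a scripted counter update with an upsert fallback (the index, document ID, and field are hypothetical):
const response = await client.update({
  index: 'my-index',
  id: '1',
  script: {
    source: 'ctx._source.counter += params.count',
    lang: 'painless',
    params: { count: 4 }
  },
  upsert: { counter: 1 } // inserted if the document does not exist yet
})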
client.update({ id, index })
Arguments
edit-
Request (object):
-
id
(string): A unique identifier for the document to be updated. -
index
(string): The name of the target index. By default, the index is created automatically if it doesn’t exist. -
detect_noop
(Optional, boolean): If true, the result in the response is set to noop (no operation) when there are no changes to the document. -
doc
(Optional, object): A partial update to an existing document. If both doc and script are specified, doc is ignored. -
doc_as_upsert
(Optional, boolean): If true, use the contents of doc as the value of upsert. NOTE: Using ingest pipelines with doc_as_upsert is not supported. -
script
(Optional, { source, id, params, lang, options }): The script to run to update the document. -
scripted_upsert
(Optional, boolean): If true, run the script whether or not the document exists. -
_source
(Optional, boolean | { excludes, includes }): If false, turn off source retrieval. You can also specify a list of the fields you want to retrieve. -
upsert
(Optional, object): If the document does not already exist, the contents of upsert are inserted as a new document. If the document exists, the script is run. -
if_primary_term
(Optional, number): Only perform the operation if the document has this primary term. -
if_seq_no
(Optional, number): Only perform the operation if the document has this sequence number. -
include_source_on_error
(Optional, boolean): If true, the document source is included in the error message if there is a parsing error. -
lang
(Optional, string): The script language. -
refresh
(Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes. -
require_alias
(Optional, boolean): If true, the destination must be an index alias. -
retry_on_conflict
(Optional, number): The number of times the operation should be retried when a conflict occurs. -
routing
(Optional, string): A custom value used to route operations to a specific shard. -
timeout
(Optional, string | -1 | 0): The period to wait for the following operations: dynamic mapping updates and waiting for active shards. Elasticsearch waits for at least the timeout period before failing. The actual wait time could be longer, particularly when multiple waits occur. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of copies of each shard that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas
+1). The default value of1
means it waits for each primary shard to be active. -
_source_excludes
(Optional, string | string[]): The source fields you want to exclude. -
_source_includes
(Optional, string | string[]): The source fields you want to retrieve.
-
update_by_query
editUpdate documents. Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.
If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:
-
read
-
index
or write
You can specify the query criteria in the request URI or the request body using the same syntax as the search API.
When you submit an update by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using internal versioning.
When the versions match, the document is updated and the version number is incremented.
If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails.
You can opt to count version conflicts instead of halting and returning by setting conflicts
to proceed
.
Note that if you opt to count version conflicts, the operation could attempt to update more documents from the source than max_docs
until it has successfully updated max_docs
documents or it has gone through every document in the source query.
Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number.
While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back.
Throttling update requests
To control the rate at which update by query issues batches of update operations, you can set requests_per_second
to any positive decimal number.
This pads each batch with a wait time to throttle the rate.
Set requests_per_second
to -1
to turn off throttling.
Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account.
The padding time is the difference between the batch size divided by the requests_per_second
and the time spent writing.
By default the batch size is 1000, so if requests_per_second
is set to 500
:
target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds
Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
Slicing
Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.
Setting slices
to auto
chooses a reasonable number for most data streams and indices.
This setting will use one slice per shard, up to a certain limit.
If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.
Adding slices
to _update_by_query
just automates the manual process of creating sub-requests, which means it has some quirks:
- You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
-
Fetching the status of the task for the request with
slices
only contains the status of completed slices. - These sub-requests are individually addressable for things like cancellation and rethrottling.
-
Rethrottling the request with
slices
will rethrottle the unfinished sub-request proportionally. - Canceling the request with slices will cancel each sub-request.
- Due to the nature of slices each sub-request won’t get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
-
Parameters like
requests_per_second
andmax_docs
on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that usingmax_docs
withslices
might not result in exactlymax_docs
documents being updated. - Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.
If you’re slicing manually or otherwise tuning automatic slicing, keep in mind that:
- Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
- Update performance scales linearly across available resources with the number of slices.
Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources.
Update the document source
Update by query supports scripts to update the document source.
As with the update API, you can set ctx.op
to change the operation that is performed.
Set ctx.op = "noop"
if your script decides that it doesn’t have to make any changes.
The update by query operation skips updating the document and increments the noop
counter.
Set ctx.op = "delete"
if your script decides that the document should be deleted.
The update by query operation deletes the document and increments the deleted
counter.
Update by query supports only index
, noop
, and delete
.
Setting ctx.op
to anything else is an error.
Setting any other field in ctx
is an error.
This API enables you to only modify the source of matching documents; you cannot move them.
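For example, a sketch that flags matching documents and counts version conflicts instead of aborting (the index, query, and field are hypothetical):
const response = await client.updateByQuery({
  index: 'my-index',
  conflicts: 'proceed', // count conflicts rather than failing the request
  query: { term: { 'user.id': 'kimchy' } },
  script: {
    source: 'ctx._source.flagged = true',
    lang: 'painless'
  }
})
// response.version_conflicts reports how many conflicts were skipped.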
client.updateByQuery({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): A list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all. -
max_docs
(Optional, number): The maximum number of documents to update. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The documents to update using the Query DSL. -
script
(Optional, { source, id, params, lang, options }): The script to run to update the document source or metadata when updating. -
slice
(Optional, { field, id, max }): Slice the request manually using the provided slice ID and total number of slices. -
conflicts
(Optional, Enum("abort" | "proceed")): The preferred behavior when update by query hits version conflicts:abort
orproceed
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts withfoo
but no index starts withbar
. -
analyzer
(Optional, string): The analyzer to use for the query string. This parameter can be used only when theq
query string parameter is specified. -
analyze_wildcard
(Optional, boolean): Iftrue
, wildcard and prefix queries are analyzed. This parameter can be used only when theq
query string parameter is specified. -
default_operator
(Optional, Enum("and" | "or")): The default operator for query string query:AND
orOR
. This parameter can be used only when theq
query string parameter is specified. -
df
(Optional, string): The field to use as default where no field prefix is given in the query string. This parameter can be used only when theq
query string parameter is specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
from
(Optional, number): Skips the specified number of documents. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
lenient
(Optional, boolean): Iftrue
, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when theq
query string parameter is specified. -
pipeline
(Optional, string): The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to_none
disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter. -
preference
(Optional, string): The node or shard the operation should be performed on. It is random by default. -
q
(Optional, string): A query in the Lucene query string syntax. -
refresh
(Optional, boolean): Iftrue
, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes. This is different than the update API’srefresh
parameter, which causes just the shard that received the request to be refreshed. -
request_cache
(Optional, boolean): Iftrue
, the request cache is used for this request. It defaults to the index-level setting. -
requests_per_second
(Optional, float): The throttle for this request in sub-requests per second. -
routing
(Optional, string): A custom value used to route operations to a specific shard. -
scroll
(Optional, string | -1 | 0): The period to retain the search context for scrolling. -
scroll_size
(Optional, number): The size of the scroll request that powers the operation. -
search_timeout
(Optional, string | -1 | 0): An explicit timeout for each search request. By default, there is no timeout. -
search_type
(Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation. Available options includequery_then_fetch
anddfs_query_then_fetch
. -
slices
(Optional, number | Enum("auto")): The number of slices this task should be divided into. -
sort
(Optional, string[]): A list of <field>:<direction> pairs. -
stats
(Optional, string[]): The specifictag
of the request for logging and statistical purposes. -
terminate_after
(Optional, number): The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers. -
timeout
(Optional, string | -1 | 0): The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards. By default, it is one minute. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur. -
version
(Optional, boolean): Iftrue
, returns the document version as part of a hit. -
version_type
(Optional, boolean): Should the document increment the version number (internal) on hit or not (reindex) -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
). Thetimeout
parameter controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the bulk API. -
wait_for_completion
(Optional, boolean): Iftrue
, the request blocks until the operation is complete. Iffalse
, Elasticsearch performs some preflight checks, launches the request, and returns a task ID that you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at.tasks/task/${taskId}
.
-
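As a sketch of the task workflow that wait_for_completion enables (the index name is illustrative):

const { task } = await client.updateByQuery({
  index: 'my-index',
  conflicts: 'proceed',
  wait_for_completion: false // return a task ID instead of blocking
})
// Monitor or cancel the operation later through the tasks API.
const status = await client.tasks.get({ task_id: String(task) })
console.log(status.completed)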
update_by_query_rethrottle
Throttle an update by query operation.
Change the number of requests per second for a particular update by query operation. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
client.updateByQueryRethrottle({ task_id })
Arguments
- Request (object):
  - task_id (string): The ID for the task.
  - requests_per_second (Optional, float): The throttle for this request in sub-requests per second. To turn off throttling, set it to -1.
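For example, a sketch that removes the throttle from a running update by query task (the task ID is illustrative):

await client.updateByQueryRethrottle({
  task_id: 'r1A2WoRbTwKZ516z6NEs5A:36619', // illustrative task ID
  requests_per_second: -1                  // -1 turns throttling off
})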
async_search
delete
Delete an async search.
If the asynchronous search is still running, it is cancelled. Otherwise, the saved search results are deleted.
If the Elasticsearch security features are enabled, the deletion of a specific async search is restricted to: the authenticated user that submitted the original search request; users that have the cancel_task cluster privilege.
client.asyncSearch.delete({ id })
Arguments
- Request (object):
  - id (string): A unique identifier for the async search.
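For example, assuming searchId holds an identifier returned by the submit API:

await client.asyncSearch.delete({ id: searchId }) // cancels the search if it is still running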
get
Get async search results.
Retrieve the results of a previously submitted asynchronous search request. If the Elasticsearch security features are enabled, access to the results of a specific async search is restricted to the user or API key that submitted it.
client.asyncSearch.get({ id })
Arguments
- Request (object):
  - id (string): A unique identifier for the async search.
  - keep_alive (Optional, string | -1 | 0): The length of time that the async search should be available in the cluster. When not specified, the keep_alive set with the corresponding submit async request will be used. Otherwise, it is possible to override the value and extend the validity of the request. When this period expires, the search, if still running, is cancelled. If the search is completed, its saved results are deleted.
  - typed_keys (Optional, boolean): Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.
  - wait_for_completion_timeout (Optional, string | -1 | 0): Specifies to wait for the search to be completed up until the provided timeout. Final results will be returned if available before the timeout expires; otherwise the currently available results will be returned once the timeout expires. By default no timeout is set, meaning that the currently available results will be returned without any additional wait.
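A sketch of retrieving results while extending how long they are kept (searchId is assumed to come from an earlier submit call):

const result = await client.asyncSearch.get({
  id: searchId,
  keep_alive: '5d',                   // keep the saved results five more days
  wait_for_completion_timeout: '2s'   // wait up to two seconds for final results
})
console.log(result.is_running, result.is_partial)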
status
Get the async search status.
Get the status of a previously submitted async search request given its identifier, without retrieving search results. If the Elasticsearch security features are enabled, access to the status of a specific async search is restricted to:
- The user or API key that submitted the original async search request.
- Users that have the monitor cluster privilege or greater privileges.
client.asyncSearch.status({ id })
Arguments
- Request (object):
  - id (string): A unique identifier for the async search.
  - keep_alive (Optional, string | -1 | 0): The length of time that the async search needs to be available. Ongoing async searches and any saved search results are deleted after this period.
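For example, a minimal polling sketch (searchId is assumed to come from an earlier submit call; production code should bound retries):

let status = await client.asyncSearch.status({ id: searchId })
while (status.is_running) {
  await new Promise((resolve) => setTimeout(resolve, 1000)) // wait a second between polls
  status = await client.asyncSearch.status({ id: searchId })
}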
submit
Run an async search.
When the primary sort of the results is an indexed field, shards get sorted based on the minimum and maximum value that they hold for that field. Partial results become available following the sort criteria that was requested.
Warning: Asynchronous search does not support scroll or search requests that include only the suggest section.
By default, Elasticsearch does not allow you to store an async search response larger than 10Mb, and an attempt to do this results in an error. The maximum allowed size for a stored async search response can be set by changing the search.max_async_search_response_size cluster-level setting.
client.asyncSearch.submit({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): A list of index names to search; use _all or an empty string to perform the operation on all indices.
  - aggregations (Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>)
  - collapse (Optional, { field, inner_hits, max_concurrent_group_searches, collapse })
  - explain (Optional, boolean): If true, returns detailed information about score computation as part of a hit.
  - ext (Optional, Record<string, User-defined value>): Configuration of search extensions defined by Elasticsearch plugins.
  - from (Optional, number): Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
  - highlight (Optional, { encoder, fields })
  - track_total_hits (Optional, boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.
  - indices_boost (Optional, Record<string, number>[]): Boosts the _score of documents from specified indices.
  - docvalue_fields (Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.
  - knn (Optional, { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits, rescore_vector } | { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits, rescore_vector }[]): Defines the approximate kNN search to run.
  - min_score (Optional, number): Minimum _score for matching documents. Documents with a lower _score are not included in the search results.
  - post_filter (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type })
  - profile (Optional, boolean)
  - query (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL.
  - rescore (Optional, { window_size, query, learning_to_rank } | { window_size, query, learning_to_rank }[])
  - script_fields (Optional, Record<string, { script, ignore_failure }>): Retrieve a script evaluation (based on different fields) for each hit.
  - search_after (Optional, number | number | string | boolean | null | User-defined value[])
  - size (Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
  - slice (Optional, { field, id, max })
  - sort (Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[])
  - _source (Optional, boolean | { excludes, includes }): Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response.
  - fields (Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.
  - suggest (Optional, { text })
  - terminate_after (Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Defaults to 0, which does not terminate query execution early.
  - timeout (Optional, string): Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
  - track_scores (Optional, boolean): If true, calculate and return document scores, even if the scores are not used for sorting.
  - version (Optional, boolean): If true, returns document version as part of a hit.
  - seq_no_primary_term (Optional, boolean): If true, returns sequence number and primary term of the last modification of each hit. See Optimistic concurrency control.
  - stored_fields (Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.
  - pit (Optional, { id, keep_alive }): Limits the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path.
  - runtime_mappings (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name.
  - stats (Optional, string[]): Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
  - wait_for_completion_timeout (Optional, string | -1 | 0): Blocks and waits until the search is completed up to a certain timeout. When the async search completes within the timeout, the response won’t include the ID, as the results are not stored in the cluster.
  - keep_alive (Optional, string | -1 | 0): Specifies how long the async search needs to be available. Ongoing async searches and any saved search results are deleted after this period.
  - keep_on_completion (Optional, boolean): If true, results are stored for later retrieval when the search completes within the wait_for_completion_timeout.
  - allow_no_indices (Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the _all string or when no indices have been specified.)
  - allow_partial_search_results (Optional, boolean): Indicates whether an error should be returned if there is a partial search failure or timeout.
  - analyzer (Optional, string): The analyzer to use for the query string.
  - analyze_wildcard (Optional, boolean): Specifies whether wildcard and prefix queries should be analyzed (default: false).
  - batched_reduce_size (Optional, number): Affects how often partial results become available, which happens whenever shard results are reduced. A partial reduction is performed every time the coordinating node has received a certain number of new shard responses (5 by default).
  - ccs_minimize_roundtrips (Optional, boolean): The default value is the only supported value.
  - default_operator (Optional, Enum("and" | "or")): The default operator for query string query (AND or OR).
  - df (Optional, string): The field to use as the default where no field prefix is given in the query string.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expressions to concrete indices that are open, closed, or both.
  - ignore_throttled (Optional, boolean): Whether specified concrete, expanded, or aliased indices should be ignored when throttled.
  - ignore_unavailable (Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed).
  - lenient (Optional, boolean): Specifies whether format-based query failures (such as providing text to a numeric field) should be ignored.
  - max_concurrent_shard_requests (Optional, number): The number of concurrent shard requests per node this search executes concurrently. Use this value to limit the impact of the search on the cluster.
  - min_compatible_shard_node (Optional, string)
  - preference (Optional, string): Specifies the node or shard the operation should be performed on (default: random).
  - request_cache (Optional, boolean): Specifies whether the request cache should be used for this request (defaults to true).
  - routing (Optional, string): A list of specific routing values.
  - search_type (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): Search operation type.
  - suggest_field (Optional, string): Specifies which field to use for suggestions.
  - suggest_mode (Optional, Enum("missing" | "popular" | "always")): Specifies the suggest mode.
  - suggest_size (Optional, number): How many suggestions to return in the response.
  - suggest_text (Optional, string): The source text for which the suggestions should be returned.
  - typed_keys (Optional, boolean): Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.
  - rest_total_hits_as_int (Optional, boolean): Indicates whether hits.total should be rendered as an integer or an object in the REST search response.
  - _source_excludes (Optional, string | string[]): A list of fields to exclude from the returned _source field.
  - _source_includes (Optional, string | string[]): A list of fields to extract and return from the _source field.
  - q (Optional, string): Query in the Lucene query string syntax.
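Putting it together, a sketch (the index pattern and query are illustrative) that submits a search, waits briefly, and keeps the results for later retrieval:

const submitted = await client.asyncSearch.submit({
  index: 'logs-*',                   // illustrative index pattern
  wait_for_completion_timeout: '1s', // return whatever is ready after one second
  keep_on_completion: true,          // store results even if the search finishes early
  keep_alive: '1d',
  query: { match: { message: 'error' } }
})
if (submitted.id) {
  const results = await client.asyncSearch.get({ id: submitted.id })
  console.log(results.is_partial)
}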
autoscaling
delete_autoscaling_policy
Delete an autoscaling policy.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
client.autoscaling.deleteAutoscalingPolicy({ name })
Arguments
- Request (object):
  - name (string): The name of the autoscaling policy.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
get_autoscaling_capacity
Get the autoscaling capacity.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
This API gets the current autoscaling capacity based on the configured autoscaling policy. It will return information to size the cluster appropriately to the current workload.
The required_capacity is calculated as the maximum of the required_capacity result of all individual deciders that are enabled for the policy.
The operator should verify that the current_nodes match the operator’s knowledge of the cluster to avoid making autoscaling decisions based on stale or incomplete information.
The response contains decider-specific information you can use to diagnose how and why autoscaling determined a certain capacity was required. This information is provided for diagnosis only. Do not use this information to make autoscaling decisions.
client.autoscaling.getAutoscalingCapacity({ ... })
Arguments
- Request (object):
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
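A small sketch of reading the decider output (the response shape is summarized here; treat it as diagnostic only, per the note above):

const capacity = await client.autoscaling.getAutoscalingCapacity()
// Each policy entry carries the required_capacity and current_nodes details.
console.log(JSON.stringify(capacity.policies, null, 2))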
get_autoscaling_policy
Get an autoscaling policy.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
client.autoscaling.getAutoscalingPolicy({ name })
Arguments
- Request (object):
  - name (string): The name of the autoscaling policy.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
put_autoscaling_policy
Create or update an autoscaling policy.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
client.autoscaling.putAutoscalingPolicy({ name })
Arguments
- Request (object):
  - name (string): The name of the autoscaling policy.
  - policy (Optional, { roles, deciders })
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
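For instance, a sketch with an illustrative policy (remember that direct use of this API is unsupported outside the orchestration systems listed above):

await client.autoscaling.putAutoscalingPolicy({
  name: 'my_autoscaling_policy', // illustrative policy name
  policy: {
    roles: ['data_hot'],         // node roles the policy applies to
    deciders: { fixed: {} }      // the fixed decider with default settings
  }
})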
cat
aliases
Get aliases.
Get the cluster’s index aliases, including filter and routing information. This API does not return data stream aliases.
CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
client.cat.aliases({ ... })
Arguments
- Request (object):
  - name (Optional, string | string[]): A list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such as open,hidden.
  - local (Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
  - master_timeout (Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, you can set it to -1.
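For example, a sketch (the alias pattern is illustrative) that selects and sorts columns:

const table = await client.cat.aliases({
  name: 'my-alias*',       // illustrative alias pattern
  h: ['alias', 'index'],   // columns to show
  s: ['index:desc']        // sort by index name, descending
})
console.log(table)         // a plain-text table intended for humans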
allocation
Get shard allocation information.
Get a snapshot of the number of shards allocated to each data node and their disk space.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
client.cat.allocation({ ... })
Arguments
- Request (object):
  - node_id (Optional, string | string[]): A list of node identifiers or names used to limit the returned information.
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - local (Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node.
component_templates
Get component templates.
Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
client.cat.componentTemplates({ ... })
Arguments
- Request (object):
  - name (Optional, string): The name of the component template. It accepts wildcard expressions. If it is omitted, all component templates are returned.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - local (Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
  - master_timeout (Optional, string | -1 | 0): The period to wait for a connection to the master node.
count
Get a document count.
Get quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
client.cat.count({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. It supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
fielddata
Get field data cache information.
Get the amount of heap memory currently used by the field data cache on every data node in the cluster.
cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.
client.cat.fielddata({ ... })
Arguments
- Request (object):
  - fields (Optional, string | string[]): List of fields used to limit returned information. To retrieve all fields, omit this parameter.
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
health
Get the cluster health status.
CAT APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use the cluster health API.
This API is often used to check malfunctioning clusters. To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats: HH:MM:SS, which is human-readable but includes no date information; and Unix epoch time, which is machine-sortable and includes date information. The latter format is useful for cluster recoveries that take multiple days.
You can use the cat health API to verify cluster health across multiple nodes. You also can use the API to track the recovery of a large cluster over a longer period of time.
client.cat.health({ ... })
Arguments
- Request (object):
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
  - ts (Optional, boolean): If true, returns HH:MM:SS and Unix epoch timestamps.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
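For instance, a sketch that renders timestamps and time values in a machine-sortable form:

const health = await client.cat.health({
  ts: true,  // include the HH:MM:SS and epoch timestamp columns
  time: 's'  // display time values in seconds
})
console.log(health)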
help
Get CAT help.
Get help for the CAT APIs.
client.cat.help()
indices
Get index information.
Get high-level information about indices in a cluster, including backing indices for data streams.
Use this request to get the following information for each index in a cluster:
- shard count
- document count
- deleted document count
- primary store size
- total store size of all shards, including shard replicas
These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
client.cat.indices({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match.
  - health (Optional, Enum("green" | "yellow" | "red")): The health status used to limit returned indices. By default, the response includes indices of any health status.
  - include_unloaded_segments (Optional, boolean): If true, the response includes information from segments that are not loaded into memory.
  - pri (Optional, boolean): If true, the response only includes information from primary shards.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
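For example, a sketch (the index pattern is illustrative) that lists only yellow indices with primary-shard sizes in megabytes:

const table = await client.cat.indices({
  index: 'logs-*',   // illustrative index pattern
  health: 'yellow',  // only indices with yellow health
  pri: true,         // primary shards only
  bytes: 'mb'
})
console.log(table)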
master
Get master node information.
Get information about the master node, including the ID, bound IP address, and name.
cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.master({ ... })
Arguments
- Request (object):
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - local (Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node.
ml_data_frame_analytics
Get data frame analytics jobs.
Get configuration and usage information about data frame analytics jobs.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
client.cat.mlDataFrameAnalytics({ ... })
Arguments
- Request (object):
  - id (Optional, string): The ID of the data frame analytics job to fetch.
  - allow_no_match (Optional, boolean): Whether to ignore if a wildcard expression matches no configs. (This includes the _all string or when no configs have been specified.)
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit in which to display byte values.
  - h (Optional, Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version") | Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version")[]): List of column names to display.
  - s (Optional, Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version") | Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version")[]): List of column names or column aliases used to sort the response.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values.
ml_datafeeds
Get datafeeds.
Get configuration and usage information about datafeeds.
This API returns a maximum of 10,000 datafeeds.
If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
client.cat.mlDatafeeds({ ... })
Arguments
- Request (object):
  - datafeed_id (Optional, string): A numerical character string that uniquely identifies the datafeed.
  - allow_no_match (Optional, boolean): Specifies what to do when the request:
    - contains wildcard expressions and there are no datafeeds that match;
    - contains the _all string or no identifiers and there are no matches;
    - contains wildcard expressions and there are only partial matches.
    If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
  - h (Optional, Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s") | Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s")[]): List of column names to display.
  - s (Optional, Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s") | Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s")[]): List of column names or column aliases used to sort the response.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
ml_jobs
Get anomaly detection jobs.
Get configuration and usage information for anomaly detection jobs.
This API returns a maximum of 10,000 jobs.
If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
client.cat.mlJobs({ ... })
Arguments
- Request (object):
  - job_id (Optional, string): Identifier for the anomaly detection job.
  - allow_no_match (Optional, boolean): Specifies what to do when the request:
    - contains wildcard expressions and there are no jobs that match;
    - contains the _all string or no identifiers and there are no matches;
    - contains wildcard expressions and there are only partial matches.
    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
  - h
(Optional, Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state") | Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state")[]): List of column names to display.
  - s
(Optional, Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state") | Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state")[]): List of column names or column aliases used to sort the response.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
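For instance, a sketch that shows just job IDs and states, tolerating the case where no jobs match:

const jobs = await client.cat.mlJobs({
  h: ['id', 'state'],   // columns from the list above
  allow_no_match: true  // return an empty result instead of a 404
})
console.log(jobs)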
ml_trained_models
Get trained models.
Get configuration and usage information about inference trained models.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.
client.cat.mlTrainedModels({ ... })
Arguments
- Request (object):
  - model_id (Optional, string): A unique identifier for the trained model.
  - allow_no_match (Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
  - h (Optional, Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version") | Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version")[]): A list of column names to display.
  - s (Optional, Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version") | Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version")[]): A list of column names or aliases used to sort the response.
  - from (Optional, number): Skips the specified number of trained models.
  - size (Optional, number): The maximum number of trained models to display.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values.
nodeattrs
Get node attribute information.
Get information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.nodeattrs({ ... })
Arguments
- Request (object):
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - local (Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node.
nodes
Get node information.
Get information about the nodes in a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.nodes({ ... })
Arguments
- Request (object):
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
  - full_id (Optional, boolean | string): If true, return the full node ID. If false, return the shortened node ID.
  - include_unloaded_segments (Optional, boolean): If true, the response includes information from segments that are not loaded into memory.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values.
pending_tasks
Get pending task information.
Get information about cluster-level changes that have not yet taken effect. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.
client.cat.pendingTasks({ ... })
Arguments
- Request (object):
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - local (Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values.
plugins
Get plugin information.
Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.plugins({ ... })
Arguments
- Request (object):
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - include_bootstrap (Optional, boolean): Include bootstrap plugins in the response.
  - local (Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node.
recovery
Get shard recovery information.
Get information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.
client.cat.recovery({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - active_only (Optional, boolean): If true, the response only includes ongoing shard recoveries.
  - bytes (Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
  - detailed (Optional, boolean): If true, the response includes detailed information about shard recoveries.
  - h (Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards.
  - s (Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
  - time (Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values.
repositories
Get snapshot repository information.
Get a list of snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.
client.cat.repositories({ ... })
Arguments
edit-
Request (object):
-
h
(Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards. -
s
(Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name. -
local
(Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
segments
Get segment information.
Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.
client.cat.segments({ ... })
Arguments
-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values. -
h
(Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards. -
s
(Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name. -
local
(Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
shards
Get shard information.
Get information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
client.cat.shards({ ... })
Arguments
-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values. -
h
(Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards. -
s
(Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values.
-
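A minimal sketch (the index pattern and columns are illustrative):

const response = await client.cat.shards({
  index: 'my-index-*',                              // hypothetical index pattern
  h: ['index', 'shard', 'prirep', 'state', 'node'], // illustrative columns
  s: 'index:asc',
  bytes: 'gb',
})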
snapshots
Get snapshot information.
Get information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.
client.cat.snapshots({ ... })
Arguments
-
Request (object):
-
repository
(Optional, string | string[]): A list of snapshot repositories used to limit the request. Accepts wildcard expressions. _all returns all repositories. If any repository fails during the request, Elasticsearch returns an error. -
ignore_unavailable
(Optional, boolean): If true, the response does not include information from unavailable snapshots. -
h
(Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards. -
s
(Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values.
-
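A minimal sketch (the repository name is hypothetical):

const response = await client.cat.snapshots({
  repository: 'my-repo',    // hypothetical repository name
  ignore_unavailable: true, // skip unavailable snapshots instead of failing
  time: 's',
})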
tasks
Get task information.
Get information about tasks currently running in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.
client.cat.tasks({ ... })
Arguments
-
Request (object):
-
actions
(Optional, string[]): The task action names, which are used to limit the response. -
detailed
(Optional, boolean): If true, the response includes detailed information about the running tasks. -
nodes
(Optional, string[]): Unique node identifiers, which are used to limit the response. -
parent_task_id
(Optional, string): The parent task identifier, which is used to limit the response. -
h
(Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards. -
s
(Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Unit used to display time values. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_completion
(Optional, boolean): If true, the request blocks until the task has completed.
-
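A minimal sketch (the action pattern is illustrative):

const response = await client.cat.tasks({
  actions: ['*search*'], // illustrative action-name wildcard
  detailed: true,
  time: 'ms',
})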
templates
Get index template information.
Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
client.cat.templates({ ... })
Arguments
-
Request (object):
-
name
(Optional, string): The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned. -
h
(Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards. -
s
(Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name. -
local
(Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
thread_pool
Get thread pool statistics.
Get thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.threadPool({ ... })
Arguments
-
Request (object):
-
thread_pool_patterns
(Optional, string | string[]): A list of thread pool names used to limit the request. Accepts wildcard expressions. -
h
(Optional, string | string[]): List of columns to appear in the response. Supports simple wildcards. -
s
(Optional, string | string[]): List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values. -
local
(Optional, boolean): If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
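A minimal sketch (the pool names and columns are illustrative):

const response = await client.cat.threadPool({
  thread_pool_patterns: ['write', 'search'],
  h: 'node_name,name,active,queue,rejected', // illustrative columns
  s: 'node_name:asc',
})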
transforms
Get transform information.
Get configuration and usage information about transforms.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.
client.cat.transforms({ ... })
Arguments
-
Request (object):
-
transform_id
(Optional, string): A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of transforms. -
h
(Optional, Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version") | Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version")[]): List of column names to display. -
s
(Optional, Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version") | Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version")[]): List of column names or column aliases used to sort the response. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values. -
size
(Optional, number): The maximum number of transforms to obtain.
-
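A minimal sketch using columns from the enum above:

const response = await client.cat.transforms({
  transform_id: '*',    // all transforms
  allow_no_match: true, // empty array instead of a 404 when nothing matches
  h: ['id', 'state', 'documents_processed', 'checkpoint'],
  s: ['id'],
  size: 100,
})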
ccr
delete_auto_follow_pattern
Delete auto-follow patterns.
Delete a collection of cross-cluster replication auto-follow patterns.
client.ccr.deleteAutoFollowPattern({ name })
Arguments
-
Request (object):
-
name
(string): The auto-follow pattern collection to delete. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
follow
Create a follower. Create a cross-cluster replication follower index that follows a specific leader index. When the API returns, the follower index exists and cross-cluster replication starts replicating operations from the leader index to the follower index.
client.ccr.follow({ index, leader_index, remote_cluster })
Arguments
-
Request (object):
-
index
(string): The name of the follower index. -
leader_index
(string): The name of the index in the leader cluster to follow. -
remote_cluster
(string): The remote cluster containing the leader index. -
data_stream_name
(Optional, string): If the leader index is part of a data stream, the name to which the local data stream for the followed index should be renamed. -
max_outstanding_read_requests
(Optional, number): The maximum number of outstanding read requests from the remote cluster. -
max_outstanding_write_requests
(Optional, number): The maximum number of outstanding write requests on the follower. -
max_read_request_operation_count
(Optional, number): The maximum number of operations to pull per read from the remote cluster. -
max_read_request_size
(Optional, number | string): The maximum size in bytes per read of a batch of operations pulled from the remote cluster. -
max_retry_delay
(Optional, string | -1 | 0): The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying. -
max_write_buffer_count
(Optional, number): The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit. -
max_write_buffer_size
(Optional, number | string): The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit. -
max_write_request_operation_count
(Optional, number): The maximum number of operations per bulk write request executed on the follower. -
max_write_request_size
(Optional, number | string): The maximum total bytes of operations per bulk write request executed on the follower. -
read_poll_timeout
(Optional, string | -1 | 0): The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again. -
settings
(Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }): Settings to override from the leader index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): Specifies the number of shards to wait on being active before responding. This defaults to waiting on none of the shards to be active. A shard must be restored from the leader index before being active. Restoring a follower shard requires transferring all the remote Lucene segment files to the follower index.
-
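A minimal sketch (all names are hypothetical; the remote cluster alias must already be configured):

const response = await client.ccr.follow({
  index: 'follower-index',            // hypothetical follower index name
  leader_index: 'leader-index',       // hypothetical leader index name
  remote_cluster: 'remote_cluster_a', // hypothetical remote cluster alias
  max_read_request_operation_count: 5120,
  read_poll_timeout: '1m',
  wait_for_active_shards: 1, // wait for each primary before responding
})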
follow_info
Get follower information.
Get information about all cross-cluster replication follower indices. For example, the results include follower index names, leader index names, replication options, and whether the follower indices are active or paused.
client.ccr.followInfo({ index })
Arguments
-
Request (object):
-
index
(string | string[]): A comma-delimited list of follower index patterns. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
follow_stats
Get follower stats.
Get cross-cluster replication follower stats. The API returns shard-level stats about the "following tasks" associated with each shard for the specified indices.
client.ccr.followStats({ index })
Arguments
-
Request (object):
-
index
(string | string[]): A comma-delimited list of index patterns. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
forget_follower
Forget a follower. Remove the cross-cluster replication follower retention leases from the leader.
A following index takes out retention leases on its leader index. These leases are used to increase the likelihood that the shards of the leader index retain the history of operations that the shards of the following index need to run replication. When a follower index is converted to a regular index by the unfollow API (either by directly calling the API or by index lifecycle management tasks), these leases are removed. However, removal of the leases can fail, for example when the remote cluster containing the leader index is unavailable. While the leases will eventually expire on their own, their extended existence can cause the leader index to hold more history than necessary and prevent index lifecycle management from performing some operations on the leader index. This API exists to enable manually removing the leases when the unfollow API is unable to do so.
This API does not stop replication by a following index. If you use this API with a follower index that is still actively following, the following index will add back retention leases on the leader. The only purpose of this API is to handle the case of failure to remove the following retention leases after the unfollow API is invoked.
client.ccr.forgetFollower({ index })
Arguments
-
Request (object):
-
index
(string): The name of the leader index for which the specified follower retention leases should be removed. -
follower_cluster
(Optional, string) -
follower_index
(Optional, string) -
follower_index_uuid
(Optional, string) -
leader_remote_cluster
(Optional, string) -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_auto_follow_pattern
Get auto-follow patterns.
Get cross-cluster replication auto-follow patterns.
client.ccr.getAutoFollowPattern({ ... })
Arguments
-
Request (object):
-
name
(Optional, string): The auto-follow pattern collection that you want to retrieve. If you do not specify a name, the API returns information for all collections. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
pause_auto_follow_pattern
Pause an auto-follow pattern.
Pause a cross-cluster replication auto-follow pattern. When the API returns, the auto-follow pattern is inactive. New indices that are created on the remote cluster and match the auto-follow patterns are ignored.
You can resume auto-following with the resume auto-follow pattern API. When it resumes, the auto-follow pattern is active again and automatically configures follower indices for newly created indices on the remote cluster that match its patterns. Remote indices that were created while the pattern was paused will also be followed, unless they have been deleted or closed in the interim.
client.ccr.pauseAutoFollowPattern({ name })
Arguments
-
Request (object):
-
name
(string): The name of the auto-follow pattern to pause. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
pause_follow
Pause a follower.
Pause a cross-cluster replication follower index. The follower index will not fetch any additional operations from the leader index. You can resume following with the resume follower API. You can pause and resume a follower index to change the configuration of the following task.
client.ccr.pauseFollow({ index })
Arguments
-
Request (object):
-
index
(string): The name of the follower index. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
put_auto_follow_pattern
Create or update auto-follow patterns. Create a collection of cross-cluster replication auto-follow patterns for a remote cluster. Newly created indices on the remote cluster that match any of the patterns are automatically configured as follower indices. Indices on the remote cluster that were created before the auto-follow pattern was created will not be auto-followed even if they match the pattern.
This API can also be used to update auto-follow patterns. NOTE: Follower indices that were configured automatically before updating an auto-follow pattern will remain unchanged even if they do not match against the new patterns.
client.ccr.putAutoFollowPattern({ name, remote_cluster })
Arguments
-
Request (object):
-
name
(string): The name of the collection of auto-follow patterns. -
remote_cluster
(string): The remote cluster containing the leader indices to match against. -
follow_index_pattern
(Optional, string): The name of the follower index. The template {{leader_index}} can be used to derive the name of the follower index from the name of the leader index. When following a data stream, use {{leader_index}}; CCR does not support changes to the names of a follower data stream’s backing indices. -
leader_index_patterns
(Optional, string[]): An array of simple index patterns to match against indices in the remote cluster specified by the remote_cluster field. -
leader_index_exclusion_patterns
(Optional, string[]): An array of simple index patterns that can be used to exclude indices from being auto-followed. Indices in the remote cluster whose names match one or more leader_index_patterns and one or more leader_index_exclusion_patterns won’t be followed. -
max_outstanding_read_requests
(Optional, number): The maximum number of outstanding read requests from the remote cluster. -
settings
(Optional, Record<string, User-defined value>): Settings to override from the leader index. Note that certain settings cannot be overridden (e.g., index.number_of_shards). -
max_outstanding_write_requests
(Optional, number): The maximum number of outstanding write requests on the follower. -
read_poll_timeout
(Optional, string | -1 | 0): The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again. -
max_read_request_operation_count
(Optional, number): The maximum number of operations to pull per read from the remote cluster. -
max_read_request_size
(Optional, number | string): The maximum size in bytes per read of a batch of operations pulled from the remote cluster. -
max_retry_delay
(Optional, string | -1 | 0): The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying. -
max_write_buffer_count
(Optional, number): The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit. -
max_write_buffer_size
(Optional, number | string): The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit. -
max_write_request_operation_count
(Optional, number): The maximum number of operations per bulk write request executed on the follower. -
max_write_request_size
(Optional, number | string): The maximum total bytes of operations per bulk write request executed on the follower. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
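A minimal sketch (names and patterns are hypothetical):

const response = await client.ccr.putAutoFollowPattern({
  name: 'logs-pattern',               // hypothetical collection name
  remote_cluster: 'remote_cluster_a', // hypothetical remote cluster alias
  leader_index_patterns: ['logs-*'],
  leader_index_exclusion_patterns: ['logs-internal-*'],
  follow_index_pattern: '{{leader_index}}-follower',
})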
resume_auto_follow_pattern
Resume an auto-follow pattern.
Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring following indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.
client.ccr.resumeAutoFollowPattern({ name })
Arguments
-
Request (object):
-
name
(string): The name of the auto-follow pattern to resume. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
resume_follow
Resume a follower. Resume a cross-cluster replication follower index that was paused. The follower index could have been paused with the pause follower API. Alternatively, it could have been paused automatically because of failures during the following task that could not be retried. When this API returns, the follower index will resume fetching operations from the leader index.
client.ccr.resumeFollow({ index })
Arguments
-
Request (object):
-
index
(string): The name of the follow index to resume following. -
max_outstanding_read_requests
(Optional, number) -
max_outstanding_write_requests
(Optional, number) -
max_read_request_operation_count
(Optional, number) -
max_read_request_size
(Optional, string) -
max_retry_delay
(Optional, string | -1 | 0) -
max_write_buffer_count
(Optional, number) -
max_write_buffer_size
(Optional, string) -
max_write_request_operation_count
(Optional, number) -
max_write_request_size
(Optional, string) -
read_poll_timeout
(Optional, string | -1 | 0) -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
stats
Get cross-cluster replication stats.
This API returns stats about auto-following and the same shard-level stats as the get follower stats API.
client.ccr.stats({ ... })
Arguments
-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
unfollow
Unfollow an index.
Convert a cross-cluster replication follower index to a regular index. The API stops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication. The follower index must be paused and closed before you call the unfollow API.
info Currently cross-cluster replication does not support converting an existing regular index to a follower index. Converting a follower index to a regular index is an irreversible operation.
client.ccr.unfollow({ index })
Arguments
-
Request (object):
-
index
(string): The name of the follower index. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
-
cluster
allocation_explain
Explain the shard allocations. Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
client.cluster.allocationExplain({ ... })
Arguments
-
Request (object):
-
current_node
(Optional, string): Specifies the node ID or the name of the node to only explain a shard that is currently located on the specified node. -
index
(Optional, string): Specifies the name of the index that you would like an explanation for. -
primary
(Optional, boolean): If true, returns explanation for the primary shard for the given shard ID. -
shard
(Optional, number): Specifies the ID of the shard that you would like an explanation for. -
include_disk_info
(Optional, boolean): If true, returns information about disk usage and shard sizes. -
include_yes_decisions
(Optional, boolean): If true, returns YES decisions in explanation. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
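A minimal sketch asking about the primary of shard 0 of a hypothetical index:

const response = await client.cluster.allocationExplain({
  index: 'my-index', // hypothetical index name
  shard: 0,
  primary: true,
  include_disk_info: true,      // add disk usage and shard sizes
  include_yes_decisions: false, // keep the response small
})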
delete_component_template
Delete component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
client.cluster.deleteComponentTemplate({ name })
Arguments
-
Request (object):
-
name
(string | string[]): List or wildcard expression of component template names used to limit the request. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_voting_config_exclusions
Clear cluster voting config exclusions. Remove master-eligible nodes from the voting configuration exclusion list.
client.cluster.deleteVotingConfigExclusions({ ... })
Arguments
-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
wait_for_removal
(Optional, boolean): Specifies whether to wait for all excluded nodes to be removed from the cluster before clearing the voting configuration exclusions list. Defaults to true, meaning that all excluded nodes must be removed from the cluster before this API takes any action. If set to false then the voting configuration exclusions list is cleared even if some excluded nodes are still in the cluster.
-
exists_component_template
Check component templates. Returns information about whether a particular component template exists.
client.cluster.existsComponentTemplate({ name })
Arguments
-
Request (object):
-
name
(string | string[]): List of component template names used to limit the request. Wildcard (*) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.
-
get_component_template
Get component templates. Get information about component templates.
client.cluster.getComponentTemplate({ ... })
Arguments
-
Request (object):
-
name
(Optional, string): List of component template names used to limit the request. Wildcard (*) expressions are supported. -
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
include_defaults
(Optional, boolean): Return all default configurations for the component template (default: false) -
local
(Optional, boolean): If true, the request retrieves information from the local node only. If false, information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_settings
Get cluster-wide settings. By default, it returns only settings that have been explicitly defined.
client.cluster.getSettings({ ... })
Arguments
-
Request (object):
-
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
include_defaults
(Optional, boolean): If true, returns default cluster settings from the local node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
health
Get the cluster health status. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices.
The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. The index level status is controlled by the worst shard status.
One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. The cluster status is controlled by the worst index status.
client.cluster.health({ ... })
Arguments
-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or *. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
level
(Optional, Enum("cluster" | "indices" | "shards")): Can be one of cluster, indices or shards. Controls the details level of the health information returned. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): A number controlling to how many active shards to wait for, all to wait for all shards in the cluster to be active, or 0 to not wait. -
wait_for_events
(Optional, Enum("immediate" | "urgent" | "high" | "normal" | "low" | "languid")): Can be one of immediate, urgent, high, normal, low, languid. Wait until all currently queued events with the given priority are processed. -
wait_for_nodes
(Optional, string | number): The request waits until the specified number N of nodes is available. It also accepts >=N, <=N, >N and <N. Alternatively, it is possible to use ge(N), le(N), gt(N) and lt(N) notation. -
wait_for_no_initializing_shards
(Optional, boolean): A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard initializations. Defaults to false, which means it will not wait for initializing shards. -
wait_for_no_relocating_shards
(Optional, boolean): A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard relocations. Defaults to false, which means it will not wait for relocating shards. -
wait_for_status
(Optional, Enum("green" | "yellow" | "red")): One of green, yellow or red. Will wait (until the timeout provided) until the status of the cluster changes to the one provided or better, i.e. green > yellow > red. By default, will not wait for any status.
-
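A minimal sketch that blocks until the target indices are at least yellow (the index pattern is hypothetical):

const response = await client.cluster.health({
  index: 'my-index-*',       // hypothetical; omit to check the whole cluster
  level: 'indices',          // per-index detail
  wait_for_status: 'yellow', // wait until yellow or better, or until the timeout
  timeout: '30s',
})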
info
Get cluster info. Returns basic information about the cluster.
client.cluster.info({ target })
Arguments
-
Request (object):
-
target
(Enum("_all" | "http" | "ingest" | "thread_pool" | "script") | Enum("_all" | "http" | "ingest" | "thread_pool" | "script")[]): Limits the information returned to the specific target. Supports a list, such as http,ingest.
-
pending_tasks
Get the pending cluster tasks. Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect.
This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the task management API, which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both the task management API and the pending cluster tasks API.
client.cluster.pendingTasks({ ... })
Arguments
-
Request (object):
-
local
(Optional, boolean): If true, the request retrieves information from the local node only. If false, information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
post_voting_config_exclusions
Update voting configuration exclusions. Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.
Clusters should have no voting configuration exclusions in normal operation.
Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions.
This API waits for the nodes to be fully removed from the cluster before it returns.
If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.
A response to POST /_cluster/voting_config_exclusions with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions.
If the call to POST /_cluster/voting_config_exclusions
fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration.
In that case, you may safely retry the call.
Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.
client.cluster.postVotingConfigExclusions({ ... })
Arguments
-
Request (object):
-
node_names
(Optional, string | string[]): A list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids. -
node_ids
(Optional, string | string[]): A list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
timeout
(Optional, string | -1 | 0): When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.
-
put_component_template
Create or update a component template. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
An index template can be composed of multiple component templates.
To use a component template, specify it in an index template’s composed_of
list.
Component templates are only applied to new data streams and indices as part of a matching index template.
Settings and mappings specified directly in the index template or the create index request override any settings or mappings specified in a component template.
Component templates are only used during index creation. For data streams, this includes data stream creation and the creation of a stream’s backing indices. Changes to component templates do not affect existing indices, including a stream’s backing indices.
You can use C-style /* */ block comments in component templates.
You can include comments anywhere in the request body except before the opening curly bracket.
Applying component templates
You cannot directly apply a component template to a data stream or index.
To be applied, a component template must be included in an index template’s composed_of
list.
client.cluster.putComponentTemplate({ name, template })
Arguments
-
Request (object):
-
name
(string): Name of the component template to create. Elasticsearch includes the following built-in component templates: logs-mappings; logs-settings; metrics-mappings; metrics-settings; synthetics-mapping; synthetics-settings. Elastic Agent uses these templates to configure backing indices for its data streams. If you use Elastic Agent and want to overwrite one of these templates, set the version for your replacement template higher than the current version. If you don’t use Elastic Agent and want to disable all built-in component and index templates, set stack.templates.enabled to false using the cluster update settings API. -
template
({ aliases, mappings, settings, defaults, data_stream, lifecycle }): The template to be applied which includes mappings, settings, or aliases configuration. -
version
(Optional, number): Version number used to manage component templates externally. This number isn’t automatically generated or incremented by Elasticsearch. To unset a version, replace the template without specifying a version. -
_meta
(Optional, Record<string, User-defined value>): Optional user metadata about the component template. It may have any contents. This map is not automatically generated by Elasticsearch. This information is stored in the cluster state, so keeping it short is preferable. To unset _meta, replace the template without specifying this information. -
deprecated
(Optional, boolean): Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning. -
create
(Optional, boolean): If true, this request cannot replace or update existing component templates. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
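A minimal sketch (the template name and mappings are illustrative):

const response = await client.cluster.putComponentTemplate({
  name: 'my-mappings', // hypothetical component template name
  template: {
    mappings: {
      properties: {
        '@timestamp': { type: 'date' },
        message: { type: 'text' },
      },
    },
  },
  version: 1,
  _meta: { description: 'Shared mappings for log indices' },
})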
put_settings
Update the cluster settings.
Configure and update dynamic settings on a running cluster.
You can also configure dynamic settings locally on an unstarted or shut down node in elasticsearch.yml.
Updates made with this API can be persistent, which apply across cluster restarts, or transient, which reset after a cluster restart. You can also reset transient or persistent settings by assigning them a null value.
If you configure the same setting using multiple methods, Elasticsearch applies the settings in following order of precedence: 1) Transient setting; 2) Persistent setting; 3) elasticsearch.yml
setting; 4) Default setting value.
For example, you can apply a transient setting to override a persistent setting or elasticsearch.yml
setting.
However, a change to an elasticsearch.yml
setting will not override a defined transient or persistent setting.
In Elastic Cloud, use the user settings feature to configure all cluster settings. This method automatically rejects unsafe settings that could break your cluster.
If you run Elasticsearch on your own hardware, use this API to configure dynamic cluster settings.
Only use elasticsearch.yml
for static cluster settings and node settings.
The API doesn’t require a restart and ensures a setting’s value is the same on all nodes.
Transient cluster settings are no longer recommended. Use persistent cluster settings instead. If a cluster becomes unstable, transient settings can clear unexpectedly, resulting in a potentially undesired cluster configuration.
client.cluster.putSettings({ ... })
Arguments
-
Request (object):
-
persistent
(Optional, Record<string, User-defined value>) -
transient
(Optional, Record<string, User-defined value>) -
flat_settings
(Optional, boolean): Return settings in flat format (default: false) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
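A minimal sketch that persistently updates one dynamic setting (setting it to null instead would reset it to the default):

const response = await client.cluster.putSettings({
  persistent: {
    'cluster.routing.allocation.enable': 'all', // re-enable shard allocation
  },
})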
remote_info
Get remote cluster information.
Get information about configured remote clusters. The API returns connection and endpoint information keyed by the configured remote cluster alias.
info This API returns information that reflects current state on the local cluster. The
connected
field does not necessarily reflect whether a remote cluster is down or unavailable, only whether there is currently an open connection to it. Elasticsearch does not spontaneously try to reconnect to a disconnected remote cluster. To trigger a reconnection, attempt a cross-cluster search, ES|QL cross-cluster search, or try the [resolve cluster endpoint](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-resolve-cluster).
client.cluster.remoteInfo()
reroute
Reroute the cluster. Manually change the allocation of individual shards in the cluster. For example, a shard can be moved from one node to another explicitly, an allocation can be canceled, and an unassigned shard can be explicitly allocated to a specific node.
It is important to note that after processing any reroute commands Elasticsearch will perform rebalancing as normal (respecting the values of settings such as cluster.routing.rebalance.enable
) in order to remain in a balanced state.
For example, if the requested allocation includes moving a shard from node1 to node2 then this may cause a shard to be moved from node2 back to node1 to even things out.
The cluster can be set to disable allocations using the cluster.routing.allocation.enable
setting.
If allocations are disabled then the only allocations that will be performed are explicit ones given using the reroute command, and consequent allocations due to rebalancing.
The cluster will attempt to allocate a shard a maximum of index.allocation.max_retries
times in a row (defaults to 5
), before giving up and leaving the shard unallocated.
This scenario can be caused by structural problems such as having an analyzer which refers to a stopwords file which doesn’t exist on all nodes.
Once the problem has been corrected, allocation can be manually retried by calling the reroute API with the ?retry_failed
URI query parameter, which will attempt a single retry round for these shards.
client.cluster.reroute({ ... })
Arguments
-
Request (object):
-
commands
(Optional, { cancel, move, allocate_replica, allocate_stale_primary, allocate_empty_primary }[]): Defines the commands to perform. -
dry_run
(Optional, boolean): If true, then the request simulates the operation. It will calculate the result of applying the commands to the current cluster state and return the resulting cluster state after the commands (and rebalancing) have been applied; it will not actually perform the requested changes. -
explain
(Optional, boolean): If true, then the response contains an explanation of why the commands can or cannot run. -
metric
(Optional, string | string[]): Limits the information returned to the specified metrics. -
retry_failed
(Optional, boolean): If true, then retries allocation of shards that are blocked due to too many subsequent allocation failures. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
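A minimal sketch that simulates moving a shard between two nodes (all names are hypothetical; the move command's fields follow the Elasticsearch reroute commands):

const response = await client.cluster.reroute({
  dry_run: true, // compute and return the resulting state without applying it
  explain: true, // include reasons each command can or cannot run
  commands: [
    { move: { index: 'my-index', shard: 0, from_node: 'node1', to_node: 'node2' } },
  ],
})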
state
Get the cluster state. Get comprehensive information about the state of the cluster.
The cluster state is an internal data structure which keeps track of a variety of information needed by every node, including the identity and attributes of the other nodes in the cluster; cluster-wide settings; index metadata, including the mapping and settings for each index; the location and status of every shard copy in the cluster.
The elected master node ensures that every node in the cluster has a copy of the same cluster state. This API lets you retrieve a representation of this internal state for debugging or diagnostic purposes. You may need to consult the Elasticsearch source code to determine the precise meaning of the response.
By default the API will route requests to the elected master node since this node is the authoritative source of cluster states.
You can also retrieve the cluster state held on the node handling the API request by adding the ?local=true
query parameter.
Elasticsearch may need to expend significant effort to compute a response to this API in larger clusters, and the response may comprise a very large quantity of data. If you use this API repeatedly, your cluster may become unstable.
The response is a representation of an internal data structure. Its format is not subject to the same compatibility guarantees as other more stable APIs and may change from version to version. Do not query this API using external monitoring tools. Instead, obtain the information you require using other more stable cluster APIs.
client.cluster.state({ ... })
Arguments
-
Request (object):
-
metric
(Optional, string | string[]): Limit the information returned to the specified metrics -
index
(Optional, string | string[]): A list of index names; use _all or empty string to perform the operation on all indices. -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the _all string or when no indices have been specified.) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
flat_settings
(Optional, boolean): Return settings in flat format (default: false) -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed) -
local
(Optional, boolean): Return local information, do not retrieve the state from master node (default: false) -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
wait_for_metadata_version
(Optional, number): Wait for the metadata version to be equal to or greater than the specified metadata version -
wait_for_timeout
(Optional, string | -1 | 0): The maximum time to wait for wait_for_metadata_version before timing out
-
stats
Get cluster statistics. Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, os, jvm versions, memory usage, cpu and installed plugins).
client.cluster.stats({ ... })
Arguments
-
Request (object):
-
node_id
(Optional, string | string[]): List of node filters used to limit returned information. Defaults to all nodes in the cluster. -
include_remotes
(Optional, boolean): Include remote cluster data in the response -
timeout
(Optional, string | -1 | 0): Period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its stats. However, timed out nodes are included in the response’s _nodes.failed property. Defaults to no timeout.
-
connector
check_in
Check in a connector.
Update the last_seen
field in the connector and set it to the current timestamp.
client.connector.checkIn({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be checked in
-
delete
Delete a connector.
Removes a connector and associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.
client.connector.delete({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be deleted -
delete_sync_jobs
(Optional, boolean): A flag indicating if associated sync jobs should be also removed. Defaults to false.
-
get
Get a connector.
Get the details about a connector.
client.connector.get({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector
-
list
Get all connectors.
Get information about all connectors.
client.connector.list({ ... })
Arguments
-
Request (object):
-
from
(Optional, number): Starting offset (default: 0) -
size
(Optional, number): Specifies a max number of results to get -
index_name
(Optional, string | string[]): A list of connector index names to fetch connector documents for -
connector_name
(Optional, string | string[]): A list of connector names to fetch connector documents for -
service_type
(Optional, string | string[]): A list of connector service types to fetch connector documents for -
query
(Optional, string): A wildcard query string that filters connectors with matching name, description or index name
-
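As a sketch (assuming a configured client; the service type is illustrative), the filters combine as query parameters:
// List the first ten connectors for a given service type.
const { count, results } = await client.connector.list({
  from: 0,
  size: 10,
  service_type: 'sharepoint_online'
})
console.log(count, results.map((c) => c.name))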
post
Create a connector.
Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. Self-managed connectors (Connector clients) are self-managed on your infrastructure.
client.connector.post({ ... })
Arguments
-
Request (object):
-
description
(Optional, string) -
index_name
(Optional, string) -
is_native
(Optional, boolean) -
language
(Optional, string) -
name
(Optional, string) -
service_type
(Optional, string)
-
put
Create or update a connector.
client.connector.put({ ... })
Arguments
-
Request (object):
-
connector_id
(Optional, string): The unique identifier of the connector to be created or updated. ID is auto-generated if not provided. -
description
(Optional, string) -
index_name
(Optional, string) -
is_native
(Optional, boolean) -
language
(Optional, string) -
name
(Optional, string) -
service_type
(Optional, string)
-
sync_job_cancel
Cancel a connector sync job.
Cancel a connector sync job, which sets the status to cancelling and updates cancellation_requested_at
to the current time.
The connector service is then responsible for setting the status of connector sync jobs to cancelled.
client.connector.syncJobCancel({ connector_sync_job_id })
Arguments
-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job
-
sync_job_check_in
Check in a connector sync job.
Check in a connector sync job and set the last_seen
field to the current time before updating it in the internal index.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
client.connector.syncJobCheckIn({ connector_sync_job_id })
Arguments
-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job to be checked in.
-
sync_job_claim
Claim a connector sync job.
This action updates the job status to in_progress
and sets the last_seen
and started_at
timestamps to the current time.
Additionally, it can set the sync_cursor
property for the sync job.
This API is not intended for direct connector management by users. It supports the implementation of services that utilize the connector protocol to communicate with Elasticsearch.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
client.connector.syncJobClaim({ connector_sync_job_id, worker_hostname })
Arguments
-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job. -
worker_hostname
(string): The host name of the current system that will run the job. -
sync_cursor
(Optional, User-defined value): The cursor object from the last incremental sync job. This should reference thesync_cursor
field in the connector state for which the job runs.
-
sync_job_delete
Delete a connector sync job.
Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.
client.connector.syncJobDelete({ connector_sync_job_id })
Arguments
-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job to be deleted
-
sync_job_error
Set a connector sync job error.
Set the error
field for a connector sync job and set its status
to error
.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
client.connector.syncJobError({ connector_sync_job_id, error })
Arguments
-
Request (object):
-
connector_sync_job_id
(string): The unique identifier for the connector sync job. -
error
(string): The error for the connector sync job error field.
-
sync_job_get
Get a connector sync job.
client.connector.syncJobGet({ connector_sync_job_id })
Arguments
-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job
-
sync_job_list
Get all connector sync jobs.
Get information about all stored connector sync jobs listed by their creation date in ascending order.
client.connector.syncJobList({ ... })
Arguments
-
Request (object):
-
from
(Optional, number): Starting offset (default: 0) -
size
(Optional, number): Specifies a max number of results to get -
status
(Optional, Enum("canceling" | "canceled" | "completed" | "error" | "in_progress" | "pending" | "suspended")): A sync job status to fetch connector sync jobs for -
connector_id
(Optional, string): A connector id to fetch connector sync jobs for -
job_type
(Optional, Enum("full" | "incremental" | "access_control") | Enum("full" | "incremental" | "access_control")[]): A list of job types to fetch the sync jobs for
-
sync_job_post
Create a connector sync job.
Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.
client.connector.syncJobPost({ id })
Arguments
-
Request (object):
-
id
(string): The id of the associated connector -
job_type
(Optional, Enum("full" | "incremental" | "access_control")) -
trigger_method
(Optional, Enum("on_demand" | "scheduled"))
-
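A minimal sketch, assuming a configured client and an existing connector, that starts an on-demand full sync:
const job = await client.connector.syncJobPost({
  id: 'my-connector',
  job_type: 'full',
  trigger_method: 'on_demand'
})
console.log(job.id) // the ID of the new sync job document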
sync_job_update_stats
Set the connector sync job stats.
Stats include: deleted_document_count
, indexed_document_count
, indexed_document_volume
, and total_document_count
.
You can also update last_seen
.
This API is mainly used by the connector service for updating sync job information.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
client.connector.syncJobUpdateStats({ connector_sync_job_id, deleted_document_count, indexed_document_count, indexed_document_volume })
Arguments
-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job. -
deleted_document_count
(number): The number of documents the sync job deleted. -
indexed_document_count
(number): The number of documents the sync job indexed. -
indexed_document_volume
(number): The total size of the data (in MiB) the sync job indexed. -
last_seen
(Optional, string | -1 | 0): The timestamp to use in thelast_seen
property for the connector sync job. -
metadata
(Optional, Record<string, User-defined value>): The connector-specific metadata. -
total_document_count
(Optional, number): The total number of documents in the target index after the sync job finished.
-
update_active_filtering
Activate the connector draft filter.
Activates the valid draft filtering for a connector.
client.connector.updateActiveFiltering({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated
-
update_api_key_id
Update the connector API key ID.
Update the api_key_id
and api_key_secret_id
fields of a connector.
You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored.
The connector secret ID is required only for Elastic managed (native) connectors.
Self-managed connectors (connector clients) do not use this field.
client.connector.updateApiKeyId({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
api_key_id
(Optional, string) -
api_key_secret_id
(Optional, string)
-
update_configuration
Update the connector configuration.
Update the configuration field in the connector document.
client.connector.updateConfiguration({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
configuration
(Optional, Record<string, { category, default_value, depends_on, display, label, options, order, placeholder, required, sensitive, tooltip, type, ui_restrictions, validations, value }>) -
values
(Optional, Record<string, User-defined value>)
-
update_error
Update the connector error field.
Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.
client.connector.updateError({ connector_id, error })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
error
(T | null)
-
update_features
Update the connector features. Update the connector features in the connector document. This API can be used to control the following aspects of a connector:
- document-level security
- incremental syncs
- advanced sync rules
- basic sync rules
Normally, the running connector service automatically manages these features. However, you can use this API to override the default behavior.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
client.connector.updateFeatures({ connector_id, features })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated. -
features
({ document_level_security, incremental_sync, native_connector_api_keys, sync_rules })
-
update_filtering
Update the connector filtering.
Update the draft filtering configuration of a connector and marks the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.
client.connector.updateFiltering({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
filtering
(Optional, { active, domain, draft }[]) -
rules
(Optional, { created_at, field, id, order, policy, rule, updated_at, value }[]) -
advanced_snippet
(Optional, { created_at, updated_at, value })
-
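For example, a hedged sketch that replaces the draft basic sync rules (the rule values are illustrative; the field names follow the rules shape listed above):
const now = new Date().toISOString()
await client.connector.updateFiltering({
  connector_id: 'my-connector',
  rules: [{
    id: 'exclude-tmp',
    field: 'file_extension',
    rule: 'equals',
    policy: 'exclude',
    value: 'tmp',
    order: 0,
    created_at: now,
    updated_at: now
  }]
})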
update_filtering_validation
Update the connector draft filtering validation.
Update the draft filtering validation info for a connector.
client.connector.updateFilteringValidation({ connector_id, validation })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
validation
({ errors, state })
-
update_index_name
Update the connector index name.
Update the index_name
field of a connector, specifying the index where the data ingested by the connector is stored.
client.connector.updateIndexName({ connector_id, index_name })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
index_name
(T | null)
-
update_name
Update the connector name and description.
client.connector.updateName({ connector_id })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
name
(Optional, string) -
description
(Optional, string)
-
update_native
Update the connector is_native flag.
client.connector.updateNative({ connector_id, is_native })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
is_native
(boolean)
-
update_pipeline
Update the connector pipeline.
When you create a new connector, the configuration of an ingest pipeline is populated with default settings.
client.connector.updatePipeline({ connector_id, pipeline })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
pipeline
({ extract_binary_content, name, reduce_whitespace, run_ml_inference })
-
update_scheduling
Update the connector scheduling.
client.connector.updateScheduling({ connector_id, scheduling })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
scheduling
({ access_control, full, incremental })
-
update_service_type
Update the connector service type.
client.connector.updateServiceType({ connector_id, service_type })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
service_type
(string)
-
update_status
Update the connector status.
client.connector.updateStatus({ connector_id, status })
Arguments
-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
status
(Enum("created" | "needs_configuration" | "configured" | "connected" | "error"))
-
dangling_indices
delete_dangling_index
Delete a dangling index.
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size
indices while an Elasticsearch node is offline.
client.danglingIndices.deleteDanglingIndex({ index_uuid, accept_data_loss })
Arguments
-
Request (object):
-
index_uuid
(string): The UUID of the index to delete. Use the get dangling indices API to find the UUID. -
accept_data_loss
(boolean): This parameter must be set to true to acknowledge that it will no longer be possible to recover data from the dangling index. -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
import_dangling_index
Import a dangling index.
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size
indices while an Elasticsearch node is offline.
client.danglingIndices.importDanglingIndex({ index_uuid, accept_data_loss })
Arguments
-
Request (object):
-
index_uuid
(string): The UUID of the index to import. Use the get dangling indices API to locate the UUID. -
accept_data_loss
(boolean): This parameter must be set to true to import a dangling index. Because Elasticsearch cannot know where the dangling index data came from or determine which shard copies are fresh and which are stale, it cannot guarantee that the imported data represents the latest state of the index when it was last in the cluster. -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
list_dangling_indices
Get the dangling indices.
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size
indices while an Elasticsearch node is offline.
Use this API to list dangling indices, which you can then import or delete.
client.danglingIndices.listDanglingIndices()
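The three dangling index APIs are typically used together: list the dangling indices first, then import or delete them by UUID. A minimal sketch, assuming a configured client:
const { dangling_indices } = await client.danglingIndices.listDanglingIndices()
for (const dangling of dangling_indices) {
  // Importing requires explicitly accepting possible data loss.
  await client.danglingIndices.importDanglingIndex({
    index_uuid: dangling.index_uuid,
    accept_data_loss: true
  })
}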
enrich
delete_policy
Delete an enrich policy. Deletes an existing enrich policy and its enrich index.
client.enrich.deletePolicy({ name })
Arguments
-
Request (object):
-
name
(string): Enrich policy to delete. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
execute_policy
Run an enrich policy. Create the enrich index for an existing enrich policy.
client.enrich.executePolicy({ name })
Arguments
-
Request (object):
-
name
(string): Enrich policy to execute. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
wait_for_completion
(Optional, boolean): Iftrue
, the request blocks other enrich policy execution requests until complete.
-
get_policy
Get an enrich policy. Returns information about an enrich policy.
client.enrich.getPolicy({ ... })
Arguments
-
Request (object):
-
name
(Optional, string | string[]): List of enrich policy names used to limit the request. To return information for all enrich policies, omit this parameter. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
put_policy
Create an enrich policy. Creates an enrich policy.
client.enrich.putPolicy({ name })
Arguments
-
Request (object):
-
name
(string): Name of the enrich policy to create or update. -
geo_match
(Optional, { enrich_fields, indices, match_field, query, name, elasticsearch_version }): Matches enrich data to incoming documents based on ageo_shape
query. -
match
(Optional, { enrich_fields, indices, match_field, query, name, elasticsearch_version }): Matches enrich data to incoming documents based on aterm
query. -
range
(Optional, { enrich_fields, indices, match_field, query, name, elasticsearch_version }): Matches a number, date, or IP address in incoming documents to a range in the enrich index based on aterm
query. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
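For example, a sketch (the index, field, and policy names are illustrative) that defines a match policy and then builds its enrich index:
await client.enrich.putPolicy({
  name: 'users-policy',
  match: {
    indices: 'users',
    match_field: 'email',
    enrich_fields: ['first_name', 'last_name']
  }
})
// The policy only becomes usable after it is executed.
await client.enrich.executePolicy({ name: 'users-policy', wait_for_completion: true })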
stats
Get enrich stats. Returns enrich coordinator statistics and information about enrich policies that are currently executing.
client.enrich.stats({ ... })
Arguments
-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
eql
delete
Delete an async EQL search. Delete an async EQL search or a stored synchronous EQL search. The API also deletes results for the search.
client.eql.delete({ id })
Arguments
-
Request (object):
-
id
(string): Identifier for the search to delete. A search ID is provided in the EQL search API’s response for an async search. A search ID is also provided if the request’skeep_on_completion
parameter istrue
.
-
get
Get async EQL search results. Get the current status and available results for an async EQL search or a stored synchronous EQL search.
client.eql.get({ id })
Arguments
-
Request (object):
-
id
(string): Identifier for the search. -
keep_alive
(Optional, string | -1 | 0): Period for which the search and its results are stored on the cluster. Defaults to the keep_alive value set by the search’s EQL search API request. -
wait_for_completion_timeout
(Optional, string | -1 | 0): Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.
-
get_status
Get the async EQL status. Get the current status for an async EQL search or a stored synchronous EQL search without returning results.
client.eql.getStatus({ id })
Arguments
-
Request (object):
-
id
(string): Identifier for the search.
-
search
Get EQL search results. Returns search results for an Event Query Language (EQL) query. EQL assumes each document in a data stream or index corresponds to an event.
client.eql.search({ index, query })
Arguments
-
Request (object):
-
index
(string | string[]): The name of the index to scope the operation -
query
(string): EQL query you wish to run. -
case_sensitive
(Optional, boolean) -
event_category_field
(Optional, string): Field containing the event classification, such as process, file, or network. -
tiebreaker_field
(Optional, string): Field used to sort hits with the same timestamp in ascending order -
timestamp_field
(Optional, string): Field containing event timestamp. Default "@timestamp" -
fetch_size
(Optional, number): Maximum number of events to search at a time for sequence queries. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type } | { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }[]): Query, written in Query DSL, used to filter the events on which the EQL query runs. -
keep_alive
(Optional, string | -1 | 0) -
keep_on_completion
(Optional, boolean) -
wait_for_completion_timeout
(Optional, string | -1 | 0) -
allow_partial_search_results
(Optional, boolean): Allow query execution even if some shards fail. If true, the query keeps running and returns results based on the available shards. For sequences, the behavior can be further refined using allow_partial_sequence_results -
allow_partial_sequence_results
(Optional, boolean): This flag applies only to sequences and has effect only if allow_partial_search_results=true. If true, the sequence query will return results based on the available shards, ignoring the others. If false, the sequence query will return successfully, but will always have empty results. -
size
(Optional, number): For basic queries, the maximum number of matching events to return. Defaults to 10 -
fields
(Optional, { field, format, include_unmapped } | { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit. -
result_position
(Optional, Enum("tail" | "head")) -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>) -
max_samples_per_key
(Optional, number): By default, the response of a sample query contains up to10
samples, with one sample per unique set of join keys. Use thesize
parameter to get a smaller or larger set of samples. To retrieve more than one sample per set of join keys, use themax_samples_per_key
parameter. Pipes are not supported for sample queries. -
allow_no_indices
(Optional, boolean) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]) -
ignore_unavailable
(Optional, boolean): If true, missing or closed indices are not included in the response.
-
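A minimal sketch, assuming a configured client and an illustrative data stream name, that runs a two-step sequence query:
const result = await client.eql.search({
  index: 'my-data-stream',
  query: 'sequence by process.pid [process where true] [file where true]',
  size: 10
})
// Sequence queries populate hits.sequences; basic queries populate hits.events.
console.log(result.hits.sequences ?? result.hits.events)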
esql
async_query
Run an async ES|QL query. Asynchronously run an ES|QL (Elasticsearch query language) query, monitor its progress, and retrieve results when they become available.
The API accepts the same parameters and request body as the synchronous query API, along with additional async related properties.
client.esql.asyncQuery({ query })
Arguments
-
Request (object):
-
query
(string): The ES|QL query API accepts an ES|QL query string in the query parameter, runs it, and returns the results. -
columnar
(Optional, boolean): By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on. -
locale
(Optional, string) -
params
(Optional, number | number | string | boolean | null | User-defined value[]): To avoid any attempts of hacking or code injection, extract the values in a separate list of parameters. Use question mark placeholders (?) in the query string for each of the parameters. -
profile
(Optional, boolean): If provided andtrue
the response will include an extraprofile
object with information on how the query was executed. This information is for human debugging and its format can change at any time but it can give some insight into the performance of each part of the query. -
tables
(Optional, Record<string, Record<string, { integer, keyword, long, double }>>): Tables to use with the LOOKUP operation. The top level key is the table name and the next level key is the column name. -
include_ccs_metadata
(Optional, boolean): When set totrue
and performing a cross-cluster query, the response will include an extra_clusters
object with information about the clusters that participated in the search along with info such as shards count. -
wait_for_completion_timeout
(Optional, string | -1 | 0): The period to wait for the request to finish. By default, the request waits for 1 second for the query results. If the query completes during this period, results are returned. Otherwise, a query ID is returned that can later be used to retrieve the results. -
delimiter
(Optional, string): The character to use between values within a CSV row. It is valid only for the CSV format. -
drop_null_columns
(Optional, boolean): Indicates whether columns that are entirelynull
will be removed from thecolumns
andvalues
portion of the results. Iftrue
, the response will include an extra section under the nameall_columns
which has the name of all the columns. -
format
(Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile" | "arrow")): A short version of the Accept header, for examplejson
oryaml
. -
keep_alive
(Optional, string | -1 | 0): The period for which the query and its results are stored in the cluster. The default period is five days. When this period expires, the query and its results are deleted, even if the query is still ongoing. If thekeep_on_completion
parameter is false, Elasticsearch only stores async queries that do not complete within the period set by thewait_for_completion_timeout
parameter, regardless of this value. -
keep_on_completion
(Optional, boolean): Indicates whether the query and its results are stored in the cluster. If false, the query and its results are stored in the cluster only if the request does not complete during the period set by thewait_for_completion_timeout
parameter.
-
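A hedged sketch of the async flow (the index pattern is illustrative): submit the query with a short completion timeout, then retrieve results by ID if it is still running:
const submitted = await client.esql.asyncQuery({
  query: 'FROM logs-* | STATS count = COUNT(*) BY host.name',
  wait_for_completion_timeout: '2s'
})
if (submitted.is_running) {
  const results = await client.esql.asyncQueryGet({ id: submitted.id })
  console.log(results.values)
}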
async_query_delete
Delete an async ES|QL query. If the query is still running, it is cancelled. Otherwise, the stored results are deleted.
If the Elasticsearch security features are enabled, only the following users can use this API to delete a query:
- The authenticated user that submitted the original query request
-
Users with the
cancel_task
cluster privilege
client.esql.asyncQueryDelete({ id })
Arguments
-
Request (object):
-
id
(string): The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with thekeep_on_completion
parameter set totrue
.
-
async_query_get
Get async ES|QL query results. Get the current status and available results or stored results for an ES|QL asynchronous query. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can retrieve the results using this API.
client.esql.asyncQueryGet({ id })
Arguments
-
Request (object):
-
id
(string): The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with thekeep_on_completion
parameter set totrue
. -
drop_null_columns
(Optional, boolean): Indicates whether columns that are entirelynull
will be removed from thecolumns
andvalues
portion of the results. Iftrue
, the response will include an extra section under the nameall_columns
which has the name of all the columns. -
keep_alive
(Optional, string | -1 | 0): The period for which the query and its results are stored in the cluster. When this period expires, the query and its results are deleted, even if the query is still ongoing. -
wait_for_completion_timeout
(Optional, string | -1 | 0): The period to wait for the request to finish. By default, the request waits for complete query results. If the request completes during the period specified in this parameter, complete query results are returned. Otherwise, the response returns anis_running
value oftrue
and no results.
-
async_query_stop
Stop an async ES|QL query.
This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.
client.esql.asyncQueryStop({ id })
Arguments
-
Request (object):
-
id
(string): The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with thekeep_on_completion
parameter set totrue
. -
drop_null_columns
(Optional, boolean): Indicates whether columns that are entirelynull
will be removed from thecolumns
andvalues
portion of the results. Iftrue
, the response will include an extra section under the nameall_columns
which has the name of all the columns.
-
query
Run an ES|QL query. Get search results for an ES|QL (Elasticsearch query language) query.
client.esql.query({ query })
Arguments
-
Request (object):
-
query
(string): The ES|QL query API accepts an ES|QL query string in the query parameter, runs it, and returns the results. -
columnar
(Optional, boolean): By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on. -
locale
(Optional, string) -
params
(Optional, number | number | string | boolean | null | User-defined value[]): To avoid any attempts of hacking or code injection, extract the values in a separate list of parameters. Use question mark placeholders (?) in the query string for each of the parameters. -
profile
(Optional, boolean): If provided andtrue
the response will include an extraprofile
object with information on how the query was executed. This information is for human debugging and its format can change at any time but it can give some insight into the performance of each part of the query. -
tables
(Optional, Record<string, Record<string, { integer, keyword, long, double }>>): Tables to use with the LOOKUP operation. The top level key is the table name and the next level key is the column name. -
include_ccs_metadata
(Optional, boolean): When set totrue
and performing a cross-cluster query, the response will include an extra_clusters
object with information about the clusters that participated in the search along with info such as shards count. -
format
(Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile" | "arrow")): A short version of the Accept header, e.g. json, yaml. -
delimiter
(Optional, string): The character to use between values within a CSV row. Only valid for the CSV format. -
drop_null_columns
(Optional, boolean): Should columns that are entirelynull
be removed from thecolumns
andvalues
portion of the results? Defaults tofalse
. Iftrue
then the response will include an extra section under the nameall_columns
which has the name of all columns.
-
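For example, a minimal sketch (the index pattern and field are illustrative) that binds a value with a question mark placeholder, as described for the params property:
const response = await client.esql.query({
  query: 'FROM logs-* | WHERE event.duration > ? | LIMIT 5',
  params: [1000000]
})
console.log(response.columns, response.values)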
features
get_features
Get the features.
Get a list of features that can be included in snapshots using the feature_states
field when creating a snapshot.
You can use this API to determine which feature states to include when taking a snapshot.
By default, all feature states are included in a snapshot if that snapshot includes the global state, or none if it does not.
A feature state includes one or more system indices necessary for a given feature to function. In order to ensure data integrity, all system indices that comprise a feature state are snapshotted and restored together.
The features listed by this API are a combination of built-in features and features defined by plugins. In order for a feature state to be listed in this API and recognized as a valid feature state by the create snapshot API, the plugin that defines that feature must be installed on the master node.
client.features.getFeatures({ ... })
Arguments
-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
reset_features
Reset the features. Clear all of the state information stored in system indices by Elasticsearch features, including the security and machine learning indices.
Intended for development and testing use only. Do not reset features on a production cluster.
Return a cluster to the same state as a new installation by resetting the feature state for all Elasticsearch features. This deletes all state information stored in system indices.
The response code is HTTP 200 if the state is successfully reset for all features. It is HTTP 500 if the reset operation failed for any feature.
Note that select features might provide a way to reset particular system indices. Using this API resets all features, both those that are built in and those that are implemented as plugins.
To list the features that will be affected, use the get features API.
The features installed on the node you submit this request to are the features that will be reset. Run on the master node if you have any doubts about which plugins are installed on individual nodes.
client.features.resetFeatures({ ... })
Arguments
-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
fleet
global_checkpoints
Get global checkpoints.
Get the current global checkpoints for an index. This API is designed for internal use by the Fleet server project.
client.fleet.globalCheckpoints({ index })
Arguments
-
Request (object):
-
index
(string | string): A single index or index alias that resolves to a single index. -
wait_for_advance
(Optional, boolean): A boolean value which controls whether to wait (until the timeout) for the global checkpoints to advance past the providedcheckpoints
. -
wait_for_index
(Optional, boolean): A boolean value which controls whether to wait (until the timeout) for the target index to exist and all primary shards be active. Can only be true whenwait_for_advance
is true. -
checkpoints
(Optional, number[]): A comma separated list of previous global checkpoints. When used in combination withwait_for_advance
, the API will only return once the global checkpoints advances past the checkpoints. Providing an empty list will cause Elasticsearch to immediately return the current global checkpoints. -
timeout
(Optional, string | -1 | 0): Period to wait for a global checkpoints to advance pastcheckpoints
.
-
msearch
Run multiple Fleet searches. Execute several [fleet searches](https://www.elastic.co/guide/en/elasticsearch/reference/current/fleet-search.html) with a single API request. The API follows the same structure as the [multi search](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) API. However, similar to the Fleet search API, it supports the wait_for_checkpoints parameter.
client.fleet.msearch({ ... })
Arguments
-
Request (object):
-
index
(Optional, string | string): A single target to search. If the target is an index alias, it must resolve to a single index. -
searches
(Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, index, preference, request_cache, routing, search_type, ccs_minimize_roundtrips, allow_partial_search_results, ignore_throttled } | { aggregations, collapse, query, explain, ext, stored_fields, docvalue_fields, knn, from, highlight, indices_boost, min_score, post_filter, profile, rescore, script_fields, search_after, size, sort, _source, fields, terminate_after, stats, timeout, track_scores, track_total_hits, version, runtime_mappings, seq_no_primary_term, pit, suggest }[]) -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
ccs_minimize_roundtrips
(Optional, boolean): If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. -
ignore_throttled
(Optional, boolean): If true, concrete, expanded or aliased indices are ignored when frozen. -
ignore_unavailable
(Optional, boolean): If true, missing or closed indices are not included in the response. -
max_concurrent_searches
(Optional, number): Maximum number of concurrent searches the multi search API can execute. -
max_concurrent_shard_requests
(Optional, number): Maximum number of concurrent shard requests that each sub-search request executes per node. -
pre_filter_shard_size
(Optional, number): Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if for instance a shard can not match any documents based on its rewrite method i.e., if date filters are mandatory to match but the shard bounds and the query are disjoint. -
search_type
(Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): Indicates whether global term and document frequencies should be used when scoring returned documents. -
rest_total_hits_as_int
(Optional, boolean): If true, hits.total are returned as an integer in the response. Defaults to false, which returns an object. -
typed_keys
(Optional, boolean): Specifies whether aggregation and suggester names should be prefixed by their respective types in the response. -
wait_for_checkpoints
(Optional, number[]): A comma separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list which will cause Elasticsearch to immediately execute the search. -
allow_partial_search_results
(Optional, boolean): If true, returns partial results if there are shard request timeouts or [shard failures](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-replication.html#shard-failures). If false, returns an error with no partial results. Defaults to the configured cluster settingsearch.default_allow_partial_results
which is true by default.
-
search
Run a Fleet search. The purpose of the Fleet search API is to provide an API where the search is run only after the provided checkpoint has been processed and is visible for searches inside of Elasticsearch.
client.fleet.search({ index })
Arguments
-
Request (object):
-
index
(string | string): A single target to search. If the target is an index alias, it must resolve to a single index. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>) -
collapse
(Optional, { field, inner_hits, max_concurrent_group_searches, collapse }) -
explain
(Optional, boolean): If true, returns detailed information about score computation as part of a hit. -
ext
(Optional, Record<string, User-defined value>): Configuration of search extensions defined by Elasticsearch plugins. -
from
(Optional, number): Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter. -
highlight
(Optional, { encoder, fields }) -
track_total_hits
(Optional, boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits. -
indices_boost
(Optional, Record<string, number>[]): Boosts the _score of documents from specified indices. -
docvalue_fields
(Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response. -
min_score
(Optional, number): Minimum _score for matching documents. Documents with a lower _score are not included in the search results. -
post_filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }) -
profile
(Optional, boolean) -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL. -
rescore
(Optional, { window_size, query, learning_to_rank } | { window_size, query, learning_to_rank }[]) -
script_fields
(Optional, Record<string, { script, ignore_failure }>): Retrieve a script evaluation (based on different fields) for each hit. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]) -
size
(Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter. -
slice
(Optional, { field, id, max }) -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]) -
_source
(Optional, boolean | { excludes, includes }): Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response. -
fields
(Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response. -
suggest
(Optional, { text }) -
terminate_after
(Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Defaults to 0, which does not terminate query execution early. -
timeout
(Optional, string): Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout. -
track_scores
(Optional, boolean): If true, calculate and return document scores, even if the scores are not used for sorting. -
version
(Optional, boolean): If true, returns document version as part of a hit. -
seq_no_primary_term
(Optional, boolean): If true, returns sequence number and primary term of the last modification of each hit. See Optimistic concurrency control. -
stored_fields
(Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response. -
pit
(Optional, { id, keep_alive }): Limits the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name. -
stats
(Optional, string[]): Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API. -
allow_no_indices
(Optional, boolean) -
analyzer
(Optional, string) -
analyze_wildcard
(Optional, boolean) -
batched_reduce_size
(Optional, number) -
ccs_minimize_roundtrips
(Optional, boolean) -
default_operator
(Optional, Enum("and" | "or")) -
df
(Optional, string) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]) -
ignore_throttled
(Optional, boolean) -
ignore_unavailable
(Optional, boolean) -
lenient
(Optional, boolean) -
max_concurrent_shard_requests
(Optional, number) -
min_compatible_shard_node
(Optional, string) -
preference
(Optional, string) -
pre_filter_shard_size
(Optional, number) -
request_cache
(Optional, boolean) -
routing
(Optional, string) -
scroll
(Optional, string | -1 | 0) -
search_type
(Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")) -
suggest_field
(Optional, string): Specifies which field to use for suggestions. -
suggest_mode
(Optional, Enum("missing" | "popular" | "always")) -
suggest_size
(Optional, number) -
suggest_text
(Optional, string): The source text for which the suggestions should be returned. -
typed_keys
(Optional, boolean) -
rest_total_hits_as_int
(Optional, boolean) -
_source_excludes
(Optional, string | string[]) -
_source_includes
(Optional, string | string[]) -
q
(Optional, string) -
wait_for_checkpoints
(Optional, number[]): A comma separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list which will cause Elasticsearch to immediately execute the search. -
allow_partial_search_results
(Optional, boolean): If true, returns partial results if there are shard request timeouts or [shard failures](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-replication.html#shard-failures). If false, returns an error with no partial results. Defaults to the configured cluster settingsearch.default_allow_partial_results
which is true by default.
-
graph
explore
Explore graph analytics.
Extract and summarize information about the documents and terms in an Elasticsearch data stream or index.
The easiest way to understand the behavior of this API is to use the Graph UI to explore connections.
An initial request to the _explore
API contains a seed query that identifies the documents of interest and specifies the fields that define the vertices and connections you want to include in the graph.
Subsequent requests enable you to spider out from one or more vertices of interest.
You can exclude vertices that have already been returned.
client.graph.explore({ index })
Arguments
-
Request (object):
-
index
(string | string[]): Name of the index. -
connections
(Optional, { connections, query, vertices }): Specifies one or more fields from which you want to extract terms that are associated with the specified vertices. -
controls
(Optional, { sample_diversity, sample_size, timeout, use_significance }): Direct the Graph API how to build the graph. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): A seed query that identifies the documents of interest. Can be any valid Elasticsearch query. -
vertices
(Optional, { exclude, field, include, min_doc_count, shard_min_doc_count, size }[]): Specifies one or more fields that contain the terms you want to include in the graph as vertices. -
routing
(Optional, string): Custom value used to route operations to a specific shard. -
timeout
(Optional, string | -1 | 0): Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
-
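A hedged sketch, loosely following the workflow described above (the index and field names are illustrative): seed with a query, declare vertex fields, and let connections spider out:
const graph = await client.graph.explore({
  index: 'clicklogs',
  query: { match: { 'query.raw': 'midi' } },
  vertices: [{ field: 'product' }],
  connections: { vertices: [{ field: 'query.raw' }] },
  controls: { use_significance: false, sample_size: 2000 }
})
console.log(graph.vertices.length, graph.connections.length)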
ilm
delete_lifecycle
Delete a lifecycle policy. You cannot delete policies that are currently in use. If the policy is being used to manage any indices, the request fails and returns an error.
client.ilm.deleteLifecycle({ policy })
Arguments
-
Request (object):
-
policy
(string): Identifier for the policy. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
explain_lifecycle
Explain the lifecycle state. Get the current lifecycle status for one or more indices. For data streams, the API retrieves the current lifecycle status for the stream’s backing indices.
The response indicates when the index entered each lifecycle state, provides the definition of the running phase, and information about any failures.
client.ilm.explainLifecycle({ index })
Arguments
-
Request (object):
-
index
(string): List of data streams, indices, and aliases to target. Supports wildcards (*
). To target all data streams and indices, use*
or_all
. -
only_errors
(Optional, boolean): Filters the returned indices to only indices that are managed by ILM and are in an error state, either due to encountering an error while executing the policy or attempting to use a policy that does not exist. -
only_managed
(Optional, boolean): Filters the returned indices to only indices that are managed by ILM. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_lifecycle
Get lifecycle policies.
client.ilm.getLifecycle({ ... })
Arguments
-
Request (object):
-
policy
(Optional, string): Identifier for the policy. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_status
Get the ILM status.
Get the current index lifecycle management status.
client.ilm.getStatus()
migrate_to_data_tiers
Migrate to data tiers routing. Switch the indices, ILM policies, and legacy, composable, and component templates from using custom node attributes and attribute-based allocation filters to using data tiers. Optionally, delete one legacy index template. Using node roles enables ILM to automatically move the indices between data tiers.
Migrating away from custom node attribute routing can be performed manually. This API provides an automated way of performing three of the four manual steps listed in the migration guide:
- Stop setting the custom hot attribute on new indices.
- Remove custom allocation settings from existing ILM policies.
- Replace custom allocation settings from existing indices with the corresponding tier preference.
ILM must be stopped before performing the migration.
Use the stop ILM and get ILM status APIs to wait until the reported operation mode is STOPPED
.
client.ilm.migrateToDataTiers({ ... })
Arguments
-
Request (object):
-
legacy_template_to_delete
(Optional, string) -
node_attribute
(Optional, string) -
dry_run
(Optional, boolean): If true, simulates the migration from node attributes based allocation filters to data tiers, but does not perform the migration. This provides a way to retrieve the indices and ILM policies that need to be migrated.
-
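A dry run is a safe way to preview the migration before committing to it. A minimal sketch, assuming ILM has already been stopped:
// Simulates the migration and reports what would be changed.
const preview = await client.ilm.migrateToDataTiers({ dry_run: true })
console.log(preview)
// Run again without dry_run to perform the migration for real.
await client.ilm.migrateToDataTiers({ dry_run: false })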
move_to_step
editMove to a lifecycle step. Manually move an index into a specific step in the lifecycle policy and run that step.
This operation can result in the loss of data. Manually moving an index into a specific step runs that step even if it has already been performed. This is a potentially destructive action and should be considered an expert-level API.
You must specify both the current step and the step to be executed in the body of the request. The request will fail if the current step does not match the step currently running for the index. This is to prevent the index from being moved from an unexpected step into the next step.
When specifying the target (next_step
) to which the index will be moved, either the name or both the action and name fields are optional.
If only the phase is specified, the index will move to the first step of the first action in the target phase.
If the phase and action are specified, the index will move to the first step of the specified action in the specified phase.
Only actions specified in the ILM policy are considered valid.
An index cannot move to a step that is not part of its policy.
client.ilm.moveToStep({ index, current_step, next_step })
Arguments
edit-
Request (object):
-
index
(string): The name of the index whose lifecycle step is to change -
current_step
({ action, name, phase }): The step that the index is expected to be in. -
next_step
({ action, name, phase }): The step that you want to run.
-
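For example, a hedged sketch that moves a hypothetical index out of the new phase into the warm phase; the step names are illustrative and must match the step the index is actually in:
const response = await client.ilm.moveToStep({
  index: 'my-index',
  // The step the index is expected to be in right now.
  current_step: { phase: 'new', action: 'complete', name: 'complete' },
  // Only the phase is given, so the index moves to the first step of warm.
  next_step: { phase: 'warm' },
})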
put_lifecycle
editCreate or update a lifecycle policy. If the specified policy exists, it is replaced and the policy version is incremented.
Only the latest version of the policy is stored; you cannot revert to previous versions.
client.ilm.putLifecycle({ policy })
Arguments
edit-
Request (object):
-
policy
(string): Identifier for the policy. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
remove_policy
editRemove policies from an index. Remove the assigned lifecycle policies from an index or a data stream’s backing indices. This also stops ILM from managing the indices.
client.ilm.removePolicy({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the index from which to remove the policy
-
retry
editRetry a policy. Retry running the lifecycle policy for an index that is in the ERROR step. The API sets the policy back to the step where the error occurred and runs the step. Use the explain lifecycle state API to determine whether an index is in the ERROR step.
client.ilm.retry({ index })
Arguments
edit-
Request (object):
-
index
(string): The names of the indices (comma-separated) whose failed lifecycle step is to be retried
-
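A minimal sketch that combines the explain and retry APIs: confirm that a hypothetical index is in the ERROR step, then retry its failed step:
const explain = await client.ilm.explainLifecycle({
  index: 'my-index',
  only_errors: true,
})
// With only_errors, the index is present only if it is in the ERROR step.
if (explain.indices['my-index']) {
  await client.ilm.retry({ index: 'my-index' })
}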
start
editStart the ILM plugin. Start the index lifecycle management plugin if it is currently stopped. ILM is started automatically when the cluster is formed. Restarting ILM is necessary only when it has been stopped using the stop ILM API.
client.ilm.start({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
stop
editStop the ILM plugin. Halt all lifecycle management operations and stop the index lifecycle management plugin. This is useful when you are performing maintenance on the cluster and need to prevent ILM from performing any actions on your indices.
The API returns as soon as the stop request has been acknowledged, but the plugin might continue to run until in-progress operations complete and the plugin can be safely stopped. Use the get ILM status API to check whether ILM is running.
client.ilm.stop({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
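Because the stop request is merely acknowledged, a caller that needs ILM to be fully stopped should poll the get ILM status API. A minimal polling sketch:
await client.ilm.stop()
// Poll until the reported operation mode is STOPPED.
let status = await client.ilm.getStatus()
while (status.operation_mode !== 'STOPPED') {
  await new Promise((resolve) => setTimeout(resolve, 1000))
  status = await client.ilm.getStatus()
}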
indices
editadd_block
editAdd an index block.
Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.
client.indices.addBlock({ index, block })
Arguments
edit-
Request (object):
-
index
(string): A list or wildcard expression of index names used to limit the request. By default, you must explicitly name the indices you are adding blocks to. To allow the adding of blocks to indices with_all
,*
, or other wildcard expressions, change theaction.destructive_requires_name
setting tofalse
. You can update this setting in theelasticsearch.yml
file or by using the cluster update settings API. -
block
(Enum("metadata" | "read" | "read_only" | "write")): The block type to add to the index. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts withfoo
but no index starts withbar
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports a list of values, such asopen,hidden
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to-1
to indicate that the request should never timeout. -
timeout
(Optional, string | -1 | 0): The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. It can also be set to-1
to indicate that the request should never timeout.
-
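For example, a minimal sketch that makes a hypothetical index read-only by adding a write block:
// Searches and reads still work, but new writes are rejected.
const response = await client.indices.addBlock({
  index: 'my-index',
  block: 'write',
})
console.log(response.acknowledged)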
analyze
editGet tokens from text analysis. The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive amount of tokens may cause a node to run out of memory.
The index.analyze.max_token_count
setting enables you to limit the number of tokens that can be produced.
If more than this limit of tokens gets generated, an error occurs.
The _analyze
endpoint without a specified index will always use 10000
as its limit.
client.indices.analyze({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string): Index used to derive the analyzer. If specified, theanalyzer
or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer. -
analyzer
(Optional, string): The name of the analyzer that should be applied to the providedtext
. This could be a built-in analyzer, or an analyzer that’s been configured in the index. -
attributes
(Optional, string[]): Array of token attributes used to filter the output of theexplain
parameter. -
char_filter
(Optional, string | { type, escaped_tags } | { type, mappings, mappings_path } | { type, flags, pattern, replacement } | { type, mode, name } | { type, normalize_kana, normalize_kanji }[]): Array of character filters used to preprocess characters before the tokenizer. -
explain
(Optional, boolean): Iftrue
, the response includes token attributes and additional details. -
field
(Optional, string): Field used to derive the analyzer. To use this parameter, you must specify an index. If specified, theanalyzer
parameter overrides this value. -
filter
(Optional, string | { type, preserve_original } | { type, common_words, common_words_path, ignore_case, query_mode } | { type, filter, script } | { type, delimiter, encoding } | { type, max_gram, min_gram, side, preserve_original } | { type, articles, articles_path, articles_case } | { type, max_output_size, separator } | { type, dedup, dictionary, locale, longest_only } | { type } | { type, mode, types } | { type, keep_words, keep_words_case, keep_words_path } | { type, ignore_case, keywords, keywords_path, keywords_pattern } | { type } | { type, max, min } | { type, consume_all_tokens, max_token_count } | { type, language } | { type, filters, preserve_original } | { type, max_gram, min_gram, preserve_original } | { type, stoptags } | { type, patterns, preserve_original } | { type, all, flags, pattern, replacement } | { type } | { type, script } | { type } | { type } | { type, filler_token, max_shingle_size, min_shingle_size, output_unigrams, output_unigrams_if_no_shingles, token_separator } | { type, language } | { type, rules, rules_path } | { type, language } | { type, ignore_case, remove_trailing, stopwords, stopwords_path } | { type, expand, format, lenient, synonyms, synonyms_path, synonyms_set, tokenizer, updateable } | { type, expand, format, lenient, synonyms, synonyms_path, synonyms_set, tokenizer, updateable } | { type } | { type, length } | { type, only_on_same_position } | { type } | { type, adjust_offsets, catenate_all, catenate_numbers, catenate_words, generate_number_parts, generate_word_parts, ignore_keywords, preserve_original, protected_words, protected_words_path, split_on_case_change, split_on_numerics, stem_english_possessive, type_table, type_table_path } | { type, catenate_all, catenate_numbers, catenate_words, generate_number_parts, generate_word_parts, preserve_original, protected_words, protected_words_path, split_on_case_change, split_on_numerics, stem_english_possessive, type_table, type_table_path } | { type, minimum_length } | { type, use_romaji } | { type, stoptags } | { type, alternate, case_first, case_level, country, decomposition, hiragana_quaternary_mode, language, numeric, rules, strength, variable_top, variant } | { type, unicode_set_filter } | { type, name } | { type, dir, id } | { type, encoder, languageset, max_code_len, name_type, replace, rule_type } | { type }[]): Array of token filters used to apply after the tokenizer. -
normalizer
(Optional, string): Normalizer to use to convert text into a single token. -
text
(Optional, string | string[]): Text to analyze. If an array of strings is provided, it is analyzed as a multi-value field. -
tokenizer
(Optional, string | { type, tokenize_on_chars, max_token_length } | { type, max_token_length } | { type, custom_token_chars, max_gram, min_gram, token_chars } | { type, buffer_size } | { type } | { type } | { type, custom_token_chars, max_gram, min_gram, token_chars } | { type, buffer_size, delimiter, replacement, reverse, skip } | { type, flags, group, pattern } | { type, pattern } | { type, pattern } | { type, max_token_length } | { type } | { type, max_token_length } | { type, max_token_length } | { type, rule_files } | { type, discard_punctuation, mode, nbest_cost, nbest_examples, user_dictionary, user_dictionary_rules, discard_compound_token } | { type, decompound_mode, discard_punctuation, user_dictionary, user_dictionary_rules }): Tokenizer to use to convert text into tokens.
-
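As an illustration, a minimal sketch that runs the built-in standard analyzer over a sample string; no index is needed for built-in analyzers:
const response = await client.indices.analyze({
  analyzer: 'standard',
  text: 'The QUICK brown fox',
})
// Each token carries its text, offsets, type, and position.
console.log(response.tokens.map((t) => t.token)) // ['the', 'quick', 'brown', 'fox']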
cancel_migrate_reindex
editCancel a migration reindex operation.
Cancel a migration reindex attempt for a data stream or index.
client.indices.cancelMigrateReindex({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): The index or data stream name
-
clear_cache
editClear the cache. Clear the cache of one or more indices. For data streams, the API clears the caches of the stream’s backing indices.
By default, the clear cache API clears all caches.
To clear only specific caches, use the fielddata
, query
, or request
parameters.
To clear the cache only of specific fields, use the fields
parameter.
client.indices.clearCache({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
fielddata
(Optional, boolean): Iftrue
, clears the fields cache. Use thefields
parameter to clear the cache of specific fields only. -
fields
(Optional, string | string[]): List of field names used to limit thefielddata
parameter. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
query
(Optional, boolean): Iftrue
, clears the query cache. -
request
(Optional, boolean): Iftrue
, clears the request cache.
-
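For example, a sketch that clears only the fielddata cache for a single hypothetical field, leaving the query and request caches untouched:
const response = await client.indices.clearCache({
  index: 'my-index',
  fielddata: true,
  fields: 'session_id',
})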
clone
editClone an index. Clone an existing index into a new index. Each original primary shard is cloned into a new primary shard in the new index.
Elasticsearch does not apply index templates to the resulting index. The API also does not copy index metadata from the original index. Index metadata includes aliases, index lifecycle management phase definitions, and cross-cluster replication (CCR) follower information. For example, if you clone a CCR follower index, the resulting clone will not be a follower index.
The clone API copies most index settings from the source index to the resulting index, with the exception of index.number_of_replicas
and index.auto_expand_replicas
.
To set the number of replicas in the resulting index, configure these settings in the clone request.
Cloning works as follows:
- First, it creates a new target index with the same definition as the source index.
- Then it hard-links segments from the source index into the target index. If the file system does not support hard-linking, all segments are copied into the new index, which is a much more time-consuming process.
- Finally, it recovers the target index as though it were a closed index which had just been re-opened.
Indices can only be cloned if they meet the following requirements:
- The index must be marked as read-only and have a cluster health status of green.
- The target index must not exist.
- The source index must have the same number of primary shards as the target index.
- The node handling the clone process must have sufficient free disk space to accommodate a second copy of the existing index.
The current write index on a data stream cannot be cloned. In order to clone the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be cloned.
Mappings cannot be specified in the _clone
request. The mappings of the source index will be used for the target index.
Monitor the cloning process
The cloning process can be monitored with the cat recovery API, or the cluster health API can be used to wait until all primary shards have been allocated by setting the wait_for_status
parameter to yellow
.
The _clone
API returns as soon as the target index has been added to the cluster state, before any shards have been allocated.
At this point, all shards are in the state unassigned.
If, for any reason, the target index can’t be allocated, its primary shard will remain unassigned until it can be allocated on that node.
Once the primary shard is allocated, it moves to state initializing, and the clone process begins. When the clone operation completes, the shard will become active. At that point, Elasticsearch will try to allocate any replicas and may decide to relocate the primary shard to another node.
Wait for active shards
Because the clone operation creates a new index to clone the shards to, the wait for active shards setting on index creation applies to the clone index action as well.
client.indices.clone({ index, target })
Arguments
edit-
Request (object):
-
index
(string): Name of the source index to clone. -
target
(string): Name of the target index to create. -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the resulting index. -
settings
(Optional, Record<string, User-defined value>): Configuration options for the target index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
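Putting the requirements together, a hedged sketch that first marks a hypothetical source index read-only and then clones it:
// The source index must be blocked for writes before it can be cloned.
await client.indices.addBlock({ index: 'my-source-index', block: 'write' })
const response = await client.indices.clone({
  index: 'my-source-index',
  target: 'my-cloned-index',
})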
close
editClose an index. A closed index is blocked for read or write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. Closed indices do not have to maintain internal data structures for indexing or searching documents, which results in a smaller overhead on the cluster.
When opening or closing an index, the master node is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened and closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices.
An error is thrown if the request explicitly refers to a missing index.
This behaviour can be turned off using the ignore_unavailable=true
parameter.
By default, you must explicitly name the indices you are opening or closing.
To open or close indices with _all
, *
, or other wildcard expressions, change the action.destructive_requires_name setting to false
. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments.
Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable
to false
.
client.indices.close({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List or wildcard expression of index names used to limit the request. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
create
editCreate an index. You can use the create index API to add a new index to an Elasticsearch cluster. When creating an index, you can specify the following:
- Settings for the index.
- Mappings for fields in the index.
- Index aliases
Wait for active shards
By default, index creation will only return a response to the client when the primary copies of each shard have been started, or the request times out.
The index creation response will indicate what happened.
For example, acknowledged
indicates whether the index was successfully created in the cluster, while shards_acknowledged
indicates whether the requisite number of shard copies were started for each shard in the index before timing out.
Note that it is still possible for either acknowledged
or shards_acknowledged
to be false
, but for the index creation to be successful.
These values simply indicate whether the operation completed before the timeout.
If acknowledged
is false, the request timed out before the cluster state was updated with the newly created index, but it probably will be created sometime soon.
If shards_acknowledged
is false, then the request timed out before the requisite number of shards were started (by default just the primaries), even if the cluster state was successfully updated to reflect the newly created index (that is to say, acknowledged
is true
).
You can change the default of only waiting for the primary shards to start through the index setting index.write.wait_for_active_shards
.
Note that changing this setting will also affect the wait_for_active_shards
value on all subsequent write operations.
client.indices.create({ index })
Arguments
edit-
Request (object):
-
index
(string): Name of the index you wish to create. -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the index. -
mappings
(Optional, { all_field, date_detection, dynamic, dynamic_date_formats, dynamic_templates, _field_names, index_field, _meta, numeric_detection, properties, _routing, _size, _source, runtime, enabled, subobjects, _data_stream_timestamp }): Mapping for fields in the index. If specified, this mapping can include:- Field names
- Field data types
- Mapping parameters
-
settings
(Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }): Configuration options for the index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
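For example, a minimal sketch that creates a hypothetical index with explicit settings, one field mapping, and an alias:
const response = await client.indices.create({
  index: 'my-index',
  settings: { number_of_shards: 1, number_of_replicas: 1 },
  mappings: {
    properties: { created_at: { type: 'date' } },
  },
  aliases: { 'my-alias': {} },
})
console.log(response.acknowledged, response.shards_acknowledged)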
create_data_stream
editCreate a data stream.
You must have a matching index template with data stream enabled.
client.indices.createDataStream({ name })
Arguments
edit-
Request (object):
-
name
(string): Name of the data stream, which must meet the following criteria: Lowercase only; Cannot include\
,/
,*
,?
,"
,<
,>
,|
,,
,#
,:
, or a space character; Cannot start with-
,_
,+
, or.ds-
; Cannot be.
or..
; Cannot be longer than 255 bytes. Multi-byte characters count towards this limit faster. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
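A minimal sketch, assuming a matching index template with data stream enabled already exists for the hypothetical name below:
const response = await client.indices.createDataStream({
  name: 'logs-myapp-default',
})
console.log(response.acknowledged)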
create_from
editCreate an index from a source index.
Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.
client.indices.createFrom({ source, dest })
Arguments
edit-
Request (object):
-
source
(string): The source index or data stream name -
dest
(string): The destination index or data stream name -
create_from
(Optional, { mappings_override, settings_override, remove_index_blocks })
-
data_streams_stats
editGet data stream stats.
Get statistics for one or more data streams.
client.indices.dataStreamsStats({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): List of data streams used to limit the request. Wildcard expressions (*
) are supported. To target all data streams in a cluster, omit this parameter or use*
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
.
-
delete
editDelete indices. Deleting an index deletes its documents, shards, and metadata. It does not delete related Kibana components, such as data views, visualizations, or dashboards.
You cannot delete the current write index of a data stream. To delete the index, you must roll over the data stream so a new write index is created. You can then use the delete index API to delete the previous write index.
client.indices.delete({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of indices to delete. You cannot specify index aliases. By default, this parameter does not support wildcards (*
) or_all
. To use wildcards or_all
, set theaction.destructive_requires_name
cluster setting tofalse
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_alias
editDelete an alias. Removes a data stream or index from an alias.
client.indices.deleteAlias({ index, name })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams or indices used to limit the request. Supports wildcards (*
). -
name
(string | string[]): List of aliases to remove. Supports wildcards (*
). To remove all aliases, use*
or_all
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_data_lifecycle
editDelete data stream lifecycles. Removes the data stream lifecycle from a data stream, so that the stream is no longer managed by the data stream lifecycle.
client.indices.deleteDataLifecycle({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): A list of data streams of which the data stream lifecycle will be deleted; use*
to get all data streams -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether wildcard expressions should get expanded to open or closed indices (default: open) -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_data_stream
editDelete data streams. Deletes one or more data streams and their backing indices.
client.indices.deleteDataStream({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of data streams to delete. Wildcard (*
) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
.
-
delete_index_template
editDelete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified then there is no wildcard support and the provided names should match completely with existing templates.
client.indices.deleteIndexTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of index template names used to limit the request. Wildcard (*) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_template
editDelete a legacy index template.
client.indices.deleteTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the legacy index template to delete. Wildcard (*
) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
disk_usage
editAnalyze the index disk usage. Analyze the disk usage of each field of an index or data stream. This API might not support indices created in previous Elasticsearch versions. The result for a small index can be inaccurate because some parts of an index might not be analyzed by the API.
The total size of fields of the analyzed shards of the index in the response is usually smaller than the index store_size
value because some small metadata files are ignored and some parts of data files might not be scanned by the API.
Since stored fields are stored together in a compressed format, the sizes of stored fields are also estimates and can be inaccurate.
The stored size of the _id
field is likely underestimated while the _source
field is overestimated.
client.indices.diskUsage({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, and aliases used to limit the request. It’s recommended to execute this API with a single index (or the latest backing index of a data stream) as the API consumes resources significantly. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts withfoo
but no index starts withbar
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. -
flush
(Optional, boolean): Iftrue
, the API performs a flush before analysis. Iffalse
, the response may not include uncommitted data. -
ignore_unavailable
(Optional, boolean): Iftrue
, missing or closed indices are not included in the response. -
run_expensive_tasks
(Optional, boolean): Analyzing field disk usage is resource-intensive. To use the API, this parameter must be set totrue
.
-
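Because the analysis is resource-intensive, the API refuses to run unless it is explicitly opted into. A minimal sketch against a hypothetical index:
const response = await client.indices.diskUsage({
  index: 'my-index',
  // Required opt-in; the request fails without it.
  run_expensive_tasks: true,
})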
downsample
editDownsample an index.
Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min
, max
, sum
, value_count
and avg
) for each metric field grouped by a configured time interval.
For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index.
All documents within an hour interval are summarized and stored as a single document in the downsample index.
Only indices in a time series data stream are supported.
Neither field nor document level security can be defined on the source index.
The source index must be read only (index.blocks.write: true
).
client.indices.downsample({ index, target_index })
Arguments
edit-
Request (object):
-
index
(string): Name of the time series index to downsample. -
target_index
(string): Name of the index to create. -
config
(Optional, { fixed_interval })
-
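A hedged end-to-end sketch for a hypothetical TSDS backing index: block writes first, then downsample the metrics into one-hour buckets:
// The source index must be read only before it can be downsampled.
await client.indices.addBlock({ index: '.ds-my-tsds-2025.01.01-000001', block: 'write' })
const response = await client.indices.downsample({
  index: '.ds-my-tsds-2025.01.01-000001',
  target_index: 'my-tsds-downsampled',
  config: { fixed_interval: '1h' },
})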
exists
editCheck indices. Check if one or more indices, index aliases, or data streams exist.
client.indices.exists({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, and aliases. Supports wildcards (*
). -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
flat_settings
(Optional, boolean): Iftrue
, returns settings in flat format. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
include_defaults
(Optional, boolean): Iftrue
, return all default settings in the response. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
exists_alias
editCheck aliases.
Check if one or more data stream or index aliases exist.
client.indices.existsAlias({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of aliases to check. Supports wildcards (*
). -
index
(Optional, string | string[]): List of data streams or indices used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, requests that include a missing data stream or index in the target indices or data streams return an error. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
exists_index_template
editCheck index templates.
Check whether index templates exist.
client.indices.existsIndexTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string): List of index template names used to limit the request. Wildcard (*) expressions are supported. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node. -
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
exists_template
editCheck existence of index templates. Get information about whether index templates exist. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
client.indices.existsTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): A list of index template names used to limit the request. Wildcard (*
) expressions are supported. -
flat_settings
(Optional, boolean): Indicates whether to use a flat format for the response. -
local
(Optional, boolean): Indicates whether to get information from the local node only. -
master_timeout
(Optional, string | -1 | 0): The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to-1
.
-
explain_data_lifecycle
editGet the status for a data stream lifecycle. Get information about an index or data stream’s current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
client.indices.explainDataLifecycle({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): The name of the index to explain -
include_defaults
(Optional, boolean): indicates if the API should return the default values the system uses for the index’s lifecycle -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master
-
field_usage_stats
editGet field usage stats. Get field usage information for each shard and field of an index. Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request that accesses a given field, even if multiple times during that request, is counted as a single use.
The response body reports the per-shard usage count of the data structures that back the fields in the index. A given request will increment each count by a maximum value of 1, even if the request accesses the same field multiple times.
client.indices.fieldUsageStats({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List or wildcard expression of index names used to limit the request. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts withfoo
but no index starts withbar
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. -
ignore_unavailable
(Optional, boolean): Iftrue
, missing or closed indices are not included in the response. -
fields
(Optional, string | string[]): List or wildcard expressions of fields to include in the statistics.
-
flush
editFlush data streams or indices. Flushing a data stream or index is the process of making sure that any data that is currently only stored in the transaction log is also permanently stored in the Lucene index. When restarting, Elasticsearch replays any unflushed operations from the transaction log into the Lucene index to bring it back into the state that it was in before the restart. Elasticsearch automatically triggers flushes as needed, using heuristics that trade off the size of the unflushed transaction log against the cost of performing each flush.
After each operation has been flushed it is permanently stored in the Lucene index. This may mean that there is no need to maintain an additional copy of it in the transaction log. The transaction log is made up of multiple files, called generations, and Elasticsearch will delete any generation files when they are no longer needed, freeing up disk space.
It is also possible to trigger a flush on one or more indices using the flush API, although it is rare for users to need to call this API directly. If you call the flush API after indexing some documents then a successful response indicates that Elasticsearch has flushed all the documents that were indexed before the flush API was called.
client.indices.flush({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases to flush. Supports wildcards (*
). To flush all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
force
(Optional, boolean): Iftrue
, the request forces a flush even if there are no changes to commit to the index. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
wait_if_ongoing
(Optional, boolean): Iftrue
, the flush operation blocks until execution when another flush operation is running. Iffalse
, Elasticsearch returns an error if you request a flush when another flush operation is running.
-
forcemerge
editForce a merge. Perform the force merge operation on the shards of one or more indices. For data streams, the API forces a merge on the shards of the stream’s backing indices.
Merging reduces the number of segments in each shard by merging some of them together and also frees up the space used by deleted documents. Merging normally happens automatically, but sometimes it is useful to trigger a merge manually.
We recommend force merging only a read-only index (meaning the index is no longer receiving writes). When documents are updated or deleted, the old version is not immediately removed but instead soft-deleted and marked with a "tombstone". These soft-deleted documents are automatically cleaned up during regular segment merges. But force merge can cause very large (greater than 5 GB) segments to be produced, which are not eligible for regular merges. So the number of soft-deleted documents can then grow rapidly, resulting in higher disk usage and worse search performance. If you regularly force merge an index receiving writes, this can also make snapshots more expensive, since the new documents can’t be backed up incrementally.
Blocks during a force merge
Calls to this API block until the merge is complete (unless request contains wait_for_completion=false
).
If the client connection is lost before completion then the force merge process will continue in the background.
Any new requests to force merge the same indices will also block until the ongoing force merge is complete.
Running force merge asynchronously
If the request contains wait_for_completion=false
, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to get the status of the task.
However, you cannot cancel this task, as the force merge task is not cancelable.
Elasticsearch creates a record of this task as a document at _tasks/<task_id>
.
When you are done with a task, you should delete the task document so Elasticsearch can reclaim the space.
Force merging multiple indices
You can force merge multiple indices with a single request by targeting:
- One or more data streams that contain multiple backing indices
- Multiple indices
- One or more aliases
- All data streams and indices in a cluster
Each targeted shard is force-merged separately using the force_merge threadpool.
By default each node only has a single force_merge
thread which means that the shards on that node are force-merged one at a time.
If you expand the force_merge
threadpool on a node, then it will force merge its shards in parallel.
Force merge temporarily increases the storage used by the shard being merged: when the max_num_segments parameter is set to 1, the shard may require free space of up to triple its size in order to rewrite all segments into a new one.
Data streams and time-based indices
Force-merging is useful for managing a data stream’s older backing indices and other time-based indices, particularly after a rollover. In these cases, each index only receives indexing traffic for a certain period of time. Once an index receives no more writes, its shards can be force-merged to a single segment. This can be a good idea because single-segment shards can sometimes use simpler and more efficient data structures to perform searches. For example:
POST /.ds-my-data-stream-2099.03.07-000001/_forcemerge?max_num_segments=1
client.indices.forcemerge({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of index names; use_all
or empty string to perform the operation on all indices -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes_all
string or when no indices have been specified) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
flush
(Optional, boolean): Specify whether the index should be flushed after performing the operation (default: true) -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed) -
max_num_segments
(Optional, number): The number of segments the index should be merged into (default: dynamic) -
only_expunge_deletes
(Optional, boolean): Specify whether the operation should only expunge deleted documents -
wait_for_completion
(Optional, boolean): Should the request wait until the force merge is completed.
-
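For example, a sketch that force merges a hypothetical rolled-over backing index down to a single segment without blocking the client:
const response = await client.indices.forcemerge({
  index: '.ds-my-data-stream-2099.03.07-000001',
  max_num_segments: 1,
  // Returns immediately; track progress through the task management API.
  wait_for_completion: false,
})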
get
editGet index information. Get information about one or more indices. For data streams, the API returns information about the stream’s backing indices.
client.indices.get({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. -
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
ignore_unavailable
(Optional, boolean): If false, requests that target a missing index return an error. -
include_defaults
(Optional, boolean): If true, return all default settings in the response. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
features
(Optional, { name, description } | { name, description }[]): Return only information on specified index features
-
get_alias
editGet aliases. Retrieves information for one or more data stream or index aliases.
client.indices.getAlias({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): List of aliases to retrieve. Supports wildcards (*
). To retrieve all aliases, omit this parameter or use*
or_all
. -
index
(Optional, string | string[]): List of data streams or indices used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
get_data_lifecycle
editGet data stream lifecycles.
Get the data stream lifecycle configuration of one or more data streams.
client.indices.getDataLifecycle({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of data streams to limit the request. Supports wildcards (*
). To target all data streams, omit this parameter or use*
or_all
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
include_defaults
(Optional, boolean): Iftrue
, return all default settings in the response. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_data_lifecycle_stats
editGet data stream lifecycle stats. Get statistics about the data streams that are managed by a data stream lifecycle.
client.indices.getDataLifecycleStats()
get_data_stream
editGet data streams.
Get information about one or more data streams.
client.indices.getDataStream({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): List of data stream names used to limit the request. Wildcard (*
) expressions are supported. If omitted, all data streams are returned. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
. -
include_defaults
(Optional, boolean): If true, returns all relevant default configurations for the index template. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
verbose
(Optional, boolean): Whether the maximum timestamp for each data stream should be calculated and returned.
-
get_field_mapping
editGet mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices.
This API is useful if you don’t need a complete mapping or if an index mapping contains a large number of fields.
client.indices.getFieldMapping({ fields })
Arguments
edit-
Request (object):
-
fields
(string | string[]): List or wildcard expression of fields used to limit returned information. Supports wildcards (*
). -
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
include_defaults
(Optional, boolean): Iftrue
, return all default settings in the response. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
get_index_template
editGet index templates. Get information about one or more index templates.
client.indices.getIndexTemplate({ ... })
Arguments
- Request (object):
  - name (Optional, string): List of index template names used to limit the request. Wildcard (*) expressions are supported.
  - local (Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.
  - flat_settings (Optional, boolean): If true, returns settings in flat format.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - include_defaults (Optional, boolean): If true, returns all relevant default configurations for the index template.
-
get_mapping
Get mapping definitions. For data streams, the API retrieves mappings for the stream’s backing indices.
client.indices.getMapping({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - local (Optional, boolean): If true, the request retrieves information from the local node only.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_migrate_reindex_status
Get the migration reindexing status.
Get the status of a migration reindex attempt for a data stream or index.
client.indices.getMigrateReindexStatus({ index })
Arguments
- Request (object):
  - index (string | string[]): The index or data stream name.
-
get_settings
Get index settings. Get setting information for one or more indices. For data streams, it returns setting information for the stream’s backing indices.
client.indices.getSettings({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - name (Optional, string | string[]): List or wildcard expression of settings to retrieve.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden.
  - flat_settings (Optional, boolean): If true, returns settings in flat format.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - include_defaults (Optional, boolean): If true, return all default settings in the response.
  - local (Optional, boolean): If true, the request retrieves information from the local node only. If false, information is retrieved from the master node.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
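For example, a sketch that retrieves a subset of settings in flat format, with defaults included (the index name and settings pattern are illustrative):
const response = await client.indices.getSettings({
  index: 'my-index',
  name: 'index.number_of_*',
  include_defaults: true,
  flat_settings: true,
});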
-
get_template
Get index templates. Get information about one or more index templates.
This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
client.indices.getTemplate({ ... })
Arguments
- Request (object):
  - name (Optional, string | string[]): List of index template names used to limit the request. Wildcard (*) expressions are supported. To return all index templates, omit this parameter or use a value of _all or *.
  - flat_settings (Optional, boolean): If true, returns settings in flat format.
  - local (Optional, boolean): If true, the request retrieves information from the local node only.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
migrate_reindex
Reindex legacy backing indices.
Reindex all legacy backing indices for a data stream. This operation occurs in a persistent task. The persistent task ID is returned immediately and the reindexing work is completed in that task.
client.indices.migrateReindex({ ... })
Arguments
- Request (object):
  - reindex (Optional, { mode, source })
-
migrate_to_data_stream
Convert an index alias to a data stream.
Converts an index alias to a data stream.
You must have a matching index template that is data stream enabled.
The alias must meet the following criteria:
- The alias must have a write index.
- All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type.
- The alias must not have any filters.
- The alias must not use custom routing.
If successful, the request removes the alias and creates a data stream with the same name.
The indices for the alias become hidden backing indices for the stream.
The write index for the alias becomes the write index for the stream.
client.indices.migrateToDataStream({ name })
Arguments
- Request (object):
  - name (string): Name of the index alias to convert to a data stream.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
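A sketch of the conversion, assuming an alias named my-time-series-data that already satisfies the criteria above (the name is hypothetical):
await client.indices.migrateToDataStream({ name: 'my-time-series-data' });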
-
modify_data_stream
Update data streams. Performs one or more data stream modification actions in a single atomic operation.
client.indices.modifyDataStream({ actions })
Arguments
- Request (object):
  - actions ({ add_backing_index, remove_backing_index }[]): Actions to perform.
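For example, a sketch that swaps one backing index for another in a single atomic operation (the stream and index names are hypothetical):
await client.indices.modifyDataStream({
  actions: [
    { remove_backing_index: { data_stream: 'my-stream', index: '.ds-my-stream-2025.01.01-000001' } },
    { add_backing_index: { data_stream: 'my-stream', index: 'my-archive-index' } },
  ],
});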
-
open
Open a closed index. For data streams, the API opens any closed backing indices.
A closed index is blocked for read/write operations and does not allow all operations that opened indices allow. It is not possible to index documents or to search for documents in a closed index. This allows closed indices to not have to maintain internal data structures for indexing or searching documents, resulting in a smaller overhead on the cluster.
When opening or closing an index, the master is responsible for restarting the index shards to reflect the new state of the index. The shards will then go through the normal recovery process. The data of opened or closed indices is automatically replicated by the cluster to ensure that enough shard copies are safely kept around at all times.
You can open and close multiple indices.
An error is thrown if the request explicitly refers to a missing index.
This behavior can be turned off by using the ignore_unavailable=true parameter.
By default, you must explicitly name the indices you are opening or closing. To open or close indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. This setting can also be changed with the cluster update settings API.
Closed indices consume a significant amount of disk space, which can cause problems in managed environments. Closing indices can be turned off with the cluster settings API by setting cluster.indices.close.enable to false.
Because opening or closing an index allocates its shards, the wait_for_active_shards setting on index creation applies to the _open and _close index actions as well.
client.indices.open({ index })
Arguments
- Request (object):
  - index (string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). By default, you must explicitly name the indices you are using to limit the request. To limit a request using _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or using the cluster update settings API.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
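For instance, a sketch that reopens a closed index and waits for all shard copies to become active (the index name is hypothetical):
await client.indices.open({
  index: 'my-closed-index',
  wait_for_active_shards: 'all',
});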
-
promote_data_stream
Promote a data stream. Promote a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.
With CCR auto following, a data stream from a remote cluster can be replicated to the local cluster. These data streams can’t be rolled over in the local cluster. These replicated data streams roll over only if the upstream data stream rolls over. In the event that the remote cluster is no longer available, the data stream in the local cluster can be promoted to a regular data stream, which allows these data streams to be rolled over in the local cluster.
When promoting a data stream, ensure the local cluster has a data stream enabled index template that matches the data stream. If this is missing, the data stream will not be able to roll over until a matching index template is created. This will affect the lifecycle management of the data stream and interfere with the data stream size and retention.
client.indices.promoteDataStream({ name })
Arguments
- Request (object):
  - name (string): The name of the data stream.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
put_alias
Create or update an alias. Adds a data stream or index to an alias.
client.indices.putAlias({ index, name })
Arguments
- Request (object):
  - index (string | string[]): List of data streams or indices to add. Supports wildcards (*). Wildcard patterns that match both data streams and indices return an error.
  - name (string): Alias to update. If the alias doesn’t exist, the request creates it. Index alias names support date math.
  - filter (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Query used to limit documents the alias can access.
  - index_routing (Optional, string): Value used to route indexing operations to a specific shard. If specified, this overwrites the routing value for indexing operations. Data stream aliases don’t support this parameter.
  - is_write_index (Optional, boolean): If true, sets the write index or data stream for the alias. If an alias points to multiple indices or data streams and is_write_index isn’t set, the alias rejects write requests. If an index alias points to one index and is_write_index isn’t set, the index automatically acts as the write index. Data stream aliases don’t automatically set a write data stream, even if the alias points to one data stream.
  - routing (Optional, string): Value used to route indexing and search operations to a specific shard. Data stream aliases don’t support this parameter.
  - search_routing (Optional, string): Value used to route search operations to a specific shard. If specified, this overwrites the routing value for search operations. Data stream aliases don’t support this parameter.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
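For instance, a sketch that points an alias at an index, marks it as the write index, and filters the documents the alias exposes (all names and the filter are illustrative):
await client.indices.putAlias({
  index: 'my-index-000002',
  name: 'my-alias',
  is_write_index: true,
  filter: { term: { 'user.id': 'kimchy' } },
});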
-
put_data_lifecycle
Update data stream lifecycles. Update the data stream lifecycle of the specified data streams.
client.indices.putDataLifecycle({ name })
Arguments
- Request (object):
  - name (string | string[]): List of data streams used to limit the request. Supports wildcards (*). To target all data streams use * or _all.
  - data_retention (Optional, string | -1 | 0): If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely.
  - downsampling (Optional, { rounds }): The downsampling configuration to execute for the managed backing index after rollover.
  - enabled (Optional, boolean): If defined, it turns data stream lifecycle on/off (true/false) for this data stream. A data stream lifecycle that’s disabled (enabled: false) will have no effect on the data stream.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such as open,hidden. Valid values are: all, hidden, open, closed, none.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
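A sketch that applies a 7-day retention to one data stream (the stream name and duration are illustrative):
await client.indices.putDataLifecycle({
  name: 'my-data-stream',
  data_retention: '7d',
});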
-
put_index_template
Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
Elasticsearch applies templates to new indices based on a wildcard pattern that matches the index name. Index templates are applied during data stream or index creation. For data streams, these settings and mappings are applied when the stream’s backing indices are created. Settings and mappings specified in a create index API request override any settings or mappings specified in an index template. Changes to index templates do not affect existing indices, including the existing backing indices of a data stream.
You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Multiple matching templates
If multiple index templates match the name of a new index or data stream, the template with the highest priority is used.
Multiple templates with overlapping index patterns at the same priority are not allowed and an error will be thrown when attempting to create a template matching an existing index template at identical priorities.
Composing aliases, mappings, and settings
When multiple component templates are specified in the composed_of field for an index template, they are merged in the order specified, meaning that later component templates override earlier component templates.
Any mappings, settings, or aliases from the parent index template are merged in next.
Finally, any configuration on the index request itself is merged.
Mapping definitions are merged recursively, which means that later mapping components can introduce new field mappings and update the mapping configuration.
If a field mapping is already contained in an earlier component, its definition will be completely overwritten by the later one.
This recursive merging strategy applies not only to field mappings, but also to root options like dynamic_templates and meta.
If an earlier component contains a dynamic_templates block, then by default new dynamic_templates entries are appended onto the end.
If an entry already exists with the same key, then it is overwritten by the new definition.
client.indices.putIndexTemplate({ name })
Arguments
- Request (object):
  - name (string): Index or template name.
  - index_patterns (Optional, string | string[]): Array of wildcard (*) expressions used to match the names of data streams and indices during creation.
  - composed_of (Optional, string[]): An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.
  - template (Optional, { aliases, mappings, settings, lifecycle }): Template to be applied. It may optionally include an aliases, mappings, or settings configuration.
  - data_stream (Optional, { hidden, allow_custom_routing }): If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.
  - priority (Optional, number): Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.
  - version (Optional, number): Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. External systems can use these version numbers to simplify template management. To unset a version, replace the template without specifying one.
  - _meta (Optional, Record<string, User-defined value>): Optional user metadata about the index template. It may have any contents. It is not automatically generated or used by Elasticsearch. This user-defined object is stored in the cluster state, so keeping it short is preferable. To unset the metadata, replace the template without specifying it.
  - allow_auto_create (Optional, boolean): This setting overrides the value of the action.auto_create_index cluster setting. If set to true in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled via action.auto_create_index. If set to false, then indices or data streams matching the template must always be explicitly created, and may never be automatically created.
  - ignore_missing_component_templates (Optional, string[]): The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist.
  - deprecated (Optional, boolean): Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.
  - create (Optional, boolean): If true, this request cannot replace or update existing index templates.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - cause (Optional, string): User defined reason for creating/updating the index template.
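A sketch of a data stream template that composes two hypothetical component templates (every name, pattern, and priority here is illustrative):
await client.indices.putIndexTemplate({
  name: 'my-logs-template',
  index_patterns: ['my-logs-*'],
  data_stream: {},
  priority: 200,
  composed_of: ['my-mappings', 'my-settings'],
  template: {
    settings: { number_of_replicas: 1 },
  },
});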
-
put_mapping
Update field mappings. Add new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields and add new properties to existing object fields. For data streams, these changes are applied to all backing indices by default.
Add multi-fields to an existing field
Multi-fields let you index the same field in different ways. You can use this API to update the fields mapping parameter and enable multi-fields for an existing field. WARNING: If an index (or data stream) contains documents when you add a multi-field, those documents will not have values for the new multi-field. You can populate the new multi-field with the update by query API.
Change supported mapping parameters for an existing field
The documentation for each mapping parameter indicates whether you can update it for an existing field using this API.
For example, you can use the update mapping API to update the ignore_above parameter.
Change the mapping of an existing field
Except for supported mapping parameters, you can’t change the mapping or field type of an existing field. Changing an existing field could invalidate data that’s already indexed.
If you need to change the mapping of a field in a data stream’s backing indices, refer to documentation about modifying data streams. If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.
Rename a field
Renaming a field would invalidate data already indexed under the old field name. Instead, add an alias field to create an alternate field name.
client.indices.putMapping({ index })
Arguments
- Request (object):
  - index (string | string[]): A list of index names the mapping should be added to (supports wildcards); use _all or omit to add the mapping on all indices.
  - date_detection (Optional, boolean): Controls whether dynamic date detection is enabled.
  - dynamic (Optional, Enum("strict" | "runtime" | true | false)): Controls whether new fields are added dynamically.
  - dynamic_date_formats (Optional, string[]): If date detection is enabled then new string fields are checked against dynamic_date_formats and if the value matches then a new date field is added instead of string.
  - dynamic_templates (Optional, Record<string, { mapping, runtime, match, path_match, unmatch, path_unmatch, match_mapping_type, unmatch_mapping_type, match_pattern }>[]): Specify dynamic templates for the mapping.
  - _field_names (Optional, { enabled }): Control whether field names are enabled for the index.
  - _meta (Optional, Record<string, User-defined value>): A mapping type can have custom meta data associated with it. These are not used at all by Elasticsearch, but can be used to store application-specific metadata.
  - numeric_detection (Optional, boolean): Automatically map strings into numeric data types for all fields.
  - properties (Optional, Record<string, { type } | { boost, fielddata, index, null_value, ignore_malformed, script, on_script_error, time_series_dimension, type } | { type, enabled, null_value, boost, coerce, script, on_script_error, ignore_malformed, time_series_metric, analyzer, eager_global_ordinals, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, term_vector, format, precision_step, locale } | { relations, eager_global_ordinals, type } | { boost, eager_global_ordinals, index, index_options, script, on_script_error, normalizer, norms, null_value, similarity, split_queries_on_whitespace, time_series_dimension, type } | { type, fields, meta, copy_to } | { type } | { positive_score_impact, type } | { positive_score_impact, type } | { analyzer, index, index_options, max_shingle_size, norms, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { analyzer, boost, eager_global_ordinals, fielddata, fielddata_frequency_filter, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { type } | { type, null_value } | { boost, format, ignore_malformed, index, script, on_script_error, null_value, precision_step, type } | { boost, fielddata, format, ignore_malformed, index, script, on_script_error, null_value, precision_step, locale, type } | { type, default_metric, metrics, time_series_metric } | { type, dims, element_type, index, index_options, similarity } | { boost, depth_limit, doc_values, eager_global_ordinals, index, index_options, null_value, similarity, split_queries_on_whitespace, type } | { enabled, include_in_parent, include_in_root, type } | { enabled, subobjects, type } | { type, enabled, priority, time_series_dimension } | { type, meta, inference_id, search_inference_id } | { type } | { analyzer, contexts, max_input_length, preserve_position_increments, preserve_separators, search_analyzer, type } | { value, type } | { type, index } | { path, type } | { ignore_malformed, type } | { boost, index, ignore_malformed, null_value, on_script_error, script, time_series_dimension, type } | { type } | { analyzer, boost, index, null_value, enable_position_increments, type } | { ignore_malformed, ignore_z_value, null_value, index, on_script_error, script, type } | { coerce, ignore_malformed, ignore_z_value, index, orientation, strategy, type } | { ignore_malformed, ignore_z_value, null_value, type } | { coerce, ignore_malformed, ignore_z_value, orientation, type } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value, scaling_factor } | { type, null_value } | { type, null_value } | { format, type } | { type } | { type } | { type } | { type } | { type } | { type, norms, index_options, index, null_value, rules, language, country, variant, strength, decomposition, alternate, case_level, case_first, numeric, variable_top, hiragana_quaternary_mode }>): Mapping for a field. For new fields, this mapping can include:
    - Field name
    - Field data type
    - Mapping parameters
  - _routing (Optional, { required }): Enable making a routing value required on indexed documents.
  - _source (Optional, { compress, compress_threshold, enabled, excludes, includes, mode }): Control whether the _source field is enabled on the index.
  - runtime (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Mapping of runtime fields for the index.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
  - write_index_only (Optional, boolean): If true, the mappings are applied only to the current write index for the target.
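For instance, a sketch that adds a keyword multi-field to an existing text field, as described above (the index and field names are hypothetical):
await client.indices.putMapping({
  index: 'my-index',
  properties: {
    city: {
      type: 'text',
      fields: {
        raw: { type: 'keyword' },
      },
    },
  },
});
Documents indexed before this change will not have values for city.raw until they are reindexed or updated, for example with the update by query API.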
-
put_settings
Update index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.
To revert a setting to the default value, use a null value.
The list of per-index settings that can be updated dynamically on live indices can be found in index module documentation.
To preserve existing settings from being updated, set the preserve_existing parameter to true.
You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream’s write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream’s write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream’s backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
client.indices.putSettings({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - settings (Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store })
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden.
  - flat_settings (Optional, boolean): If true, returns settings in flat format.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - preserve_existing (Optional, boolean): If true, existing index settings remain unchanged.
  - reopen (Optional, boolean): Whether to close and reopen the index to apply non-dynamic settings. If set to true, the indices to which the settings are being applied will be closed temporarily and then reopened in order to apply the changes.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
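A sketch of the analyzer workflow described above on a plain index: close it, define the analyzer, then reopen it (the index and analyzer names are hypothetical):
await client.indices.close({ index: 'my-index' });
await client.indices.putSettings({
  index: 'my-index',
  settings: {
    analysis: {
      analyzer: {
        my_custom_analyzer: { type: 'custom', tokenizer: 'standard', filter: ['lowercase'] },
      },
    },
  },
});
await client.indices.open({ index: 'my-index' });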
-
put_template
Create or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices. Elasticsearch applies templates to new indices based on an index pattern that matches the index name.
This documentation is about legacy index templates, which are deprecated and will be replaced by the composable templates introduced in Elasticsearch 7.8.
Composable templates always take precedence over legacy templates. If no composable template matches a new index, matching legacy templates are applied according to their order.
Index templates are only applied during index creation. Changes to index templates do not affect existing indices. Settings and mappings specified in create index API requests override any settings or mappings specified in an index template.
You can use C-style /* */ block comments in index templates. You can include comments anywhere in the request body, except before the opening curly bracket.
Indices matching multiple templates
Multiple index templates can potentially match an index; in this case, both the settings and mappings are merged into the final configuration of the index. The order of the merging can be controlled using the order parameter, with lower order being applied first, and higher orders overriding them. NOTE: Multiple matching templates with the same order value will result in a non-deterministic merging order.
client.indices.putTemplate({ name })
Arguments
- Request (object):
  - name (string): The name of the template.
  - aliases (Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the index.
  - index_patterns (Optional, string | string[]): Array of wildcard expressions used to match the names of indices during creation.
  - mappings (Optional, { all_field, date_detection, dynamic, dynamic_date_formats, dynamic_templates, _field_names, index_field, _meta, numeric_detection, properties, _routing, _size, _source, runtime, enabled, subobjects, _data_stream_timestamp }): Mapping for fields in the index.
  - order (Optional, number): Order in which Elasticsearch applies this template if index matches multiple templates. Templates with lower order values are merged first. Templates with higher order values are merged later, overriding templates with lower values.
  - settings (Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }): Configuration options for the index.
  - version (Optional, number): Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. To unset a version, replace the template without specifying one.
  - create (Optional, boolean): If true, this request cannot replace or update existing index templates.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - cause (Optional, string): User defined reason for creating/updating the index template.
recovery
Get index recovery information. Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream’s backing indices.
All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.
Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.
Recovery automatically occurs during the following processes:
- When creating an index for the first time.
- When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
- Creation of new replica shard copies from the primary.
- Relocation of a shard copy to a different node in the same cluster.
- A snapshot restore operation.
- A clone, shrink, or split operation.
You can determine the cause of a shard recovery using the recovery or cat recovery APIs.
The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and Elasticsearch then relocates it onto a different node, the information about the original recovery will not be shown in the recovery API.
client.indices.recovery({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - active_only (Optional, boolean): If true, the response only includes ongoing shard recoveries.
  - detailed (Optional, boolean): If true, the response includes detailed information about shard recoveries.
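For instance, a sketch that reports only ongoing recoveries with per-file detail (the index name is hypothetical):
const response = await client.indices.recovery({
  index: 'my-index',
  active_only: true,
  detailed: true,
});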
-
refresh
Refresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it’s recommended to wait for Elasticsearch’s periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it’s recommended to use the index API’s refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
client.indices.refresh({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
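A sketch of the index-then-search pattern recommended above, using refresh: 'wait_for' on the index call instead of an explicit refresh (the index name and document are illustrative):
await client.index({
  index: 'my-index',
  document: { title: 'hello' },
  refresh: 'wait_for',
});
const result = await client.search({
  index: 'my-index',
  query: { match: { title: 'hello' } },
});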
-
reload_search_analyzers
Reload search analyzers. Reload an index’s search analyzers and their resources. For data streams, the API reloads search analyzers and resources for the stream’s backing indices.
After reloading the search analyzers you should clear the request cache to make sure it doesn’t contain responses derived from the previous versions of the analyzer.
You can use the reload search analyzers API to pick up changes to synonym files used in the synonym_graph or synonym token filter of a search analyzer. To be eligible, the token filter must have an updateable flag of true and only be used in search analyzers.
This API does not perform a reload for each shard of an index. Instead, it performs a reload for each node containing index shards. As a result, the total shard count returned by the API can differ from the number of index shards. Because reloading affects every node with an index shard, it is important to update the synonym file on every data node in the cluster—including nodes that don’t contain a shard replica—before using this API. This ensures the synonym file is updated everywhere in the cluster in case shards are relocated in the future.
client.indices.reloadSearchAnalyzers({ index })
Arguments
- Request (object):
  - index (string | string[]): A list of index names to reload analyzers for.
  - allow_no_indices (Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes _all string or when no indices have been specified.)
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both.
  - ignore_unavailable (Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed).
  - resource (Optional, string): Changed resource to reload analyzers from if applicable.
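For example, a sketch that reloads analyzers and then clears the request cache, as recommended above (the index name is hypothetical):
await client.indices.reloadSearchAnalyzers({ index: 'my-index' });
await client.indices.clearCache({ index: 'my-index', request: true });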
-
resolve_cluster
Resolve the cluster.
Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*. Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.
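The same lookup through the client might look like this sketch (the expression mirrors the example above; the resolveCluster method is assumed to be available in your client version):
const info = await client.indices.resolveCluster({
  name: 'my-index-*,cluster*:my-index-*',
});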