API Reference
bulk
Bulk index or delete documents. Performs multiple indexing or delete operations in a single API call. This reduces overhead and can greatly increase indexing speed.
client.bulk({ ... })
Arguments
- Request (object):
  - `index` (Optional, string): Name of the data stream, index, or index alias to perform bulk actions on.
  - `operations` (Optional, { index, create, update, delete } | { detect_noop, doc, doc_as_upsert, script, scripted_upsert, _source, upsert } | object[])
  - `pipeline` (Optional, string): ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to `_none` disables the default ingest pipeline for this request. If a final pipeline is configured, it will always run, regardless of the value of this parameter.
  - `refresh` (Optional, Enum(true | false | "wait_for")): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search; if `wait_for`, it waits for a refresh to make this operation visible to search; if `false`, it does nothing with refreshes. Valid values: `true`, `false`, `wait_for`.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response.
  - `timeout` (Optional, string | -1 | 0): Period each action waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas + 1`).
  - `require_alias` (Optional, boolean): If `true`, the request’s actions must target an index alias.
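For example, a minimal sketch of a bulk ingestion call, assuming an initialized `client` and an index named `my-index` (the index name and documents are illustrative):

```js
// Mix index and delete actions in one request; each action metadata line is
// followed by its source document where the action requires one.
const response = await client.bulk({
  refresh: 'wait_for', // make the changes visible to search before resolving
  operations: [
    { index: { _index: 'my-index', _id: '1' } },
    { title: 'Document one' },
    { index: { _index: 'my-index', _id: '2' } },
    { title: 'Document two' },
    { delete: { _index: 'my-index', _id: '3' } },
  ],
})

// A bulk response reports per-action outcomes; always check `errors`, because
// individual actions can fail while the request as a whole succeeds.
if (response.errors) {
  console.log(response.items.filter((item) => item.index?.error || item.delete?.error))
}
```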
clear_scroll
Clear a scrolling search.
Clear the search context and results for a scrolling search.
client.clearScroll({ ... })
Arguments
- Request (object):
  - `scroll_id` (Optional, string | string[]): List of scroll IDs to clear. To clear all scroll IDs, use `_all`.
close_point_in_time
Close a point in time.
A point in time must be opened explicitly before being used in search requests. The `keep_alive` parameter tells Elasticsearch how long it should persist. A point in time is automatically closed when the `keep_alive` period has elapsed. However, keeping points in time has a cost; close them as soon as they are no longer required for search requests.
client.closePointInTime({ id })
Arguments
- Request (object):
  - `id` (string): The ID of the point-in-time.
count
Count search results. Get the number of documents matching a query.
client.count({ ... })
Arguments
- Request (object):
  - `index` (Optional, string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (`*`). To search all data streams and indices, omit this parameter or use `*` or `_all`.
  - `query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL.
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - `analyzer` (Optional, string): Analyzer to use for the query string. This parameter can only be used when the `q` query string parameter is specified.
  - `analyze_wildcard` (Optional, boolean): If `true`, wildcard and prefix queries are analyzed. This parameter can only be used when the `q` query string parameter is specified.
  - `default_operator` (Optional, Enum("and" | "or")): The default operator for the query string query: `AND` or `OR`. This parameter can only be used when the `q` query string parameter is specified.
  - `df` (Optional, string): Field to use as default where no field prefix is given in the query string. This parameter can only be used when the `q` query string parameter is specified.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`.
  - `ignore_throttled` (Optional, boolean): If `true`, concrete, expanded, or aliased indices are ignored when frozen.
  - `ignore_unavailable` (Optional, boolean): If `false`, the request returns an error if it targets a missing or closed index.
  - `lenient` (Optional, boolean): If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
  - `min_score` (Optional, number): Sets the minimum `_score` value that documents must have to be included in the result.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `terminate_after` (Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.
  - `q` (Optional, string): Query in the Lucene query string syntax.
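A minimal sketch counting documents that match a term query (the index and field names are illustrative):

```js
// Count published documents in `my-index` without fetching any hits.
const { count } = await client.count({
  index: 'my-index',
  query: { term: { status: 'published' } },
})
console.log(`matching documents: ${count}`)
```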
create
Create a document. Adds a JSON document to the specified data stream or index and makes it searchable. If the target is an index and a document with the same ID already exists, the request fails with a version conflict.
client.create({ id, index })
Arguments
- Request (object):
  - `id` (string): Unique identifier for the document.
  - `index` (string): Name of the data stream or index to target. If the target doesn’t exist and matches the name or wildcard (`*`) pattern of an index template with a `data_stream` definition, this request creates the data stream. If the target doesn’t exist and doesn’t match a data stream template, this request creates the index.
  - `document` (Optional, object): A document.
  - `pipeline` (Optional, string): ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to `_none` disables the default ingest pipeline for this request. If a final pipeline is configured, it will always run, regardless of the value of this parameter.
  - `refresh` (Optional, Enum(true | false | "wait_for")): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search; if `wait_for`, it waits for a refresh to make this operation visible to search; if `false`, it does nothing with refreshes. Valid values: `true`, `false`, `wait_for`.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `timeout` (Optional, string | -1 | 0): Period the request waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards.
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type: `external`, `external_gte`.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas + 1`).
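A minimal put-if-absent sketch; the conflict handling assumes the client’s `ResponseError` carries the HTTP status in `meta.statusCode` (index, ID, and document are illustrative):

```js
// `create` fails with a version conflict (HTTP 409) if the ID already exists.
try {
  await client.create({
    index: 'my-index',
    id: '1',
    document: { title: 'First draft' },
  })
} catch (err) {
  if (err.meta?.statusCode === 409) {
    console.log('a document with this ID already exists')
  } else {
    throw err
  }
}
```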
delete
Delete a document. Removes a JSON document from the specified index.
client.delete({ id, index })
Arguments
- Request (object):
  - `id` (string): Unique identifier for the document.
  - `index` (string): Name of the target index.
  - `if_primary_term` (Optional, number): Only perform the operation if the document has this primary term.
  - `if_seq_no` (Optional, number): Only perform the operation if the document has this sequence number.
  - `refresh` (Optional, Enum(true | false | "wait_for")): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search; if `wait_for`, it waits for a refresh to make this operation visible to search; if `false`, it does nothing with refreshes. Valid values: `true`, `false`, `wait_for`.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `timeout` (Optional, string | -1 | 0): Period to wait for active shards.
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type: `external`, `external_gte`.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas + 1`).
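A minimal sketch of an optimistic-concurrency delete using `if_seq_no` and `if_primary_term` (index and ID are illustrative):

```js
// Fetch the document to learn its current sequence number and primary term,
// then delete only if it has not changed since the read.
const doc = await client.get({ index: 'my-index', id: '1' })

await client.delete({
  index: 'my-index',
  id: '1',
  if_seq_no: doc._seq_no,
  if_primary_term: doc._primary_term,
})
```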
delete_by_query
Delete documents. Deletes documents that match the specified query.
client.deleteByQuery({ index })
Arguments
- Request (object):
  - `index` (string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (`*`). To search all data streams or indices, omit this parameter or use `*` or `_all`.
  - `max_docs` (Optional, number): The maximum number of documents to delete.
  - `query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Specifies the documents to delete using the Query DSL.
  - `slice` (Optional, { field, id, max }): Slice the request manually using the provided slice ID and total number of slices.
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
  - `analyzer` (Optional, string): Analyzer to use for the query string.
  - `analyze_wildcard` (Optional, boolean): If `true`, wildcard and prefix queries are analyzed.
  - `conflicts` (Optional, Enum("abort" | "proceed")): What to do if delete by query hits version conflicts: `abort` or `proceed`.
  - `default_operator` (Optional, Enum("and" | "or")): The default operator for the query string query: `AND` or `OR`.
  - `df` (Optional, string): Field to use as default where no field prefix is given in the query string.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. Valid values are: `all`, `open`, `closed`, `hidden`, `none`.
  - `from` (Optional, number): Starting offset (default: 0).
  - `ignore_unavailable` (Optional, boolean): If `false`, the request returns an error if it targets a missing or closed index.
  - `lenient` (Optional, boolean): If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `refresh` (Optional, boolean): If `true`, Elasticsearch refreshes all shards involved in the delete by query after the request completes.
  - `request_cache` (Optional, boolean): If `true`, the request cache is used for this request. Defaults to the index-level setting.
  - `requests_per_second` (Optional, float): The throttle for this request in sub-requests per second.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `q` (Optional, string): Query in the Lucene query string syntax.
  - `scroll` (Optional, string | -1 | 0): Period to retain the search context for scrolling.
  - `scroll_size` (Optional, number): Size of the scroll request that powers the operation.
  - `search_timeout` (Optional, string | -1 | 0): Explicit timeout for each search request. Defaults to no timeout.
  - `search_type` (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation. Available options: `query_then_fetch`, `dfs_query_then_fetch`.
  - `slices` (Optional, number | Enum("auto")): The number of slices this task should be divided into.
  - `sort` (Optional, string[]): A list of `<field>:<direction>` pairs.
  - `stats` (Optional, string[]): Specific `tag` of the request for logging and statistical purposes.
  - `terminate_after` (Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Use with caution: Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
  - `timeout` (Optional, string | -1 | 0): Period each deletion request waits for active shards.
  - `version` (Optional, boolean): If `true`, returns the document version as part of a hit.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas + 1`).
  - `wait_for_completion` (Optional, boolean): If `true`, the request blocks until the operation is complete.
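A minimal sketch deleting stale documents as a background task (the index, field name, and retention window are illustrative):

```js
// Delete documents older than 30 days without blocking on completion;
// `conflicts: 'proceed'` skips version conflicts instead of aborting.
const { task } = await client.deleteByQuery({
  index: 'my-index',
  conflicts: 'proceed',
  wait_for_completion: false,
  query: { range: { created_at: { lt: 'now-30d' } } },
})
console.log(`delete-by-query task: ${task}`) // poll via the tasks API if needed
```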
delete_by_query_rethrottle
Throttle a delete by query operation.
Change the number of requests per second for a particular delete by query operation. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
client.deleteByQueryRethrottle({ task_id })
Arguments
- Request (object):
  - `task_id` (string | number): The ID for the task.
  - `requests_per_second` (Optional, float): The throttle for this request in sub-requests per second.
delete_script
Delete a script or search template. Deletes a stored script or search template.
client.deleteScript({ id })
Arguments
- Request (object):
  - `id` (string): Identifier for the stored script or search template.
  - `master_timeout` (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - `timeout` (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
exists
Check a document. Checks if a specified document exists.
client.exists({ id, index })
Arguments
- Request (object):
  - `id` (string): Identifier of the document.
  - `index` (string): List of data streams, indices, and aliases. Supports wildcards (`*`).
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, the request refreshes the relevant shards before retrieving the document.
  - `routing` (Optional, string): Target the specified primary shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response.
  - `stored_fields` (Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the `_source` parameter defaults to `false`.
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type: `external`, `external_gte`.
exists_source
Check for a document source.
Checks if a document’s `_source` is stored.
client.existsSource({ id, index })
Arguments
- Request (object):
  - `id` (string): Identifier of the document.
  - `index` (string): List of data streams, indices, and aliases. Supports wildcards (`*`).
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, the request refreshes the relevant shards before retrieving the document.
  - `routing` (Optional, string): Target the specified primary shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response.
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type: `external`, `external_gte`.
explain
Explain a document match result. Returns information about why a specific document matches, or doesn’t match, a query.
client.explain({ id, index })
Arguments
- Request (object):
  - `id` (string): Defines the document ID.
  - `index` (string): Index names used to limit the request. Only a single index name can be provided to this parameter.
  - `query` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL.
  - `analyzer` (Optional, string): Analyzer to use for the query string. This parameter can only be used when the `q` query string parameter is specified.
  - `analyze_wildcard` (Optional, boolean): If `true`, wildcard and prefix queries are analyzed.
  - `default_operator` (Optional, Enum("and" | "or")): The default operator for the query string query: `AND` or `OR`.
  - `df` (Optional, string): Field to use as default where no field prefix is given in the query string.
  - `lenient` (Optional, boolean): If `true`, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response.
  - `stored_fields` (Optional, string | string[]): A list of stored fields to return in the response.
  - `q` (Optional, string): Query in the Lucene query string syntax.
field_caps
Get the field capabilities.
Get information about the capabilities of fields among multiple indices.
For data streams, the API returns field capabilities among the stream’s backing indices. It returns runtime fields like any other field. For example, a runtime field with a type of `keyword` is returned the same as any other field that belongs to the `keyword` family.
client.fieldCaps({ ... })
Arguments
- Request (object):
  - `index` (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (`*`). To target all data streams and indices, omit this parameter or use `*` or `_all`.
  - `fields` (Optional, string | string[]): List of fields to retrieve capabilities for. Wildcard (`*`) expressions are supported.
  - `index_filter` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Allows filtering indices if the provided query rewrites to `match_none` on every shard.
  - `runtime_mappings` (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines ad-hoc runtime fields in the request, similar to the way it is done in search requests. These fields exist only as part of the query and take precedence over fields defined with the same name in the index mappings.
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`.
  - `ignore_unavailable` (Optional, boolean): If `true`, missing or closed indices are not included in the response.
  - `include_unmapped` (Optional, boolean): If `true`, unmapped fields are included in the response.
  - `filters` (Optional, string): An optional set of filters: can include `+metadata`, `-metadata`, `-nested`, `-multifield`, `-parent`.
  - `types` (Optional, string[]): Only return results for fields that have one of the types in the list.
  - `include_empty_fields` (Optional, boolean): If `false`, empty fields are not included in the response.
get
Get a document by its ID. Retrieves the document with the specified ID from an index.
client.get({ id, index })
Arguments
- Request (object):
  - `id` (string): Unique identifier of the document.
  - `index` (string): Name of the index that contains the document.
  - `force_synthetic_source` (Optional, boolean): Should this request force synthetic `_source`? Use this to test if the mapping supports synthetic `_source` and to get a sense of the worst-case performance. Fetches with this enabled will be slower than enabling synthetic source natively in the index.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search. If `false`, do nothing with refreshes.
  - `routing` (Optional, string): Target the specified primary shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response.
  - `stored_fields` (Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the `_source` parameter defaults to `false`.
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type: `internal`, `external`, `external_gte`.
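A minimal sketch fetching a document and trimming the returned source (the index, ID, and field names are illustrative):

```js
// Fetch one document, returning only two source fields.
const doc = await client.get({
  index: 'my-index',
  id: '1',
  _source_includes: ['title', 'status'],
})
console.log(doc.found, doc._source)
```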
get_script
Get a script or search template. Retrieves a stored script or search template.
client.getScript({ id })
Arguments
- Request (object):
  - `id` (string): Identifier for the stored script or search template.
  - `master_timeout` (Optional, string | -1 | 0): Specify timeout for connection to master.
get_script_context
Get script contexts.
Get a list of supported script contexts and their methods.
client.getScriptContext()
get_script_languages
Get script languages.
Get a list of available script types, languages, and contexts.
client.getScriptLanguages()
get_source
Get a document’s source. Returns the source of a document.
client.getSource({ id, index })
Arguments
- Request (object):
  - `id` (string): Unique identifier of the document.
  - `index` (string): Name of the index that contains the document.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search. If `false`, do nothing with refreshes.
  - `routing` (Optional, string): Target the specified primary shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response.
  - `stored_fields` (Optional, string | string[])
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type: `internal`, `external`, `external_gte`.
health_report
Get the cluster health. Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.
Each indicator has a health status of green, unknown, yellow, or red. The indicator will provide an explanation and metadata describing the reason for its current health status.
The cluster’s status is controlled by the worst indicator status.
In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.
Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.
The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set `verbose` to `false` to disable the more expensive analysis logic.
client.healthReport({ ... })
Arguments
- Request (object):
  - `feature` (Optional, string | string[]): A feature of the cluster, as returned by the top-level health report API.
  - `timeout` (Optional, string | -1 | 0): Explicit operation timeout.
  - `verbose` (Optional, boolean): Opt in for more information about the health of the system.
  - `size` (Optional, number): Limit the number of affected resources the health report API returns.
index
Index a document. Adds a JSON document to the specified data stream or index and makes it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.
client.index({ index })
Arguments
- Request (object):
  - `index` (string): Name of the data stream or index to target.
  - `id` (Optional, string): Unique identifier for the document.
  - `document` (Optional, object): A document.
  - `if_primary_term` (Optional, number): Only perform the operation if the document has this primary term.
  - `if_seq_no` (Optional, number): Only perform the operation if the document has this sequence number.
  - `op_type` (Optional, Enum("index" | "create")): Set to `create` to only index the document if it does not already exist (put if absent). If a document with the specified `_id` already exists, the indexing operation will fail. Same as using the `<index>/_create` endpoint. Valid values: `index`, `create`. If a document ID is specified, it defaults to `index`. Otherwise, it defaults to `create`.
  - `pipeline` (Optional, string): ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, setting the value to `_none` disables the default ingest pipeline for this request. If a final pipeline is configured, it will always run, regardless of the value of this parameter.
  - `refresh` (Optional, Enum(true | false | "wait_for")): If `true`, Elasticsearch refreshes the affected shards to make this operation visible to search; if `wait_for`, it waits for a refresh to make this operation visible to search; if `false`, it does nothing with refreshes. Valid values: `true`, `false`, `wait_for`.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `timeout` (Optional, string | -1 | 0): Period the request waits for the following operations: automatic index creation, dynamic mapping updates, and waiting for active shards.
  - `version` (Optional, number): Explicit version number for concurrency control. The specified version must match the current version of the document for the request to succeed.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type: `external`, `external_gte`.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas + 1`).
  - `require_alias` (Optional, boolean): If `true`, the destination must be an index alias.
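A minimal sketch, assuming an index named `my-index` (the document is illustrative):

```js
// Index a document with an explicit ID; re-running this call replaces the
// document and increments its version.
const result = await client.index({
  index: 'my-index',
  id: '1',
  document: { title: 'Hello world', tags: ['intro'] },
  refresh: 'wait_for', // make it searchable before the promise resolves
})
console.log(result.result) // 'created' on first run, 'updated' afterwards
```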
info
Get cluster info. Returns basic information about the cluster.
client.info()
knn_search
Run a knn search.
The kNN search API has been replaced by the `knn` option in the search API.
Perform a k-nearest neighbor (kNN) search on a `dense_vector` field and return the matching documents. Given a query vector, the API finds the k closest vectors and returns those documents as search hits.
Elasticsearch uses the HNSW algorithm to support efficient kNN search. Like most kNN algorithms, HNSW is an approximate method that sacrifices result accuracy for improved search speed. This means the results returned are not always the true k closest neighbors.
The kNN search API supports restricting the search using a filter. The search will return the top k documents that also match the filter query.
client.knnSearch({ index, knn })
Arguments
- Request (object):
  - `index` (string | string[]): A list of index names to search; use `_all` or an empty string to perform the operation on all indices.
  - `knn` ({ field, query_vector, k, num_candidates }): kNN query to execute.
  - `_source` (Optional, boolean | { excludes, includes }): Indicates which source fields are returned for matching documents. These fields are returned in the `hits._source` property of the search response.
  - `docvalue_fields` (Optional, { field, format, include_unmapped }[]): The request returns doc values for field names matching these patterns in the `hits.fields` property of the response. Accepts wildcard (`*`) patterns.
  - `stored_fields` (Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the `_source` parameter defaults to `false`. You can pass `_source: true` to return both source fields and stored fields in the search response.
  - `fields` (Optional, string | string[]): The request returns values for field names matching these patterns in the `hits.fields` property of the response. Accepts wildcard (`*`) patterns.
  - `filter` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type } | { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }[]): Query to filter the documents that can match. The kNN search will return the top `k` documents that also match this filter. The value can be a single query or a list of queries. If `filter` isn’t provided, all documents are allowed to match.
  - `routing` (Optional, string): A list of specific routing values.
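A minimal sketch, assuming an index whose mapping has a `dense_vector` field named `embedding` with dimensions matching the query vector (the names, vector, and filter are illustrative):

```js
// Return the 5 approximate nearest neighbors among published documents,
// examining 50 candidates per shard.
const result = await client.knnSearch({
  index: 'my-index',
  knn: {
    field: 'embedding',
    query_vector: [0.12, -0.45, 0.91], // must match the field's dimensions
    k: 5,
    num_candidates: 50,
  },
  filter: { term: { status: 'published' } },
})
console.log(result.hits.hits.map((hit) => hit._id))
```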
mget
Get multiple documents.
Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.
client.mget({ ... })
Arguments
- Request (object):
  - `index` (Optional, string): Name of the index to retrieve documents from when `ids` are specified, or when a document in the `docs` array does not specify an index.
  - `docs` (Optional, { _id, _index, routing, _source, stored_fields, version, version_type }[]): The documents you want to retrieve. Required if no index is specified in the request URI.
  - `ids` (Optional, string | string[]): The IDs of the documents you want to retrieve. Allowed when the index is specified in the request URI.
  - `force_synthetic_source` (Optional, boolean): Should this request force synthetic `_source`? Use this to test if the mapping supports synthetic `_source` and to get a sense of the worst-case performance. Fetches with this enabled will be slower than enabling synthetic source natively in the index.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `refresh` (Optional, boolean): If `true`, the request refreshes relevant shards before retrieving documents.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `_source` (Optional, boolean | string | string[]): `true` or `false` to return the `_source` field or not, or a list of fields to return.
  - `_source_excludes` (Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the `_source_includes` query parameter.
  - `_source_includes` (Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the `_source_excludes` query parameter. If the `_source` parameter is `false`, this parameter is ignored.
  - `stored_fields` (Optional, string | string[]): If `true`, retrieves the document fields stored in the index rather than the document `_source`.
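A minimal sketch fetching several documents in one round trip (the index and IDs are illustrative):

```js
// Missing documents are reported per entry rather than failing the request.
const { docs } = await client.mget({
  index: 'my-index',
  ids: ['1', '2', '3'],
})
for (const doc of docs) {
  console.log(doc._id, doc.found ? doc._source : 'missing')
}
```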
msearch
Run multiple searches.
The format of the request is similar to the bulk API format and makes use of the newline-delimited JSON (NDJSON) format. The structure is as follows:

header\n
body\n
header\n
body\n

This structure is specifically optimized to reduce parsing if a specific search ends up redirected to another node.
The final line of data must end with a newline character `\n`. Each newline character may be preceded by a carriage return `\r`. When sending requests to this endpoint the `Content-Type` header should be set to `application/x-ndjson`.
client.msearch({ ... })
Arguments
- Request (object):
  - `index` (Optional, string | string[]): List of data streams, indices, and index aliases to search.
  - `searches` (Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, index, preference, request_cache, routing, search_type, ccs_minimize_roundtrips, allow_partial_search_results, ignore_throttled } | { aggregations, collapse, query, explain, ext, stored_fields, docvalue_fields, knn, from, highlight, indices_boost, min_score, post_filter, profile, rescore, script_fields, search_after, size, sort, _source, fields, terminate_after, stats, timeout, track_scores, track_total_hits, version, runtime_mappings, seq_no_primary_term, pit, suggest }[])
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
  - `ccs_minimize_roundtrips` (Optional, boolean): If `true`, network round-trips between the coordinating node and remote clusters are minimized for cross-cluster search requests.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.
  - `ignore_throttled` (Optional, boolean): If `true`, concrete, expanded, or aliased indices are ignored when frozen.
  - `ignore_unavailable` (Optional, boolean): If `true`, missing or closed indices are not included in the response.
  - `include_named_queries_score` (Optional, boolean): Indicates whether `hit.matched_queries` should be rendered as a map that includes the name of the matched query associated with its score (`true`) or as an array containing the names of the matched queries (`false`). This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead.
  - `max_concurrent_searches` (Optional, number): Maximum number of concurrent searches the multi search API can execute.
  - `max_concurrent_shard_requests` (Optional, number): Maximum number of concurrent shard requests that each sub-search request executes per node.
  - `pre_filter_shard_size` (Optional, number): Defines a threshold that enforces a pre-filter round trip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter round trip can limit the number of shards significantly if, for instance, a shard can not match any documents based on its rewrite method, i.e., if date filters are mandatory to match but the shard bounds and the query are disjoint.
  - `rest_total_hits_as_int` (Optional, boolean): If `true`, `hits.total` is returned as an integer in the response. Defaults to `false`, which returns an object.
  - `routing` (Optional, string): Custom routing value used to route search operations to a specific shard.
  - `search_type` (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): Indicates whether global term and document frequencies should be used when scoring returned documents.
  - `typed_keys` (Optional, boolean): Specifies whether aggregation and suggester names should be prefixed by their respective types in the response.
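With the JavaScript client, the `searches` array mirrors the NDJSON structure: header objects alternate with body objects. A minimal sketch (the indices and queries are illustrative):

```js
// Two searches in a single round trip.
const { responses } = await client.msearch({
  searches: [
    { index: 'my-index' },                            // header
    { query: { match: { title: 'elasticsearch' } } }, // body
    { index: 'other-index' },                         // header
    { query: { match_all: {} }, size: 3 },            // body
  ],
})
// Results come back in request order; each entry succeeds or fails on its own.
responses.forEach((r, i) => console.log(`search ${i}:`, r.status))
```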
msearch_template
Run multiple templated searches.
client.msearchTemplate({ ... })
Arguments
- Request (object):
  - `index` (Optional, string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (`*`). To search all data streams and indices, omit this parameter or use `*`.
  - `search_templates` (Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, index, preference, request_cache, routing, search_type, ccs_minimize_roundtrips, allow_partial_search_results, ignore_throttled } | { aggregations, collapse, query, explain, ext, stored_fields, docvalue_fields, knn, from, highlight, indices_boost, min_score, post_filter, profile, rescore, script_fields, search_after, size, sort, _source, fields, terminate_after, stats, timeout, track_scores, track_total_hits, version, runtime_mappings, seq_no_primary_term, pit, suggest }[])
  - `ccs_minimize_roundtrips` (Optional, boolean): If `true`, network round-trips are minimized for cross-cluster search requests.
  - `max_concurrent_searches` (Optional, number): Maximum number of concurrent searches the API can run.
  - `search_type` (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation. Available options: `query_then_fetch`, `dfs_query_then_fetch`.
  - `rest_total_hits_as_int` (Optional, boolean): If `true`, the response returns `hits.total` as an integer. If `false`, it returns `hits.total` as an object.
  - `typed_keys` (Optional, boolean): If `true`, the response prefixes aggregation and suggester names with their respective types.
mtermvectors
Get multiple term vectors.
You can specify existing documents by index and ID or provide artificial documents in the body of the request. You can specify the index in the request body or request URI. The response contains a `docs` array with all the fetched termvectors. Each element has the structure provided by the termvectors API.
client.mtermvectors({ ... })
Arguments
- Request (object):
  - `index` (Optional, string): Name of the index that contains the documents.
  - `docs` (Optional, { _id, _index, routing, _source, stored_fields, version, version_type }[]): Array of existing or artificial documents.
  - `ids` (Optional, string[]): Simplified syntax to specify documents by their ID if they’re in the same index.
  - `fields` (Optional, string | string[]): List or wildcard expressions of fields to include in the statistics. Used as the default list unless a specific field list is provided in the `completion_fields` or `fielddata_fields` parameters.
  - `field_statistics` (Optional, boolean): If `true`, the response includes the document count, sum of document frequencies, and sum of total term frequencies.
  - `offsets` (Optional, boolean): If `true`, the response includes term offsets.
  - `payloads` (Optional, boolean): If `true`, the response includes term payloads.
  - `positions` (Optional, boolean): If `true`, the response includes term positions.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `realtime` (Optional, boolean): If `true`, the request is real-time as opposed to near-real-time.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `term_statistics` (Optional, boolean): If `true`, the response includes term frequency and document frequency.
  - `version` (Optional, number): If `true`, returns the document version as part of a hit.
  - `version_type` (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type.
open_point_in_time
Open a point in time.
A search request by default runs against the most recent visible data of the target indices, which is called point in time. Elasticsearch pit (point in time) is a lightweight view into the state of the data as it existed when initiated. In some cases, it’s preferred to perform multiple search requests using the same point in time. For example, if refreshes happen between `search_after` requests, then the results of those requests might not be consistent as changes happening between searches are only visible to the more recent point in time.
A point in time must be opened explicitly before being used in search requests. The `keep_alive` parameter tells Elasticsearch how long it should persist.
client.openPointInTime({ index, keep_alive })
Arguments
- Request (object):
  - `index` (string | string[]): A list of index names to open point in time; use `_all` or empty string to perform the operation on all indices.
  - `keep_alive` (string | -1 | 0): Extends the time to live of the corresponding point in time.
  - `index_filter` (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Allows filtering indices if the provided query rewrites to `match_none` on every shard.
  - `ignore_unavailable` (Optional, boolean): If `false`, the request returns an error if it targets a missing or closed index.
  - `preference` (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - `routing` (Optional, string): Custom value used to route operations to a specific shard.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as `open,hidden`. Valid values are: `all`, `open`, `closed`, `hidden`, `none`.
  - `allow_partial_search_results` (Optional, boolean): If `false`, creating a point in time request when a shard is missing or unavailable will throw an exception. If `true`, the point in time will contain all the shards that are available at the time of the request.
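A minimal sketch of the full lifecycle: open a point in time, search against it, then close it explicitly (the index name, query, and keep-alive are illustrative):

```js
// Open a PIT valid for one minute.
const { id } = await client.openPointInTime({
  index: 'my-index',
  keep_alive: '1m',
})

// Search against the PIT; the request specifies no index, because the PIT
// already pins the target indices and their state.
const result = await client.search({
  pit: { id, keep_alive: '1m' },
  query: { match_all: {} },
  sort: ['_shard_doc'], // a stable sort, useful for search_after paging
})
console.log(result.hits.hits.length)

// Close the PIT instead of waiting for keep_alive to lapse.
await client.closePointInTime({ id })
```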
ping
Ping the cluster. Get information about whether the cluster is running.
client.ping()
put_script
Create or update a script or search template. Creates or updates a stored script or search template.
client.putScript({ id, script })
Arguments
- Request (object):
  - `id` (string): Identifier for the stored script or search template. Must be unique within the cluster.
  - `script` ({ lang, options, source }): Contains the script or search template, its parameters, and its language.
  - `context` (Optional, string): Context in which the script or search template should run. To prevent errors, the API immediately compiles the script or template in this context.
  - `master_timeout` (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - `timeout` (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
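A minimal sketch storing a Mustache search template and running it with the companion `searchTemplate` client method (the template ID, index, and parameters are illustrative):

```js
// Store a search template; the source is a Mustache-templated query.
await client.putScript({
  id: 'my-search-template',
  script: {
    lang: 'mustache',
    source: '{ "query": { "match": { "{{field}}": "{{value}}" } } }',
  },
})

// Run the stored template with concrete parameter values.
const result = await client.searchTemplate({
  index: 'my-index',
  id: 'my-search-template',
  params: { field: 'title', value: 'elasticsearch' },
})
console.log(result.hits.hits.length)
```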
rank_eval
Evaluate ranked search results.
Evaluate the quality of ranked search results over a set of typical search queries.
client.rankEval({ requests })
Arguments
- Request (object):
  - `requests` ({ id, request, ratings, template_id, params }[]): A set of typical search requests, together with their provided ratings.
  - `index` (Optional, string | string[]): List of data streams, indices, and index aliases used to limit the request. Wildcard (`*`) expressions are supported. To target all data streams and indices in a cluster, omit this parameter or use `_all` or `*`.
  - `metric` (Optional, { precision, recall, mean_reciprocal_rank, dcg, expected_reciprocal_rank }): Definition of the evaluation metric to calculate.
  - `allow_no_indices` (Optional, boolean): If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
  - `expand_wildcards` (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expressions to concrete indices that are open, closed, or both.
  - `ignore_unavailable` (Optional, boolean): If `true`, missing or closed indices are not included in the response.
  - `search_type` (Optional, string): Search operation type.
reindex
Reindex documents. Copies documents from a source to a destination. The source can be any existing index, alias, or data stream. The destination must differ from the source. For example, you cannot reindex a data stream into itself.
client.reindex({ dest, source })
Arguments
- Request (object):
  - `dest` ({ index, op_type, pipeline, routing, version_type }): The destination you are copying to.
  - `source` ({ index, query, remote, size, slice, sort, _source, runtime_mappings }): The source you are copying from.
  - `conflicts` (Optional, Enum("abort" | "proceed")): Set to `proceed` to continue reindexing even if there are conflicts.
  - `max_docs` (Optional, number): The maximum number of documents to reindex.
  - `script` (Optional, { source, id, params, lang, options }): The script to run to update the document source or metadata when reindexing.
  - `size` (Optional, number)
  - `refresh` (Optional, boolean): If `true`, the request refreshes affected shards to make this operation visible to search.
  - `requests_per_second` (Optional, float): The throttle for this request in sub-requests per second. Defaults to no throttle.
  - `scroll` (Optional, string | -1 | 0): Specifies how long a consistent view of the index should be maintained for scrolled search.
  - `slices` (Optional, number | Enum("auto")): The number of slices this task should be divided into. Defaults to 1 slice, meaning the task isn’t sliced into subtasks.
  - `timeout` (Optional, string | -1 | 0): Period each indexing waits for automatic index creation, dynamic mapping updates, and waiting for active shards.
  - `wait_for_active_shards` (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to `all` or any positive integer up to the total number of shards in the index (`number_of_replicas + 1`).
  - `wait_for_completion` (Optional, boolean): If `true`, the request blocks until the operation is complete.
  - `require_alias` (Optional, boolean): If `true`, the destination must be an index alias.
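A minimal sketch copying a filtered subset of one index into another as a background task (the index names and query are illustrative):

```js
// Copy the last 90 days of documents; `slices: 'auto'` parallelizes the
// copy, and the returned task ID can be polled or rethrottled later.
const { task } = await client.reindex({
  source: {
    index: 'old-index',
    query: { range: { created_at: { gte: 'now-90d' } } },
  },
  dest: { index: 'new-index' },
  conflicts: 'proceed',
  slices: 'auto',
  wait_for_completion: false,
})
console.log(`reindex task: ${task}`)
```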
reindex_rethrottle
Throttle a reindex operation.
Change the number of requests per second for a particular reindex operation.
client.reindexRethrottle({ task_id })
Arguments
- Request (object):
  - `task_id` (string): Identifier for the task.
  - `requests_per_second` (Optional, float): The throttle for this request in sub-requests per second.
render_search_template
Render a search template.
Render a search template as a search request body.
client.renderSearchTemplate({ ... })
Arguments
- Request (object):
  - `id` (Optional, string): ID of the search template to render. If no `source` is specified, this or the `id` request body parameter is required.
  - `file` (Optional, string)
  - `params` (Optional, Record<string, User-defined value>): Key-value pairs used to replace Mustache variables in the template. The key is the variable name. The value is the variable value.
  - `source` (Optional, string): An inline search template. Supports the same parameters as the search API’s request body. These parameters also support Mustache variables. If no `id` or `<templated-id>` is specified, this parameter is required.
scripts_painless_execute
Run a script. Runs a script and returns a result.
client.scriptsPainlessExecute({ ... })
Arguments
- Request (object):
  - `context` (Optional, string): The context that the script should run in.
  - `context_setup` (Optional, { document, index, query }): Additional parameters for the `context`.
  - `script` (Optional, { source, id, params, lang, options }): The Painless script to execute.
scroll
Run a scrolling search.
The scroll API is no longer recommended for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the `search_after` parameter with a point in time (PIT).
The scroll API gets large sets of results from a single scrolling search request. To get the necessary scroll ID, submit a search API request that includes an argument for the `scroll` query parameter. The `scroll` parameter indicates how long Elasticsearch should retain the search context for the request. The search response returns a scroll ID in the `_scroll_id` response body parameter. You can then use the scroll ID with the scroll API to retrieve the next batch of results for the request. If the Elasticsearch security features are enabled, the access to the results of a specific scroll ID is restricted to the user or API key that submitted the search.
You can also use the scroll API to specify a new scroll parameter that extends or shortens the retention period for the search context.
Results from a scrolling search reflect the state of the index at the time of the initial search request. Subsequent indexing or document changes only affect later search and scroll requests.
client.scroll({ scroll_id })
Arguments
edit-
Request (object):
-
scroll_id
(string): Scroll ID of the search. -
scroll
(Optional, string | -1 | 0): Period to retain the search context for scrolling. -
rest_total_hits_as_int
(Optional, boolean): If true, the API response’s hit.total property is returned as an integer. If false, the API response’s hit.total property is returned as an object.
-
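A sketch of a full scroll loop over a hypothetical index, keeping each search context alive for one minute between batches:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Initial search opens the scroll context and returns the first batch.
let response = await client.search({
  index: 'my-index',   // hypothetical index
  scroll: '1m',
  size: 1000,
  query: { match_all: {} }
})

while (response.hits.hits.length > 0) {
  for (const hit of response.hits.hits) {
    // process hit._source ...
  }
  // Fetch the next batch and renew the context for another minute.
  response = await client.scroll({ scroll_id: response._scroll_id, scroll: '1m' })
}

// Free the search context as soon as you are done.
await client.clearScroll({ scroll_id: response._scroll_id })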
search
Run a search.
Get search hits that match the query defined in the request.
You can provide search queries using the q query string parameter or the request body.
If both are specified, only the query parameter is used.
client.search({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (*). To search all data streams and indices, omit this parameter or use * or _all.
  - aggregations (Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): Defines the aggregations that are run as part of the search request.
  - collapse (Optional, { field, inner_hits, max_concurrent_group_searches, collapse }): Collapses search results by the values of the specified field.
  - explain (Optional, boolean): If true, returns detailed information about score computation as part of a hit.
  - ext (Optional, Record<string, User-defined value>): Configuration of search extensions defined by Elasticsearch plugins.
  - from (Optional, number): Starting document offset. Needs to be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
  - highlight (Optional, { encoder, fields }): Specifies the highlighter to use for retrieving highlighted snippets from one or more fields in your search results.
  - track_total_hits (Optional, boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query.
  - indices_boost (Optional, Record<string, number>[]): Boosts the _score of documents from specified indices.
  - docvalue_fields (Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.
  - knn (Optional, { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits } | { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits }[]): Defines the approximate kNN search to run.
  - rank (Optional, { rrf }): Defines the Reciprocal Rank Fusion (RRF) to use.
  - min_score (Optional, number): Minimum _score for matching documents. Documents with a lower _score are not included in the search results.
  - post_filter (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Use the post_filter parameter to filter search results. The search hits are filtered after the aggregations are calculated. A post filter has no impact on the aggregation results.
  - profile (Optional, boolean): Set to true to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution.
  - query (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL.
  - rescore (Optional, { window_size, query, learning_to_rank } | { window_size, query, learning_to_rank }[]): Can be used to improve precision by reordering just the top (for example 100 - 500) documents returned by the query and post_filter phases.
  - retriever (Optional, { standard, knn, rrf, text_similarity_reranker, rule }): A retriever is a specification to describe top documents returned from a search. A retriever replaces other elements of the search API that also return top documents, such as query and knn.
  - script_fields (Optional, Record<string, { script, ignore_failure }>): Retrieve a script evaluation (based on different fields) for each hit.
  - search_after (Optional, number | number | string | boolean | null | User-defined value[]): Used to retrieve the next page of hits using a set of sort values from the previous page.
  - size (Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
  - slice (Optional, { field, id, max }): Can be used to split a scrolled search into multiple slices that can be consumed independently.
  - sort (Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): A list of <field>:<direction> pairs.
  - _source (Optional, boolean | { excludes, includes }): Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response.
  - fields (Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.
  - suggest (Optional, { text }): Defines a suggester that provides similar looking terms based on a provided text.
  - terminate_after (Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers. If set to 0 (default), the query does not terminate early.
  - timeout (Optional, string): Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
  - track_scores (Optional, boolean): If true, calculate and return document scores, even if the scores are not used for sorting.
  - version (Optional, boolean): If true, returns document version as part of a hit.
  - seq_no_primary_term (Optional, boolean): If true, returns sequence number and primary term of the last modification of each hit.
  - stored_fields (Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.
  - pit (Optional, { id, keep_alive }): Limits the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path.
  - runtime_mappings (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name.
  - stats (Optional, string[]): Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - allow_partial_search_results (Optional, boolean): If true, returns partial results if there are shard request timeouts or shard failures. If false, returns an error with no partial results.
  - analyzer (Optional, string): Analyzer to use for the query string. This parameter can only be used when the q query string parameter is specified.
  - analyze_wildcard (Optional, boolean): If true, wildcard and prefix queries are analyzed. This parameter can only be used when the q query string parameter is specified.
  - batched_reduce_size (Optional, number): The number of shard results that should be reduced at once on the coordinating node. This value should be used as a protection mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large.
  - ccs_minimize_roundtrips (Optional, boolean): If true, network round-trips between the coordinating node and the remote clusters are minimized when executing cross-cluster search (CCS) requests.
  - default_operator (Optional, Enum("and" | "or")): The default operator for query string query: AND or OR. This parameter can only be used when the q query string parameter is specified.
  - df (Optional, string): Field to use as default where no field prefix is given in the query string. This parameter can only be used when the q query string parameter is specified.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden.
  - ignore_throttled (Optional, boolean): If true, concrete, expanded or aliased indices will be ignored when frozen.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - include_named_queries_score (Optional, boolean): Indicates whether hit.matched_queries should be rendered as a map that includes the name of the matched query associated with its score (true) or as an array containing the name of the matched queries (false). This functionality reruns each named query on every hit in a search response. Typically, this adds a small overhead to a request. However, using computationally expensive named queries on a large number of hits may add significant overhead.
  - lenient (Optional, boolean): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can only be used when the q query string parameter is specified.
  - max_concurrent_shard_requests (Optional, number): Defines the number of concurrent shard requests per node this search executes concurrently. This value should be used to limit the impact of the search on the cluster in order to limit the number of concurrent shard requests.
  - min_compatible_shard_node (Optional, string): The minimum version of the node that can handle the request. Any handling node with a lower version will fail the request.
  - preference (Optional, string): Nodes and shards used for the search. By default, Elasticsearch selects from eligible nodes and shards using adaptive replica selection, accounting for allocation awareness. Valid values are: _only_local to run the search only on shards on the local node; _local to, if possible, run the search on shards on the local node, or if not, select shards using the default method; _only_nodes:<node-id>,<node-id> to run the search on only the specified node IDs, where, if suitable shards exist on more than one selected node, use shards on those nodes using the default method, or if none of the specified nodes are available, select shards from any available node using the default method; _prefer_nodes:<node-id>,<node-id> to, if possible, run the search on the specified node IDs, or if not, select shards using the default method; _shards:<shard>,<shard> to run the search only on the specified shards; <custom-string> (any string that does not start with _) to route searches with the same <custom-string> to the same shards in the same order.
  - pre_filter_shard_size (Optional, number): Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if, for instance, a shard cannot match any documents based on its rewrite method (for example, if date filters are mandatory to match but the shard bounds and the query are disjoint). When unspecified, the pre-filter phase is executed if any of these conditions is met: the request targets more than 128 shards; the request targets one or more read-only indices; the primary sort of the query targets an indexed field.
  - request_cache (Optional, boolean): If true, the caching of search results is enabled for requests where size is 0. Defaults to index level settings.
  - routing (Optional, string): Custom value used to route operations to a specific shard.
  - scroll (Optional, string | -1 | 0): Period to retain the search context for scrolling. See Scroll search results. By default, this value cannot exceed 1d (24 hours). You can change this limit using the search.max_keep_alive cluster-level setting.
  - search_type (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): How distributed term frequencies are calculated for relevance scoring.
  - suggest_field (Optional, string): Specifies which field to use for suggestions.
  - suggest_mode (Optional, Enum("missing" | "popular" | "always")): Specifies the suggest mode. This parameter can only be used when the suggest_field and suggest_text query string parameters are specified.
  - suggest_size (Optional, number): Number of suggestions to return. This parameter can only be used when the suggest_field and suggest_text query string parameters are specified.
  - suggest_text (Optional, string): The source text for which the suggestions should be returned. This parameter can only be used when the suggest_field and suggest_text query string parameters are specified.
  - typed_keys (Optional, boolean): If true, aggregation and suggester names are prefixed by their respective types in the response.
  - rest_total_hits_as_int (Optional, boolean): Indicates whether hits.total should be rendered as an integer or an object in the rest search response.
  - _source_excludes (Optional, string | string[]): A list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter. If the _source parameter is false, this parameter is ignored.
  - _source_includes (Optional, string | string[]): A list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.
  - q (Optional, string): Query in the Lucene query string syntax using query parameter search. Query parameter searches do not support the full Elasticsearch Query DSL but are handy for testing.
  - force_synthetic_source (Optional, boolean): Should this request force synthetic _source? Use this to test if the mapping supports synthetic _source and to get a sense of the worst-case performance. Fetches with this enabled will be slower than enabling synthetic source natively in the index.
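To make the request shape concrete, a minimal sketch of a search that combines a query, an aggregation, sorting, and pagination; the index and field names are assumptions:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Match query plus a terms aggregation; from/size paginate the first 10 hits.
const result = await client.search({
  index: 'products',   // hypothetical index
  from: 0,
  size: 10,
  query: { match: { description: 'wireless headphones' } },
  aggregations: {
    by_brand: { terms: { field: 'brand.keyword' } }   // hypothetical keyword field
  },
  sort: [{ price: 'asc' }]
})
console.log(result.hits.total, result.aggregations.by_brand.buckets)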
search_mvt
Search a vector tile.
Search a vector tile for geospatial values.
client.searchMvt({ index, field, zoom, x, y })
Arguments
- Request (object):
  - index (string | string[]): List of data streams, indices, or aliases to search
  - field (string): Field containing geospatial data to return
  - zoom (number): Zoom level for the vector tile to search
  - x (number): X coordinate for the vector tile to search
  - y (number): Y coordinate for the vector tile to search
  - aggs (Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): Sub-aggregations for the geotile_grid. Supports the following aggregation types: avg, cardinality, max, min, sum.
  - buffer (Optional, number): Size, in pixels, of a clipping buffer outside the tile. This allows renderers to avoid outline artifacts from geometries that extend past the extent of the tile.
  - exact_bounds (Optional, boolean): If false, the meta layer’s feature is the bounding box of the tile. If true, the meta layer’s feature is a bounding box resulting from a geo_bounds aggregation. The aggregation runs on <field> values that intersect the <zoom>/<x>/<y> tile with wrap_longitude set to false. The resulting bounding box may be larger than the vector tile.
  - extent (Optional, number): Size, in pixels, of a side of the tile. Vector tiles are square with equal sides.
  - fields (Optional, string | string[]): Fields to return in the hits layer. Supports wildcards (*). This parameter does not support fields with array values. Fields with array values may return inconsistent results.
  - grid_agg (Optional, Enum("geotile" | "geohex")): Aggregation used to create a grid for the field.
  - grid_precision (Optional, number): Additional zoom levels available through the aggs layer. For example, if <zoom> is 7 and grid_precision is 8, you can zoom in up to level 15. Accepts 0-8. If 0, results don’t include the aggs layer.
  - grid_type (Optional, Enum("grid" | "point" | "centroid")): Determines the geometry type for features in the aggs layer. In the aggs layer, each feature represents a geotile_grid cell. If grid, each feature is a Polygon of the cell’s bounding box. If point, each feature is a Point that is the centroid of the cell.
  - query (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Query DSL used to filter documents for the search.
  - runtime_mappings (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name.
  - size (Optional, number): Maximum number of features to return in the hits layer. Accepts 0-10000. If 0, results don’t include the hits layer.
  - sort (Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): Sorts features in the hits layer. By default, the API calculates a bounding box for each feature. It sorts features based on this box’s diagonal length, from longest to shortest.
  - track_total_hits (Optional, boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query.
  - with_labels (Optional, boolean): If true, the hits and aggs layers will contain additional point features representing suggested label positions for the original features.
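A sketch of fetching one tile; the index and field names are hypothetical, and the response body is the binary Mapbox vector tile rather than JSON:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Fetches the zoom-13 tile at x=4207, y=2692 for a geo_point field.
const tile = await client.searchMvt({
  index: 'museums',    // hypothetical index with a geo_point mapping
  field: 'location',   // hypothetical geospatial field
  zoom: 13,
  x: 4207,
  y: 2692,
  grid_precision: 2
})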
search_shards
Get the search shards.
Get the indices and shards that a search request would be run against. This information can be useful for working out issues or planning optimizations with routing and shard preferences. When filtered aliases are used, the filter is returned as part of the indices section.
client.searchShards({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): Returns the indices and shards that a search request would be executed against.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - local (Optional, boolean): If true, the request retrieves information from the local node only.
  - preference (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - routing (Optional, string): Custom value used to route operations to a specific shard.
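For example, a sketch checking which shards a routed search would hit; the index and routing value are assumptions:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// With a routing value, the response shows the single shard group
// that searches for this routing key would be sent to.
const response = await client.searchShards({
  index: 'orders',         // hypothetical index
  routing: 'customer-42'   // hypothetical routing value
})
console.log(response.shards)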
search_template
Run a search with a search template.
client.searchTemplate({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (*).
  - explain (Optional, boolean): If true, returns detailed information about score calculation as part of each hit.
  - id (Optional, string): ID of the search template to use. If no source is specified, this parameter is required.
  - params (Optional, Record<string, User-defined value>): Key-value pairs used to replace Mustache variables in the template. The key is the variable name. The value is the variable value.
  - profile (Optional, boolean): If true, the query execution is profiled.
  - source (Optional, string): An inline search template. Supports the same parameters as the search API’s request body. Also supports Mustache variables. If no id is specified, this parameter is required.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - ccs_minimize_roundtrips (Optional, boolean): If true, network round-trips are minimized for cross-cluster search requests.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_throttled (Optional, boolean): If true, specified concrete, expanded, or aliased indices are not included in the response when throttled.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - preference (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - routing (Optional, string): Custom value used to route operations to a specific shard.
  - scroll (Optional, string | -1 | 0): Specifies how long a consistent view of the index should be maintained for scrolled search.
  - search_type (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation.
  - rest_total_hits_as_int (Optional, boolean): If true, hits.total is rendered as an integer in the response.
  - typed_keys (Optional, boolean): If true, the response prefixes aggregation and suggester names with their respective types.
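A sketch running a stored template; the index, template ID, and parameters are hypothetical:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Executes the stored template against an index, substituting the
// Mustache variables with the given params.
const result = await client.searchTemplate({
  index: 'my-index',          // hypothetical index
  id: 'my-search-template',   // hypothetical stored template ID
  params: { query_string: 'hello world', from: 0, size: 10 }
})
console.log(result.hits.hits)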
terms_enum
Get terms in an index.
Discover terms that match a partial string in an index. This "terms enum" API is designed for low-latency look-ups used in auto-complete scenarios.
If the complete property in the response is false, the returned terms set may be incomplete and should be treated as approximate.
This can occur due to a few reasons, such as a request timeout or a node error.
The terms enum API may return terms from deleted documents. Deleted documents are initially only marked as deleted. It is not until their segments are merged that documents are actually deleted. Until that happens, the terms enum API will return terms from these documents.
client.termsEnum({ index, field })
Arguments
- Request (object):
  - index (string): List of data streams, indices, and index aliases to search. Wildcard (*) expressions are supported.
  - field (string): The name of the field to match terms against.
  - size (Optional, number): How many matching terms to return.
  - timeout (Optional, string | -1 | 0): The maximum length of time to spend collecting results. Defaults to "1s" (one second). If the timeout is exceeded, the complete flag is set to false in the response and the results may be partial or empty.
  - case_insensitive (Optional, boolean): When true, the provided search string is matched against index terms without case sensitivity.
  - index_filter (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Allows to filter an index shard if the provided query rewrites to match_none.
  - string (Optional, string): The string to match at the start of indexed terms. If not provided, all terms in the field are considered.
  - search_after (Optional, string): The string after which terms in the index should be returned. Allows for a form of pagination if the last result from one request is passed as the search_after parameter for a subsequent request.
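As an auto-complete sketch (the index and field are hypothetical), returning up to ten terms that start with a user-typed prefix:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Suggest keyword terms beginning with "kib", case-insensitively.
const response = await client.termsEnum({
  index: 'stack-logs',        // hypothetical index
  field: 'service.keyword',   // hypothetical keyword field
  string: 'kib',
  size: 10,
  case_insensitive: true
})
// If complete is false, treat the terms list as approximate.
console.log(response.terms, response.complete)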
termvectors
Get term vector information.
Get information and statistics about terms in the fields of a particular document.
client.termvectors({ index })
Arguments
- Request (object):
  - index (string): Name of the index that contains the document.
  - id (Optional, string): Unique identifier of the document.
  - doc (Optional, object): An artificial document (a document not present in the index) for which you want to retrieve term vectors.
  - filter (Optional, { max_doc_freq, max_num_terms, max_term_freq, max_word_length, min_doc_freq, min_term_freq, min_word_length }): Filter terms based on their tf-idf scores.
  - per_field_analyzer (Optional, Record<string, string>): Overrides the default per-field analyzer.
  - fields (Optional, string | string[]): List or wildcard expressions of fields to include in the statistics. Used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.
  - field_statistics (Optional, boolean): If true, the response includes the document count, sum of document frequencies, and sum of total term frequencies.
  - offsets (Optional, boolean): If true, the response includes term offsets.
  - payloads (Optional, boolean): If true, the response includes term payloads.
  - positions (Optional, boolean): If true, the response includes term positions.
  - preference (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - realtime (Optional, boolean): If true, the request is real-time as opposed to near-real-time.
  - routing (Optional, string): Custom value used to route operations to a specific shard.
  - term_statistics (Optional, boolean): If true, the response includes term frequency and document frequency.
  - version (Optional, number): If true, returns the document version as part of a hit.
  - version_type (Optional, Enum("internal" | "external" | "external_gte" | "force")): Specific version type.
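A sketch requesting term statistics for one field of a stored document; the index, document ID, and field are hypothetical:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Returns per-term frequencies, positions, and document frequencies
// for the "text" field of document 1.
const response = await client.termvectors({
  index: 'my-index',   // hypothetical index
  id: '1',             // hypothetical document ID
  fields: ['text'],    // hypothetical field
  term_statistics: true,
  field_statistics: true
})
console.log(response.term_vectors.text.terms)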
update
Update a document. Updates a document by running a script or passing a partial document.
client.update({ id, index })
Arguments
- Request (object):
  - id (string): Document ID
  - index (string): The name of the index
  - detect_noop (Optional, boolean): Set to false to disable setting result in the response to noop if no change to the document occurred.
  - doc (Optional, object): A partial update to an existing document.
  - doc_as_upsert (Optional, boolean): Set to true to use the contents of doc as the value of upsert.
  - script (Optional, { source, id, params, lang, options }): Script to execute to update the document.
  - scripted_upsert (Optional, boolean): Set to true to execute the script whether or not the document exists.
  - _source (Optional, boolean | { excludes, includes }): Set to false to disable source retrieval. You can also specify a comma-separated list of the fields you want to retrieve.
  - upsert (Optional, object): If the document does not already exist, the contents of upsert are inserted as a new document. If the document exists, the script is executed.
  - if_primary_term (Optional, number): Only perform the operation if the document has this primary term.
  - if_seq_no (Optional, number): Only perform the operation if the document has this sequence number.
  - lang (Optional, string): The script language.
  - refresh (Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search; if wait_for, wait for a refresh to make this operation visible to search; if false, do nothing with refreshes.
  - require_alias (Optional, boolean): If true, the destination must be an index alias.
  - retry_on_conflict (Optional, number): Specify how many times the operation should be retried when a conflict occurs.
  - routing (Optional, string): Custom value used to route operations to a specific shard.
  - timeout (Optional, string | -1 | 0): Period to wait for dynamic mapping updates and active shards. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). Defaults to 1, meaning the primary shard.
  - _source_excludes (Optional, string | string[]): Specify the source fields you want to exclude.
  - _source_includes (Optional, string | string[]): Specify the source fields you want to retrieve.
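A sketch of the two update styles on a hypothetical document: a scripted counter increment with an upsert fallback, then a partial-document merge:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Scripted update: increment a counter, creating the doc if it is missing.
await client.update({
  index: 'page-stats',   // hypothetical index
  id: 'home',
  script: {
    source: 'ctx._source.views += params.by',
    params: { by: 1 }
  },
  upsert: { views: 1 },
  retry_on_conflict: 3
})

// Partial update: fields in doc are merged into the existing source.
await client.update({
  index: 'page-stats',
  id: 'home',
  doc: { last_seen: new Date().toISOString() }
})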
update_by_query
Update documents. Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.
client.updateByQuery({ index })
Arguments
- Request (object):
  - index (string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.
  - max_docs (Optional, number): The maximum number of documents to update.
  - query (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Specifies the documents to update using the Query DSL.
  - script (Optional, { source, id, params, lang, options }): The script to run to update the document source or metadata when updating.
  - slice (Optional, { field, id, max }): Slice the request manually using the provided slice ID and total number of slices.
  - conflicts (Optional, Enum("abort" | "proceed")): What to do if update by query hits version conflicts: abort or proceed.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
  - analyzer (Optional, string): Analyzer to use for the query string.
  - analyze_wildcard (Optional, boolean): If true, wildcard and prefix queries are analyzed.
  - default_operator (Optional, Enum("and" | "or")): The default operator for query string query: AND or OR.
  - df (Optional, string): Field to use as default where no field prefix is given in the query string.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - from (Optional, number): Starting offset (default: 0).
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - lenient (Optional, boolean): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
  - pipeline (Optional, string): ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured, it will always run, regardless of the value of this parameter.
  - preference (Optional, string): Specifies the node or shard the operation should be performed on. Random by default.
  - q (Optional, string): Query in the Lucene query string syntax.
  - refresh (Optional, boolean): If true, Elasticsearch refreshes affected shards to make the operation visible to search.
  - request_cache (Optional, boolean): If true, the request cache is used for this request.
  - requests_per_second (Optional, float): The throttle for this request in sub-requests per second.
  - routing (Optional, string): Custom value used to route operations to a specific shard.
  - scroll (Optional, string | -1 | 0): Period to retain the search context for scrolling.
  - scroll_size (Optional, number): Size of the scroll request that powers the operation.
  - search_timeout (Optional, string | -1 | 0): Explicit timeout for each search request.
  - search_type (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): The type of the search operation. Available options: query_then_fetch, dfs_query_then_fetch.
  - slices (Optional, number | Enum("auto")): The number of slices this task should be divided into.
  - sort (Optional, string[]): A list of <field>:<direction> pairs.
  - stats (Optional, string[]): Specific tag of the request for logging and statistical purposes.
  - terminate_after (Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
  - timeout (Optional, string | -1 | 0): Period each update request waits for the following operations: dynamic mapping updates, waiting for active shards.
  - version (Optional, boolean): If true, returns the document version as part of a hit.
  - version_type (Optional, boolean): Should the document increment the version number (internal) on hit or not (reindex).
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
  - wait_for_completion (Optional, boolean): If true, the request blocks until the operation is complete.
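A sketch that back-fills a field on matching documents, proceeding past version conflicts instead of aborting; the names are hypothetical:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Flags every order still marked "pending" as stale, skipping documents
// that were concurrently modified rather than failing the whole run.
const response = await client.updateByQuery({
  index: 'orders',   // hypothetical index
  conflicts: 'proceed',
  query: { term: { status: 'pending' } },
  script: { source: 'ctx._source.stale = true' }
})
console.log(response.updated, response.version_conflicts)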
update_by_query_rethrottle
Throttle an update by query operation.
Change the number of requests per second for a particular update by query operation. Rethrottling that speeds up the query takes effect immediately, but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.
client.updateByQueryRethrottle({ task_id })
Arguments
- Request (object):
  - task_id (string): The ID for the task.
  - requests_per_second (Optional, float): The throttle for this request in sub-requests per second.
async_search
delete
Delete an async search.
If the asynchronous search is still running, it is cancelled.
Otherwise, the saved search results are deleted.
If the Elasticsearch security features are enabled, the deletion of a specific async search is restricted to: the authenticated user that submitted the original search request; users that have the cancel_task cluster privilege.
client.asyncSearch.delete({ id })
Arguments
- Request (object):
  - id (string): A unique identifier for the async search.
get
Get async search results.
Retrieve the results of a previously submitted asynchronous search request. If the Elasticsearch security features are enabled, access to the results of a specific async search is restricted to the user or API key that submitted it.
client.asyncSearch.get({ id })
Arguments
- Request (object):
  - id (string): A unique identifier for the async search.
  - keep_alive (Optional, string | -1 | 0): Specifies how long the async search should be available in the cluster. When not specified, the keep_alive set with the corresponding submit async request will be used. Otherwise, it is possible to override the value and extend the validity of the request. When this period expires, the search, if still running, is cancelled. If the search is completed, its saved results are deleted.
  - typed_keys (Optional, boolean): Specify whether aggregation and suggester names should be prefixed by their respective types in the response.
  - wait_for_completion_timeout (Optional, string | -1 | 0): Specifies to wait for the search to be completed up until the provided timeout. Final results will be returned if available before the timeout expires; otherwise the currently available results will be returned once the timeout expires. By default no timeout is set, meaning that the currently available results will be returned without any additional wait.
status
Get the async search status.
Get the status of a previously submitted async search request given its identifier, without retrieving search results.
If the Elasticsearch security features are enabled, use of this API is restricted to the monitoring_user role.
client.asyncSearch.status({ id })
Arguments
- Request (object):
  - id (string): A unique identifier for the async search.
  - keep_alive (Optional, string | -1 | 0): Specifies how long the async search needs to be available. Ongoing async searches and any saved search results are deleted after this period.
submit
Run an async search.
When the primary sort of the results is an indexed field, shards get sorted based on minimum and maximum value that they hold for that field. Partial results become available following the sort criteria that was requested.
Warning: Asynchronous search does not support scroll or search requests that include only the suggest section.
By default, Elasticsearch does not allow you to store an async search response larger than 10MB, and an attempt to do this results in an error.
The maximum allowed size for a stored async search response can be set by changing the search.max_async_search_response_size cluster level setting.
client.asyncSearch.submit({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): A list of index names to search; use _all or an empty string to perform the operation on all indices.
  - aggregations (Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>)
  - collapse (Optional, { field, inner_hits, max_concurrent_group_searches, collapse })
  - explain (Optional, boolean): If true, returns detailed information about score computation as part of a hit.
  - ext (Optional, Record<string, User-defined value>): Configuration of search extensions defined by Elasticsearch plugins.
  - from (Optional, number): Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
  - highlight (Optional, { encoder, fields })
  - track_total_hits (Optional, boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.
  - indices_boost (Optional, Record<string, number>[]): Boosts the _score of documents from specified indices.
  - docvalue_fields (Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.
  - knn (Optional, { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits } | { field, query_vector, query_vector_builder, k, num_candidates, boost, filter, similarity, inner_hits }[]): Defines the approximate kNN search to run.
  - min_score (Optional, number): Minimum _score for matching documents. Documents with a lower _score are not included in the search results.
  - post_filter (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type })
  - profile (Optional, boolean)
  - query (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL.
  - rescore (Optional, { window_size, query, learning_to_rank } | { window_size, query, learning_to_rank }[])
  - script_fields (Optional, Record<string, { script, ignore_failure }>): Retrieve a script evaluation (based on different fields) for each hit.
  - search_after (Optional, number | number | string | boolean | null | User-defined value[])
  - size (Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
  - slice (Optional, { field, id, max })
  - sort (Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[])
  - _source (Optional, boolean | { excludes, includes }): Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response.
  - fields (Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.
  - suggest (Optional, { text })
  - terminate_after (Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Defaults to 0, which does not terminate query execution early.
  - timeout (Optional, string): Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
  - track_scores (Optional, boolean): If true, calculate and return document scores, even if the scores are not used for sorting.
  - version (Optional, boolean): If true, returns document version as part of a hit.
  - seq_no_primary_term (Optional, boolean): If true, returns sequence number and primary term of the last modification of each hit. See Optimistic concurrency control.
  - stored_fields (Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response.
  - pit (Optional, { id, keep_alive }): Limits the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path.
  - runtime_mappings (Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name.
  - stats (Optional, string[]): Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
  - wait_for_completion_timeout (Optional, string | -1 | 0): Blocks and waits until the search is completed up to a certain timeout. When the async search completes within the timeout, the response won’t include the ID, as the results are not stored in the cluster.
  - keep_on_completion (Optional, boolean): If true, results are stored for later retrieval when the search completes within the wait_for_completion_timeout.
  - allow_no_indices (Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the _all string or when no indices have been specified.)
  - allow_partial_search_results (Optional, boolean): Indicate if an error should be returned if there is a partial search failure or timeout.
  - analyzer (Optional, string): The analyzer to use for the query string.
  - analyze_wildcard (Optional, boolean): Specify whether wildcard and prefix queries should be analyzed (default: false).
  - batched_reduce_size (Optional, number): Affects how often partial results become available, which happens whenever shard results are reduced. A partial reduction is performed every time the coordinating node has received a certain number of new shard responses (5 by default).
  - ccs_minimize_roundtrips (Optional, boolean): The default value is the only supported value.
  - default_operator (Optional, Enum("and" | "or")): The default operator for query string query (AND or OR).
  - df (Optional, string): The field to use as default where no field prefix is given in the query string.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both.
  - ignore_throttled (Optional, boolean): Whether specified concrete, expanded or aliased indices should be ignored when throttled.
  - ignore_unavailable (Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed).
  - lenient (Optional, boolean): Specify whether format-based query failures (such as providing text to a numeric field) should be ignored.
  - max_concurrent_shard_requests (Optional, number): The number of concurrent shard requests per node this search executes concurrently. This value should be used to limit the impact of the search on the cluster in order to limit the number of concurrent shard requests.
  - min_compatible_shard_node (Optional, string)
  - preference (Optional, string): Specify the node or shard the operation should be performed on (default: random).
  - request_cache (Optional, boolean): Specify if request cache should be used for this request or not (defaults to true).
  - routing (Optional, string): A list of specific routing values.
  - search_type (Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): Search operation type.
  - suggest_field (Optional, string): Specifies which field to use for suggestions.
  - suggest_mode (Optional, Enum("missing" | "popular" | "always")): Specify suggest mode.
  - suggest_size (Optional, number): How many suggestions to return in response.
  - suggest_text (Optional, string): The source text for which the suggestions should be returned.
  - typed_keys (Optional, boolean): Specify whether aggregation and suggester names should be prefixed by their respective types in the response.
  - rest_total_hits_as_int (Optional, boolean): Indicates whether hits.total should be rendered as an integer or an object in the rest search response.
  - _source_excludes (Optional, string | string[]): A list of fields to exclude from the returned _source field.
  - _source_includes (Optional, string | string[]): A list of fields to extract and return from the _source field.
  - q (Optional, string): Query in the Lucene query string syntax.
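A sketch of the submit-then-poll flow; the index and query are hypothetical:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({ node: 'http://localhost:9200' })

// Submit and wait up to 2s; keep the results even if the search
// finishes within that window so they can be fetched again later.
const submitted = await client.asyncSearch.submit({
  index: 'web-logs',   // hypothetical index
  wait_for_completion_timeout: '2s',
  keep_on_completion: true,
  query: { range: { '@timestamp': { gte: 'now-1d' } } }
})

if (submitted.is_running) {
  // Poll for the final results using the returned search ID.
  const finished = await client.asyncSearch.get({ id: submitted.id })
  console.log(finished.response.hits.total)
}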
autoscaling
delete_autoscaling_policy
Delete an autoscaling policy.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
client.autoscaling.deleteAutoscalingPolicy({ name })
Arguments
- Request (object):
  - name (string): the name of the autoscaling policy
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
get_autoscaling_capacity
editGet the autoscaling capacity.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
This API gets the current autoscaling capacity based on the configured autoscaling policy. It will return information to size the cluster appropriately to the current workload.
The required_capacity
is calculated as the maximum of the required_capacity
result of all individual deciders that are enabled for the policy.
The operator should verify that the current_nodes
match the operator’s knowledge of the cluster to avoid making autoscaling decisions based on stale or incomplete information.
The response contains decider-specific information you can use to diagnose how and why autoscaling determined a certain capacity was required. This information is provided for diagnosis only. Do not use this information to make autoscaling decisions.
client.autoscaling.getAutoscalingCapacity({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
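A minimal sketch of polling the capacity for diagnostics (remember that the decider details in the response are for diagnosis only):
const capacity = await client.autoscaling.getAutoscalingCapacity();
// Inspect required_capacity and current_nodes for each configured policy.
console.log(JSON.stringify(capacity, null, 2));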
get_autoscaling_policy
editGet an autoscaling policy.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
client.autoscaling.getAutoscalingPolicy({ name })
Arguments
edit-
Request (object):
-
name
(string): the name of the autoscaling policy -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
put_autoscaling_policy
editCreate or update an autoscaling policy.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
client.autoscaling.putAutoscalingPolicy({ name })
Arguments
edit-
Request (object):
-
name
(string): the name of the autoscaling policy -
policy
(Optional, { roles, deciders }) -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
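A minimal sketch (the policy name, role, and decider configuration are illustrative assumptions; this API is intended for orchestration systems, not direct use):
await client.autoscaling.putAutoscalingPolicy({
  name: 'my_autoscaling_policy',
  policy: {
    roles: ['data_hot'],      // node roles the policy applies to
    deciders: { fixed: {} },  // assumed decider; consult the deciders docs
  },
});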
cat
editaliases
editGet aliases. Retrieves the cluster’s index aliases, including filter and routing information. The API does not return data stream aliases.
CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
client.cat.aliases({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): A list of aliases to retrieve. Supports wildcards (*
). To retrieve all aliases, omit this parameter or use*
or_all
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both.
-
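A minimal sketch (the alias pattern is illustrative):
const aliases = await client.cat.aliases({ name: 'my-alias-*' });
console.log(aliases); // tabular text intended for human consumption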
allocation
editProvides a snapshot of the number of shards allocated to each data node and their disk space. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
client.cat.allocation({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node identifiers or names used to limit the returned information. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
-
component_templates
editGet component templates. Returns information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
client.cat.componentTemplates({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): The name of the component template. Accepts wildcard expressions. If omitted, all component templates are returned.
-
count
editGet a document count. Provides quick access to a document count for a data stream, an index, or an entire cluster. The document count only includes live documents, not deleted documents which have not yet been removed by the merge process.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the count API.
client.cat.count({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
.
-
fielddata
editReturns the amount of heap memory currently used by the field data cache on every data node in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes stats API.
client.cat.fielddata({ ... })
Arguments
edit-
Request (object):
-
fields
(Optional, string | string[]): List of fields used to limit returned information. To retrieve all fields, omit this parameter. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
-
health
editReturns the health status of a cluster, similar to the cluster health API.
IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console.
They are not intended for use by applications. For application consumption, use the cluster health API.
This API is often used to check malfunctioning clusters.
To help you track cluster health alongside log files and alerting systems, the API returns timestamps in two formats:
HH:MM:SS
, which is human-readable but includes no date information;
Unix epoch time
, which is machine-sortable and includes date information.
The latter format is useful for cluster recoveries that take multiple days.
You can use the cat health API to verify cluster health across multiple nodes.
You also can use the API to track the recovery of a large cluster over a longer period of time.
client.cat.health({ ... })
Arguments
edit-
Request (object):
-
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values. -
ts
(Optional, boolean): If true, returns HH:MM:SS
and Unix epoch timestamps.
-
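A minimal sketch that requests both timestamp formats with second resolution:
const health = await client.cat.health({ ts: true, time: 's' });
console.log(health);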
help
editGet CAT help. Returns help for the CAT APIs.
client.cat.help()
indices
editGet index information. Returns high-level information about indices in a cluster, including backing indices for data streams.
Use this request to get the following information for each index in a cluster: - shard count - document count - deleted document count - primary store size - total store size of all shards, including shard replicas
These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.
CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
client.cat.indices({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): The type of index that wildcard patterns can match. -
health
(Optional, Enum("green" | "yellow" | "red")): The health status used to limit returned indices. By default, the response includes indices of any health status. -
include_unloaded_segments
(Optional, boolean): If true, the response includes information from segments that are not loaded into memory. -
pri
(Optional, boolean): If true, the response only includes information from primary shards. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
-
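A minimal sketch that narrows the listing to primary-shard metrics for red indices (the index pattern is illustrative):
const indices = await client.cat.indices({
  index: 'my-index-*', // hypothetical pattern
  health: 'red',       // only indices with red health
  pri: true,           // primary shards only
  bytes: 'mb',         // display sizes in megabytes
});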
master
editReturns information about the master node, including the ID, bound IP address, and name. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.master()
ml_data_frame_analytics
editGet data frame analytics jobs. Returns configuration and usage information about data frame analytics jobs.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
client.cat.mlDataFrameAnalytics({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): The ID of the data frame analytics to fetch -
allow_no_match
(Optional, boolean): Whether to ignore if a wildcard expression matches no configs. (This includes the _all string or when no configs have been specified.) -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit in which to display byte values -
h
(Optional, Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version") | Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version")[]): List of column names to display. -
s
(Optional, Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version") | Enum("assignment_explanation" | "create_time" | "description" | "dest_index" | "failure_reason" | "id" | "model_memory_limit" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "progress" | "source_index" | "state" | "type" | "version")[]): List of column names or column aliases used to sort the response. -
time
(Optional, string | -1 | 0): Unit used to display time values.
-
ml_datafeeds
editGet datafeeds.
Returns configuration and usage information about datafeeds.
This API returns a maximum of 10,000 datafeeds.
If the Elasticsearch security features are enabled, you must have monitor_ml
, monitor
, manage_ml
, or manage
cluster privileges to use this API.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
client.cat.mlDatafeeds({ ... })
Arguments
edit-
Request (object):
-
datafeed_id
(Optional, string): A numerical character string that uniquely identifies the datafeed. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no datafeeds that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches. -
h
(Optional, Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s") | Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s")[]): List of column names to display. -
s
(Optional, Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s") | Enum("ae" | "bc" | "id" | "na" | "ne" | "ni" | "nn" | "sba" | "sc" | "seah" | "st" | "s")[]): List of column names or column aliases used to sort the response. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
ml_jobs
editGet anomaly detection jobs.
Returns configuration and usage information for anomaly detection jobs.
This API returns a maximum of 10,000 jobs.
If the Elasticsearch security features are enabled, you must have monitor_ml
,
monitor
, manage_ml
, or manage
cluster privileges to use this API.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
client.cat.mlJobs({ ... })
Arguments
edit-
Request (object):
-
job_id
(Optional, string): Identifier for the anomaly detection job. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no jobs that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values. -
h
(Optional, Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state") | Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state")[]): List of column names to display.
s
(Optional, Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state") | Enum("assignment_explanation" | "buckets.count" | "buckets.time.exp_avg" | "buckets.time.exp_avg_hour" | "buckets.time.max" | "buckets.time.min" | "buckets.time.total" | "data.buckets" | "data.earliest_record" | "data.empty_buckets" | "data.input_bytes" | "data.input_fields" | "data.input_records" | "data.invalid_dates" | "data.last" | "data.last_empty_bucket" | "data.last_sparse_bucket" | "data.latest_record" | "data.missing_fields" | "data.out_of_order_timestamps" | "data.processed_fields" | "data.processed_records" | "data.sparse_buckets" | "forecasts.memory.avg" | "forecasts.memory.max" | "forecasts.memory.min" | "forecasts.memory.total" | "forecasts.records.avg" | "forecasts.records.max" | "forecasts.records.min" | "forecasts.records.total" | "forecasts.time.avg" | "forecasts.time.max" | "forecasts.time.min" | "forecasts.time.total" | "forecasts.total" | "id" | "model.bucket_allocation_failures" | "model.by_fields" | "model.bytes" | "model.bytes_exceeded" | "model.categorization_status" | "model.categorized_doc_count" | "model.dead_category_count" | "model.failed_category_count" | "model.frequent_category_count" | "model.log_time" | "model.memory_limit" | "model.memory_status" | "model.over_fields" | "model.partition_fields" | "model.rare_category_count" | "model.timestamp" | "model.total_category_count" | "node.address" | "node.ephemeral_id" | "node.id" | "node.name" | "opened_time" | "state")[]): List of column names or column aliases used to sort the response.
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
ml_trained_models
editGet trained models. Returns configuration and usage information about inference trained models.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.
client.cat.mlTrainedModels({ ... })
Arguments
edit-
Request (object):
-
model_id
(Optional, string): A unique identifier for the trained model. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the_all
string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. Iftrue
, the API returns an empty array when there are no matches and the subset of results when there are partial matches. Iffalse
, the API returns a 404 status code when there are no matches or only partial matches. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values. -
h
(Optional, Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version") | Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version")[]): A list of column names to display. -
s
(Optional, Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version") | Enum("create_time" | "created_by" | "data_frame_analytics_id" | "description" | "heap_size" | "id" | "ingest.count" | "ingest.current" | "ingest.failed" | "ingest.pipelines" | "ingest.time" | "license" | "operations" | "version")[]): A list of column names or aliases used to sort the response. -
from
(Optional, number): Skips the specified number of trained models. -
size
(Optional, number): The maximum number of trained models to display.
-
nodeattrs
editReturns information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.nodeattrs()
nodes
editReturns information about the nodes in a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.nodes({ ... })
Arguments
edit-
Request (object):
-
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values. -
full_id
(Optional, boolean | string): Iftrue
, return the full node ID. Iffalse
, return the shortened node ID. -
include_unloaded_segments
(Optional, boolean): If true, the response includes information from segments that are not loaded into memory.
-
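A minimal sketch listing nodes with full IDs and gigabyte units:
const nodes = await client.cat.nodes({ full_id: true, bytes: 'gb' });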
pending_tasks
editReturns cluster-level changes that have not yet been executed. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.
client.cat.pendingTasks()
plugins
editReturns a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.plugins()
recovery
editReturns information about ongoing and completed shard recoveries. Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or syncing a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing. For data streams, the API returns information about the stream’s backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index recovery API.
client.cat.recovery({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
active_only
(Optional, boolean): Iftrue
, the response only includes ongoing shard recoveries. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values. -
detailed
(Optional, boolean): Iftrue
, the response includes detailed information about shard recoveries.
-
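A minimal sketch limited to ongoing recoveries (the index name is illustrative):
const recovery = await client.cat.recovery({
  index: 'my-index', // hypothetical index
  active_only: true, // ongoing shard recoveries only
  detailed: true,    // include per-recovery detail
});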
repositories
editReturns the snapshot repositories for a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot repository API.
client.cat.repositories()
segments
editReturns low-level information about the Lucene segments in index shards. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the index segments API.
client.cat.segments({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
-
shards
editReturns information about the shards in a cluster. For data streams, the API returns information about the backing indices. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications.
client.cat.shards({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
bytes
(Optional, Enum("b" | "kb" | "mb" | "gb" | "tb" | "pb")): The unit used to display byte values.
-
snapshots
editReturns information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.
client.cat.snapshots({ ... })
Arguments
edit-
Request (object):
-
repository
(Optional, string | string[]): A list of snapshot repositories used to limit the request. Accepts wildcard expressions._all
returns all repositories. If any repository fails during the request, Elasticsearch returns an error. -
ignore_unavailable
(Optional, boolean): Iftrue
, the response does not include information from unavailable snapshots.
-
tasks
editReturns information about tasks currently executing in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.
client.cat.tasks({ ... })
Arguments
edit-
Request (object):
-
actions
(Optional, string[]): The task action names, which are used to limit the response. -
detailed
(Optional, boolean): Iftrue
, the response includes detailed information about the running tasks. -
node_id
(Optional, string[]): Unique node identifiers, which are used to limit the response. -
parent_task_id
(Optional, string): The parent task identifier, which is used to limit the response.
-
templates
editReturns information about index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
client.cat.templates({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.
-
thread_pool
editReturns thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
client.cat.threadPool({ ... })
Arguments
edit-
Request (object):
-
thread_pool_patterns
(Optional, string | string[]): A list of thread pool names used to limit the request. Accepts wildcard expressions. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values.
-
transforms
editGet transforms. Returns configuration and usage information about transforms.
CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.
client.cat.transforms({ ... })
Arguments
edit-
Request (object):
-
transform_id
(Optional, string): A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the_all
string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. Iftrue
, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. Iffalse
, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of transforms. -
h
(Optional, Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version") | Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version")[]): List of column names to display. -
s
(Optional, Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version") | Enum("changes_last_detection_time" | "checkpoint" | "checkpoint_duration_time_exp_avg" | "checkpoint_progress" | "create_time" | "delete_time" | "description" | "dest_index" | "documents_deleted" | "documents_indexed" | "docs_per_second" | "documents_processed" | "frequency" | "id" | "index_failure" | "index_time" | "index_total" | "indexed_documents_exp_avg" | "last_search_time" | "max_page_search_size" | "pages_processed" | "pipeline" | "processed_documents_exp_avg" | "processing_time" | "reason" | "search_failure" | "search_time" | "search_total" | "source_index" | "state" | "transform_type" | "trigger_count" | "version")[]): List of column names or column aliases used to sort the response. -
time
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The unit used to display time values. -
size
(Optional, number): The maximum number of transforms to obtain.
-
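A minimal sketch (the transform pattern is illustrative):
const transforms = await client.cat.transforms({
  transform_id: 'ecommerce-*', // hypothetical wildcard expression
  allow_no_match: true,        // empty result instead of 404 on no match
  size: 100,
});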
ccr
editdelete_auto_follow_pattern
editDeletes auto-follow patterns.
client.ccr.deleteAutoFollowPattern({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the auto follow pattern.
-
follow
editCreates a new follower index configured to follow the referenced leader index.
client.ccr.follow({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the follower index -
leader_index
(Optional, string) -
max_outstanding_read_requests
(Optional, number) -
max_outstanding_write_requests
(Optional, number) -
max_read_request_operation_count
(Optional, number) -
max_read_request_size
(Optional, string) -
max_retry_delay
(Optional, string | -1 | 0) -
max_write_buffer_count
(Optional, number) -
max_write_buffer_size
(Optional, string) -
max_write_request_operation_count
(Optional, number) -
max_write_request_size
(Optional, string) -
read_poll_timeout
(Optional, string | -1 | 0) -
remote_cluster
(Optional, string) -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): Sets the number of shard copies that must be active before returning. Defaults to 0. Set toall
for all shard copies, otherwise set to any non-negative value less than or equal to the total number of copies for the shard (number of replicas + 1)
-
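A minimal sketch of creating a follower index (the cluster alias and index names are illustrative; the remote cluster connection must already be configured):
await client.ccr.follow({
  index: 'follower-index',            // hypothetical follower name
  remote_cluster: 'remote_cluster_a', // hypothetical remote cluster alias
  leader_index: 'leader-index',       // hypothetical leader on the remote cluster
  wait_for_active_shards: 1,
});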
follow_info
editRetrieves information about all follower indices, including parameters and status for each follower index.
client.ccr.followInfo({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): A list of index patterns; use_all
to perform the operation on all indices
-
follow_stats
editRetrieves follower stats. Returns shard-level stats about the following tasks associated with each shard for the specified indices.
client.ccr.followStats({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): A list of index patterns; use_all
to perform the operation on all indices
-
forget_follower
editRemoves the follower retention leases from the leader.
client.ccr.forgetFollower({ index })
Arguments
edit-
Request (object):
-
index
(string): the name of the leader index for which specified follower retention leases should be removed -
follower_cluster
(Optional, string) -
follower_index
(Optional, string) -
follower_index_uuid
(Optional, string) -
leader_remote_cluster
(Optional, string)
-
get_auto_follow_pattern
editGets configured auto-follow patterns. Returns the specified auto-follow pattern collection.
client.ccr.getAutoFollowPattern({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): Specifies the auto-follow pattern collection that you want to retrieve. If you do not specify a name, the API returns information for all collections.
-
pause_auto_follow_pattern
editPauses an auto-follow pattern.
client.ccr.pauseAutoFollowPattern({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the auto follow pattern that should pause discovering new indices to follow.
-
pause_follow
editPauses a follower index. The follower index will not fetch any additional operations from the leader index.
client.ccr.pauseFollow({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the follower index that should pause following its leader index.
-
put_auto_follow_pattern
editCreates a new named collection of auto-follow patterns against a specified remote cluster. Newly created indices on the remote cluster matching any of the specified patterns will be automatically configured as follower indices.
client.ccr.putAutoFollowPattern({ name, remote_cluster })
Arguments
edit-
Request (object):
-
name
(string): The name of the collection of auto-follow patterns. -
remote_cluster
(string): The remote cluster containing the leader indices to match against. -
follow_index_pattern
(Optional, string): The name of follower index. The template {{leader_index}} can be used to derive the name of the follower index from the name of the leader index. When following a data stream, use {{leader_index}}; CCR does not support changes to the names of a follower data stream’s backing indices. -
leader_index_patterns
(Optional, string[]): An array of simple index patterns to match against indices in the remote cluster specified by the remote_cluster field. -
leader_index_exclusion_patterns
(Optional, string[]): An array of simple index patterns that can be used to exclude indices from being auto-followed. Indices in the remote cluster whose names are matching one or more leader_index_patterns and one or more leader_index_exclusion_patterns won’t be followed. -
max_outstanding_read_requests
(Optional, number): The maximum number of outstanding read requests from the remote cluster. -
settings
(Optional, Record<string, User-defined value>): Settings to override from the leader index. Note that certain settings can not be overrode (e.g., index.number_of_shards). -
max_outstanding_write_requests
(Optional, number): The maximum number of outstanding write requests on the follower. -
read_poll_timeout
(Optional, string | -1 | 0): The maximum time to wait for new operations on the remote cluster when the follower index is synchronized with the leader index. When the timeout has elapsed, the poll for operations will return to the follower so that it can update some statistics. Then the follower will immediately attempt to read from the leader again. -
max_read_request_operation_count
(Optional, number): The maximum number of operations to pull per read from the remote cluster. -
max_read_request_size
(Optional, number | string): The maximum size in bytes of a batch of operations pulled per read from the remote cluster. -
max_retry_delay
(Optional, string | -1 | 0): The maximum time to wait before retrying an operation that failed exceptionally. An exponential backoff strategy is employed when retrying. -
max_write_buffer_count
(Optional, number): The maximum number of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the number of queued operations goes below the limit. -
max_write_buffer_size
(Optional, number | string): The maximum total bytes of operations that can be queued for writing. When this limit is reached, reads from the remote cluster will be deferred until the total bytes of queued operations goes below the limit. -
max_write_request_operation_count
(Optional, number): The maximum number of operations per bulk write request executed on the follower. -
max_write_request_size
(Optional, number | string): The maximum total bytes of operations per bulk write request executed on the follower.
-
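A minimal sketch (the pattern names and index patterns are illustrative):
await client.ccr.putAutoFollowPattern({
  name: 'my-pattern',
  remote_cluster: 'remote_cluster_a',                    // hypothetical remote cluster alias
  leader_index_patterns: ['logs-*'],                     // follow new matching leader indices
  leader_index_exclusion_patterns: ['logs-internal-*'],  // skip these
  follow_index_pattern: '{{leader_index}}-copy',         // when following a data stream, use {{leader_index}}
});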
resume_auto_follow_pattern
editResumes an auto-follow pattern that has been paused.
client.ccr.resumeAutoFollowPattern({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the auto follow pattern to resume discovering new indices to follow.
-
resume_follow
editResumes a follower index that has been paused.
client.ccr.resumeFollow({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the follow index to resume following. -
max_outstanding_read_requests
(Optional, number) -
max_outstanding_write_requests
(Optional, number) -
max_read_request_operation_count
(Optional, number) -
max_read_request_size
(Optional, string) -
max_retry_delay
(Optional, string | -1 | 0) -
max_write_buffer_count
(Optional, number) -
max_write_buffer_size
(Optional, string) -
max_write_request_operation_count
(Optional, number) -
max_write_request_size
(Optional, string) -
read_poll_timeout
(Optional, string | -1 | 0)
-
stats
editGets all stats related to cross-cluster replication.
client.ccr.stats()
unfollow
editStops the following task associated with a follower index and removes index metadata and settings associated with cross-cluster replication.
client.ccr.unfollow({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the follower index that should be turned into a regular index.
-
cluster
editallocation_explain
editExplain the shard allocations. Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
client.cluster.allocationExplain({ ... })
Arguments
edit-
Request (object):
-
current_node
(Optional, string): Specifies the node ID or the name of the node to only explain a shard that is currently located on the specified node. -
index
(Optional, string): Specifies the name of the index that you would like an explanation for. -
primary
(Optional, boolean): If true, returns explanation for the primary shard for the given shard ID. -
shard
(Optional, number): Specifies the ID of the shard that you would like an explanation for. -
include_disk_info
(Optional, boolean): If true, returns information about disk usage and shard sizes. -
include_yes_decisions
(Optional, boolean): If true, returns YES decisions in explanation.
-
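A minimal sketch asking why shard 0 of a given index is unassigned (the index name is illustrative):
const explanation = await client.cluster.allocationExplain({
  index: 'my-index',       // hypothetical index
  shard: 0,
  primary: true,           // explain the primary copy
  include_disk_info: true, // add disk usage and shard sizes
});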
delete_component_template
editDelete component templates. Deletes component templates. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
client.cluster.deleteComponentTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List or wildcard expression of component template names used to limit the request. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_voting_config_exclusions
editClear cluster voting config exclusions. Remove master-eligible nodes from the voting configuration exclusion list.
client.cluster.deleteVotingConfigExclusions({ ... })
Arguments
edit-
Request (object):
-
wait_for_removal
(Optional, boolean): Specifies whether to wait for all excluded nodes to be removed from the cluster before clearing the voting configuration exclusions list. Defaults to true, meaning that all excluded nodes must be removed from the cluster before this API takes any action. If set to false then the voting configuration exclusions list is cleared even if some excluded nodes are still in the cluster.
-
exists_component_template
editCheck component templates. Returns information about whether a particular component template exists.
client.cluster.existsComponentTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of component template names used to limit the request. Wildcard (*) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node.
-
get_component_template
editGet component templates. Retrieves information about component templates.
client.cluster.getComponentTemplate({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): List of component template names used to limit the request. Wildcard (*
) expressions are supported. -
flat_settings
(Optional, boolean): Iftrue
, returns settings in flat format. -
include_defaults
(Optional, boolean): Return all default configurations for the component template (default: false) -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only. Iffalse
, information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_settings
editGet cluster-wide settings. By default, it returns only settings that have been explicitly defined.
client.cluster.getSettings({ ... })
Arguments
edit-
Request (object):
-
flat_settings
(Optional, boolean): Iftrue
, returns settings in flat format. -
include_defaults
(Optional, boolean): Iftrue
, returns default cluster settings from the local node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
health
editGet the cluster health status. You can also use the API to get the health status of only specified data streams and indices. For data streams, the API retrieves the health status of the stream’s backing indices.
The cluster health status is: green, yellow or red. On the shard level, a red status indicates that the specific shard is not allocated in the cluster. Yellow means that the primary shard is allocated but replicas are not. Green means that all shards are allocated. The index level status is controlled by the worst shard status.
One of the main benefits of the API is the ability to wait until the cluster reaches a certain high watermark health level. The cluster status is controlled by the worst index status.
client.cluster.health({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*
) are supported. To target all data streams and indices in a cluster, omit this parameter or use _all or*
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
level
(Optional, Enum("cluster" | "indices" | "shards")): Can be one of cluster, indices or shards. Controls the details level of the health information returned. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): A number controlling to how many active shards to wait for, all to wait for all shards in the cluster to be active, or 0 to not wait. -
wait_for_events
(Optional, Enum("immediate" | "urgent" | "high" | "normal" | "low" | "languid")): Can be one of immediate, urgent, high, normal, low, languid. Wait until all currently queued events with the given priority are processed. -
wait_for_nodes
(Optional, string | number): The request waits until the specified number N of nodes is available. It also accepts >=N, <=N, >N and <N. Alternatively, it is possible to use ge(N), le(N), gt(N) and lt(N) notation. -
wait_for_no_initializing_shards
(Optional, boolean): A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard initializations. Defaults to false, which means it will not wait for initializing shards. -
wait_for_no_relocating_shards
(Optional, boolean): A boolean value which controls whether to wait (until the timeout provided) for the cluster to have no shard relocations. Defaults to false, which means it will not wait for relocating shards. -
wait_for_status
(Optional, Enum("green" | "yellow" | "red")): One of green, yellow or red. Will wait (until the timeout provided) until the status of the cluster changes to the one provided or better, i.e. green > yellow > red. By default, will not wait for any status.
-
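A minimal sketch that blocks until the cluster is at least yellow, with per-index detail:
const health = await client.cluster.health({
  wait_for_status: 'yellow', // waits for this status or better (green > yellow > red)
  timeout: '30s',
  level: 'indices',
});
console.log(health.status);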
info
editGet cluster info. Returns basic information about the cluster.
client.cluster.info({ target })
Arguments
edit-
Request (object):
-
target
(Enum("_all" | "http" | "ingest" | "thread_pool" | "script") | Enum("_all" | "http" | "ingest" | "thread_pool" | "script")[]): Limits the information returned to the specific target. Supports a list, such as http,ingest.
-
pending_tasks
editGet the pending cluster tasks. Get information about cluster-level changes (such as create index, update mapping, allocate or fail shard) that have not yet taken effect.
This API returns a list of any pending updates to the cluster state. These are distinct from the tasks reported by the task management API, which include periodic tasks and tasks initiated by the user, such as node stats, search queries, or create index requests. However, if a user-initiated task such as a create index command causes a cluster state update, the activity of this task might be reported by both the task management API and the pending cluster tasks API.
client.cluster.pendingTasks({ ... })
Arguments
edit-
Request (object):
-
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only. Iffalse
, information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
post_voting_config_exclusions
editUpdate voting configuration exclusions. Update the cluster voting config exclusions by node IDs or node names. By default, if there are more than three master-eligible nodes in the cluster and you remove fewer than half of the master-eligible nodes in the cluster at once, the voting configuration automatically shrinks. If you want to shrink the voting configuration to contain fewer than three nodes or to remove half or more of the master-eligible nodes in the cluster at once, use this API to remove departing nodes from the voting configuration manually. The API adds an entry for each specified node to the cluster’s voting configuration exclusions list. It then waits until the cluster has reconfigured its voting configuration to exclude the specified nodes.
Clusters should have no voting configuration exclusions in normal operation.
Once the excluded nodes have stopped, clear the voting configuration exclusions with DELETE /_cluster/voting_config_exclusions
.
This API waits for the nodes to be fully removed from the cluster before it returns.
If your cluster has voting configuration exclusions for nodes that you no longer intend to remove, use DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
to clear the voting configuration exclusions without waiting for the nodes to leave the cluster.
A response to POST /_cluster/voting_config_exclusions
with an HTTP status code of 200 OK guarantees that the node has been removed from the voting configuration and will not be reinstated until the voting configuration exclusions are cleared by calling DELETE /_cluster/voting_config_exclusions
.
If the call to POST /_cluster/voting_config_exclusions
fails or returns a response with an HTTP status code other than 200 OK then the node may not have been removed from the voting configuration.
In that case, you may safely retry the call.
Voting exclusions are required only when you remove at least half of the master-eligible nodes from a cluster in a short time period. They are not required when removing master-ineligible nodes or when removing fewer than half of the master-eligible nodes.
client.cluster.postVotingConfigExclusions({ ... })
Arguments
edit-
Request (object):
-
node_names
(Optional, string | string[]): A list of the names of the nodes to exclude from the voting configuration. If specified, you may not also specify node_ids. -
node_ids
(Optional, string | string[]): A list of the persistent ids of the nodes to exclude from the voting configuration. If specified, you may not also specify node_names. -
timeout
(Optional, string | -1 | 0): When adding a voting configuration exclusion, the API waits for the specified nodes to be excluded from the voting configuration before returning. If the timeout expires before the appropriate condition is satisfied, the request fails and returns an error.
-
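A minimal sketch of excluding departing master-eligible nodes and clearing the list once they have stopped (the node names are illustrative):
await client.cluster.postVotingConfigExclusions({ node_names: ['node-1', 'node-2'] });
// ...decommission the nodes, then clear the exclusions:
await client.cluster.deleteVotingConfigExclusions();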
put_component_template
editCreate or update a component template. Creates or updates a component template. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases.
An index template can be composed of multiple component templates.
To use a component template, specify it in an index template’s composed_of
list.
Component templates are only applied to new data streams and indices as part of a matching index template.
Settings and mappings specified directly in the index template or the create index request override any settings or mappings specified in a component template.
Component templates are only used during index creation. For data streams, this includes data stream creation and the creation of a stream’s backing indices. Changes to component templates do not affect existing indices, including a stream’s backing indices.
You can use C-style /* */ block comments in component templates.
You can include comments anywhere in the request body except before the opening curly bracket.
client.cluster.putComponentTemplate({ name, template })
Arguments
edit-
Request (object):
-
name
(string): Name of the component template to create. Elasticsearch includes the following built-in component templates:logs-mappings
;logs-settings
;metrics-mappings
;metrics-settings
;synthetics-mapping
;synthetics-settings
. Elastic Agent uses these templates to configure backing indices for its data streams. If you use Elastic Agent and want to overwrite one of these templates, set theversion
for your replacement template higher than the current version. If you don’t use Elastic Agent and want to disable all built-in component and index templates, setstack.templates.enabled
tofalse
using the cluster update settings API. -
template
({ aliases, mappings, settings, defaults, data_stream, lifecycle }): The template to be applied which includes mappings, settings, or aliases configuration. -
version
(Optional, number): Version number used to manage component templates externally. This number isn’t automatically generated or incremented by Elasticsearch. To unset a version, replace the template without specifying a version. -
_meta
(Optional, Record<string, User-defined value>): Optional user metadata about the component template. May have any contents. This map is not automatically generated by Elasticsearch. This information is stored in the cluster state, so keeping it short is preferable. To unset_meta
, replace the template without specifying this information. -
deprecated
(Optional, boolean): Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning. -
create
(Optional, boolean): Iftrue
, this request cannot replace or update existing component templates. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
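For example, a minimal sketch that creates a component template holding shared mappings (the template name and field names are hypothetical):

```js
await client.cluster.putComponentTemplate({
  name: 'my-mappings', // hypothetical component template name
  template: {
    mappings: {
      properties: {
        '@timestamp': { type: 'date' },
        message: { type: 'text' },
      },
    },
  },
  _meta: { description: 'Shared mappings for demo indices' },
});
```

An index template can then reference it in its composed_of list.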
put_settings
editUpdate the cluster settings.
Configure and update dynamic settings on a running cluster.
You can also configure dynamic settings locally on an unstarted or shut down node in elasticsearch.yml
.
Updates made with this API can be persistent, which apply across cluster restarts, or transient, which reset after a cluster restart. You can also reset transient or persistent settings by assigning them a null value.
If you configure the same setting using multiple methods, Elasticsearch applies the settings in following order of precedence: 1) Transient setting; 2) Persistent setting; 3) elasticsearch.yml
setting; 4) Default setting value.
For example, you can apply a transient setting to override a persistent setting or elasticsearch.yml
setting.
However, a change to an elasticsearch.yml
setting will not override a defined transient or persistent setting.
In Elastic Cloud, use the user settings feature to configure all cluster settings. This method automatically rejects unsafe settings that could break your cluster.
If you run Elasticsearch on your own hardware, use this API to configure dynamic cluster settings.
Only use elasticsearch.yml
for static cluster settings and node settings.
The API doesn’t require a restart and ensures a setting’s value is the same on all nodes.
Transient cluster settings are no longer recommended. Use persistent cluster settings instead. If a cluster becomes unstable, transient settings can clear unexpectedly, resulting in a potentially undesired cluster configuration.
client.cluster.putSettings({ ... })
Arguments
edit-
Request (object):
-
persistent
(Optional, Record<string, User-defined value>) -
transient
(Optional, Record<string, User-defined value>) -
flat_settings
(Optional, boolean): Return settings in flat format (default: false) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
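A minimal sketch of updating a persistent setting and later resetting it with a null value (the setting shown is the dynamic recovery throttle):

```js
// Persist a recovery throttle across cluster restarts.
await client.cluster.putSettings({
  persistent: { 'indices.recovery.max_bytes_per_sec': '50mb' },
});

// Reset the setting to its default by assigning null.
await client.cluster.putSettings({
  persistent: { 'indices.recovery.max_bytes_per_sec': null },
});
```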
remote_info
editGet remote cluster information. Get all of the configured remote cluster information. This API returns connection and endpoint information keyed by the configured remote cluster alias.
client.cluster.remoteInfo()
reroute
editReroute the cluster. Manually change the allocation of individual shards in the cluster. For example, a shard can be moved from one node to another explicitly, an allocation can be canceled, and an unassigned shard can be explicitly allocated to a specific node.
It is important to note that after processing any reroute commands Elasticsearch will perform rebalancing as normal (respecting the values of settings such as cluster.routing.rebalance.enable
) in order to remain in a balanced state.
For example, if the requested allocation includes moving a shard from node1 to node2 then this may cause a shard to be moved from node2 back to node1 to even things out.
The cluster can be set to disable allocations using the cluster.routing.allocation.enable
setting.
If allocations are disabled then the only allocations that will be performed are explicit ones given using the reroute command, and consequent allocations due to rebalancing.
The cluster will attempt to allocate a shard a maximum of index.allocation.max_retries
times in a row (defaults to 5
), before giving up and leaving the shard unallocated.
This scenario can be caused by structural problems such as having an analyzer which refers to a stopwords file which doesn’t exist on all nodes.
Once the problem has been corrected, allocation can be manually retried by calling the reroute API with the ?retry_failed
URI query parameter, which will attempt a single retry round for these shards.
client.cluster.reroute({ ... })
Arguments
edit-
Request (object):
-
commands
(Optional, { cancel, move, allocate_replica, allocate_stale_primary, allocate_empty_primary }[]): Defines the commands to perform. -
dry_run
(Optional, boolean): If true, then the request simulates the operation. It will calculate the result of applying the commands to the current cluster state and return the resulting cluster state after the commands (and rebalancing) have been applied; it will not actually perform the requested changes. -
explain
(Optional, boolean): If true, then the response contains an explanation of why the commands can or cannot run. -
metric
(Optional, string | string[]): Limits the information returned to the specified metrics. -
retry_failed
(Optional, boolean): If true, then retries allocation of shards that are blocked due to too many subsequent allocation failures. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
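A sketch that simulates moving a shard (index and node names are hypothetical); with dry_run, the response contains the computed cluster state but nothing is changed:

```js
const result = await client.cluster.reroute({
  dry_run: true,
  commands: [
    {
      move: {
        index: 'my-index',   // hypothetical index
        shard: 0,
        from_node: 'node-1', // hypothetical source node
        to_node: 'node-2',   // hypothetical target node
      },
    },
  ],
});
```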
state
editGet the cluster state. Get comprehensive information about the state of the cluster.
The cluster state is an internal data structure which keeps track of a variety of information needed by every node, including the identity and attributes of the other nodes in the cluster; cluster-wide settings; index metadata, including the mapping and settings for each index; the location and status of every shard copy in the cluster.
The elected master node ensures that every node in the cluster has a copy of the same cluster state. This API lets you retrieve a representation of this internal state for debugging or diagnostic purposes. You may need to consult the Elasticsearch source code to determine the precise meaning of the response.
By default the API will route requests to the elected master node since this node is the authoritative source of cluster states.
You can also retrieve the cluster state held on the node handling the API request by adding the ?local=true
query parameter.
Elasticsearch may need to expend significant effort to compute a response to this API in larger clusters, and the response may comprise a very large quantity of data. If you use this API repeatedly, your cluster may become unstable.
The response is a representation of an internal data structure. Its format is not subject to the same compatibility guarantees as other more stable APIs and may change from version to version. Do not query this API using external monitoring tools. Instead, obtain the information you require using other more stable cluster APIs.
client.cluster.state({ ... })
Arguments
edit-
Request (object):
-
metric
(Optional, string | string[]): Limit the information returned to the specified metrics -
index
(Optional, string | string[]): A list of index names; use_all
or empty string to perform the operation on all indices -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes_all
string or when no indices have been specified) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
flat_settings
(Optional, boolean): Return settings in flat format (default: false) -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed) -
local
(Optional, boolean): Return local information, do not retrieve the state from master node (default: false) -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
wait_for_metadata_version
(Optional, number): Wait for the metadata version to be equal or greater than the specified metadata version -
wait_for_timeout
(Optional, string | -1 | 0): The maximum time to wait for wait_for_metadata_version before timing out
-
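Given the size warnings above, a sketch that restricts the response to a couple of metrics for a single (hypothetical) index:

```js
const state = await client.cluster.state({
  metric: ['metadata', 'routing_table'],
  index: 'my-index', // hypothetical index
});
```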
stats
editGet cluster statistics. Get basic index metrics (shard numbers, store size, memory usage) and information about the current nodes that form the cluster (number, roles, OS, JVM versions, memory usage, CPU, and installed plugins).
client.cluster.stats({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node filters used to limit returned information. Defaults to all nodes in the cluster. -
include_remotes
(Optional, boolean): Include remote cluster data into the response -
timeout
(Optional, string | -1 | 0): Period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its stats. However, timed out nodes are included in the response’s_nodes.failed
property. Defaults to no timeout.
-
connector
editcheck_in
editCheck in a connector.
Update the last_seen
field in the connector and set it to the current timestamp.
client.connector.checkIn({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be checked in
-
delete
editDelete a connector.
Removes a connector and associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.
client.connector.delete({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be deleted -
delete_sync_jobs
(Optional, boolean): A flag indicating if associated sync jobs should also be removed. Defaults to false.
-
get
editGet a connector.
Get the details about a connector.
client.connector.get({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector
-
list
editGet all connectors.
Get information about all connectors.
client.connector.list({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): Starting offset (default: 0) -
size
(Optional, number): Specifies a max number of results to get -
index_name
(Optional, string | string[]): A list of connector index names to fetch connector documents for -
connector_name
(Optional, string | string[]): A list of connector names to fetch connector documents for -
service_type
(Optional, string | string[]): A list of connector service types to fetch connector documents for -
query
(Optional, string): A wildcard query string that filters connectors with matching name, description or index name
-
post
editCreate a connector.
Connectors are Elasticsearch integrations that bring content from third-party data sources, which can be deployed on Elastic Cloud or hosted on your own infrastructure. Elastic managed connectors (Native connectors) are a managed service on Elastic Cloud. Self-managed connectors (Connector clients) are self-managed on your infrastructure.
client.connector.post({ ... })
Arguments
edit-
Request (object):
-
description
(Optional, string) -
index_name
(Optional, string) -
is_native
(Optional, boolean) -
language
(Optional, string) -
name
(Optional, string) -
service_type
(Optional, string)
-
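A minimal sketch of creating a self-managed connector; the index name, display name, and service type are illustrative:

```js
const created = await client.connector.post({
  index_name: 'search-my-content', // hypothetical index for ingested documents
  name: 'My content connector',
  service_type: 'google_drive', // illustrative service type
  is_native: false,
});
console.log(created.id); // the auto-generated connector ID
```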
put
editCreate or update a connector.
client.connector.put({ ... })
Arguments
edit-
Request (object):
-
connector_id
(Optional, string): The unique identifier of the connector to be created or updated. ID is auto-generated if not provided. -
description
(Optional, string) -
index_name
(Optional, string) -
is_native
(Optional, boolean) -
language
(Optional, string) -
name
(Optional, string) -
service_type
(Optional, string)
-
sync_job_cancel
editCancel a connector sync job.
Cancel a connector sync job, which sets the status to cancelling and updates cancellation_requested_at
to the current time.
The connector service is then responsible for setting the status of connector sync jobs to cancelled.
client.connector.syncJobCancel({ connector_sync_job_id })
Arguments
edit-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job
-
sync_job_check_in
editCheck in a connector sync job (refreshes last_seen).
client.connector.syncJobCheckIn()
sync_job_claim
editClaim a connector sync job.
client.connector.syncJobClaim()
sync_job_delete
editDelete a connector sync job.
Remove a connector sync job and its associated data. This is a destructive action that is not recoverable.
client.connector.syncJobDelete({ connector_sync_job_id })
Arguments
edit-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job to be deleted
-
sync_job_error
editSet an error for a connector sync job.
client.connector.syncJobError()
sync_job_get
editGet a connector sync job.
client.connector.syncJobGet({ connector_sync_job_id })
Arguments
edit-
Request (object):
-
connector_sync_job_id
(string): The unique identifier of the connector sync job
-
sync_job_list
editGet all connector sync jobs.
Get information about all stored connector sync jobs listed by their creation date in ascending order.
client.connector.syncJobList({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): Starting offset (default: 0) -
size
(Optional, number): Specifies a max number of results to get -
status
(Optional, Enum("canceling" | "canceled" | "completed" | "error" | "in_progress" | "pending" | "suspended")): A sync job status to fetch connector sync jobs for -
connector_id
(Optional, string): A connector id to fetch connector sync jobs for -
job_type
(Optional, Enum("full" | "incremental" | "access_control") | Enum("full" | "incremental" | "access_control")[]): A list of job types to fetch the sync jobs for
-
sync_job_post
editCreate a connector sync job.
Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.
client.connector.syncJobPost({ id })
Arguments
edit-
Request (object):
-
id
(string): The id of the associated connector -
job_type
(Optional, Enum("full" | "incremental" | "access_control")) -
trigger_method
(Optional, Enum("on_demand" | "scheduled"))
-
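A minimal sketch that starts an on-demand full sync (the connector ID is hypothetical):

```js
const job = await client.connector.syncJobPost({
  id: 'my-connector-id', // hypothetical connector ID
  job_type: 'full',
  trigger_method: 'on_demand',
});
console.log(job.id); // ID of the newly created sync job document
```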
sync_job_update_stats
editUpdate the stats fields in the connector sync job document.
client.connector.syncJobUpdateStats()
update_active_filtering
editActivate the connector draft filter.
Activates the valid draft filtering for a connector.
client.connector.updateActiveFiltering({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated
-
update_api_key_id
editUpdate the connector API key ID.
Update the api_key_id
and api_key_secret_id
fields of a connector.
You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored.
The connector secret ID is required only for Elastic managed (native) connectors.
Self-managed connectors (connector clients) do not use this field.
client.connector.updateApiKeyId({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
api_key_id
(Optional, string) -
api_key_secret_id
(Optional, string)
-
update_configuration
editUpdate the connector configuration.
Update the configuration field in the connector document.
client.connector.updateConfiguration({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
configuration
(Optional, Record<string, { category, default_value, depends_on, display, label, options, order, placeholder, required, sensitive, tooltip, type, ui_restrictions, validations, value }>) -
values
(Optional, Record<string, User-defined value>)
-
update_error
editUpdate the connector error field.
Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.
client.connector.updateError({ connector_id, error })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
error
(T | null)
-
update_features
editUpdate the connector features in the connector document.
client.connector.updateFeatures()
update_filtering
editUpdate the connector filtering.
Update the draft filtering configuration of a connector and marks the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.
client.connector.updateFiltering({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
filtering
(Optional, { active, domain, draft }[]) -
rules
(Optional, { created_at, field, id, order, policy, rule, updated_at, value }[]) -
advanced_snippet
(Optional, { created_at, updated_at, value })
-
update_filtering_validation
editUpdate the connector draft filtering validation.
Update the draft filtering validation info for a connector.
client.connector.updateFilteringValidation({ connector_id, validation })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
validation
({ errors, state })
-
update_index_name
editUpdate the connector index name.
Update the index_name
field of a connector, specifying the index where the data ingested by the connector is stored.
client.connector.updateIndexName({ connector_id, index_name })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
index_name
(T | null)
-
update_name
editUpdate the connector name and description.
client.connector.updateName({ connector_id })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
name
(Optional, string) -
description
(Optional, string)
-
update_native
editUpdate the connector is_native flag.
client.connector.updateNative({ connector_id, is_native })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
is_native
(boolean)
-
update_pipeline
editUpdate the connector pipeline.
When you create a new connector, the configuration of an ingest pipeline is populated with default settings.
client.connector.updatePipeline({ connector_id, pipeline })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
pipeline
({ extract_binary_content, name, reduce_whitespace, run_ml_inference })
-
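A minimal sketch; the connector ID and pipeline name are illustrative:

```js
await client.connector.updatePipeline({
  connector_id: 'my-connector-id', // hypothetical connector ID
  pipeline: {
    name: 'search-default-ingestion', // illustrative pipeline name
    extract_binary_content: true,
    reduce_whitespace: true,
    run_ml_inference: false,
  },
});
```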
update_scheduling
editUpdate the connector scheduling.
client.connector.updateScheduling({ connector_id, scheduling })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
scheduling
({ access_control, full, incremental })
-
update_service_type
editUpdate the connector service type.
client.connector.updateServiceType({ connector_id, service_type })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
service_type
(string)
-
update_status
editUpdate the connector status.
client.connector.updateStatus({ connector_id, status })
Arguments
edit-
Request (object):
-
connector_id
(string): The unique identifier of the connector to be updated -
status
(Enum("created" | "needs_configuration" | "configured" | "connected" | "error"))
-
dangling_indices
editdelete_dangling_index
editDelete a dangling index.
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size
indices while an Elasticsearch node is offline.
client.danglingIndices.deleteDanglingIndex({ index_uuid, accept_data_loss })
Arguments
edit-
Request (object):
-
index_uuid
(string): The UUID of the index to delete. Use the get dangling indices API to find the UUID. -
accept_data_loss
(boolean): This parameter must be set to true to acknowledge that it will no longer be possible to recover data from the dangling index. -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
import_dangling_index
editImport a dangling index.
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size
indices while an Elasticsearch node is offline.
client.danglingIndices.importDanglingIndex({ index_uuid, accept_data_loss })
Arguments
edit-
Request (object):
-
index_uuid
(string): The UUID of the index to import. Use the get dangling indices API to locate the UUID. -
accept_data_loss
(boolean): This parameter must be set to true to import a dangling index. Because Elasticsearch cannot know where the dangling index data came from or determine which shard copies are fresh and which are stale, it cannot guarantee that the imported data represents the latest state of the index when it was last in the cluster. -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
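A minimal sketch; the UUID is hypothetical and would come from the list dangling indices API:

```js
await client.danglingIndices.importDanglingIndex({
  index_uuid: 'zmM4e0JtBkeUjiHD-MihPQ', // hypothetical UUID
  accept_data_loss: true, // required acknowledgement
});
```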
list_dangling_indices
editGet the dangling indices.
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size
indices while an Elasticsearch node is offline.
Use this API to list dangling indices, which you can then import or delete.
client.danglingIndices.listDanglingIndices()
enrich
editdelete_policy
editDelete an enrich policy. Deletes an existing enrich policy and its enrich index.
client.enrich.deletePolicy({ name })
Arguments
edit-
Request (object):
-
name
(string): Enrich policy to delete.
-
execute_policy
editRun an enrich policy. Create the enrich index for an existing enrich policy.
client.enrich.executePolicy({ name })
Arguments
edit-
Request (object):
-
name
(string): Enrich policy to execute. -
wait_for_completion
(Optional, boolean): Iftrue
, the request blocks other enrich policy execution requests until complete.
-
get_policy
editGet an enrich policy. Returns information about an enrich policy.
client.enrich.getPolicy({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): List of enrich policy names used to limit the request. To return information for all enrich policies, omit this parameter.
-
put_policy
editCreate an enrich policy. Creates an enrich policy.
client.enrich.putPolicy({ name })
Arguments
edit-
Request (object):
-
name
(string): Name of the enrich policy to create or update. -
geo_match
(Optional, { enrich_fields, indices, match_field, query, name, elasticsearch_version }): Matches enrich data to incoming documents based on ageo_shape
query. -
match
(Optional, { enrich_fields, indices, match_field, query, name, elasticsearch_version }): Matches enrich data to incoming documents based on aterm
query. -
range
(Optional, { enrich_fields, indices, match_field, query, name, elasticsearch_version }): Matches a number, date, or IP address in incoming documents to a range in the enrich index based on aterm
query.
-
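A sketch of a match policy (policy, index, and field names are hypothetical), followed by executing it to build the enrich index:

```js
await client.enrich.putPolicy({
  name: 'users-policy', // hypothetical policy name
  match: {
    indices: 'users', // hypothetical source index
    match_field: 'email',
    enrich_fields: ['first_name', 'last_name', 'city'],
  },
});

// Create the enrich index for the new policy.
await client.enrich.executePolicy({ name: 'users-policy' });
```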
stats
editGet enrich stats. Returns enrich coordinator statistics and information about enrich policies that are currently executing.
client.enrich.stats()
eql
editdelete
editDelete an async EQL search. Delete an async EQL search or a stored synchronous EQL search. The API also deletes results for the search.
client.eql.delete({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the search to delete. A search ID is provided in the EQL search API’s response for an async search. A search ID is also provided if the request’skeep_on_completion
parameter istrue
.
-
get
editGet async EQL search results. Get the current status and available results for an async EQL search or a stored synchronous EQL search.
client.eql.get({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the search. -
keep_alive
(Optional, string | -1 | 0): Period for which the search and its results are stored on the cluster. Defaults to the keep_alive value set by the search’s EQL search API request. -
wait_for_completion_timeout
(Optional, string | -1 | 0): Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.
-
get_status
editGet the async EQL status. Get the current status for an async EQL search or a stored synchronous EQL search without returning results.
client.eql.getStatus({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the search.
-
search
editGet EQL search results. Returns search results for an Event Query Language (EQL) query. EQL assumes each document in a data stream or index corresponds to an event.
client.eql.search({ index, query })
Arguments
edit-
Request (object):
-
index
(string | string[]): The name of the index to scope the operation -
query
(string): EQL query you wish to run. -
case_sensitive
(Optional, boolean) -
event_category_field
(Optional, string): Field containing the event classification, such as process, file, or network. -
tiebreaker_field
(Optional, string): Field used to sort hits with the same timestamp in ascending order -
timestamp_field
(Optional, string): Field containing event timestamp. Default "@timestamp" -
fetch_size
(Optional, number): Maximum number of events to search at a time for sequence queries. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type } | { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }[]): Query, written in Query DSL, used to filter the events on which the EQL query runs. -
keep_alive
(Optional, string | -1 | 0) -
keep_on_completion
(Optional, boolean) -
wait_for_completion_timeout
(Optional, string | -1 | 0) -
size
(Optional, number): For basic queries, the maximum number of matching events to return. Defaults to 10 -
fields
(Optional, { field, format, include_unmapped } | { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The response returns values for field names matching these patterns in the fields property of each hit. -
result_position
(Optional, Enum("tail" | "head")) -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>) -
allow_no_indices
(Optional, boolean) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]) -
ignore_unavailable
(Optional, boolean): If true, missing or closed indices are not included in the response.
-
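A minimal sketch of a basic (non-sequence) query against a hypothetical data stream:

```js
const response = await client.eql.search({
  index: 'my-data-stream', // hypothetical data stream
  query: 'process where process.name == "regsvr32.exe"',
  size: 10,
});
console.log(response.hits.events); // matching events for a basic query
```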
esql
editasync_query
editExecutes an ES|QL request asynchronously.
client.esql.asyncQuery()
async_query_get
editRetrieves the results of a previously submitted async query request given its ID.
client.esql.asyncQueryGet()
query
editRun an ES|QL query. Get search results for an ES|QL (Elasticsearch query language) query.
client.esql.query({ query })
Arguments
edit-
Request (object):
-
query
(string): The ES|QL query API accepts an ES|QL query string in the query parameter, runs it, and returns the results. -
columnar
(Optional, boolean): By default, ES|QL returns results as rows. For example, FROM returns each individual document as one row. For the JSON, YAML, CBOR and smile formats, ES|QL can return the results in a columnar fashion where one row represents all the values of a certain column in the results. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Specify a Query DSL query in the filter parameter to filter the set of documents that an ES|QL query runs on. -
locale
(Optional, string) -
params
(Optional, number | number | string | boolean | null | User-defined value[]): To avoid any attempt at hacking or code injection, extract the values into a separate list of parameters. Use question mark placeholders (?) in the query string for each of the parameters. -
profile
(Optional, boolean): If provided andtrue
the response will include an extraprofile
object with information on how the query was executed. This information is for human debugging and its format can change at any time but it can give some insight into the performance of each part of the query. -
tables
(Optional, Record<string, Record<string, { integer, keyword, long, double }>>): Tables to use with the LOOKUP operation. The top level key is the table name and the next level key is the column name. -
format
(Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile" | "arrow")): A short version of the Accept header, e.g. json, yaml. -
delimiter
(Optional, string): The character to use between values within a CSV row. Only valid for the CSV format. -
drop_null_columns
(Optional, boolean): Should columns that are entirelynull
be removed from thecolumns
andvalues
portion of the results? Defaults tofalse
. Iftrue
then the response will include an extra section under the nameall_columns
which has the name of all columns.
-
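A minimal sketch using a question-mark placeholder with params rather than string interpolation (the index and field are hypothetical):

```js
const result = await client.esql.query({
  query: 'FROM library | WHERE page_count > ? | SORT page_count DESC | LIMIT 5',
  params: [300],
});
console.log(result.columns, result.values); // default row-oriented shape
```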
features
editget_features
editGet a list of features that can be included in snapshots using the feature_states field when creating a snapshot.
client.features.getFeatures()
reset_features
editReset the internal state of features, usually by deleting system indices.
client.features.resetFeatures()
fleet
editglobal_checkpoints
editReturns the current global checkpoints for an index. This API is designed for internal use by the Fleet server project.
client.fleet.globalCheckpoints({ index })
Arguments
edit-
Request (object):
-
index
(string | string): A single index or index alias that resolves to a single index. -
wait_for_advance
(Optional, boolean): A boolean value which controls whether to wait (until the timeout) for the global checkpoints to advance past the providedcheckpoints
. -
wait_for_index
(Optional, boolean): A boolean value which controls whether to wait (until the timeout) for the target index to exist and all primary shards be active. Can only be true whenwait_for_advance
is true. -
checkpoints
(Optional, number[]): A comma separated list of previous global checkpoints. When used in combination withwait_for_advance
, the API will only return once the global checkpoints advances past the checkpoints. Providing an empty list will cause Elasticsearch to immediately return the current global checkpoints. -
timeout
(Optional, string | -1 | 0): Period to wait for a global checkpoints to advance pastcheckpoints
.
-
msearch
editExecutes several [fleet searches](https://www.elastic.co/guide/en/elasticsearch/reference/current/fleet-search.html) with a single API request. The API follows the same structure as the [multi search](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html) API. However, similar to the fleet search API, it supports the wait_for_checkpoints parameter.
client.fleet.msearch({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string): A single target to search. If the target is an index alias, it must resolve to a single index. -
searches
(Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, index, preference, request_cache, routing, search_type, ccs_minimize_roundtrips, allow_partial_search_results, ignore_throttled } | { aggregations, collapse, query, explain, ext, stored_fields, docvalue_fields, knn, from, highlight, indices_boost, min_score, post_filter, profile, rescore, script_fields, search_after, size, sort, _source, fields, terminate_after, stats, timeout, track_scores, track_total_hits, version, runtime_mappings, seq_no_primary_term, pit, suggest }[]) -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
ccs_minimize_roundtrips
(Optional, boolean): If true, network roundtrips between the coordinating node and remote clusters are minimized for cross-cluster search requests. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. -
ignore_throttled
(Optional, boolean): If true, concrete, expanded or aliased indices are ignored when frozen. -
ignore_unavailable
(Optional, boolean): If true, missing or closed indices are not included in the response. -
max_concurrent_searches
(Optional, number): Maximum number of concurrent searches the multi search API can execute. -
max_concurrent_shard_requests
(Optional, number): Maximum number of concurrent shard requests that each sub-search request executes per node. -
pre_filter_shard_size
(Optional, number): Defines a threshold that enforces a pre-filter roundtrip to prefilter search shards based on query rewriting if the number of shards the search request expands to exceeds the threshold. This filter roundtrip can limit the number of shards significantly if for instance a shard can not match any documents based on its rewrite method i.e., if date filters are mandatory to match but the shard bounds and the query are disjoint. -
search_type
(Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")): Indicates whether global term and document frequencies should be used when scoring returned documents. -
rest_total_hits_as_int
(Optional, boolean): If true, hits.total are returned as an integer in the response. Defaults to false, which returns an object. -
typed_keys
(Optional, boolean): Specifies whether aggregation and suggester names should be prefixed by their respective types in the response. -
wait_for_checkpoints
(Optional, number[]): A comma separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list which will cause Elasticsearch to immediately execute the search. -
allow_partial_search_results
(Optional, boolean): If true, returns partial results if there are shard request timeouts or [shard failures](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-replication.html#shard-failures). If false, returns an error with no partial results. Defaults to the configured cluster settingsearch.default_allow_partial_results
which is true by default.
-
search
editThe purpose of the Fleet search API is to provide a search API where the search is only executed after the provided checkpoint has been processed and is visible for searches inside of Elasticsearch.
client.fleet.search({ index })
Arguments
edit-
Request (object):
-
index
(string | string): A single target to search. If the target is an index alias, it must resolve to a single index. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>) -
collapse
(Optional, { field, inner_hits, max_concurrent_group_searches, collapse }) -
explain
(Optional, boolean): If true, returns detailed information about score computation as part of a hit. -
ext
(Optional, Record<string, User-defined value>): Configuration of search extensions defined by Elasticsearch plugins. -
from
(Optional, number): Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter. -
highlight
(Optional, { encoder, fields }) -
track_total_hits
(Optional, boolean | number): Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits. -
indices_boost
(Optional, Record<string, number>[]): Boosts the _score of documents from specified indices. -
docvalue_fields
(Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response. -
min_score
(Optional, number): Minimum _score for matching documents. Documents with a lower _score are not included in the search results. -
post_filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }) -
profile
(Optional, boolean) -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the search definition using the Query DSL. -
rescore
(Optional, { window_size, query, learning_to_rank } | { window_size, query, learning_to_rank }[]) -
script_fields
(Optional, Record<string, { script, ignore_failure }>): Retrieve a script evaluation (based on different fields) for each hit. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]) -
size
(Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter. -
slice
(Optional, { field, id, max }) -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]) -
_source
(Optional, boolean | { excludes, includes }): Indicates which source fields are returned for matching documents. These fields are returned in the hits._source property of the search response. -
fields
(Optional, { field, format, include_unmapped }[]): Array of wildcard (*) patterns. The request returns values for field names matching these patterns in the hits.fields property of the response. -
suggest
(Optional, { text }) -
terminate_after
(Optional, number): Maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. Defaults to 0, which does not terminate query execution early. -
timeout
(Optional, string): Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout. -
track_scores
(Optional, boolean): If true, calculate and return document scores, even if the scores are not used for sorting. -
version
(Optional, boolean): If true, returns document version as part of a hit. -
seq_no_primary_term
(Optional, boolean): If true, returns sequence number and primary term of the last modification of each hit. See Optimistic concurrency control. -
stored_fields
(Optional, string | string[]): List of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. You can pass _source: true to return both source fields and stored fields in the search response. -
pit
(Optional, { id, keep_alive }): Limits the search to a point in time (PIT). If you provide a PIT, you cannot specify an <index> in the request path. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name. -
stats
(Optional, string[]): Stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API. -
allow_no_indices
(Optional, boolean) -
analyzer
(Optional, string) -
analyze_wildcard
(Optional, boolean) -
batched_reduce_size
(Optional, number) -
ccs_minimize_roundtrips
(Optional, boolean) -
default_operator
(Optional, Enum("and" | "or")) -
df
(Optional, string) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]) -
ignore_throttled
(Optional, boolean) -
ignore_unavailable
(Optional, boolean) -
lenient
(Optional, boolean) -
max_concurrent_shard_requests
(Optional, number) -
min_compatible_shard_node
(Optional, string) -
preference
(Optional, string) -
pre_filter_shard_size
(Optional, number) -
request_cache
(Optional, boolean) -
routing
(Optional, string) -
scroll
(Optional, string | -1 | 0) -
search_type
(Optional, Enum("query_then_fetch" | "dfs_query_then_fetch")) -
suggest_field
(Optional, string): Specifies which field to use for suggestions. -
suggest_mode
(Optional, Enum("missing" | "popular" | "always")) -
suggest_size
(Optional, number) -
suggest_text
(Optional, string): The source text for which the suggestions should be returned. -
typed_keys
(Optional, boolean) -
rest_total_hits_as_int
(Optional, boolean) -
_source_excludes
(Optional, string | string[]) -
_source_includes
(Optional, string | string[]) -
q
(Optional, string) -
wait_for_checkpoints
(Optional, number[]): A comma separated list of checkpoints. When configured, the search API will only be executed on a shard after the relevant checkpoint has become visible for search. Defaults to an empty list which will cause Elasticsearch to immediately execute the search. -
allow_partial_search_results
(Optional, boolean): If true, returns partial results if there are shard request timeouts or [shard failures](https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-replication.html#shard-failures). If false, returns an error with no partial results. Defaults to the configured cluster settingsearch.default_allow_partial_results
which is true by default.
-
graph
editexplore
editExplore graph analytics.
Extract and summarize information about the documents and terms in an Elasticsearch data stream or index.
The easiest way to understand the behavior of this API is to use the Graph UI to explore connections.
An initial request to the _explore
API contains a seed query that identifies the documents of interest and specifies the fields that define the vertices and connections you want to include in the graph.
Subsequent requests enable you to spider out from one or more vertices of interest.
You can exclude vertices that have already been returned.
client.graph.explore({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): Name of the index. -
connections
(Optional, { connections, query, vertices }): Specifies one or more fields from which you want to extract terms that are associated with the specified vertices. -
controls
(Optional, { sample_diversity, sample_size, timeout, use_significance }): Direct the Graph API how to build the graph. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): A seed query that identifies the documents of interest. Can be any valid Elasticsearch query. -
vertices
(Optional, { exclude, field, include, min_doc_count, shard_min_doc_count, size }[]): Specifies one or more fields that contain the terms you want to include in the graph as vertices. -
routing
(Optional, string): Custom value used to route operations to a specific shard. -
timeout
(Optional, string | -1 | 0): Specifies the period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
-
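A minimal sketch; the index, seed query, and field names are hypothetical:

```js
const response = await client.graph.explore({
  index: 'clicklogs', // hypothetical index of search click logs
  query: { match: { 'query.raw': 'midi' } }, // seed query
  vertices: [{ field: 'product' }], // terms to include as vertices
  connections: { vertices: [{ field: 'query.raw' }] }, // related terms to connect
});
```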
ilm
editdelete_lifecycle
editDeletes the specified lifecycle policy definition. You cannot delete policies that are currently in use. If the policy is being used to manage any indices, the request fails and returns an error.
client.ilm.deleteLifecycle({ policy })
Arguments
edit-
Request (object):
-
policy
(string): Identifier for the policy. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
explain_lifecycle
editRetrieves information about the index’s current lifecycle state, such as the currently executing phase, action, and step. Shows when the index entered each one, the definition of the running phase, and information about any failures.
client.ilm.explainLifecycle({ index })
Arguments
edit-
Request (object):
-
index
(string): List of data streams, indices, and aliases to target. Supports wildcards (*
). To target all data streams and indices, use*
or_all
. -
only_errors
(Optional, boolean): Filters the returned indices to only indices that are managed by ILM and are in an error state, either due to encountering an error while executing the policy, or attempting to use a policy that does not exist. -
only_managed
(Optional, boolean): Filters the returned indices to only indices that are managed by ILM. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
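A minimal sketch that surfaces only the indices stuck in an error state (the index pattern is hypothetical):

```js
const explanation = await client.ilm.explainLifecycle({
  index: 'my-index-*', // hypothetical index pattern
  only_errors: true,
});
```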
get_lifecycle
editRetrieves a lifecycle policy.
client.ilm.getLifecycle({ ... })
Arguments
edit-
Request (object):
-
policy
(Optional, string): Identifier for the policy. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_status
editRetrieves the current index lifecycle management (ILM) status.
client.ilm.getStatus()
migrate_to_data_tiers
editSwitches the indices, ILM policies, and legacy, composable, and component templates from using custom node attributes and attribute-based allocation filters to using data tiers, and optionally deletes one legacy index template. Using node roles enables ILM to automatically move the indices between data tiers.
client.ilm.migrateToDataTiers({ ... })
Arguments
edit-
Request (object):
-
legacy_template_to_delete
(Optional, string) -
node_attribute
(Optional, string) -
dry_run
(Optional, boolean): If true, simulates the migration from node attributes based allocation filters to data tiers, but does not perform the migration. This provides a way to retrieve the indices and ILM policies that need to be migrated.
-
move_to_step
editManually moves an index into the specified step and executes that step.
client.ilm.moveToStep({ index, current_step, next_step })
Arguments
edit-
Request (object):
-
index
(string): The name of the index whose lifecycle step is to change -
current_step
({ action, name, phase }) -
next_step
({ action, name, phase })
-
put_lifecycle
editCreates a lifecycle policy. If the specified policy exists, the policy is replaced and the policy version is incremented.
client.ilm.putLifecycle({ policy })
Arguments
edit-
Request (object):
-
policy
(string): Identifier for the policy. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
remove_policy
editRemoves the assigned lifecycle policy and stops managing the specified index.
client.ilm.removePolicy({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the index to remove policy on
-
retry
editRetries executing the policy for an index that is in the ERROR step.
client.ilm.retry({ index })
Arguments
edit-
Request (object):
-
index
(string): The name of the indices (comma-separated) whose failed lifecycle step is to be retried
-
start
editStart the index lifecycle management (ILM) plugin.
client.ilm.start({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0) -
timeout
(Optional, string | -1 | 0)
-
stop
editHalts all lifecycle management operations and stops the index lifecycle management (ILM) plugin.
client.ilm.stop({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0) -
timeout
(Optional, string | -1 | 0)
-
indices
editadd_block
editAdd an index block. Limits the operations allowed on an index by blocking specific operation types.
client.indices.addBlock({ index, block })
Arguments
edit-
Request (object):
-
index
(string): A comma separated list of indices to add a block to -
block
(Enum("metadata" | "read" | "read_only" | "write")): The block to add (one of read, write, read_only or metadata) -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes_all
string or when no indices have been specified) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed) -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
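A minimal sketch that makes a hypothetical index read-only for writes, e.g. before cloning or shrinking it:

```js
await client.indices.addBlock({
  index: 'my-index', // hypothetical index
  block: 'write',
});
```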
analyze
editGet tokens from text analysis. The analyze API performs [analysis](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis.html) on a text string and returns the resulting tokens.
client.indices.analyze({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string): Index used to derive the analyzer. If specified, theanalyzer
or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer. -
analyzer
(Optional, string): The name of the analyzer that should be applied to the providedtext
. This could be a built-in analyzer, or an analyzer that’s been configured in the index. -
attributes
(Optional, string[]): Array of token attributes used to filter the output of theexplain
parameter. -
char_filter
(Optional, string | { type, escaped_tags } | { type, mappings, mappings_path } | { type, flags, pattern, replacement } | { type, mode, name } | { type, normalize_kana, normalize_kanji }[]): Array of character filters used to preprocess characters before the tokenizer. -
explain
(Optional, boolean): Iftrue
, the response includes token attributes and additional details. -
field
(Optional, string): Field used to derive the analyzer. To use this parameter, you must specify an index. If specified, theanalyzer
parameter overrides this value. -
filter
(Optional, string | { type, preserve_original } | { type, common_words, common_words_path, ignore_case, query_mode } | { type, filter, script } | { type, delimiter, encoding } | { type, max_gram, min_gram, side, preserve_original } | { type, articles, articles_path, articles_case } | { type, max_output_size, separator } | { type, dedup, dictionary, locale, longest_only } | { type } | { type, mode, types } | { type, keep_words, keep_words_case, keep_words_path } | { type, ignore_case, keywords, keywords_path, keywords_pattern } | { type } | { type, max, min } | { type, consume_all_tokens, max_token_count } | { type, language } | { type, filters, preserve_original } | { type, max_gram, min_gram, preserve_original } | { type, stoptags } | { type, patterns, preserve_original } | { type, all, flags, pattern, replacement } | { type } | { type, script } | { type } | { type } | { type, filler_token, max_shingle_size, min_shingle_size, output_unigrams, output_unigrams_if_no_shingles, token_separator } | { type, language } | { type, rules, rules_path } | { type, language } | { type, ignore_case, remove_trailing, stopwords, stopwords_path } | { type, expand, format, lenient, synonyms, synonyms_path, synonyms_set, tokenizer, updateable } | { type, expand, format, lenient, synonyms, synonyms_path, synonyms_set, tokenizer, updateable } | { type } | { type, length } | { type, only_on_same_position } | { type } | { type, adjust_offsets, catenate_all, catenate_numbers, catenate_words, generate_number_parts, generate_word_parts, ignore_keywords, preserve_original, protected_words, protected_words_path, split_on_case_change, split_on_numerics, stem_english_possessive, type_table, type_table_path } | { type, catenate_all, catenate_numbers, catenate_words, generate_number_parts, generate_word_parts, preserve_original, protected_words, protected_words_path, split_on_case_change, split_on_numerics, stem_english_possessive, type_table, type_table_path } | { type, minimum_length } | { type, use_romaji } | { type, stoptags } | { type, alternate, case_first, case_level, country, decomposition, hiragana_quaternary_mode, language, numeric, rules, strength, variable_top, variant } | { type, unicode_set_filter } | { type, name } | { type, dir, id } | { type, encoder, languageset, max_code_len, name_type, replace, rule_type } | { type }[]): Array of token filters used to apply after the tokenizer. -
normalizer
(Optional, string): Normalizer to use to convert text into a single token. -
text
(Optional, string | string[]): Text to analyze. If an array of strings is provided, it is analyzed as a multi-value field. -
tokenizer
(Optional, string | { type, tokenize_on_chars, max_token_length } | { type, max_token_length } | { type, custom_token_chars, max_gram, min_gram, token_chars } | { type, buffer_size } | { type } | { type } | { type, custom_token_chars, max_gram, min_gram, token_chars } | { type, buffer_size, delimiter, replacement, reverse, skip } | { type, flags, group, pattern } | { type, pattern } | { type, pattern } | { type, max_token_length } | { type } | { type, max_token_length } | { type, max_token_length } | { type, rule_files } | { type, discard_punctuation, mode, nbest_cost, nbest_examples, user_dictionary, user_dictionary_rules, discard_compound_token } | { type, decompound_mode, discard_punctuation, user_dictionary, user_dictionary_rules }): Tokenizer to use to convert text into tokens.
-
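A minimal usage sketch of the analyze API (the analyzer name and sample text are illustrative; client is assumed to be an already-configured Client instance):
const res = await client.indices.analyze({
  analyzer: 'standard',
  text: 'The QUICK brown fox'
})
// res.tokens is an array of { token, start_offset, end_offset, type, position }
console.log(res.tokens?.map(t => t.token)) // [ 'the', 'quick', 'brown', 'fox' ]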
clear_cache
editClears the caches of one or more indices. For data streams, the API clears the caches of the stream’s backing indices.
client.indices.clearCache({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
fielddata
(Optional, boolean): Iftrue
, clears the fields cache. Use thefields
parameter to clear the cache of specific fields only. -
fields
(Optional, string | string[]): List of field names used to limit thefielddata
parameter. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
query
(Optional, boolean): Iftrue
, clears the query cache. -
request
(Optional, boolean): Iftrue
, clears the request cache.
-
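A minimal usage sketch (index and field names are illustrative; assumes a configured client):
// Clear only the fielddata cache for one field of one index
await client.indices.clearCache({
  index: 'my-index',
  fielddata: true,
  fields: 'session_id'
})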
clone
editClones an existing index.
client.indices.clone({ index, target })
Arguments
edit-
Request (object):
-
index
(string): Name of the source index to clone. -
target
(string): Name of the target index to create. -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the resulting index. -
settings
(Optional, Record<string, User-defined value>): Configuration options for the target index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
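A minimal usage sketch (index names are illustrative; assumes a configured client). Note that the source index must be blocked for writes before it can be cloned:
// Block writes on the source index, then clone it
await client.indices.putSettings({
  index: 'my-index',
  settings: { index: { blocks: { write: true } } }
})
await client.indices.clone({ index: 'my-index', target: 'my-index-clone' })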
close
editCloses an index.
client.indices.close({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List or wildcard expression of index names used to limit the request. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
create
editCreate an index. Creates a new index.
client.indices.create({ index })
Arguments
edit-
Request (object):
-
index
(string): Name of the index you wish to create. -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the index. -
mappings
(Optional, { all_field, date_detection, dynamic, dynamic_date_formats, dynamic_templates, _field_names, index_field, _meta, numeric_detection, properties, _routing, _size, _source, runtime, enabled, subobjects, _data_stream_timestamp }): Mapping for fields in the index. If specified, this mapping can include: - Field names
- Field data types
- Mapping parameters
-
settings
(Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }): Configuration options for the index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
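A minimal usage sketch (the index name, settings, and mappings are illustrative; assumes a configured client):
const res = await client.indices.create({
  index: 'my-index',
  settings: { number_of_shards: 1, number_of_replicas: 1 },
  mappings: {
    properties: {
      title: { type: 'text' },
      created_at: { type: 'date' }
    }
  }
})
console.log(res.acknowledged) // true when the index was created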
create_data_stream
editCreate a data stream. Creates a data stream. You must have a matching index template with data stream enabled.
client.indices.createDataStream({ name })
Arguments
edit-
Request (object):
-
name
(string): Name of the data stream, which must meet the following criteria: Lowercase only; Cannot include\
,/
,*
,?
,"
,<
,>
,|
,,
,#
,:
, or a space character; Cannot start with-
,_
,+
, or.ds-
; Cannot be.
or..
; Cannot be longer than 255 bytes. Multi-byte characters count towards this limit faster. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
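A minimal usage sketch (the stream name is illustrative; assumes a configured client and an existing matching index template with data_stream enabled):
await client.indices.createDataStream({ name: 'logs-myapp-default' })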
data_streams_stats
editGet data stream stats. Retrieves statistics for one or more data streams.
client.indices.dataStreamsStats({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): List of data streams used to limit the request. Wildcard expressions (*
) are supported. To target all data streams in a cluster, omit this parameter or use*
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
.
-
delete
editDelete indices. Deletes one or more indices.
client.indices.delete({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of indices to delete. You cannot specify index aliases. By default, this parameter does not support wildcards (*
) or_all
. To use wildcards or_all
, set theaction.destructive_requires_name
cluster setting tofalse
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
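A minimal usage sketch (the index name is illustrative; assumes a configured client):
const res = await client.indices.delete({ index: 'my-old-index' })
console.log(res.acknowledged)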
delete_alias
editDelete an alias. Removes a data stream or index from an alias.
client.indices.deleteAlias({ index, name })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams or indices used to limit the request. Supports wildcards (*
). -
name
(string | string[]): List of aliases to remove. Supports wildcards (*
). To remove all aliases, use*
or_all
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_data_lifecycle
editDelete data stream lifecycles. Removes the data stream lifecycle from a data stream, so that the stream is no longer managed by the data stream lifecycle.
client.indices.deleteDataLifecycle({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): A list of data streams whose data stream lifecycle will be deleted; use*
to target all data streams -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether wildcard expressions should get expanded to open or closed indices (default: open) -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_data_stream
editDelete data streams. Deletes one or more data streams and their backing indices.
client.indices.deleteDataStream({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of data streams to delete. Wildcard (*
) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
.
-
delete_index_template
editDelete an index template. The provided <index-template> may contain multiple template names separated by a comma. If multiple template names are specified, there is no wildcard support and the provided names must exactly match existing templates.
client.indices.deleteIndexTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of index template names used to limit the request. Wildcard (*) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_template
editDeletes a legacy index template.
client.indices.deleteTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the legacy index template to delete. Wildcard (*
) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
disk_usage
editAnalyzes the disk usage of each field of an index or data stream.
client.indices.diskUsage({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, and aliases used to limit the request. It’s recommended to execute this API against a single index (or the latest backing index of a data stream), because the API consumes significant resources. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts withfoo
but no index starts withbar
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. -
flush
(Optional, boolean): Iftrue
, the API performs a flush before analysis. Iffalse
, the response may not include uncommitted data. -
ignore_unavailable
(Optional, boolean): Iftrue
, missing or closed indices are not included in the response. -
run_expensive_tasks
(Optional, boolean): Analyzing field disk usage is resource-intensive. To use the API, this parameter must be set totrue
.
-
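A minimal usage sketch (the index name is illustrative; assumes a configured client). Note that run_expensive_tasks must be set explicitly:
const res = await client.indices.diskUsage({
  index: 'my-index',
  run_expensive_tasks: true
})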
downsample
editAggregates a time series data stream (TSDS) index and stores pre-computed statistical summaries (min
, max
, sum
, value_count
and avg
) for each metric field grouped by a configured time interval.
client.indices.downsample({ index, target_index })
Arguments
edit-
Request (object):
-
index
(string): Name of the time series index to downsample. -
target_index
(string): Name of the index to create. -
config
(Optional, { fixed_interval })
-
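A minimal usage sketch (index names and the interval are illustrative; assumes a configured client and a source backing index that is read-only and no longer the write index of its stream):
await client.indices.downsample({
  index: '.ds-my-tsds-2024.01.01-000001',
  target_index: 'my-tsds-downsampled-1h',
  config: { fixed_interval: '1h' }
})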
exists
editCheck indices. Checks if one or more indices, index aliases, or data streams exist.
client.indices.exists({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, and aliases. Supports wildcards (*
). -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
flat_settings
(Optional, boolean): Iftrue
, returns settings in flat format. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
include_defaults
(Optional, boolean): Iftrue
, return all default settings in the response. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
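A minimal usage sketch (the index name is illustrative; assumes a configured client). In recent versions of the client the method resolves to a boolean:
const found = await client.indices.exists({ index: 'my-index' })
if (!found) {
  // create the index, or handle its absence
}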
exists_alias
editCheck aliases. Checks if one or more data stream or index aliases exist.
client.indices.existsAlias({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of aliases to check. Supports wildcards (*
). -
index
(Optional, string | string[]): List of data streams or indices used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, requests that include a missing data stream or index in the target indices or data streams return an error. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
exists_index_template
editCheck index templates. Check whether index templates exist.
client.indices.existsIndexTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string): List of index template names used to limit the request. Wildcard (*) expressions are supported. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
exists_template
editCheck existence of index templates. Returns information about whether a particular index template exists.
client.indices.existsTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): The comma-separated names of the index templates -
flat_settings
(Optional, boolean): Return settings in flat format (default: false) -
local
(Optional, boolean): Return local information, do not retrieve the state from master node (default: false) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
explain_data_lifecycle
editGet the status for a data stream lifecycle. Retrieves information about an index or data stream’s current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
client.indices.explainDataLifecycle({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): The name of the index to explain -
include_defaults
(Optional, boolean): indicates if the API should return the default values the system uses for the index’s lifecycle -
master_timeout
(Optional, string | -1 | 0): Specify timeout for connection to master
-
field_usage_stats
editReturns field usage information for each shard and field of an index.
client.indices.fieldUsageStats({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List or wildcard expression of index names used to limit the request. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts withfoo
but no index starts withbar
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. -
ignore_unavailable
(Optional, boolean): Iftrue
, missing or closed indices are not included in the response. -
fields
(Optional, string | string[]): List or wildcard expressions of fields to include in the statistics. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
flush
editFlushes one or more data streams or indices.
client.indices.flush({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases to flush. Supports wildcards (*
). To flush all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
force
(Optional, boolean): Iftrue
, the request forces a flush even if there are no changes to commit to the index. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
wait_if_ongoing
(Optional, boolean): Iftrue
, the flush operation waits for any currently running flush to finish and then executes. Iffalse
, Elasticsearch returns an error if you request a flush when another flush operation is running.
-
forcemerge
editPerforms the force merge operation on one or more indices.
client.indices.forcemerge({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of index names; use_all
or empty string to perform the operation on all indices -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes_all
string or when no indices have been specified) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
flush
(Optional, boolean): Specify whether the index should be flushed after performing the operation (default: true) -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed) -
max_num_segments
(Optional, number): The number of segments the index should be merged into (default: dynamic) -
only_expunge_deletes
(Optional, boolean): Specify whether the operation should only expunge deleted documents -
wait_for_completion
(Optional, boolean): If true, the request blocks until the force merge is completed.
-
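A minimal usage sketch (the index name is illustrative; assumes a configured client). Force-merging down to a single segment is typically only appropriate for indices that no longer receive writes:
await client.indices.forcemerge({
  index: 'my-old-index',
  max_num_segments: 1
})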
get
editGet index information. Returns information about one or more indices. For data streams, the API returns information about the stream’s backing indices.
client.indices.get({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, and index aliases used to limit the request. Wildcard expressions (*) are supported. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard expressions can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. -
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
ignore_unavailable
(Optional, boolean): If false, requests that target a missing index return an error. -
include_defaults
(Optional, boolean): If true, return all default settings in the response. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
features
(Optional, { name, description } | { name, description }[]): Return only information on specified index features
-
get_alias
editGet aliases. Retrieves information for one or more data stream or index aliases.
client.indices.getAlias({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): List of aliases to retrieve. Supports wildcards (*
). To retrieve all aliases, omit this parameter or use*
or_all
. -
index
(Optional, string | string[]): List of data streams or indices used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
get_data_lifecycle
editGet data stream lifecycles. Retrieves the data stream lifecycle configuration of one or more data streams.
client.indices.getDataLifecycle({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of data streams to limit the request. Supports wildcards (*
). To target all data streams, omit this parameter or use*
or_all
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
include_defaults
(Optional, boolean): Iftrue
, return all default settings in the response. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_data_stream
editGet data streams. Retrieves information about one or more data streams.
client.indices.getDataStream({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): List of data stream names used to limit the request. Wildcard (*
) expressions are supported. If omitted, all data streams are returned. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
. -
include_defaults
(Optional, boolean): If true, returns all relevant default configurations for the index template. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
verbose
(Optional, boolean): Whether the maximum timestamp for each data stream should be calculated and returned.
-
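A minimal usage sketch (the stream pattern is illustrative; assumes a configured client):
const res = await client.indices.getDataStream({ name: 'logs-*' })
console.log(res.data_streams.map(ds => ds.name))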
get_field_mapping
editGet mapping definitions. Retrieves mapping definitions for one or more fields. For data streams, the API retrieves field mappings for the stream’s backing indices.
client.indices.getFieldMapping({ fields })
Arguments
edit-
Request (object):
-
fields
(string | string[]): List or wildcard expression of fields used to limit returned information. -
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
include_defaults
(Optional, boolean): Iftrue
, return all default settings in the response. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only.
-
get_index_template
editGet index templates. Returns information about one or more index templates.
client.indices.getIndexTemplate({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): List of index template names used to limit the request. Wildcard (*) expressions are supported. -
local
(Optional, boolean): If true, the request retrieves information from the local node only. Defaults to false, which means information is retrieved from the master node. -
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
include_defaults
(Optional, boolean): If true, returns all relevant default configurations for the index template.
-
get_mapping
editGet mapping definitions. Retrieves mapping definitions for one or more indices. For data streams, the API retrieves mappings for the stream’s backing indices.
client.indices.getMapping({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_settings
editGet index settings. Returns setting information for one or more indices. For data streams, returns setting information for the stream’s backing indices.
client.indices.getSettings({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
name
(Optional, string | string[]): List or wildcard expression of settings to retrieve. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts with foo but no index starts withbar
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. -
flat_settings
(Optional, boolean): Iftrue
, returns settings in flat format. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
include_defaults
(Optional, boolean): Iftrue
, return all default settings in the response. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only. Iffalse
, information is retrieved from the master node. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_template
editGet index templates. Retrieves information about one or more index templates.
client.indices.getTemplate({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): List of index template names used to limit the request. Wildcard (*
) expressions are supported. To return all index templates, omit this parameter or use a value of_all
or*
. -
flat_settings
(Optional, boolean): Iftrue
, returns settings in flat format. -
local
(Optional, boolean): Iftrue
, the request retrieves information from the local node only. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
migrate_to_data_stream
editConvert an index alias to a data stream.
Converts an index alias to a data stream.
You must have a matching index template that is data stream enabled.
The alias must meet the following criteria:
The alias must have a write index;
All indices for the alias must have a @timestamp
field mapping of a date
or date_nanos
field type;
The alias must not have any filters;
The alias must not use custom routing.
If successful, the request removes the alias and creates a data stream with the same name.
The indices for the alias become hidden backing indices for the stream.
The write index for the alias becomes the write index for the stream.
client.indices.migrateToDataStream({ name })
Arguments
edit-
Request (object):
-
name
(string): Name of the index alias to convert to a data stream. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
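A minimal usage sketch (the alias name is illustrative; assumes a configured client and an alias that meets the criteria above):
await client.indices.migrateToDataStream({ name: 'my-logs-alias' })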
modify_data_stream
editUpdate data streams. Performs one or more data stream modification actions in a single atomic operation.
client.indices.modifyDataStream({ actions })
Arguments
edit-
Request (object):
-
actions
({ add_backing_index, remove_backing_index }[]): Actions to perform.
-
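A minimal usage sketch (stream and index names are illustrative; assumes a configured client). Both actions run as a single atomic operation:
await client.indices.modifyDataStream({
  actions: [
    // swap one backing index for another in the same step
    { remove_backing_index: { data_stream: 'logs-myapp-default', index: '.ds-logs-myapp-default-2024.01.01-000001' } },
    { add_backing_index: { data_stream: 'logs-myapp-default', index: 'my-archived-index' } }
  ]
})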
open
editOpens a closed index. For data streams, the API opens any closed backing indices.
client.indices.open({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). By default, you must explicitly name the indices you are using to limit the request. To limit a request using_all
,*
, or other wildcard expressions, change theaction.destructive_requires_name
setting to false. You can update this setting in theelasticsearch.yml
file or using the cluster update settings API. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
promote_data_stream
editPromotes a data stream from a replicated data stream managed by cross-cluster replication (CCR) to a regular data stream.
client.indices.promoteDataStream({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the data stream -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
put_alias
editCreate or update an alias. Adds a data stream or index to an alias.
client.indices.putAlias({ index, name })
Arguments
edit-
Request (object):
-
index
(string | string[]): List of data streams or indices to add. Supports wildcards (*
). Wildcard patterns that match both data streams and indices return an error. -
name
(string): Alias to update. If the alias doesn’t exist, the request creates it. Index alias names support date math. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Query used to limit documents the alias can access. -
index_routing
(Optional, string): Value used to route indexing operations to a specific shard. If specified, this overwrites therouting
value for indexing operations. Data stream aliases don’t support this parameter. -
is_write_index
(Optional, boolean): Iftrue
, sets the write index or data stream for the alias. If an alias points to multiple indices or data streams andis_write_index
isn’t set, the alias rejects write requests. If an index alias points to one index andis_write_index
isn’t set, the index automatically acts as the write index. Data stream aliases don’t automatically set a write data stream, even if the alias points to one data stream. -
routing
(Optional, string): Value used to route indexing and search operations to a specific shard. Data stream aliases don’t support this parameter. -
search_routing
(Optional, string): Value used to route search operations to a specific shard. If specified, this overwrites therouting
value for search operations. Data stream aliases don’t support this parameter. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
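A minimal usage sketch (index and alias names are illustrative; assumes a configured client):
await client.indices.putAlias({
  index: 'my-index-000001',
  name: 'my-write-alias',
  is_write_index: true
})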
put_data_lifecycle
editUpdate data stream lifecycles. Update the data stream lifecycle of the specified data streams.
client.indices.putDataLifecycle({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): List of data streams used to limit the request. Supports wildcards (*
). To target all data streams use*
or_all
. -
data_retention
(Optional, string | -1 | 0): If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely. -
downsampling
(Optional, { rounds }): If defined, every backing index will execute the configured downsampling configuration once it is no longer the data stream’s write index. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of data stream that wildcard patterns can match. Supports a list of values, such asopen,hidden
. Valid values are:all
,hidden
,open
,closed
,none
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
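A minimal usage sketch (the stream name and retention period are illustrative; assumes a configured client):
await client.indices.putDataLifecycle({
  name: 'logs-myapp-default',
  data_retention: '7d'
})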
put_index_template
editCreate or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
client.indices.putIndexTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string): Index or template name -
index_patterns
(Optional, string | string[]): List of wildcard (*) expressions used to match the names of data streams and indices during creation. -
composed_of
(Optional, string[]): An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence. -
template
(Optional, { aliases, mappings, settings, lifecycle }): Template to be applied. It may optionally include analiases
,mappings
, orsettings
configuration. -
data_stream
(Optional, { hidden, allow_custom_routing }): If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with adata_stream
object. -
priority
(Optional, number): Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch. -
version
(Optional, number): Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. -
_meta
(Optional, Record<string, User-defined value>): Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch. -
allow_auto_create
(Optional, boolean): This setting overrides the value of theaction.auto_create_index
cluster setting. If set totrue
in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled viaactions.auto_create_index
. If set tofalse
, then indices or data streams matching the template must always be explicitly created, and may never be automatically created. -
ignore_missing_component_templates
(Optional, string[]): The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist -
deprecated
(Optional, boolean): Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning. -
create
(Optional, boolean): Iftrue
, this request cannot replace or update existing index templates. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
cause
(Optional, string): User defined reason for creating/updating the index template
-
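A minimal usage sketch (the template name, pattern, priority, and settings are illustrative; assumes a configured client). The empty data_stream object marks the template as data-stream enabled:
await client.indices.putIndexTemplate({
  name: 'logs-myapp-template',
  index_patterns: ['logs-myapp-*'],
  data_stream: {},
  priority: 200,
  template: {
    settings: { number_of_replicas: 1 },
    mappings: { properties: { message: { type: 'text' } } }
  }
})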
put_mapping
editUpdate field mappings. Adds new fields to an existing data stream or index. You can also use this API to change the search settings of existing fields. For data streams, these changes are applied to all backing indices by default.
client.indices.putMapping({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): A list of index names the mapping should be added to (supports wildcards); use_all
or omit to add the mapping on all indices. -
date_detection
(Optional, boolean): Controls whether dynamic date detection is enabled. -
dynamic
(Optional, Enum("strict" | "runtime" | true | false)): Controls whether new fields are added dynamically. -
dynamic_date_formats
(Optional, string[]): If date detection is enabled then new string fields are checked against dynamic_date_formats and if the value matches then a new date field is added instead of string. -
dynamic_templates
(Optional, Record<string, { mapping, runtime, match, path_match, unmatch, path_unmatch, match_mapping_type, unmatch_mapping_type, match_pattern }> | Record<string, { mapping, runtime, match, path_match, unmatch, path_unmatch, match_mapping_type, unmatch_mapping_type, match_pattern }>[]): Specify dynamic templates for the mapping. -
_field_names
(Optional, { enabled }): Control whether field names are enabled for the index. -
_meta
(Optional, Record<string, User-defined value>): A mapping type can have custom meta data associated with it. These are not used at all by Elasticsearch, but can be used to store application-specific metadata. -
numeric_detection
(Optional, boolean): Automatically map strings into numeric data types for all fields. -
properties
(Optional, Record<string, { type } | { boost, fielddata, index, null_value, type } | { type, enabled, null_value, boost, coerce, script, on_script_error, ignore_malformed, time_series_metric, analyzer, eager_global_ordinals, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, term_vector, format, precision_step, locale } | { relations, eager_global_ordinals, type } | { boost, eager_global_ordinals, index, index_options, script, on_script_error, normalizer, norms, null_value, similarity, split_queries_on_whitespace, time_series_dimension, type } | { type, fields, meta, copy_to } | { type } | { positive_score_impact, type } | { positive_score_impact, type } | { analyzer, index, index_options, max_shingle_size, norms, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { analyzer, boost, eager_global_ordinals, fielddata, fielddata_frequency_filter, index, index_options, index_phrases, index_prefixes, norms, position_increment_gap, search_analyzer, search_quote_analyzer, similarity, term_vector, type } | { type } | { type, null_value } | { boost, format, ignore_malformed, index, null_value, precision_step, type } | { boost, fielddata, format, ignore_malformed, index, null_value, precision_step, locale, type } | { type, default_metric, metrics, time_series_metric } | { type, element_type, dims, similarity, index, index_options } | { boost, depth_limit, doc_values, eager_global_ordinals, index, index_options, null_value, similarity, split_queries_on_whitespace, type } | { enabled, include_in_parent, include_in_root, type } | { enabled, subobjects, type } | { type, meta, inference_id } | { type } | { analyzer, contexts, max_input_length, preserve_position_increments, preserve_separators, search_analyzer, type } | { value, type } | { path, type } | { ignore_malformed, type } | { boost, index, ignore_malformed, null_value, on_script_error, script, time_series_dimension, type } | { type } | { analyzer, boost, index, null_value, enable_position_increments, type } | { ignore_malformed, ignore_z_value, null_value, index, on_script_error, script, type } | { coerce, ignore_malformed, ignore_z_value, orientation, strategy, type } | { ignore_malformed, ignore_z_value, null_value, type } | { coerce, ignore_malformed, ignore_z_value, orientation, type } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value } | { type, null_value, scaling_factor } | { type, null_value } | { type, null_value } | { format, type } | { type } | { type } | { type } | { type } | { type } | { type, norms, index_options, index, null_value, rules, language, country, variant, strength, decomposition, alternate, case_level, case_first, numeric, variable_top, hiragana_quaternary_mode }>): Mapping for a field. For new fields, this mapping can include:- Field name
- Field data type
- Mapping parameters
-
_routing
(Optional, { required }): Enable making a routing value required on indexed documents. -
_source
(Optional, { compress, compress_threshold, enabled, excludes, includes, mode }): Control whether the _source field is enabled on the index. -
runtime
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Mapping of runtime fields for the index. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
write_index_only
(Optional, boolean): Iftrue
, the mappings are applied only to the current write index for the target.
-
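A minimal usage sketch (index and field names are illustrative; assumes a configured client):
// Add a new keyword field to an existing index
await client.indices.putMapping({
  index: 'my-index',
  properties: {
    tags: { type: 'keyword' }
  }
})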
put_settings
editUpdate index settings. Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.
client.indices.putSettings({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*
). To target all data streams and indices, omit this parameter or use*
or_all
. -
settings
(Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }) -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targetingfoo*,bar*
returns an error if an index starts withfoo
but no index starts withbar
. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. -
flat_settings
(Optional, boolean): Iftrue
, returns settings in flat format. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
preserve_existing
(Optional, boolean): Iftrue
, existing index settings remain unchanged. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
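A minimal usage sketch (the index name and setting value are illustrative; assumes a configured client). Only dynamic settings can be changed on an open index:
await client.indices.putSettings({
  index: 'my-index',
  settings: { index: { number_of_replicas: 2 } }
})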
put_template
editCreate or update an index template. Index templates define settings, mappings, and aliases that can be applied automatically to new indices.
client.indices.putTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the template -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the index. -
index_patterns
(Optional, string | string[]): Array of wildcard expressions used to match the names of indices during creation. -
mappings
(Optional, { all_field, date_detection, dynamic, dynamic_date_formats, dynamic_templates, _field_names, index_field, _meta, numeric_detection, properties, _routing, _size, _source, runtime, enabled, subobjects, _data_stream_timestamp }): Mapping for fields in the index. -
order
(Optional, number): Order in which Elasticsearch applies this template if index matches multiple templates. Templates with lower order values are merged first. Templates with higher order values are merged later, overriding templates with lower values. -
settings
(Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }): Configuration options for the index. -
version
(Optional, number): Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. -
create
(Optional, boolean): If true, this request cannot replace or update existing index templates. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
cause
(Optional, string)
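As a hedged sketch (the template name and pattern are illustrative), a legacy template that new indices matching te* would pick up at creation time:
// Legacy template: one primary shard and a date field for any
// new index whose name matches 'te*'.
await client.indices.putTemplate({
  name: 'template_1',
  index_patterns: ['te*'],
  order: 0,
  settings: { number_of_shards: 1 },
  mappings: {
    properties: {
      created_at: { type: 'date' }
    }
  }
})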
recovery
editReturns information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream’s backing indices.
client.indices.recovery({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all. -
active_only
(Optional, boolean): Iftrue
, the response only includes ongoing shard recoveries. -
detailed
(Optional, boolean): Iftrue
, the response includes detailed information about shard recoveries.
-
refresh
editRefresh an index. A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
client.indices.refresh({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index.
-
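For example (the index name is illustrative), making recent writes visible to search immediately:
// Refresh a single index so recently indexed documents are searchable.
await client.indices.refresh({ index: 'my-index' })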
reload_search_analyzers
editReloads an index’s search analyzers and their resources.
client.indices.reloadSearchAnalyzers({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): A list of index names to reload analyzers for -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes_all
string or when no indices have been specified) -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed)
-
resolve_cluster
editResolves the specified index expressions to return information about each cluster, including the local cluster, if included. Multiple patterns and remote clusters are supported.
client.indices.resolveCluster({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): Comma-separated name(s) or index pattern(s) of the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_throttled
(Optional, boolean): If true, concrete, expanded or aliased indices are ignored when frozen. Defaults to false. -
ignore_unavailable
(Optional, boolean): If false, the request returns an error if it targets a missing or closed index. Defaults to false.
-
resolve_index
editResolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.
client.indices.resolveIndex({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): Comma-separated name(s) or index pattern(s) of the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
-
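A small illustrative sketch (the pattern is hypothetical); the response separates matches into indices, aliases, and data_streams arrays:
// Resolve a wildcard against indices, aliases, and data streams.
const resolved = await client.indices.resolveIndex({ name: 'my-index-*' })
console.log(resolved.indices, resolved.aliases, resolved.data_streams)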
rollover
editRoll over to a new index. Creates a new index for a data stream or index alias.
client.indices.rollover({ alias })
Arguments
edit-
Request (object):
-
alias
(string): Name of the data stream or index alias to roll over. -
new_index
(Optional, string): Name of the index to create. Supports date math. Data streams do not support this parameter. -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the target index. Data streams do not support this parameter. -
conditions
(Optional, { min_age, max_age, max_age_millis, min_docs, max_docs, max_size, max_size_bytes, min_size, min_size_bytes, max_primary_shard_size, max_primary_shard_size_bytes, min_primary_shard_size, min_primary_shard_size_bytes, max_primary_shard_docs, min_primary_shard_docs }): Conditions for the rollover. If specified, Elasticsearch only performs the rollover if the current index satisfies these conditions. If this parameter is not specified, Elasticsearch performs the rollover unconditionally. If conditions are specified, at least one of them must be a max_* condition. The index will roll over if any max_* condition is satisfied and all min_* conditions are satisfied. -
mappings
(Optional, { all_field, date_detection, dynamic, dynamic_date_formats, dynamic_templates, _field_names, index_field, _meta, numeric_detection, properties, _routing, _size, _source, runtime, enabled, subobjects, _data_stream_timestamp }): Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters. -
settings
(Optional, Record<string, User-defined value>): Configuration options for the index. Data streams do not support this parameter. -
dry_run
(Optional, boolean): Iftrue
, checks whether the current index satisfies the specified conditions but does not perform a rollover. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
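For instance, a conditional rollover sketch (the alias name and thresholds are illustrative):
// Roll the write index behind 'my-alias' only if it is at least
// 7 days old or already holds 1,000 documents.
await client.indices.rollover({
  alias: 'my-alias',
  conditions: {
    max_age: '7d',
    max_docs: 1000
  }
})
With dry_run: true the response reports which conditions matched without performing the rollover.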
segments
editReturns low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream’s backing indices.
client.indices.segments({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
verbose
(Optional, boolean): Iftrue
, the request returns a verbose response.
-
shard_stores
editRetrieves store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream’s backing indices.
client.indices.shardStores({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. -
allow_no_indices
(Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. -
ignore_unavailable
(Optional, boolean): If true, missing or closed indices are not included in the response. -
status
(Optional, Enum("green" | "yellow" | "red" | "all") | Enum("green" | "yellow" | "red" | "all")[]): List of shard health statuses used to limit the request.
-
shrink
editShrinks an existing index into a new index with fewer primary shards.
client.indices.shrink({ index, target })
Arguments
edit-
Request (object):
-
index
(string): Name of the source index to shrink. -
target
(string): Name of the target index to create. -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): The key is the alias name. Index alias names support date math. -
settings
(Optional, Record<string, User-defined value>): Configuration options for the target index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
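A minimal sketch (index names are illustrative), assuming the source index has already been made read-only and its shards have been relocated to a single node:
// Shrink to one primary shard; the target's shard count must be
// a factor of the source index's shard count.
await client.indices.shrink({
  index: 'my-index',
  target: 'my-shrunken-index',
  settings: { 'index.number_of_shards': 1 }
})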
simulate_index_template
editSimulate an index. Returns the index configuration that would be applied to the specified index from an existing index template.
client.indices.simulateIndexTemplate({ name })
Arguments
edit-
Request (object):
-
name
(string): Name of the index to simulate -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
include_defaults
(Optional, boolean): If true, returns all relevant default configurations for the index template.
-
simulate_template
editSimulate an index template. Returns the index configuration that would be applied by a particular index template.
client.indices.simulateTemplate({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): Name of the index template to simulate. To test a template configuration before you add it to the cluster, omit this parameter and specify the template configuration in the request body. -
allow_auto_create
(Optional, boolean): This setting overrides the value of theaction.auto_create_index
cluster setting. If set totrue
in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled viaactions.auto_create_index
. If set tofalse
, then indices or data streams matching the template must always be explicitly created, and may never be automatically created. -
index_patterns
(Optional, string | string[]): Array of wildcard (*
) expressions used to match the names of data streams and indices during creation. -
composed_of
(Optional, string[]): An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence. -
template
(Optional, { aliases, mappings, settings, lifecycle }): Template to be applied. It may optionally include analiases
,mappings
, orsettings
configuration. -
data_stream
(Optional, { hidden, allow_custom_routing }): If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with adata_stream
object. -
priority
(Optional, number): Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch. -
version
(Optional, number): Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch. -
_meta
(Optional, Record<string, User-defined value>): Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch. -
ignore_missing_component_templates
(Optional, string[]): The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist -
deprecated
(Optional, boolean): Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning. -
create
(Optional, boolean): If true, the template passed in the body is only used if no existing templates match the same index patterns. If false, the simulation uses the template with the highest priority. Note that the template is not permanently added or updated in either case; it is only used for the simulation. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
include_defaults
(Optional, boolean): If true, returns all relevant default configurations for the index template.
-
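For example (the template name is illustrative), previewing what an existing index template would apply without creating anything:
// Return the settings, mappings, and aliases the template would
// produce, including default values.
await client.indices.simulateTemplate({
  name: 'my-template',
  include_defaults: true
})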
split
editSplits an existing index into a new index with more primary shards.
client.indices.split({ index, target })
Arguments
edit-
Request (object):
-
index
(string): Name of the source index to split. -
target
(string): Name of the target index to create. -
aliases
(Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the resulting index. -
settings
(Optional, Record<string, User-defined value>): Configuration options for the target index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
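A brief sketch (names are illustrative); the target's primary shard count must be a multiple of the source's, and the source must be read-only (for example via an index.blocks.write block):
// Split a one-shard index into four primary shards.
await client.indices.split({
  index: 'my-index',
  target: 'my-split-index',
  settings: { 'index.number_of_shards': 4 }
})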
stats
editReturns statistics for one or more indices. For data streams, the API retrieves statistics for the stream’s backing indices.
client.indices.stats({ ... })
Arguments
edit-
Request (object):
-
metric
(Optional, string | string[]): Limit the information returned the specific metrics. -
index
(Optional, string | string[]): A list of index names; use_all
or empty string to perform the operation on all indices -
completion_fields
(Optional, string | string[]): List or wildcard expressions of fields to include in fielddata and suggest statistics. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. -
fielddata_fields
(Optional, string | string[]): List or wildcard expressions of fields to include in fielddata statistics. -
fields
(Optional, string | string[]): List or wildcard expressions of fields to include in the statistics. -
forbid_closed_indices
(Optional, boolean): If true, statistics are not collected from closed indices. -
groups
(Optional, string | string[]): List of search groups to include in the search statistics. -
include_segment_file_sizes
(Optional, boolean): If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested). -
include_unloaded_segments
(Optional, boolean): If true, the response includes information from segments that are not loaded into memory. -
level
(Optional, Enum("cluster" | "indices" | "shards")): Indicates whether statistics are aggregated at the cluster, index, or shard level.
-
unfreeze
editUnfreezes an index.
client.indices.unfreeze({ index })
Arguments
edit-
Request (object):
-
index
(string): Identifier for the index. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_active_shards
(Optional, string): The number of shard copies that must be active before proceeding with the operation. Set toall
or any positive integer up to the total number of shards in the index (number_of_replicas+1
).
-
update_aliases
editCreate or update an alias. Adds a data stream or index to an alias.
client.indices.updateAliases({ ... })
Arguments
edit-
Request (object):
-
actions
(Optional, { add_backing_index, remove_backing_index }[]): Actions to perform. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
validate_query
editValidate a query. Validates a query without running it.
client.indices.validateQuery({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Defines the query to validate using the Query DSL. -
allow_no_indices
(Optional, boolean): Iffalse
, the request returns an error if any wildcard expression, index alias, or_all
value targets only missing or closed indices. This behavior applies even if the request targets other open indices. -
all_shards
(Optional, boolean): Iftrue
, the validation is executed on all shards instead of one random shard per index. -
analyzer
(Optional, string): Analyzer to use for the query string. This parameter can only be used when theq
query string parameter is specified. -
analyze_wildcard
(Optional, boolean): Iftrue
, wildcard and prefix queries are analyzed. -
default_operator
(Optional, Enum("and" | "or")): The default operator for query string query:AND
orOR
. -
df
(Optional, string): Field to use as default where no field prefix is given in the query string. This parameter can only be used when theq
query string parameter is specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such asopen,hidden
. Valid values are:all
,open
,closed
,hidden
,none
. -
explain
(Optional, boolean): Iftrue
, the response returns detailed information if an error has occurred. -
ignore_unavailable
(Optional, boolean): Iffalse
, the request returns an error if it targets a missing or closed index. -
lenient
(Optional, boolean): Iftrue
, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. -
rewrite
(Optional, boolean): Iftrue
, returns a more detailed explanation showing the actual Lucene query that will be executed. -
q
(Optional, string): Query in the Lucene query string syntax.
-
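As an illustrative sketch (the index and field names are hypothetical), validating a Query DSL query and inspecting the verdict:
// 'valid' is false when the query cannot be parsed or executed;
// with explain: true the response includes the reason.
const result = await client.indices.validateQuery({
  index: 'my-index',
  explain: true,
  query: {
    match: { 'user.id': 'kimchy' }
  }
})
console.log(result.valid, result.explanations)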
inference
editdelete
editDelete an inference endpoint
client.inference.delete({ inference_id })
Arguments
edit-
Request (object):
-
inference_id
(string): The inference Id -
task_type
(Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion")): The task type -
dry_run
(Optional, boolean): When true, the endpoint is not deleted, and a list of ingest processors which reference this endpoint is returned -
force
(Optional, boolean): When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields
-
get
editGet an inference endpoint
client.inference.get({ ... })
Arguments
edit-
Request (object):
-
task_type
(Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion")): The task type -
inference_id
(Optional, string): The inference Id
-
inference
editPerform inference on the service
client.inference.inference({ inference_id, input })
Arguments
edit-
Request (object):
-
inference_id
(string): The inference Id -
input
(string | string[]): Inference input. Either a string or an array of strings. -
task_type
(Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion")): The task type -
query
(Optional, string): Query input, required for rerank task. Not required for other tasks. -
task_settings
(Optional, User-defined value): Optional task settings -
timeout
(Optional, string | -1 | 0): Specifies the amount of time to wait for the inference request to complete.
-
put
editCreate an inference endpoint
client.inference.put({ inference_id })
Arguments
edit-
Request (object):
-
inference_id
(string): The inference Id -
task_type
(Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion")): The task type -
inference_config
(Optional, { service, service_settings, task_settings })
-
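A hedged sketch of creating a sparse-embedding endpoint (the endpoint ID is hypothetical; which services and service settings are available depends on your deployment):
// Create an inference endpoint backed by the elser service.
await client.inference.put({
  task_type: 'sparse_embedding',
  inference_id: 'my-elser-endpoint',
  inference_config: {
    service: 'elser',
    service_settings: {
      num_allocations: 1,
      num_threads: 1
    }
  }
})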
stream_inference
editPerform streaming inference
client.inference.streamInference()
ingest
editdelete_geoip_database
editDelete GeoIP database configurations. Delete one or more IP geolocation database configurations.
client.ingest.deleteGeoipDatabase({ id })
Arguments
edit-
Request (object):
-
id
(string | string[]): A list of geoip database configurations to delete -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_ip_location_database
editDeletes an IP location database configuration
client.ingest.deleteIpLocationDatabase()
delete_pipeline
editDelete pipelines. Delete one or more ingest pipelines.
client.ingest.deletePipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): Pipeline ID or wildcard expression of pipeline IDs used to limit the request. To delete all ingest pipelines in a cluster, use a value of*
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
geo_ip_stats
editGet GeoIP statistics. Get download statistics for GeoIP2 databases that are used with the GeoIP processor.
client.ingest.geoIpStats()
get_geoip_database
editGet GeoIP database configurations. Get information about one or more IP geolocation database configurations.
client.ingest.getGeoipDatabase({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string | string[]): List of database configuration IDs to retrieve. Wildcard (*
) expressions are supported. To get all database configurations, omit this parameter or use*
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_ip_location_database
editReturns the specified IP location database configuration
client.ingest.getIpLocationDatabase()
get_pipeline
editGet pipelines. Get information about one or more ingest pipelines. This API returns a local reference of the pipeline.
client.ingest.getPipeline({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): List of pipeline IDs to retrieve. Wildcard (*
) expressions are supported. To get all ingest pipelines, omit this parameter or use*
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
summary
(Optional, boolean): Return pipelines without their definitions (default: false)
-
processor_grok
editRun a grok processor. Extract structured fields out of a single text field within a document. You must choose which field to extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular expression that supports aliased expressions that can be reused.
client.ingest.processorGrok()
put_geoip_database
editCreate or update GeoIP database configurations. Create or update IP geolocation database configurations.
client.ingest.putGeoipDatabase({ id, name, maxmind })
Arguments
edit-
Request (object):
-
id
(string): ID of the database configuration to create or update. -
name
(string): The provider-assigned name of the IP geolocation database to download. -
maxmind
({ account_id }): The configuration necessary to identify which IP geolocation provider to use to download the database, as well as any provider-specific configuration necessary for such downloading. At present, the only supported provider is maxmind, and the maxmind provider requires that an account_id (string) is configured. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
put_ip_location_database
editPuts the configuration for an IP location database to be downloaded
client.ingest.putIpLocationDatabase()
put_pipeline
editCreate or update a pipeline. Changes made using this API take effect immediately.
client.ingest.putPipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): ID of the ingest pipeline to create or update. -
_meta
(Optional, Record<string, User-defined value>): Optional metadata about the ingest pipeline. May have any contents. This map is not automatically generated by Elasticsearch. -
description
(Optional, string): Description of the ingest pipeline. -
on_failure
(Optional, { append, attachment, bytes, circle, community_id, convert, csv, date, date_index_name, dissect, dot_expander, drop, enrich, fail, fingerprint, foreach, ip_location, geo_grid, geoip, grok, gsub, html_strip, inference, join, json, kv, lowercase, network_direction, pipeline, redact, registered_domain, remove, rename, reroute, script, set, set_security_user, sort, split, terminate, trim, uppercase, urldecode, uri_parts, user_agent }[]): Processors to run immediately after a processor failure. Each processor supports a processor-levelon_failure
value. If a processor without anon_failure
value fails, Elasticsearch uses this pipeline-level parameter as a fallback. The processors in this parameter run sequentially in the order specified. Elasticsearch will not attempt to run the pipeline’s remaining processors. -
processors
(Optional, { append, attachment, bytes, circle, community_id, convert, csv, date, date_index_name, dissect, dot_expander, drop, enrich, fail, fingerprint, foreach, ip_location, geo_grid, geoip, grok, gsub, html_strip, inference, join, json, kv, lowercase, network_direction, pipeline, redact, registered_domain, remove, rename, reroute, script, set, set_security_user, sort, split, terminate, trim, uppercase, urldecode, uri_parts, user_agent }[]): Processors used to perform transformations on documents before indexing. Processors run sequentially in the order specified. -
version
(Optional, number): Version number used by external systems to track ingest pipelines. This parameter is intended for external systems only. Elasticsearch does not use or validate pipeline version numbers. -
deprecated
(Optional, boolean): Marks this ingest pipeline as deprecated. When a deprecated ingest pipeline is referenced as the default or final pipeline when creating or updating a non-deprecated index template, Elasticsearch will emit a deprecation warning. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
if_version
(Optional, number): Required version for optimistic concurrency control for pipeline updates
-
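For example, a small pipeline sketch (the ID, fields, and values are illustrative):
// One 'set' processor plus a pipeline-level on_failure fallback that
// records the failure instead of rejecting the document.
await client.ingest.putPipeline({
  id: 'my-pipeline',
  description: 'Tag incoming documents',
  processors: [
    { set: { field: 'env', value: 'production' } }
  ],
  on_failure: [
    { set: { field: 'error_note', value: 'pipeline failed' } }
  ]
})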
simulate
editSimulate a pipeline. Run an ingest pipeline against a set of provided documents. You can either specify an existing pipeline to use with the provided documents or supply a pipeline definition in the body of the request.
client.ingest.simulate({ docs })
Arguments
edit-
Request (object):
-
docs
({ _id, _index, _source }[]): Sample documents to test in the pipeline. -
id
(Optional, string): Pipeline to test. If you don’t specify apipeline
in the request body, this parameter is required. -
pipeline
(Optional, { description, on_failure, processors, version, deprecated, _meta }): Pipeline to test. If you don’t specify thepipeline
request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter. -
verbose
(Optional, boolean): Iftrue
, the response includes output data for each processor in the executed pipeline.
-
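A companion sketch (the pipeline ID and sample document are illustrative) that exercises a pipeline without indexing anything:
// verbose: true returns per-processor output for each document.
await client.ingest.simulate({
  id: 'my-pipeline',
  verbose: true,
  docs: [
    { _index: 'my-index', _id: '1', _source: { message: 'hello' } }
  ]
})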
license
editdelete
editDeletes licensing information for the cluster
client.license.delete()
get
editGet license information. Returns information about your Elastic license, including its type, its status, when it was issued, and when it expires. For more information about the different types of licenses, refer to [Elastic Stack subscriptions](https://www.elastic.co/subscriptions).
client.license.get({ ... })
Arguments
edit-
Request (object):
-
accept_enterprise
(Optional, boolean): Iftrue
, this parameter returns enterprise for Enterprise license types. Iffalse
, this parameter returns platinum for both platinum and enterprise license types. This behavior is maintained for backwards compatibility. This parameter is deprecated and will always be set to true in 8.x. -
local
(Optional, boolean): Specifies whether to retrieve local information. The default value isfalse
, which means the information is retrieved from the master node.
-
get_basic_status
editRetrieves information about the status of the basic license.
client.license.getBasicStatus()
get_trial_status
editRetrieves information about the status of the trial license.
client.license.getTrialStatus()
post
editUpdates the license for the cluster.
client.license.post({ ... })
Arguments
edit-
Request (object):
-
license
(Optional, { expiry_date_in_millis, issue_date_in_millis, start_date_in_millis, issued_to, issuer, max_nodes, max_resource_units, signature, type, uid }) -
licenses
(Optional, { expiry_date_in_millis, issue_date_in_millis, start_date_in_millis, issued_to, issuer, max_nodes, max_resource_units, signature, type, uid }[]): A sequence of one or more JSON documents containing the license information. -
acknowledge
(Optional, boolean): Specifies whether you acknowledge the license changes.
-
post_start_basic
editThe start basic API enables you to initiate an indefinite basic license, which gives access to all the basic features. If the basic license does not support all of the features that are available with your current license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true. To check the status of your basic license, use the following API: [Get basic status](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-basic-status.html).
client.license.postStartBasic({ ... })
Arguments
edit-
Request (object):
-
acknowledge
(Optional, boolean): whether the user has acknowledged acknowledge messages (default: false)
-
post_start_trial
editThe start trial API enables you to start a 30-day trial, which gives access to all subscription features.
client.license.postStartTrial({ ... })
Arguments
edit-
Request (object):
-
acknowledge
(Optional, boolean): whether the user has acknowledged acknowledge messages (default: false) -
type_query_string
(Optional, string)
-
logstash
editdelete_pipeline
editDeletes a pipeline used for Logstash Central Management.
client.logstash.deletePipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the pipeline.
-
get_pipeline
editRetrieves pipelines used for Logstash Central Management.
client.logstash.getPipeline({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string | string[]): List of pipeline identifiers.
-
put_pipeline
editCreates or updates a pipeline used for Logstash Central Management.
client.logstash.putPipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the pipeline. -
pipeline
(Optional, { description, on_failure, processors, version, deprecated, _meta })
-
migration
editdeprecations
editRetrieves information about different cluster, node, and index level settings that use deprecated features that will be removed or changed in the next major version.
client.migration.deprecations({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string): Comma-separated list of data streams or indices to check. Wildcard (*) expressions are supported.
-
get_feature_upgrade_status
editFind out whether system features need to be upgraded or not
client.migration.getFeatureUpgradeStatus()
post_feature_upgrade
editBegin upgrades for system features
client.migration.postFeatureUpgrade()
ml
editclear_trained_model_deployment_cache
editClear trained model deployment cache. Cache will be cleared on all nodes where the trained model is assigned. A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
client.ml.clearTrainedModelDeploymentCache({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model.
-
close_job
editClose anomaly detection jobs. A job can be opened and closed multiple times throughout its lifecycle. A closed job cannot receive data or perform analysis operations, but you can still explore and navigate results. When you close a job, it runs housekeeping tasks such as pruning the model history, flushing buffers, calculating final results and persisting the model snapshots. Depending upon the size of the job, it could take several minutes to close and the equivalent time to re-open. After it is closed, the job has a minimal overhead on the cluster except for maintaining its metadata. Therefore it is a best practice to close jobs that are no longer required to process data. If you close an anomaly detection job whose datafeed is running, the request first tries to stop the datafeed. This behavior is equivalent to calling the stop datafeed API with the same timeout and force parameters as the close job request. When a datafeed that has a specified end date stops, it automatically closes its associated job.
client.ml.closeJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. You can close multiple anomaly detection jobs in a single API request by using a group name, a list of jobs, or a wildcard expression. You can close all jobs by using _all or by specifying * as the job identifier. -
allow_no_match
(Optional, boolean): Refer to the description for theallow_no_match
query parameter. -
force
(Optional, boolean): Refer to the description for the force
query parameter. -
timeout
(Optional, string | -1 | 0): Refer to the description for thetimeout
query parameter.
-
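For example (the job identifier is illustrative):
// Close a job and allow up to five minutes for housekeeping to finish.
await client.ml.closeJob({ job_id: 'my-job', timeout: '5m' })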
delete_calendar
editDelete a calendar. Removes all scheduled events from a calendar, then deletes it.
client.ml.deleteCalendar({ calendar_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar.
-
delete_calendar_event
editDelete events from a calendar.
client.ml.deleteCalendarEvent({ calendar_id, event_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
event_id
(string): Identifier for the scheduled event. You can obtain this identifier by using the get calendar events API.
-
delete_calendar_job
editDelete anomaly jobs from a calendar.
client.ml.deleteCalendarJob({ calendar_id, job_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
job_id
(string | string[]): An identifier for the anomaly detection jobs. It can be a job identifier, a group name, or a list of jobs or groups.
-
delete_data_frame_analytics
editDelete a data frame analytics job.
client.ml.deleteDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. -
force
(Optional, boolean): Iftrue
, it deletes a job that is not stopped; this method is quicker than stopping and deleting the job. -
timeout
(Optional, string | -1 | 0): The time to wait for the job to be deleted.
-
delete_datafeed
editDelete a datafeed.
client.ml.deleteDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
force
(Optional, boolean): Use to forcefully delete a started datafeed; this method is quicker than stopping and deleting the datafeed.
-
delete_expired_data
editDelete expired ML data. Deletes all job results, model snapshots and forecast data that have exceeded their retention days period. Machine learning state documents that are not associated with any job are also deleted. You can limit the request to a single or set of anomaly detection jobs by using a job identifier, a group name, a comma-separated list of jobs, or a wildcard expression. You can delete expired data for all anomaly detection jobs by using _all, by specifying * as the <job_id>, or by omitting the <job_id>.
client.ml.deleteExpiredData({ ... })
Arguments
edit-
Request (object):
-
job_id
(Optional, string): Identifier for an anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. -
requests_per_second
(Optional, float): The desired requests per second for the deletion processes. The default behavior is no throttling. -
timeout
(Optional, string | -1 | 0): How long the underlying delete processes can run before they are canceled.
-
delete_filter
editDelete a filter. If an anomaly detection job references the filter, you cannot delete the filter. You must update or delete the job before you can delete the filter.
client.ml.deleteFilter({ filter_id })
Arguments
edit-
Request (object):
-
filter_id
(string): A string that uniquely identifies a filter.
-
delete_forecast
editDelete forecasts from a job. By default, forecasts are retained for 14 days. You can specify a different retention period with the expires_in parameter in the forecast jobs API. The delete forecast API enables you to delete one or more forecasts before they expire.
client.ml.deleteForecast({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
forecast_id
(Optional, string): A list of forecast identifiers. If you do not specify this optional parameter or if you specify_all
or*
the API deletes all forecasts from the job. -
allow_no_forecasts
(Optional, boolean): Specifies whether an error occurs when there are no forecasts. In particular, if this parameter is set tofalse
and there are no forecasts associated with the job, attempts to delete all forecasts return an error. -
timeout
(Optional, string | -1 | 0): Specifies the period of time to wait for the completion of the delete operation. When this period of time elapses, the API fails and returns an error.
-
delete_job
editDelete an anomaly detection job. All job configuration, model state and results are deleted. It is not currently possible to delete multiple jobs using wildcards or a comma separated list. If you delete a job that has a datafeed, the request first tries to delete the datafeed. This behavior is equivalent to calling the delete datafeed API with the same timeout and force parameters as the delete job request.
client.ml.deleteJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
force
(Optional, boolean): Use to forcefully delete an opened job; this method is quicker than closing and deleting the job. -
delete_user_annotations
(Optional, boolean): Specifies whether annotations that have been added by the user should be deleted along with any auto-generated annotations when the job is reset. -
wait_for_completion
(Optional, boolean): Specifies whether the request should return immediately or wait until the job deletion completes.
-
delete_model_snapshot
editDelete a model snapshot. You cannot delete the active model snapshot. To delete that snapshot, first revert to a different one. To identify the active model snapshot, refer to the model_snapshot_id in the results from the get jobs API.
client.ml.deleteModelSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): Identifier for the model snapshot.
-
delete_trained_model
editDelete an unreferenced trained model. The request deletes a trained inference model that is not referenced by an ingest pipeline.
client.ml.deleteTrainedModel({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
force
(Optional, boolean): Forcefully deletes a trained model that is referenced by ingest pipelines or has a started deployment.
-
delete_trained_model_alias
editDelete a trained model alias. This API deletes an existing model alias that refers to a trained model. If the model alias is missing or refers to a model other than the one identified by the model_id, this API returns an error.
client.ml.deleteTrainedModelAlias({ model_alias, model_id })
Arguments
edit-
Request (object):
-
model_alias
(string): The model alias to delete. -
model_id
(string): The trained model ID to which the model alias refers.
-
estimate_model_memory
editEstimate job model memory usage. Makes an estimation of the memory usage for an anomaly detection job model. It is based on analysis configuration details for the job and cardinality estimates for the fields it references.
client.ml.estimateModelMemory({ ... })
Arguments
edit-
Request (object):
-
analysis_config
(Optional, { bucket_span, categorization_analyzer, categorization_field_name, categorization_filters, detectors, influencers, latency, model_prune_window, multivariate_by_fields, per_partition_categorization, summary_count_field_name }): For a list of the properties that you can specify in theanalysis_config
component of the body of this API. -
max_bucket_cardinality
(Optional, Record<string, number>): Estimates of the highest cardinality in a single bucket that is observed for influencer fields over the time period that the job analyzes data. To produce a good answer, values must be provided for all influencer fields. Providing values for fields that are not listed asinfluencers
has no effect on the estimation. -
overall_cardinality
(Optional, Record<string, number>): Estimates of the cardinality that is observed for fields over the whole time period that the job analyzes data. To produce a good answer, values must be provided for fields referenced in theby_field_name
,over_field_name
andpartition_field_name
of any detectors. Providing values for other fields has no effect on the estimation. It can be omitted from the request if no detectors have aby_field_name
,over_field_name
orpartition_field_name
.
-
evaluate_data_frame
editEvaluate data frame analytics. The API packages together commonly used evaluation metrics for various types of machine learning features. This has been designed for use on indexes created by data frame analytics. Evaluation requires both a ground truth field and an analytics result field to be present.
client.ml.evaluateDataFrame({ evaluation, index })
Arguments
edit-
Request (object):
-
evaluation
({ classification, outlier_detection, regression }): Defines the type of evaluation you want to perform. -
index
(string): Defines theindex
in which the evaluation will be performed. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): A query clause that retrieves a subset of data from the source index.
-
explain_data_frame_analytics
editExplain data frame analytics config. This API provides explanations for a data frame analytics config that either exists already or one that has not been created yet. The following explanations are provided: * which fields are included or not in the analysis and why, * how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation.
client.ml.explainDataFrameAnalytics({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
source
(Optional, { index, query, runtime_mappings, _source }): The configuration of how to source the analysis data. It requires an index. Optionally, query and _source may be specified. -
dest
(Optional, { index, results_field }): The destination configuration, consisting of index and optionally results_field (ml by default). -
analysis
(Optional, { classification, outlier_detection, regression }): The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression. -
description
(Optional, string): A description of the job. -
model_memory_limit
(Optional, string): The approximate maximum amount of memory resources that are permitted for analytical processing. If yourelasticsearch.yml
file contains anxpack.ml.max_model_memory_limit
setting, an error occurs when you try to create data frame analytics jobs that havemodel_memory_limit
values greater than that setting. -
max_num_threads
(Optional, number): The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself. -
analyzed_fields
(Optional, { includes, excludes }): Specify includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis. -
allow_lazy_start
(Optional, boolean): Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
-
flush_job
editForce buffered data to be processed. The flush jobs API is only applicable when sending data for analysis using the post data API. Depending on the content of the buffer, it might additionally calculate new results. Both flush and close operations are similar; however, the flush is more efficient if you are expecting to send more data for analysis. When flushing, the job remains open and is available to continue analyzing data. A close operation additionally prunes and persists the model state to disk and the job must be opened again before analyzing further data.
client.ml.flushJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
advance_time
(Optional, string | Unit): Refer to the description for theadvance_time
query parameter. -
calc_interim
(Optional, boolean): Refer to the description for thecalc_interim
query parameter. -
end
(Optional, string | Unit): Refer to the description for theend
query parameter. -
skip_time
(Optional, string | Unit): Refer to the description for theskip_time
query parameter. -
start
(Optional, string | Unit): Refer to the description for thestart
query parameter.
-
forecast
editPredict future behavior of a time series.
Forecasts are not supported for jobs that perform population analysis; an error occurs if you try to create a forecast for a job that has an over_field_name in its configuration. Forecasts predict future behavior based on historical data.
client.ml.forecast({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. The job must be open when you create a forecast; otherwise, an error occurs. -
duration
(Optional, string | -1 | 0): Refer to the description for the duration query parameter. -
expires_in
(Optional, string | -1 | 0): Refer to the description for the expires_in query parameter. -
max_model_memory
(Optional, string): Refer to the description for the max_model_memory query parameter.
-
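As an illustration, a hedged sketch (the job name is hypothetical and the job must be open) that requests a three-day forecast which expires after one day:
const response = await client.ml.forecast({
  job_id: 'my-job',  // hypothetical job identifier
  duration: '3d',    // predict three days beyond the end of the analyzed data
  expires_in: '1d'   // delete the forecast results after one day
})
console.log(response.forecast_id)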
get_buckets
editGet anomaly detection job results for buckets. The API presents a chronological view of the records, grouped by bucket.
client.ml.getBuckets({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
timestamp
(Optional, string | Unit): The timestamp of a single bucket result. If you do not specify this parameter, the API returns information about all buckets. -
anomaly_score
(Optional, number): Refer to the description for the anomaly_score query parameter. -
desc
(Optional, boolean): Refer to the description for the desc query parameter. -
end
(Optional, string | Unit): Refer to the description for the end query parameter. -
exclude_interim
(Optional, boolean): Refer to the description for the exclude_interim query parameter. -
expand
(Optional, boolean): Refer to the description for the expand query parameter. -
page
(Optional, { from, size }) -
sort
(Optional, string): Refer to the description for the sort query parameter. -
start
(Optional, string | Unit): Refer to the description for the start query parameter. -
from
(Optional, number): Skips the specified number of buckets. -
size
(Optional, number): Specifies the maximum number of buckets to obtain.
-
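For instance, a minimal sketch (the job name is hypothetical) that fetches the ten highest-scoring buckets:
const response = await client.ml.getBuckets({
  job_id: 'my-job',       // hypothetical job identifier
  anomaly_score: 75,      // only buckets with a score of at least 75
  sort: 'anomaly_score',
  desc: true,
  size: 10
})
for (const bucket of response.buckets) {
  console.log(bucket.timestamp, bucket.anomaly_score)
}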
get_calendar_events
editGet info about events in calendars.
client.ml.getCalendarEvents({ calendar_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. You can get information for multiple calendars by using a list of ids or a wildcard expression. You can get information for all calendars by using _all or * or by omitting the calendar identifier. -
end
(Optional, string | Unit): Specifies to get events with timestamps earlier than this time. -
from
(Optional, number): Skips the specified number of events. -
job_id
(Optional, string): Specifies to get events for a specific anomaly detection job identifier or job group. It must be used with a calendar identifier of _all or *. -
size
(Optional, number): Specifies the maximum number of events to obtain. -
start
(Optional, string | Unit): Specifies to get events with timestamps after this time.
-
get_calendars
editGet calendar configuration info.
client.ml.getCalendars({ ... })
Arguments
edit-
Request (object):
-
calendar_id
(Optional, string): A string that uniquely identifies a calendar. You can get information for multiple calendars by using a list of ids or a wildcard expression. You can get information for all calendars by using _all or * or by omitting the calendar identifier. -
page
(Optional, { from, size }): This object is supported only when you omit the calendar identifier. -
from
(Optional, number): Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier. -
size
(Optional, number): Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.
-
get_categories
editGet anomaly detection job results for categories.
client.ml.getCategories({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
category_id
(Optional, string): Identifier for the category, which is unique in the job. If you specify neither the category ID nor the partition_field_value, the API returns information about all categories. If you specify only the partition_field_value, it returns information about all categories for the specified partition. -
page
(Optional, { from, size }): Configures pagination. This parameter has the from and size properties. -
from
(Optional, number): Skips the specified number of categories. -
partition_field_value
(Optional, string): Only return categories for the specified partition. -
size
(Optional, number): Specifies the maximum number of categories to obtain.
-
get_data_frame_analytics
editGet data frame analytics job configuration info. You can get information for multiple data frame analytics jobs in a single API request by using a comma-separated list of data frame analytics jobs or a wildcard expression.
client.ml.getDataFrameAnalytics({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of data frame analytics jobs. -
size
(Optional, number): Specifies the maximum number of data frame analytics jobs to obtain. -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
get_data_frame_analytics_stats
editGet data frame analytics jobs usage info.
client.ml.getDataFrameAnalyticsStats({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of data frame analytics jobs. -
size
(Optional, number): Specifies the maximum number of data frame analytics jobs to obtain. -
verbose
(Optional, boolean): Defines whether the stats response should be verbose.
get_datafeed_stats
editGet datafeeds usage info.
You can get statistics for multiple datafeeds in a single API request by
using a comma-separated list of datafeeds or a wildcard expression. You can
get statistics for all datafeeds by using _all
, by specifying *
as the
<feed_id>
, or by omitting the <feed_id>
. If the datafeed is stopped, the
only information you receive is the datafeed_id
and the state
.
This API returns a maximum of 10,000 datafeeds.
client.ml.getDatafeedStats({ ... })
Arguments
edit-
Request (object):
-
datafeed_id
(Optional, string | string[]): Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no datafeeds that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
get_datafeeds
editGet datafeeds configuration info.
You can get information for multiple datafeeds in a single API request by
using a comma-separated list of datafeeds or a wildcard expression. You can
get information for all datafeeds by using _all
, by specifying *
as the
<feed_id>
, or by omitting the <feed_id>
.
This API returns a maximum of 10,000 datafeeds.
client.ml.getDatafeeds({ ... })
Arguments
edit-
Request (object):
-
datafeed_id
(Optional, string | string[]): Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no datafeeds that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
get_filters
editGet filters. You can get a single filter or all filters.
client.ml.getFilters({ ... })
Arguments
edit-
Request (object):
-
filter_id
(Optional, string | string[]): A string that uniquely identifies a filter. -
from
(Optional, number): Skips the specified number of filters. -
size
(Optional, number): Specifies the maximum number of filters to obtain.
-
get_influencers
editGet anomaly detection job results for influencers.
Influencers are the entities that have contributed to, or are to blame for,
the anomalies. Influencer results are available only if an
influencer_field_name
is specified in the job configuration.
client.ml.getInfluencers({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
page
(Optional, { from, size }): Configures pagination. This parameter has the from and size properties. -
desc
(Optional, boolean): If true, the results are sorted in descending order. -
end
(Optional, string | Unit): Returns influencers with timestamps earlier than this time. The default value means it is unset and results are not limited to specific timestamps. -
exclude_interim
(Optional, boolean): If true, the output excludes interim results. By default, interim results are included. -
influencer_score
(Optional, number): Returns influencers with anomaly scores greater than or equal to this value. -
from
(Optional, number): Skips the specified number of influencers. -
size
(Optional, number): Specifies the maximum number of influencers to obtain. -
sort
(Optional, string): Specifies the sort field for the requested influencers. By default, the influencers are sorted by the influencer_score value. -
start
(Optional, string | Unit): Returns influencers with timestamps after this time. The default value means it is unset and results are not limited to specific timestamps.
-
get_job_stats
editGet anomaly detection jobs usage info.
client.ml.getJobStats({ ... })
Arguments
edit-
Request (object):
-
job_id
(Optional, string): Identifier for the anomaly detection job. It can be a job identifier, a group name, a list of jobs, or a wildcard expression. If you do not specify one of these options, the API returns information for all anomaly detection jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
get_jobs
editGet anomaly detection jobs configuration info.
You can get information for multiple anomaly detection jobs in a single API
request by using a group name, a comma-separated list of jobs, or a wildcard
expression. You can get information for all anomaly detection jobs by using
_all
, by specifying *
as the <job_id>
, or by omitting the <job_id>
.
client.ml.getJobs({ ... })
Arguments
edit-
Request (object):
-
job_id
(Optional, string | string[]): Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. If you do not specify one of these options, the API returns information for all anomaly detection jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
get_memory_stats
editGet machine learning memory usage info. Get information about how machine learning jobs and trained models are using memory, on each node, both within the JVM heap, and natively, outside of the JVM.
client.ml.getMemoryStats({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string): The names of particular nodes in the cluster to target. For example,nodeId1,nodeId2
orml:true
-
human
(Optional, boolean): Specify this query parameter to include the fields with units in the response. Otherwise only the _in_bytes sizes are returned in the response. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_model_snapshot_upgrade_stats
editGet anomaly detection job model snapshot upgrade usage info.
client.ml.getModelSnapshotUpgradeStats({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a list or a wildcard expression. You can get all snapshots by using _all, by specifying * as the snapshot ID, or by omitting the snapshot ID. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
get_model_snapshots
editGet model snapshots info.
client.ml.getModelSnapshots({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(Optional, string): A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a list or a wildcard expression. You can get all snapshots by using _all, by specifying * as the snapshot ID, or by omitting the snapshot ID. -
desc
(Optional, boolean): Refer to the description for the desc query parameter. -
end
(Optional, string | Unit): Refer to the description for the end query parameter. -
page
(Optional, { from, size }) -
sort
(Optional, string): Refer to the description for the sort query parameter. -
start
(Optional, string | Unit): Refer to the description for the start query parameter. -
from
(Optional, number): Skips the specified number of snapshots. -
size
(Optional, number): Specifies the maximum number of snapshots to obtain.
-
get_overall_buckets
editGet overall bucket results.
Retrieves overall bucket results that summarize the bucket results of multiple anomaly detection jobs.
The overall_score
is calculated by combining the scores of all the
buckets within the overall bucket span. First, the maximum
anomaly_score
per anomaly detection job in the overall bucket is
calculated. Then the top_n
of those scores are averaged to result in
the overall_score
. This means that you can fine-tune the
overall_score
so that it is more or less sensitive to the number of
jobs that detect an anomaly at the same time. For example, if you set
top_n
to 1
, the overall_score
is the maximum bucket score in the
overall bucket. Alternatively, if you set top_n
to the number of jobs,
the overall_score
is high only when all jobs detect anomalies in that
overall bucket. If you set the bucket_span
parameter (to a value
greater than its default), the overall_score
is the maximum
overall_score
of the overall buckets that have a span equal to the
jobs' largest bucket span.
client.ml.getOverallBuckets({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. It can be a job identifier, a group name, a list of jobs or groups, or a wildcard expression. You can summarize the bucket results for all anomaly detection jobs by using _all or by specifying * as the <job_id>. -
allow_no_match
(Optional, boolean): Refer to the description for the allow_no_match query parameter. -
bucket_span
(Optional, string | -1 | 0): Refer to the description for the bucket_span query parameter. -
end
(Optional, string | Unit): Refer to the description for the end query parameter. -
exclude_interim
(Optional, boolean): Refer to the description for the exclude_interim query parameter. -
overall_score
(Optional, number | string): Refer to the description for the overall_score query parameter. -
start
(Optional, string | Unit): Refer to the description for the start query parameter. -
top_n
(Optional, number): Refer to the description for the top_n query parameter.
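For example, a sketch (the job identifiers are hypothetical) that returns overall buckets where the average of the top two per-job scores is at least 50:
const response = await client.ml.getOverallBuckets({
  job_id: 'job-1,job-2',  // hypothetical job identifiers
  top_n: 2,               // average the two highest per-job anomaly_score values
  overall_score: 50       // return only overall buckets scoring 50 or higher
})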
get_records
editGet anomaly records for an anomaly detection job. Records contain the detailed analytical results. They describe the anomalous activity that has been identified in the input data based on the detector configuration. There can be many anomaly records depending on the characteristics and size of the input data. In practice, there are often too many to be able to manually process them. The machine learning features therefore perform a sophisticated aggregation of the anomaly records into buckets. The number of record results depends on the number of anomalies found in each bucket, which relates to the number of time series being modeled and the number of detectors.
client.ml.getRecords({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
desc
(Optional, boolean): Refer to the description for the desc query parameter. -
end
(Optional, string | Unit): Refer to the description for the end query parameter. -
exclude_interim
(Optional, boolean): Refer to the description for the exclude_interim query parameter. -
page
(Optional, { from, size }) -
record_score
(Optional, number): Refer to the description for the record_score query parameter. -
sort
(Optional, string): Refer to the description for the sort query parameter. -
start
(Optional, string | Unit): Refer to the description for the start query parameter. -
from
(Optional, number): Skips the specified number of records. -
size
(Optional, number): Specifies the maximum number of records to obtain.
-
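A minimal sketch (the job name is hypothetical) that retrieves the most anomalous records first:
const response = await client.ml.getRecords({
  job_id: 'my-job',     // hypothetical job identifier
  record_score: 80,     // only records scoring at least 80
  sort: 'record_score',
  desc: true
})
console.log(response.records.length)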
get_trained_models
editGet trained model configuration info.
client.ml.getTrainedModels({ ... })
Arguments
edit-
Request (object):
-
model_id
(Optional, string | string[]): The unique identifier of the trained model or a model alias. You can get information for multiple trained models in a single API request by using a list of model IDs or a wildcard expression. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no models that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, it returns an empty array when there are no matches and the subset of results when there are partial matches. -
decompress_definition
(Optional, boolean): Specifies whether the included model definition should be returned as a JSON map (true) or in a custom compressed format (false). -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster. -
from
(Optional, number): Skips the specified number of models. -
include
(Optional, Enum("definition" | "feature_importance_baseline" | "hyperparameters" | "total_feature_importance" | "definition_status")): A comma delimited string of optional fields to include in the response body. -
size
(Optional, number): Specifies the maximum number of models to obtain. -
tags
(Optional, string | string[]): A comma delimited string of tags. A trained model can have many tags, or none. When supplied, only trained models that contain all the supplied tags are returned.
get_trained_models_stats
editGet trained models usage info. You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
client.ml.getTrainedModelsStats({ ... })
Arguments
edit-
Request (object):
-
model_id
(Optional, string | string[]): The unique identifier of the trained model or a model alias. It can be a list or a wildcard expression. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no models that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, it returns an empty array when there are no matches and the subset of results when there are partial matches. -
from
(Optional, number): Skips the specified number of models. -
size
(Optional, number): Specifies the maximum number of models to obtain.
infer_trained_model
editEvaluate a trained model.
client.ml.inferTrainedModel({ model_id, docs })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
docs
(Record<string, User-defined value>[]): An array of objects to pass to the model for inference. The objects should contain fields matching your configured trained model input. Typically, for NLP models, the field name is text_field. Currently, for NLP models, only a single value is allowed. -
inference_config
(Optional, { regression, classification, text_classification, zero_shot_classification, fill_mask, ner, pass_through, text_embedding, text_expansion, question_answering }): The inference configuration updates to apply on the API call -
timeout
(Optional, string | -1 | 0): Controls the amount of time to wait for inference results.
-
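For example, a hedged sketch (the model ID and input text are hypothetical) that evaluates a deployed NLP model against a single document:
const response = await client.ml.inferTrainedModel({
  model_id: 'my-nlp-model',  // hypothetical deployed NLP model
  docs: [{ text_field: 'Elastic is headquartered in Mountain View.' }]
})
console.log(response.inference_results)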
info
editReturn ML defaults and limits. Returns defaults and limits used by machine learning. This endpoint is designed to be used by a user interface that needs to fully understand machine learning configurations where some options are not specified, meaning that the defaults should be used. This endpoint may be used to find out what those defaults are. It also provides information about the maximum size of machine learning jobs that could run in the current cluster configuration.
client.ml.info()
open_job
editOpen anomaly detection jobs. An anomaly detection job must be opened to be ready to receive and analyze data. It can be opened and closed multiple times throughout its lifecycle. When you open a new job, it starts with an empty model. When you open an existing job, the most recent model state is automatically loaded. The job is ready to resume its analysis from where it left off, once new data is received.
client.ml.openJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
timeout
(Optional, string | -1 | 0): Refer to the description for the timeout query parameter.
-
post_calendar_events
editAdd scheduled events to the calendar.
client.ml.postCalendarEvents({ calendar_id, events })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
events
({ calendar_id, event_id, description, end_time, start_time }[]): A list of one or more scheduled events. The event’s start and end times can be specified as integer milliseconds since the epoch or as a string in ISO 8601 format.
-
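A minimal sketch (the calendar ID and times are hypothetical) that schedules a maintenance window so it can be excluded from anomaly detection:
await client.ml.postCalendarEvents({
  calendar_id: 'outages',                // hypothetical calendar
  events: [{
    description: 'Planned maintenance window',
    start_time: '2025-01-01T00:00:00Z',  // ISO 8601; epoch milliseconds also accepted
    end_time: '2025-01-01T04:00:00Z'
  }]
})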
post_data
editSend data to an anomaly detection job for analysis.
For each job, data can be accepted from only a single connection at a time. It is not currently possible to post data to multiple jobs using wildcards or a comma-separated list.
client.ml.postData({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. The job must have a state of open to receive and process the data. -
data
(Optional, TData[]) -
reset_end
(Optional, string | Unit): Specifies the end of the bucket resetting range. -
reset_start
(Optional, string | Unit): Specifies the start of the bucket resetting range.
-
preview_data_frame_analytics
editPreview features used by data frame analytics. Previews the extracted features used by a data frame analytics config.
client.ml.previewDataFrameAnalytics({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. -
config
(Optional, { source, analysis, model_memory_limit, max_num_threads, analyzed_fields }): A data frame analytics config as described in create data frame analytics jobs. Note that id and dest don’t need to be provided in the context of this API.
-
preview_datafeed
editPreview a datafeed. This API returns the first "page" of search results from a datafeed. You can preview an existing datafeed or provide configuration details for a datafeed and anomaly detection job in the API. The preview shows the structure of the data that will be passed to the anomaly detection engine. IMPORTANT: When Elasticsearch security features are enabled, the preview uses the credentials of the user that called the API. However, when the datafeed starts it uses the roles of the last user that created or updated the datafeed. To get a preview that accurately reflects the behavior of the datafeed, use the appropriate credentials. You can also use secondary authorization headers to supply the credentials.
client.ml.previewDatafeed({ ... })
Arguments
edit-
Request (object):
-
datafeed_id
(Optional, string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. NOTE: If you use this path parameter, you cannot provide datafeed or anomaly detection job configuration details in the request body. -
datafeed_config
(Optional, { aggregations, chunking_config, datafeed_id, delayed_data_check_config, frequency, indices, indices_options, job_id, max_empty_searches, query, query_delay, runtime_mappings, script_fields, scroll_size }): The datafeed definition to preview. -
job_config
(Optional, { allow_lazy_open, analysis_config, analysis_limits, background_persist_interval, custom_settings, daily_model_snapshot_retention_after_days, data_description, datafeed_config, description, groups, job_id, job_type, model_plot_config, model_snapshot_retention_days, renormalization_window_days, results_index_name, results_retention_days }): The configuration details for the anomaly detection job that is associated with the datafeed. If thedatafeed_config
object does not include ajob_id
that references an existing anomaly detection job, you must supply thisjob_config
object. If you include both ajob_id
and ajob_config
, the latter information is used. You cannot specify ajob_config
object unless you also supply adatafeed_config
object. -
start
(Optional, string | Unit): The start time from where the datafeed preview should begin -
end
(Optional, string | Unit): The end time when the datafeed preview should stop
-
put_calendar
editCreate a calendar.
client.ml.putCalendar({ calendar_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
job_ids
(Optional, string[]): An array of anomaly detection job identifiers. -
description
(Optional, string): A description of the calendar.
-
put_calendar_job
editAdd anomaly detection job to calendar.
client.ml.putCalendarJob({ calendar_id, job_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
job_id
(string | string[]): An identifier for the anomaly detection jobs. It can be a job identifier, a group name, or a list of jobs or groups.
-
put_data_frame_analytics
editCreate a data frame analytics job. This API creates a data frame analytics job that performs an analysis on the source indices and stores the outcome in a destination index.
client.ml.putDataFrameAnalytics({ id, analysis, dest, source })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
analysis
({ classification, outlier_detection, regression }): The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression. -
dest
({ index, results_field }): The destination configuration. -
source
({ index, query, runtime_mappings, _source }): The configuration of how to source the analysis data. -
allow_lazy_start
(Optional, boolean): Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. If set to false and a machine learning node with capacity to run the job cannot be immediately found, the API returns an error. If set to true, the API does not return an error; the job waits in the starting state until sufficient machine learning node capacity is available. This behavior is also affected by the cluster-wide xpack.ml.max_lazy_ml_nodes setting. -
analyzed_fields
(Optional, { includes, excludes }): Specifies includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis. If analyzed_fields is not set, only the relevant fields will be included. For example, all the numeric fields for outlier detection. The supported fields vary for each type of analysis. Outlier detection requires numeric or boolean data to analyze. The algorithms don’t support missing values therefore fields that have data types other than numeric or boolean are ignored. Documents where included fields contain missing values, null values, or an array are also ignored. Therefore the dest index may contain documents that don’t have an outlier score. Regression supports fields that are numeric, boolean, text, keyword, and ip data types. It is also tolerant of missing values. Fields that are supported are included in the analysis, other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don’t contain a results field are not included in the regression analysis. Classification supports fields that are numeric, boolean, text, keyword, and ip data types. It is also tolerant of missing values. Fields that are supported are included in the analysis, other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don’t contain a results field are not included in the classification analysis. Classification analysis can be improved by mapping ordinal variable values to a single number. For example, in case of age ranges, you can model the values as 0-14 = 0, 15-24 = 1, 25-34 = 2, and so on. -
description
(Optional, string): A description of the job. -
max_num_threads
(Optional, number): The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself. -
model_memory_limit
(Optional, string): The approximate maximum amount of memory resources that are permitted for analytical processing. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. -
headers
(Optional, Record<string, string | string[]>) -
version
(Optional, string)
-
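For example, a hedged sketch (the index and job names are hypothetical) that creates a regression job predicting a price field:
await client.ml.putDataFrameAnalytics({
  id: 'house-price-regression',           // hypothetical job identifier
  source: { index: 'houses' },            // hypothetical source index
  dest: { index: 'houses-predictions' },
  analysis: {
    regression: { dependent_variable: 'price' }
  },
  model_memory_limit: '50mb'
})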
put_datafeed
editCreate a datafeed.
Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job.
You can associate only one datafeed with each anomaly detection job.
The datafeed contains a query that runs at a defined interval (frequency).
If you are concerned about delayed data, you can add a delay (query_delay) at each interval.
When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had
at the time of creation and runs the query using those same roles. If you provide secondary authorization headers,
those credentials are used instead.
You must use Kibana, this API, or the create anomaly detection jobs API to create a datafeed. Do not add a datafeed directly to the .ml-config index. Do not give users write privileges on the .ml-config index.
client.ml.putDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data. -
chunking_config
(Optional, { mode, time_span }): Datafeeds might be required to search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks are calculated; it is an advanced configuration option. -
delayed_data_check_config
(Optional, { check_window, enabled }): Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that the query_delay is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds. -
frequency
(Optional, string | -1 | 0): The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. When frequency is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation. -
indices
(Optional, string | string[]): An array of index names. Wildcards are supported. If any of the indices are in remote clusters, the machine learning nodes must have the remote_cluster_client role. -
indices_options
(Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, ignore_throttled }): Specifies index expansion options that are used during search -
job_id
(Optional, string): Identifier for the anomaly detection job. -
max_empty_searches
(Optional, number): If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops and closes the associated job after this many real-time searches return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped. By default, it is not set. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. -
query_delay
(Optional, string | -1 | 0): The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between 60s and 120s. This randomness improves the query performance when there are multiple jobs running on the same node. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Specifies runtime fields for the datafeed search. -
script_fields
(Optional, Record<string, { script, ignore_failure }>): Specifies scripts that evaluate custom expressions and returns script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields. -
scroll_size
(Optional, number): The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default. -
headers
(Optional, Record<string, string | string[]>) -
allow_no_indices
(Optional, boolean): If true, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the _all string or when no indices are specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values. -
ignore_throttled
(Optional, boolean): If true, concrete, expanded, or aliased indices are ignored when frozen. -
ignore_unavailable
(Optional, boolean): If true, unavailable indices (missing or closed) are ignored.
-
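A minimal sketch (the identifiers and index are hypothetical) that creates a datafeed for an existing anomaly detection job:
await client.ml.putDatafeed({
  datafeed_id: 'datafeed-my-job',  // hypothetical datafeed identifier
  job_id: 'my-job',                // hypothetical existing anomaly detection job
  indices: ['server-metrics'],     // hypothetical source index
  query: { match_all: {} },
  scroll_size: 1000
})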
put_filter
editCreate a filter.
A filter contains a list of strings. It can be used by one or more anomaly detection jobs.
Specifically, filters are referenced in the custom_rules
property of detector configuration objects.
client.ml.putFilter({ filter_id })
Arguments
edit-
Request (object):
-
filter_id
(string): A string that uniquely identifies a filter. -
description
(Optional, string): A description of the filter. -
items
(Optional, string[]): The items of the filter. A wildcard * can be used at the beginning or the end of an item. Up to 10000 items are allowed in each filter.
-
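For instance, a sketch (the filter ID and items are hypothetical) that creates a filter usable in detector custom_rules:
await client.ml.putFilter({
  filter_id: 'safe-domains',  // hypothetical filter identifier
  description: 'Domains to exclude from anomaly rules',
  items: ['*.example.com', 'internal.local']
})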
put_job
editCreate an anomaly detection job.
If you include a datafeed_config
, you must have read index privileges on the source index.
client.ml.putJob({ job_id, analysis_config, data_description })
Arguments
edit-
Request (object):
-
job_id
(string): The identifier for the anomaly detection job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
analysis_config
({ bucket_span, categorization_analyzer, categorization_field_name, categorization_filters, detectors, influencers, latency, model_prune_window, multivariate_by_fields, per_partition_categorization, summary_count_field_name }): Specifies how to analyze the data. After you create a job, you cannot change the analysis configuration; all the properties are informational. -
data_description
({ format, time_field, time_format, field_delimiter }): Defines the format of the input data when you send data to the job by using the post data API. Note that when you configure a datafeed, these properties are automatically set. When data is received via the post data API, it is not stored in Elasticsearch. Only the results for anomaly detection are retained. -
allow_lazy_open
(Optional, boolean): Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node. By default, if a machine learning node with capacity to run the job cannot immediately be found, the open anomaly detection jobs API returns an error. However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes
setting. If this option is set to true, the open anomaly detection jobs API does not return an error and the job waits in the opening state until sufficient machine learning node capacity is available. -
analysis_limits
(Optional, { categorization_examples_limit, model_memory_limit }): Limits can be applied for the resources required to hold the mathematical models in memory. These limits are approximate and can be set per job. They do not control the memory used by other processes, for example the Elasticsearch Java processes. -
background_persist_interval
(Optional, string | -1 | 0): Advanced configuration option. The time between each periodic persistence of the model. The default value is a randomized value between 3 to 4 hours, which avoids all jobs persisting at exactly the same time. The smallest allowed value is 1 hour. For very large models (several GB), persistence could take 10-20 minutes, so do not set the background_persist_interval
value too low. -
custom_settings
(Optional, User-defined value): Advanced configuration option. Contains custom meta data about the job. -
daily_model_snapshot_retention_after_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job. Valid values range from 0 to model_snapshot_retention_days
. -
datafeed_config
(Optional, { aggregations, chunking_config, datafeed_id, delayed_data_check_config, frequency, indices, indices_options, job_id, max_empty_searches, query, query_delay, runtime_mappings, script_fields, scroll_size }): Defines a datafeed for the anomaly detection job. If Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had at the time of creation and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead. -
description
(Optional, string): A description of the job. -
groups
(Optional, string[]): A list of job groups. A job can belong to no groups or many. -
model_plot_config
(Optional, { annotations_enabled, enabled, terms }): This advanced configuration option stores model information along with the results. It provides a more detailed view into anomaly detection. If you enable model plot it can add considerable overhead to the performance of the system; it is not feasible for jobs with many entities. Model plot provides a simplified and indicative view of the model and its bounds. It does not display complex features such as multivariate correlations or multimodal data. As such, anomalies may occasionally be reported which cannot be seen in the model plot. Model plot config can be configured when the job is created or updated later. It must be disabled if performance issues are experienced. -
model_snapshot_retention_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. By default, snapshots ten days older than the newest snapshot are deleted. -
renormalization_window_days
(Optional, number): Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket spans. -
results_index_name
(Optional, string): A text string that affects the name of the machine learning results index. By default, the job generates an index named .ml-anomalies-shared
. -
results_retention_days
(Optional, number): Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.
-
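A minimal sketch (the names are hypothetical) that creates a job computing the mean of a responsetime field in 15-minute buckets:
await client.ml.putJob({
  job_id: 'my-job',  // hypothetical job identifier
  analysis_config: {
    bucket_span: '15m',
    detectors: [{ function: 'mean', field_name: 'responsetime' }]
  },
  data_description: { time_field: 'timestamp' }
})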
put_trained_model
editCreate a trained model. Enables you to supply a trained model that is not created by data frame analytics.
client.ml.putTrainedModel({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
compressed_definition
(Optional, string): The compressed (GZipped and Base64 encoded) inference definition of the model. If compressed_definition is specified, then definition cannot be specified. -
definition
(Optional, { preprocessors, trained_model }): The inference definition for the model. If definition is specified, then compressed_definition cannot be specified. -
description
(Optional, string): A human-readable description of the inference trained model. -
inference_config
(Optional, { regression, classification, text_classification, zero_shot_classification, fill_mask, ner, pass_through, text_embedding, text_expansion, question_answering }): The default configuration for inference. This can be either a regression or classification configuration. It must match the underlying definition.trained_model’s target_type. For pre-packaged models such as ELSER the config is not required. -
input
(Optional, { field_names }): The input field names for the model definition. -
metadata
(Optional, User-defined value): An object map that contains metadata about the model. -
model_type
(Optional, Enum("tree_ensemble" | "lang_ident" | "pytorch")): The model type. -
model_size_bytes
(Optional, number): The estimated memory usage in bytes to keep the trained model in memory. This property is supported only if defer_definition_decompression is true or the model definition is not supplied. -
platform_architecture
(Optional, string): The platform architecture (if applicable) of the trained model. If the model only works on one platform, because it is heavily optimized for a particular processor architecture and OS combination, then this field specifies which. The format of the string must match the platform identifiers used by Elasticsearch, so it must be one of linux-x86_64, linux-aarch64, darwin-x86_64, darwin-aarch64, or windows-x86_64. For portable models (those that work independent of processor architecture or OS features), leave this field unset. -
tags
(Optional, string[]): An array of tags to organize the model. -
prefix_strings
(Optional, { ingest, search }): Optional prefix strings applied at inference -
defer_definition_decompression
(Optional, boolean): If set to true and a compressed_definition is provided, the request defers definition decompression and skips relevant validations. -
wait_for_completion
(Optional, boolean): Whether to wait for all child operations (e.g. model download) to complete.
-
put_trained_model_alias
editCreate or update a trained model alias. A trained model alias is a logical name used to reference a single trained model. You can use aliases instead of trained model identifiers to make it easier to reference your models. For example, you can use aliases in inference aggregations and processors. An alias must be unique and refer to only a single trained model. However, you can have multiple aliases for each trained model. If you use this API to update an alias such that it references a different trained model ID and the model uses a different type of data frame analytics, an error occurs. For example, this situation occurs if you have a trained model for regression analysis and a trained model for classification analysis; you cannot reassign an alias from one type of trained model to another. If you use this API to update an alias and there are very few input fields in common between the old and new trained models for the model alias, the API returns a warning.
client.ml.putTrainedModelAlias({ model_alias, model_id })
Arguments
edit-
Request (object):
-
model_alias
(string): The alias to create or update. This value cannot end in numbers. -
model_id
(string): The identifier for the trained model that the alias refers to. -
reassign
(Optional, boolean): Specifies whether the alias gets reassigned to the specified trained model if it is already assigned to a different model. If the alias is already assigned and this parameter is false, the API returns an error.
-
put_trained_model_definition_part
editCreate part of a trained model definition.
client.ml.putTrainedModelDefinitionPart({ model_id, part, definition, total_definition_length, total_parts })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
part
(number): The definition part number. When the definition is loaded for inference the definition parts are streamed in the order of their part number. The first part must be 0 and the final part must be total_parts - 1. -
definition
(string): The definition part for the model. Must be a base64 encoded string. -
total_definition_length
(number): The total uncompressed definition length in bytes. Not base64 encoded. -
total_parts
(number): The total number of parts that will be uploaded. Must be greater than 0.
-
put_trained_model_vocabulary
editCreate a trained model vocabulary.
This API is supported only for natural language processing (NLP) models.
The vocabulary is stored in the index as described in inference_config.*.vocabulary
of the trained model definition.
client.ml.putTrainedModelVocabulary({ model_id, vocabulary })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
vocabulary
(string[]): The model vocabulary, which must not be empty. -
merges
(Optional, string[]): The optional model merges if required by the tokenizer. -
scores
(Optional, number[]): The optional vocabulary value scores if required by the tokenizer.
-
reset_job
editReset an anomaly detection job. All model state and results are deleted. The job is ready to start over as if it had just been created. It is not currently possible to reset multiple jobs using wildcards or a comma-separated list.
client.ml.resetJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): The ID of the job to reset. -
wait_for_completion
(Optional, boolean): Should this request wait until the operation has completed before returning. -
delete_user_annotations
(Optional, boolean): Specifies whether annotations that have been added by the user should be deleted along with any auto-generated annotations when the job is reset.
-
revert_model_snapshot
editRevert to a snapshot. The machine learning features react quickly to anomalous input, learning new behaviors in data. Highly anomalous input increases the variance in the models whilst the system learns whether this is a new step-change in behavior or a one-off event. In the case where this anomalous input is known to be a one-off, then it might be appropriate to reset the model state to a time before this event. For example, you might consider reverting to a saved snapshot after Black Friday or a critical system failure.
client.ml.revertModelSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): You can specify empty as the <snapshot_id>. Reverting to the empty snapshot means the anomaly detection job starts learning a new model from scratch when it is started. -
delete_intervening_results
(Optional, boolean): Refer to the description for the delete_intervening_results query parameter.
-
set_upgrade_mode
editSet upgrade_mode for ML indices. Sets a cluster wide upgrade_mode setting that prepares machine learning indices for an upgrade. When upgrading your cluster, in some circumstances you must restart your nodes and reindex your machine learning indices. In those circumstances, there must be no machine learning jobs running. You can close the machine learning jobs, do the upgrade, then open all the jobs again. Alternatively, you can use this API to temporarily halt tasks associated with the jobs and datafeeds and prevent new jobs from opening. You can also use this API during upgrades that do not require you to reindex your machine learning indices, though stopping jobs is not a requirement in that case. You can see the current value for the upgrade_mode setting by using the get machine learning info API.
client.ml.setUpgradeMode({ ... })
Arguments
edit-
Request (object):
-
enabled
(Optional, boolean): When true, it enables upgrade_mode, which temporarily halts all job and datafeed tasks and prohibits new job and datafeed tasks from starting. -
timeout
(Optional, string | -1 | 0): The time to wait for the request to be completed.
-
start_data_frame_analytics
editStart a data frame analytics job.
A data frame analytics job can be started and stopped multiple times
throughout its lifecycle.
If the destination index does not exist, it is created automatically the
first time you start the data frame analytics job. The
index.number_of_shards
and index.number_of_replicas
settings for the
destination index are copied from the source index. If there are multiple
source indices, the destination index copies the highest setting values. The
mappings for the destination index are also copied from the source indices.
If there are any mapping conflicts, the job fails to start.
If the destination index exists, it is used as is. You can therefore set up
the destination index in advance with custom settings and mappings.
client.ml.startDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
timeout
(Optional, string | -1 | 0): Controls the amount of time to wait until the data frame analytics job starts.
-
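For example, a minimal sketch that starts a hypothetical job with a custom startup timeout:
await client.ml.startDataFrameAnalytics({
  id: 'my-dfa-job', // hypothetical job identifier
  timeout: '2m'
});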
start_datafeed
editStart datafeeds.
A datafeed must be started in order to retrieve data from Elasticsearch. A datafeed can be started and stopped multiple times throughout its lifecycle.
Before you can start a datafeed, the anomaly detection job must be open. Otherwise, an error occurs.
If you restart a stopped datafeed, it continues processing input data from the next millisecond after it was stopped. If new data was indexed for that exact millisecond between stopping and starting, it will be ignored.
When Elasticsearch security features are enabled, your datafeed remembers which roles the last user to create or update it had at the time of creation or update and runs the query using those same roles. If you provided secondary authorization headers when you created or updated the datafeed, those credentials are used instead.
client.ml.startDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
end
(Optional, string | Unit): Refer to the description for the end query parameter. -
start
(Optional, string | Unit): Refer to the description for the start query parameter. -
timeout
(Optional, string | -1 | 0): Refer to the description for the timeout query parameter.
-
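For example, a minimal sketch that starts a hypothetical datafeed from a fixed timestamp and leaves it running in real time (no end):
await client.ml.startDatafeed({
  datafeed_id: 'datafeed-my-anomaly-job', // hypothetical datafeed ID
  start: '2024-01-01T00:00:00Z'
});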
start_trained_model_deployment
editStart a trained model deployment. It allocates the model to every machine learning node.
client.ml.startTrainedModelDeployment({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. Currently, only PyTorch models are supported. -
cache_size
(Optional, number | string): The inference cache size (in memory outside the JVM heap) per node for the model. The default value is the same size as the model_size_bytes. To disable the cache, 0b can be provided. -
deployment_id
(Optional, string): A unique identifier for the deployment of the model. -
number_of_allocations
(Optional, number): The number of model allocations on each node where the model is deployed. All allocations on a node share the same copy of the model in memory but use a separate set of threads to evaluate the model. Increasing this value generally increases the throughput. If this setting is greater than the number of hardware threads, it is automatically reduced to a value less than the number of hardware threads. -
priority
(Optional, Enum("normal" | "low")): The deployment priority. -
queue_capacity
(Optional, number): Specifies the number of inference requests that are allowed in the queue. After the number of requests exceeds this value, new requests are rejected with a 429 error. -
threads_per_allocation
(Optional, number): Sets the number of threads used by each model allocation during inference. This generally increases the inference speed. The inference process is a compute-bound process; any number greater than the number of available hardware threads on the machine does not increase the inference speed. If this setting is greater than the number of hardware threads, it is automatically reduced to a value less than the number of hardware threads. -
timeout
(Optional, string | -1 | 0): Specifies the amount of time to wait for the model to deploy. -
wait_for
(Optional, Enum("started" | "starting" | "fully_allocated")): Specifies the allocation status to wait for before returning.
-
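For example, a minimal sketch that deploys a hypothetical model with two allocations per node and blocks until the deployment is started:
await client.ml.startTrainedModelDeployment({
  model_id: 'my-nlp-model', // hypothetical model ID
  number_of_allocations: 2,
  threads_per_allocation: 1,
  wait_for: 'started'
});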
stop_data_frame_analytics
editStop data frame analytics jobs. A data frame analytics job can be started and stopped multiple times throughout its lifecycle.
client.ml.stopDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
force
(Optional, boolean): If true, the data frame analytics job is stopped forcefully. -
timeout
(Optional, string | -1 | 0): Controls the amount of time to wait until the data frame analytics job stops. Defaults to 20 seconds.
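For example, a minimal sketch that force-stops all jobs matching a hypothetical wildcard expression:
await client.ml.stopDataFrameAnalytics({
  id: 'my-dfa-*', // hypothetical wildcard expression
  allow_no_match: true,
  force: true
});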
stop_datafeed
editStop datafeeds. A datafeed that is stopped ceases to retrieve data from Elasticsearch. A datafeed can be started and stopped multiple times throughout its lifecycle.
client.ml.stopDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): Identifier for the datafeed. You can stop multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can close all datafeeds by using _all or by specifying * as the identifier. -
allow_no_match
(Optional, boolean): Refer to the description for the allow_no_match query parameter. -
force
(Optional, boolean): Refer to the description for the force query parameter. -
timeout
(Optional, string | -1 | 0): Refer to the description for the timeout query parameter.
-
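For example, a minimal sketch that stops every datafeed in the cluster:
await client.ml.stopDatafeed({
  datafeed_id: '_all',
  timeout: '30s'
});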
stop_trained_model_deployment
editStop a trained model deployment.
client.ml.stopTrainedModelDeployment({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no deployments that match; contains the _all string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. By default, it returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches. -
force
(Optional, boolean): Forcefully stops the deployment, even if it is used by ingest pipelines. You can’t use these pipelines until you restart the model deployment.
-
update_data_frame_analytics
editUpdate a data frame analytics job.
client.ml.updateDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
description
(Optional, string): A description of the job. -
model_memory_limit
(Optional, string): The approximate maximum amount of memory resources that are permitted for analytical processing. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. -
max_num_threads
(Optional, number): The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself. -
allow_lazy_start
(Optional, boolean): Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
-
update_datafeed
editUpdate a datafeed. You must stop and start the datafeed for the changes to be applied. When Elasticsearch security features are enabled, your datafeed remembers which roles the user who updated it had at the time of the update and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead.
client.ml.updateDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data. -
chunking_config
(Optional, { mode, time_span }): Datafeeds might search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks are calculated; it is an advanced configuration option. -
delayed_data_check_config
(Optional, { check_window, enabled }): Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that the query_delay is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds. -
frequency
(Optional, string | -1 | 0): The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. When frequency is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation. -
indices
(Optional, string[]): An array of index names. Wildcards are supported. If any of the indices are in remote clusters, the machine learning nodes must have the remote_cluster_client role. -
indices_options
(Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, ignore_throttled }): Specifies index expansion options that are used during search. -
job_id
(Optional, string) -
max_empty_searches
(Optional, number): If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops and closes the associated job after this many real-time searches return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped. By default, it is not set. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. Note that if you change the query, the analyzed data is also changed. Therefore, the time required to learn might be long and the understandability of the results is unpredictable. If you want to make significant changes to the source data, it is recommended that you clone the job and datafeed and make the amendments in the clone. Let both run in parallel and close one when you are satisfied with the results of the job. -
query_delay
(Optional, string | -1 | 0): The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between 60s and 120s. This randomness improves the query performance when there are multiple jobs running on the same node. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Specifies runtime fields for the datafeed search. -
script_fields
(Optional, Record<string, { script, ignore_failure }>): Specifies scripts that evaluate custom expressions and returns script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields. -
scroll_size
(Optional, number): The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window. -
allow_no_indices
(Optional, boolean): If true, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the _all string or when no indices are specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values. Valid values are:
-
-
all
: Match any data stream or index, including hidden ones. -
closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed. -
hidden
: Match hidden data streams and hidden indices. Must be combined with open, closed, or both. -
none
: Wildcard patterns are not accepted. -
open
: Match open, non-hidden indices. Also matches any non-hidden data stream. -
ignore_throttled
(Optional, boolean): If true, concrete, expanded or aliased indices are ignored when frozen. -
ignore_unavailable
(Optional, boolean): If true, unavailable indices (missing or closed) are ignored.
-
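For example, a minimal sketch that narrows a hypothetical (stopped) datafeed to a single host and reduces its scroll size:
await client.ml.updateDatafeed({
  datafeed_id: 'datafeed-my-anomaly-job', // hypothetical datafeed ID
  query: { term: { 'host.name': 'web-01' } }, // hypothetical field and value
  scroll_size: 500
});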
update_filter
editUpdate a filter. Updates the description of a filter, adds items, or removes items from the list.
client.ml.updateFilter({ filter_id })
Arguments
edit-
Request (object):
-
filter_id
(string): A string that uniquely identifies a filter. -
add_items
(Optional, string[]): The items to add to the filter. -
description
(Optional, string): A description for the filter. -
remove_items
(Optional, string[]): The items to remove from the filter.
-
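For example, a minimal sketch that adds and removes items from a hypothetical filter in one call:
await client.ml.updateFilter({
  filter_id: 'safe-domains', // hypothetical filter ID
  add_items: ['example.com'],
  remove_items: ['old.example.org'],
  description: 'Domains excluded from anomaly results'
});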
update_job
editUpdate an anomaly detection job. Updates certain properties of an anomaly detection job.
client.ml.updateJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the job. -
allow_lazy_open
(Optional, boolean): Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node. If false and a machine learning node with capacity to run the job cannot immediately be found, the open anomaly detection jobs API returns an error. However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes setting. If this option is set to true, the open anomaly detection jobs API does not return an error and the job waits in the opening state until sufficient machine learning node capacity is available. -
analysis_limits
(Optional, { model_memory_limit }) -
background_persist_interval
(Optional, string | -1 | 0): Advanced configuration option. The time between each periodic persistence of the model. The default value is a randomized value between 3 and 4 hours, which avoids all jobs persisting at exactly the same time. The smallest allowed value is 1 hour. For very large models (several GB), persistence could take 10-20 minutes, so do not set the value too low. If the job is open when you make the update, you must stop the datafeed, close the job, then reopen the job and restart the datafeed for the changes to take effect. -
custom_settings
(Optional, Record<string, User-defined value>): Advanced configuration option. Contains custom metadata about the job. For example, it can contain custom URL information as shown in Adding custom URLs to machine learning results. -
categorization_filters
(Optional, string[]) -
description
(Optional, string): A description of the job. -
model_plot_config
(Optional, { annotations_enabled, enabled, terms }) -
model_prune_window
(Optional, string | -1 | 0) -
daily_model_snapshot_retention_after_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job. Valid values range from 0 to model_snapshot_retention_days. For jobs created before version 7.8.0, the default value matches model_snapshot_retention_days. -
model_snapshot_retention_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. -
renormalization_window_days
(Optional, number): Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. -
results_retention_days
(Optional, number): Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. -
groups
(Optional, string[]): A list of job groups. A job can belong to no groups or many. -
detectors
(Optional, { by_field_name, custom_rules, detector_description, detector_index, exclude_frequent, field_name, function, over_field_name, partition_field_name, use_null }[]): An array of detector update objects. -
per_partition_categorization
(Optional, { enabled, stop_on_warn }): Settings related to how categorization interacts with partition fields.
-
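For example, a minimal sketch that updates the description and memory limit of a hypothetical job:
await client.ml.updateJob({
  job_id: 'my-anomaly-job', // hypothetical job ID
  description: 'Latency analysis for web servers',
  analysis_limits: { model_memory_limit: '1gb' }
});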
update_model_snapshot
editUpdate a snapshot. Updates certain properties of a snapshot.
client.ml.updateModelSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): Identifier for the model snapshot. -
description
(Optional, string): A description of the model snapshot. -
retain
(Optional, boolean): If true, this snapshot will not be deleted during automatic cleanup of snapshots older than model_snapshot_retention_days. However, this snapshot will be deleted when the job is deleted.
-
update_trained_model_deployment
editUpdate a trained model deployment.
client.ml.updateTrainedModelDeployment({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. Currently, only PyTorch models are supported. -
number_of_allocations
(Optional, number): The number of model allocations on each node where the model is deployed. All allocations on a node share the same copy of the model in memory but use a separate set of threads to evaluate the model. Increasing this value generally increases the throughput. If this setting is greater than the number of hardware threads, it is automatically reduced to a value less than the number of hardware threads.
-
upgrade_job_snapshot
editUpgrade a snapshot. Upgrades an anomaly detection model snapshot to the latest major version. Over time, older snapshot formats are deprecated and removed. Anomaly detection jobs support only snapshots that are from the current or previous major version. This API provides a means to upgrade a snapshot to the current major version. This aids in preparing the cluster for an upgrade to the next major version. Only one snapshot per anomaly detection job can be upgraded at a time and the upgraded snapshot cannot be the current snapshot of the anomaly detection job.
client.ml.upgradeJobSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): A numerical character string that uniquely identifies the model snapshot. -
wait_for_completion
(Optional, boolean): When true, the API won’t respond until the upgrade is complete. Otherwise, it responds as soon as the upgrade task is assigned to a node. -
timeout
(Optional, string | -1 | 0): Controls the time to wait for the request to complete.
-
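For example, a minimal sketch that upgrades a hypothetical snapshot and returns as soon as the upgrade task is assigned to a node:
await client.ml.upgradeJobSnapshot({
  job_id: 'my-anomaly-job',  // hypothetical job ID
  snapshot_id: '1575402237', // hypothetical snapshot ID
  wait_for_completion: false
});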
monitoring
editbulk
editUsed by the monitoring features to send monitoring data.
client.monitoring.bulk({ system_id, system_api_version, interval })
Arguments
edit-
Request (object):
-
system_id
(string): Identifier of the monitored system -
system_api_version
(string) -
interval
(string | -1 | 0): Collection interval (e.g., 10s or 10000ms) of the payload -
type
(Optional, string): Default document type for items which don’t provide one -
operations
(Optional, { index, create, update, delete } | { detect_noop, doc, doc_as_upsert, script, scripted_upsert, _source, upsert } | object[])
-
nodes
editclear_repositories_metering_archive
editClear the archived repositories metering. Clear the archived repositories metering information in the cluster.
client.nodes.clearRepositoriesMeteringArchive({ node_id, max_archive_version })
Arguments
edit-
Request (object):
-
node_id
(string | string[]): List of node IDs or names used to limit returned information. All the nodes selective options are explained [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html#cluster-nodes). -
max_archive_version
(number): Specifies the maximum [archive_version](https://www.elastic.co/guide/en/elasticsearch/reference/current/get-repositories-metering-api.html#get-repositories-metering-api-response-body) to be cleared from the archive.
-
get_repositories_metering_info
editGet cluster repositories metering. Get repositories metering information for a cluster. This API exposes monotonically non-decreasing counters and it is expected that clients would durably store the information needed to compute aggregations over a period of time. Additionally, the information exposed by this API is volatile, meaning that it will not be present after node restarts.
client.nodes.getRepositoriesMeteringInfo({ node_id })
Arguments
edit-
Request (object):
-
node_id
(string | string[]): List of node IDs or names used to limit returned information. All the nodes selective options are explained [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html#cluster-nodes).
-
hot_threads
editGet the hot threads for nodes. Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.
client.nodes.hotThreads({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node IDs or names used to limit returned information. -
ignore_idle_threads
(Optional, boolean): If true, known idle threads (e.g. waiting in a socket select, or to get a task from an empty queue) are filtered out. -
interval
(Optional, string | -1 | 0): The interval to do the second sampling of threads. -
snapshots
(Optional, number): Number of samples of thread stacktrace. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
threads
(Optional, number): Specifies the number of hot threads to provide information for. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
type
(Optional, Enum("cpu" | "wait" | "block" | "gpu" | "mem")): The type to sample. -
sort
(Optional, Enum("cpu" | "wait" | "block" | "gpu" | "mem")): The sort order for cpu type (default: total)
-
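For example, a minimal sketch that samples the top three CPU hot threads on every node, skipping known idle threads:
const text = await client.nodes.hotThreads({
  type: 'cpu',
  threads: 3,
  ignore_idle_threads: true
});
console.log(text); // plain-text breakdown per node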
info
editGet node information. By default, the API returns all attributes and core settings for cluster nodes.
client.nodes.info({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node IDs or names used to limit returned information. -
metric
(Optional, string | string[]): Limits the information returned to the specific metrics. Supports a list, such as http,ingest. -
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
reload_secure_settings
editReload the keystore on nodes in the cluster.
Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable. That is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.
When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.
client.nodes.reloadSecureSettings({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): The names of particular nodes in the cluster to target. -
secure_settings_password
(Optional, string): The password for the Elasticsearch keystore. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
stats
editGet node statistics. Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.
client.nodes.stats({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node IDs or names used to limit returned information. -
metric
(Optional, string | string[]): Limit the information returned to the specified metrics -
index_metric
(Optional, string | string[]): Limit the information returned for indices metric to the specific index metrics. It can be used only if indices (or all) metric is specified. -
completion_fields
(Optional, string | string[]): List or wildcard expressions of fields to include in fielddata and suggest statistics. -
fielddata_fields
(Optional, string | string[]): List or wildcard expressions of fields to include in fielddata statistics. -
fields
(Optional, string | string[]): List or wildcard expressions of fields to include in the statistics. -
groups
(Optional, boolean): List of search groups to include in the search statistics. -
include_segment_file_sizes
(Optional, boolean): If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested). -
level
(Optional, Enum("cluster" | "indices" | "shards")): Indicates whether statistics are aggregated at the cluster, index, or shard level. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
types
(Optional, string[]): A list of document types for the indexing index metric. -
include_unloaded_segments
(Optional, boolean): Iftrue
, the response includes information from segments that are not loaded into memory.
-
usage
editGet feature usage information.
client.nodes.usage({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): A list of node IDs or names to limit the returned information; use _local to return information from the node you’re connecting to, leave empty to get information from all nodes -
metric
(Optional, string | string[]): Limits the information returned to the specific metrics. A list of the following options: _all, rest_actions. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
query_rules
editdelete_rule
editDelete a query rule. Delete a query rule within a query ruleset.
client.queryRules.deleteRule({ ruleset_id, rule_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset containing the rule to delete -
rule_id
(string): The unique identifier of the query rule within the specified ruleset to delete
-
delete_ruleset
editDelete a query ruleset.
client.queryRules.deleteRuleset({ ruleset_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset to delete
-
get_rule
editGet a query rule. Get details about a query rule within a query ruleset.
client.queryRules.getRule({ ruleset_id, rule_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset containing the rule to retrieve -
rule_id
(string): The unique identifier of the query rule within the specified ruleset to retrieve
-
get_ruleset
editGet a query ruleset. Get details about a query ruleset.
client.queryRules.getRuleset({ ruleset_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset
-
list_rulesets
editGet all query rulesets. Get summarized information about the query rulesets.
client.queryRules.listRulesets({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): Starting offset (default: 0) -
size
(Optional, number): Specifies a max number of results to get.
-
put_rule
editCreate or update a query rule. Create or update a query rule within a query ruleset.
client.queryRules.putRule({ ruleset_id, rule_id, type, criteria, actions })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset containing the rule to be created or updated -
rule_id
(string): The unique identifier of the query rule within the specified ruleset to be created or updated -
type
(Enum("pinned" | "exclude")) -
criteria
({ type, metadata, values } | { type, metadata, values }[]) -
actions
({ ids, docs }) -
priority
(Optional, number)
-
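For example, a minimal sketch that pins two documents when the user query exactly matches a hypothetical phrase (the metadata key, values, and document IDs are assumptions):
await client.queryRules.putRule({
  ruleset_id: 'my-ruleset', // hypothetical ruleset ID
  rule_id: 'promote-docs',  // hypothetical rule ID
  type: 'pinned',
  criteria: [{ type: 'exact', metadata: 'user_query', values: ['pugs'] }],
  actions: { ids: ['doc-1', 'doc-2'] }
});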
put_ruleset
editCreate or update a query ruleset.
client.queryRules.putRuleset({ ruleset_id, rules })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset to be created or updated -
rules
({ rule_id, type, criteria, actions, priority } | { rule_id, type, criteria, actions, priority }[])
-
test
editTest a query ruleset. Evaluate match criteria against a query ruleset to identify the rules that would match that criteria.
client.queryRules.test({ ruleset_id, match_criteria })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset to be tested -
match_criteria
(Record<string, User-defined value>)
-
rollup
editdelete_job
editDeletes an existing rollup job.
client.rollup.deleteJob({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the job.
-
get_jobs
editRetrieves the configuration, stats, and status of rollup jobs.
client.rollup.getJobs({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the rollup job. If it is _all or omitted, the API returns all rollup jobs.
-
get_rollup_caps
editReturns the capabilities of any rollup jobs that have been configured for a specific index or index pattern.
client.rollup.getRollupCaps({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Index, indices or index-pattern to return rollup capabilities for. _all may be used to fetch rollup capabilities from all jobs.
-
get_rollup_index_caps
editReturns the rollup capabilities of all jobs inside of a rollup index (for example, the index where rollup data is stored).
client.rollup.getRollupIndexCaps({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): Data stream or index to check for rollup capabilities. Wildcard (*) expressions are supported.
-
put_job
editCreates a rollup job.
client.rollup.putJob({ id, cron, groups, index_pattern, page_size, rollup_index })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the rollup job. This can be any alphanumeric string and uniquely identifies the data that is associated with the rollup job. The ID is persistent; it is stored with the rolled up data. If you create a job, let it run for a while, then delete the job, the data that the job rolled up is still associated with this job ID. You cannot create a new job with the same ID since that could lead to problems with mismatched job configurations. -
cron
(string): A cron string which defines the intervals when the rollup job should be executed. When the interval triggers, the indexer attempts to roll up the data in the index pattern. The cron pattern is unrelated to the time interval of the data being rolled up. For example, you may wish to create hourly rollups of your documents but to only run the indexer on a daily basis at midnight, as defined by the cron. The cron pattern is defined just like a Watcher cron schedule. -
groups
({ date_histogram, histogram, terms }): Defines the grouping fields and aggregations that are defined for this rollup job. These fields will then be available later for aggregating into buckets. These aggs and fields can be used in any combination. Think of the groups configuration as defining a set of tools that can later be used in aggregations to partition the data. Unlike raw data, we have to think ahead to which fields and aggregations might be used. Rollups provide enough flexibility that you simply need to determine which fields are needed, not in what order they are needed. -
index_pattern
(string): The index or index pattern to roll up. Supports wildcard-style patterns (logstash-*). The job attempts to roll up the entire index or index-pattern. -
page_size
(number): The number of bucket results that are processed on each iteration of the rollup indexer. A larger value tends to execute faster, but requires more memory during processing. This value has no effect on how the data is rolled up; it is merely used for tweaking the speed or memory cost of the indexer. -
rollup_index
(string): The index that contains the rollup results. The index can be shared with other rollup jobs. The data is stored so that it doesn’t interfere with unrelated jobs. -
metrics
(Optional, { field, metrics }[]): Defines the metrics to collect for each grouping tuple. By default, only the doc_counts are collected for each group. To make rollup useful, you will often add metrics like averages, mins, maxes, etc. Metrics are defined on a per-field basis and for each field you configure which metric should be collected. -
timeout
(Optional, string | -1 | 0): Time to wait for the request to complete. -
headers
(Optional, Record<string, string | string[]>)
-
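For example, a minimal sketch that rolls up hypothetical sensor data hourly (the field names, indices, and grouping configuration are assumptions):
await client.rollup.putJob({
  id: 'sensor-rollup-job',       // hypothetical job ID
  index_pattern: 'sensor-*',     // hypothetical source indices
  rollup_index: 'sensor_rollup', // hypothetical destination index
  cron: '0 0 * * * ?',           // run the indexer at the top of every hour
  page_size: 1000,
  groups: {
    date_histogram: { field: 'timestamp', fixed_interval: '1h' }
  },
  metrics: [{ field: 'temperature', metrics: ['min', 'max', 'avg'] }]
});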
rollup_search
editEnables searching rolled-up data using the standard Query DSL.
client.rollup.rollupSearch({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): Enables searching rolled-up data using the standard Query DSL. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): Specifies aggregations. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Specifies a DSL query. -
size
(Optional, number): Must be zero if set, as rollups work on pre-aggregated data. -
rest_total_hits_as_int
(Optional, boolean): Indicates whether hits.total should be rendered as an integer or an object in the REST search response. -
typed_keys
(Optional, boolean): Specify whether aggregation and suggester names should be prefixed by their respective types in the response.
-
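For example, a minimal sketch that aggregates over a hypothetical rollup index (note that size must be zero):
const result = await client.rollup.rollupSearch({
  index: 'sensor_rollup', // hypothetical rollup index
  size: 0,
  aggregations: {
    max_temperature: { max: { field: 'temperature' } } // hypothetical field
  }
});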
start_job
editStarts an existing, stopped rollup job.
client.rollup.startJob({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the rollup job.
-
stop_job
editStops an existing, started rollup job.
client.rollup.stopJob({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the rollup job. -
timeout
(Optional, string | -1 | 0): If wait_for_completion is true, the API blocks for (at maximum) the specified duration while waiting for the job to stop. If more than timeout time has passed, the API throws a timeout exception. -
wait_for_completion
(Optional, boolean): If set to true, causes the API to block until the indexer state completely stops. If set to false, the API returns immediately and the indexer is stopped asynchronously in the background.
-
search_application
editdelete
editDelete a search application. Remove a search application and its associated alias. Indices attached to the search application are not removed.
client.searchApplication.delete({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application to delete
-
delete_behavioral_analytics
editDelete a behavioral analytics collection. The associated data stream is also deleted.
client.searchApplication.deleteBehavioralAnalytics({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the analytics collection to be deleted
-
get
editGet search application details.
client.searchApplication.get({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application
-
get_behavioral_analytics
editGet behavioral analytics collections.
client.searchApplication.getBehavioralAnalytics({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string[]): A list of analytics collections to limit the returned information
-
list
editReturns the existing search applications.
client.searchApplication.list({ ... })
Arguments
edit-
Request (object):
-
q
(Optional, string): Query in the Lucene query string syntax. -
from
(Optional, number): Starting offset. -
size
(Optional, number): Specifies a max number of results to get.
-
post_behavioral_analytics_event
editCreates a behavioral analytics event for an existing collection.
client.searchApplication.postBehavioralAnalyticsEvent()
put
editCreate or update a search application.
client.searchApplication.put({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application to be created or updated. -
search_application
(Optional, { name, indices, updated_at_millis, analytics_collection_name, template }) -
create
(Optional, boolean): If true, this request cannot replace or update existing Search Applications.
-
put_behavioral_analytics
editCreate a behavioral analytics collection.
client.searchApplication.putBehavioralAnalytics({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the analytics collection to be created or updated.
-
render_query
editRenders a query for the given search application search parameters.
client.searchApplication.renderQuery()
search
editRun a search application search. Generate and run an Elasticsearch query that uses the specified query parameters and the search template associated with the search application or default template. Unspecified template parameters are assigned their default values if applicable.
client.searchApplication.search({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application to be searched. -
params
(Optional, Record<string, User-defined value>): Query parameters specific to this request, which will override any defaults specified in the template. -
typed_keys
(Optional, boolean): Determines whether aggregation names are prefixed by their respective types in the response.
-
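For example, a minimal sketch that searches a hypothetical search application, overriding one template parameter:
const result = await client.searchApplication.search({
  name: 'my-search-app',           // hypothetical application name
  params: { query_string: 'pugs' } // hypothetical template parameter
});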
searchable_snapshots
editcache_stats
editRetrieve node-level cache statistics about searchable snapshots.
client.searchableSnapshots.cacheStats({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): A list of node IDs or names to limit the returned information; use _local to return information from the node you’re connecting to, leave empty to get information from all nodes -
master_timeout
(Optional, string | -1 | 0)
-
clear_cache
editClear the cache of searchable snapshots.
client.searchableSnapshots.clearCache({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of index names -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes the _all string or when no indices have been specified) -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed) -
pretty
(Optional, boolean) -
human
(Optional, boolean)
-
mount
editMount a snapshot as a searchable index.
client.searchableSnapshots.mount({ repository, snapshot, index })
Arguments
edit-
Request (object):
-
repository
(string): The name of the repository containing the snapshot of the index to mount -
snapshot
(string): The name of the snapshot of the index to mount -
index
(string) -
renamed_index
(Optional, string) -
index_settings
(Optional, Record<string, User-defined value>) -
ignore_index_settings
(Optional, string[]) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
wait_for_completion
(Optional, boolean): Specifies whether the request should wait until the operation has completed before returning -
storage
(Optional, string): Selects the kind of local storage used to accelerate searches. Experimental, and defaults to full_copy
-
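For example, a minimal sketch that mounts an index from a hypothetical snapshot under a new name and waits for completion:
await client.searchableSnapshots.mount({
  repository: 'my-repo',   // hypothetical repository name
  snapshot: 'my-snapshot', // hypothetical snapshot name
  index: 'my-index',       // hypothetical index inside the snapshot
  renamed_index: 'my-index-mounted',
  wait_for_completion: true
});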
stats
editRetrieve shard-level statistics about searchable snapshots.
client.searchableSnapshots.stats({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of index names -
level
(Optional, Enum("cluster" | "indices" | "shards")): Return stats aggregated at cluster, index or shard level
-
security
editactivate_user_profile
editActivate a user profile.
Create or update a user profile on behalf of another user.
client.security.activateUserProfile({ grant_type })
Arguments
edit-
Request (object):
-
grant_type
(Enum("password" | "access_token")) -
access_token
(Optional, string) -
password
(Optional, string) -
username
(Optional, string)
-
authenticate
editAuthenticate a user.
Authenticates a user and returns information about the authenticated user. Include the user information in a [basic auth header](https://en.wikipedia.org/wiki/Basic_access_authentication). A successful call returns a JSON structure that shows user information such as their username, the roles that are assigned to the user, any assigned metadata, and information about the realms that authenticated and authorized the user. If the user cannot be authenticated, this API returns a 401 status code.
client.security.authenticate()
bulk_delete_role
editBulk delete roles.
The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The bulk delete roles API cannot delete roles that are defined in roles files.
client.security.bulkDeleteRole({ names })
Arguments
edit-
Request (object):
-
names
(string[]): An array of role names to delete -
refresh
(Optional, Enum(true | false | "wait_for")): If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.
-
bulk_put_role
editBulk create or update roles.
The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The bulk create or update roles API cannot update roles that are defined in roles files.
client.security.bulkPutRole({ roles })
Arguments
edit-
Request (object):
-
roles
(Record<string, { cluster, indices, remote_indices, remote_cluster, global, applications, metadata, run_as, description, restriction, transient_metadata }>): A dictionary of role name to RoleDescriptor objects to add or update -
refresh
(Optional, Enum(true | false | "wait_for")): If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.
-
bulk_update_api_keys
editUpdates the attributes of multiple existing API keys.
client.security.bulkUpdateApiKeys()
change_password
editChange passwords.
Change the passwords of users in the native realm and built-in users.
client.security.changePassword({ ... })
Arguments
edit-
Request (object):
-
username
(Optional, string): The user whose password you want to change. If you do not specify this parameter, the password is changed for the current user. -
password
(Optional, string): The new password value. Passwords must be at least 6 characters long. -
password_hash
(Optional, string): A hash of the new password value. This must be produced using the same hashing algorithm as has been configured for password storage. For more details, see the explanation of thexpack.security.authc.password_hashing.algorithm
setting. -
refresh
(Optional, Enum(true | false | "wait_for")): If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.
-
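For example, a minimal sketch that changes the password of a hypothetical native-realm user:
await client.security.changePassword({
  username: 'jacknich',          // hypothetical user
  password: 'new-test-password', // must be at least 6 characters long
  refresh: 'wait_for'
});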
clear_api_key_cache
editClear the API key cache.
Evict a subset of all entries from the API key cache. The cache is also automatically cleared on state changes of the security index.
client.security.clearApiKeyCache({ ids })
Arguments
edit-
Request (object):
-
ids
(string | string[]): List of API key IDs to evict from the API key cache. To evict all API keys, use *. Does not support other wildcard patterns.
-
clear_cached_privileges
editClear the privileges cache.
Evict privileges from the native application privilege cache. The cache is also automatically cleared for applications that have their privileges updated.
client.security.clearCachedPrivileges({ application })
Arguments
edit-
Request (object):
-
application
(string): A list of application names
-
clear_cached_realms
editClear the user cache.
Evict users from the user cache. You can completely clear the cache or evict specific users.
client.security.clearCachedRealms({ realms })
Arguments
edit-
Request (object):
-
realms
(string | string[]): List of realms to clear -
usernames
(Optional, string[]): List of usernames to clear from the cache
-
clear_cached_roles
editClear the roles cache.
Evict roles from the native role cache.
client.security.clearCachedRoles({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): Role name
-
clear_cached_service_tokens
editClear service account token caches.
Evict a subset of all entries from the service account token caches.
client.security.clearCachedServiceTokens({ namespace, service, name })
Arguments
edit-
Request (object):
-
namespace
(string): An identifier for the namespace -
service
(string): An identifier for the service name -
name
(string | string[]): A list of service token names
-
create_api_key
editCreate an API key.
Create an API key for access without requiring basic authentication. A successful request returns a JSON structure that contains the API key, its unique ID, and its name. If applicable, it also returns expiration information for the API key in milliseconds. NOTE: By default, API keys never expire. You can specify expiration information when you create the API keys.
client.security.createApiKey({ ... })
Arguments
edit-
Request (object):
-
expiration
(Optional, string | -1 | 0): Expiration time for the API key. By default, API keys never expire. -
name
(Optional, string): Specifies the name for this API key. -
role_descriptors
(Optional, Record<string, { cluster, indices, remote_indices, remote_cluster, global, applications, metadata, run_as, description, restriction, transient_metadata }>): An array of role descriptors for this API key. This parameter is optional. When it is not specified or is an empty array, then the API key will have a point in time snapshot of permissions of the authenticated user. If you supply role descriptors then the resultant permissions would be an intersection of API keys permissions and authenticated user’s permissions thereby limiting the access scope for API keys. The structure of role descriptor is the same as the request for create role API. For more details, see create or update roles API. -
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with _ are reserved for system usage. -
refresh
(Optional, Enum(true | false | "wait_for")): If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.
-
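For example, a minimal sketch that creates an API key with a one-day expiration and a restricted role descriptor (the key name, role name, and index pattern are assumptions):
const key = await client.security.createApiKey({
  name: 'my-api-key',
  expiration: '1d',
  role_descriptors: {
    'read-only-role': {
      indices: [{ names: ['my-index-*'], privileges: ['read'] }]
    }
  },
  metadata: { application: 'my-app' }
});
console.log(key.id, key.api_key);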
create_cross_cluster_api_key
editCreate a cross-cluster API key.
Create an API key of the cross_cluster type for the API key based remote cluster access.
A cross_cluster API key cannot be used to authenticate through the REST interface.
To authenticate this request you must use a credential that is not an API key. Even if you use an API key that has the required privilege, the API returns an error.
Cross-cluster API keys are created by the Elasticsearch API key service, which is automatically enabled.
Unlike REST API keys, a cross-cluster API key does not capture permissions of the authenticated user. The API key’s effective permission is exactly as specified with the access property.
A successful request returns a JSON structure that contains the API key, its unique ID, and its name. If applicable, it also returns expiration information for the API key in milliseconds.
By default, API keys never expire. You can specify expiration information when you create the API keys.
Cross-cluster API keys can only be updated with the update cross-cluster API key API. Attempting to update them with the update REST API key API or the bulk update REST API keys API will result in an error.
client.security.createCrossClusterApiKey({ access, name })
Arguments
edit-
Request (object):
-
access
({ replication, search }): The access to be granted to this API key. The access is composed of permissions for cross-cluster search and cross-cluster replication. At least one of them must be specified. No explicit privileges should be specified for either search or replication access. The creation process automatically converts the access specification to a role descriptor which has relevant privileges assigned accordingly. -
name
(string): Specifies the name for this API key. -
expiration
(Optional, string | -1 | 0): Expiration time for the API key. By default, API keys never expire. -
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with _ are reserved for system usage.
-
create_service_token
editCreate a service account token.
Create a service account token for access without requiring basic authentication.
client.security.createServiceToken({ namespace, service })
Arguments
edit-
Request (object):
-
namespace
(string): An identifier for the namespace -
service
(string): An identifier for the service name -
name
(Optional, string): An identifier for the token name -
refresh
(Optional, Enum(true | false | "wait_for")): If true then refresh the affected shards to make this operation visible to search, if wait_for (the default) then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.
-
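As a usage sketch, creating a token for the built-in elastic/fleet-server service account with a placeholder token name:
const created = await client.security.createServiceToken({
  namespace: 'elastic',
  service: 'fleet-server',
  name: 'my-token' // placeholder token name
})
console.log(created.token.value) // bearer value to present instead of basic authentication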
delete_privileges
editDelete application privileges.
client.security.deletePrivileges({ application, name })
Arguments
edit-
Request (object):
-
application
(string): Application name -
name
(string | string[]): Privilege name -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_role
editDelete roles.
Delete roles in the native realm.
client.security.deleteRole({ name })
Arguments
edit-
Request (object):
-
name
(string): Role name -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_role_mapping
editDelete role mappings.
client.security.deleteRoleMapping({ name })
Arguments
edit-
Request (object):
-
name
(string): Role-mapping name -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_service_token
editDelete service account tokens.
Delete service account tokens for a service in a specified namespace.
client.security.deleteServiceToken({ namespace, service, name })
Arguments
edit-
Request (object):
-
namespace
(string): An identifier for the namespace -
service
(string): An identifier for the service name -
name
(string): An identifier for the token name -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
then refresh the affected shards to make this operation visible to search, ifwait_for
(the default) then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_user
editDelete users.
Delete users from the native realm.
client.security.deleteUser({ username })
Arguments
edit-
Request (object):
-
username
(string): username -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
disable_user
editDisable users.
Disable users in the native realm.
client.security.disableUser({ username })
Arguments
edit-
Request (object):
-
username
(string): The username of the user to disable -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
disable_user_profile
editDisable a user profile.
Disable user profiles so that they are not visible in user profile searches.
client.security.disableUserProfile({ uid })
Arguments
edit-
Request (object):
-
uid
(string): Unique identifier for the user profile. -
refresh
(Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false do nothing with refreshes.
-
enable_user
editEnable users.
Enable users in the native realm.
client.security.enableUser({ username })
Arguments
edit-
Request (object):
-
username
(string): The username of the user to enable -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
enable_user_profile
editEnable a user profile.
Enable user profiles to make them visible in user profile searches.
client.security.enableUserProfile({ uid })
Arguments
edit-
Request (object):
-
uid
(string): Unique identifier for the user profile. -
refresh
(Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false do nothing with refreshes.
-
enroll_kibana
editEnroll Kibana.
Enable a Kibana instance to configure itself for communication with a secured Elasticsearch cluster.
client.security.enrollKibana()
enroll_node
editEnroll a node.
Enroll a new node to allow it to join an existing cluster with security features enabled.
client.security.enrollNode()
get_api_key
editGet API key information.
Retrieves information for one or more API keys.
NOTE: If you have only the manage_own_api_key
privilege, this API returns only the API keys that you own.
If you have read_security
, manage_api_key
or greater privileges (including manage_security
), this API returns all API keys regardless of ownership.
client.security.getApiKey({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): An API key id. This parameter cannot be used with any ofname
,realm_name
orusername
. -
name
(Optional, string): An API key name. This parameter cannot be used with any ofid
,realm_name
orusername
. It supports prefix search with wildcard. -
owner
(Optional, boolean): A boolean flag that can be used to query API keys owned by the currently authenticated user. Therealm_name
orusername
parameters cannot be specified when this parameter is set totrue
as they are assumed to be the currently authenticated ones. -
realm_name
(Optional, string): The name of an authentication realm. This parameter cannot be used with eitherid
orname
or whenowner
flag is set totrue
. -
username
(Optional, string): The username of a user. This parameter cannot be used with eitherid
orname
or whenowner
flag is set totrue
. -
with_limited_by
(Optional, boolean): Return the snapshot of the owner user’s role descriptors associated with the API key. An API key’s actual permission is the intersection of its assigned role descriptors and the owner user’s role descriptors. -
active_only
(Optional, boolean): A boolean flag that can be used to query API keys that are currently active. An API key is considered active if it is neither invalidated, nor expired at query time. You can specify this together with other parameters such asowner
orname
. Ifactive_only
is false, the response will include both active and inactive (expired or invalidated) keys. -
with_profile_uid
(Optional, boolean): Determines whether to also retrieve the profile uid for the API key owner principal, if it exists.
-
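For example, a sketch that retrieves only the active API keys owned by the calling user:
const { api_keys } = await client.security.getApiKey({
  owner: true,
  active_only: true
})
for (const key of api_keys) console.log(key.id, key.name)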
get_builtin_privileges
editGet builtin privileges.
Get the list of cluster privileges and index privileges that are available in this version of Elasticsearch.
client.security.getBuiltinPrivileges()
get_privileges
editGet application privileges.
client.security.getPrivileges({ ... })
Arguments
edit-
Request (object):
-
application
(Optional, string): Application name -
name
(Optional, string | string[]): Privilege name
-
get_role
editGet roles.
Get roles in the native realm. The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The get roles API cannot retrieve roles that are defined in roles files.
client.security.getRole({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): The name of the role. You can specify multiple roles as a list. If you do not specify this parameter, the API returns information about all roles.
-
get_role_mapping
editGet role mappings.
Role mappings define which roles are assigned to each user. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The get role mappings API cannot retrieve role mappings that are defined in role mapping files.
client.security.getRoleMapping({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way. You can specify multiple mapping names as a list. If you do not specify this parameter, the API returns information about all role mappings.
-
get_service_accounts
editGet service accounts.
Get a list of service accounts that match the provided path parameters.
client.security.getServiceAccounts({ ... })
Arguments
edit-
Request (object):
-
namespace
(Optional, string): Name of the namespace. Omit this parameter to retrieve information about all service accounts. If you omit this parameter, you must also omit theservice
parameter. -
service
(Optional, string): Name of the service. Omit this parameter to retrieve information about all service accounts that belong to the specifiednamespace
.
-
get_service_credentials
editGet service account credentials.
client.security.getServiceCredentials({ namespace, service })
Arguments
edit-
Request (object):
-
namespace
(string): Name of the namespace. -
service
(string): Name of the service.
-
get_settings
editRetrieve settings for the security system indices.
client.security.getSettings()
get_token
editGet a token.
Create a bearer token for access without requiring basic authentication.
client.security.getToken({ ... })
Arguments
edit-
Request (object):
-
grant_type
(Optional, Enum("password" | "client_credentials" | "_kerberos" | "refresh_token")) -
scope
(Optional, string) -
password
(Optional, string) -
kerberos_ticket
(Optional, string) -
refresh_token
(Optional, string) -
username
(Optional, string)
-
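A minimal sketch of the password grant type; the credentials are placeholders:
const auth = await client.security.getToken({
  grant_type: 'password',
  username: 'jacknich', // placeholder username
  password: 'l0ng-r4nd0m-p@ssw0rd' // placeholder password
})
console.log(auth.access_token, auth.refresh_token)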
get_user
editGet users.
Get information about users in the native realm and built-in users.
client.security.getUser({ ... })
Arguments
edit-
Request (object):
-
username
(Optional, string | string[]): An identifier for the user. You can specify multiple usernames as a list. If you omit this parameter, the API retrieves information about all users. -
with_profile_uid
(Optional, boolean): If true, returns the user profile ID for the user, if one exists.
-
get_user_privileges
editGet user privileges.
client.security.getUserPrivileges({ ... })
Arguments
edit-
Request (object):
-
application
(Optional, string): The name of the application. Application privileges are always associated with exactly one application. If you do not specify this parameter, the API returns information about all privileges for all applications. -
priviledge
(Optional, string): The name of the privilege. If you do not specify this parameter, the API returns information about all privileges for the requested application. -
username
(Optional, string | null)
-
get_user_profile
editGet a user profile.
Get a user’s profile using the unique profile ID.
client.security.getUserProfile({ uid })
Arguments
edit-
Request (object):
-
uid
(string | string[]): A unique identifier for the user profile. -
data
(Optional, string | string[]): List of filters for thedata
field of the profile document. To return all content usedata=*
. To return a subset of content usedata=<key>
to retrieve content nested under the specified<key>
. By default returns nodata
content.
-
grant_api_key
editGrant an API key.
Create an API key on behalf of another user. This API is similar to the create API keys API; however, it creates the API key for a user other than the one running the API call. The caller must have authentication credentials (either an access token, or a username and password) for the user on whose behalf the API key will be created. It is not possible to use this API to create an API key without that user’s credentials. The user for whom the authentication credentials are provided can optionally "run as" (impersonate) another user. In this case, the API key will be created on behalf of the impersonated user.
This API is intended to be used by applications that need to create and manage API keys for end users, but cannot guarantee that those users have permission to create API keys on their own behalf.
A successful grant API key API call returns a JSON structure that contains the API key, its unique id, and its name. If applicable, it also returns expiration information for the API key in milliseconds.
By default, API keys never expire. You can specify expiration information when you create the API keys.
client.security.grantApiKey({ api_key, grant_type })
Arguments
edit-
Request (object):
-
api_key
({ name, expiration, role_descriptors, metadata }): Defines the API key. -
grant_type
(Enum("access_token" | "password")): The type of grant. Supported grant types are:access_token
,password
. -
access_token
(Optional, string): The user’s access token. If you specify theaccess_token
grant type, this parameter is required. It is not valid with other grant types. -
username
(Optional, string): The user name that identifies the user. If you specify thepassword
grant type, this parameter is required. It is not valid with other grant types. -
password
(Optional, string): The user’s password. If you specify thepassword
grant type, this parameter is required. It is not valid with other grant types. -
run_as
(Optional, string): The name of the user to be impersonated.
-
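A sketch of granting an API key on behalf of a user via the password grant type; the credentials and key name are placeholders:
const granted = await client.security.grantApiKey({
  grant_type: 'password',
  username: 'jacknich', // placeholder: the user the key is created for
  password: 'l0ng-r4nd0m-p@ssw0rd', // placeholder
  api_key: { name: 'on-behalf-key' } // placeholder key name
})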
has_privileges
editCheck user privileges.
Determine whether the specified user has a specified list of privileges.
client.security.hasPrivileges({ ... })
Arguments
edit-
Request (object):
-
user
(Optional, string): Username -
application
(Optional, { application, privileges, resources }[]) -
cluster
(Optional, Enum("all" | "cancel_task" | "create_snapshot" | "cross_cluster_replication" | "cross_cluster_search" | "delegate_pki" | "grant_api_key" | "manage" | "manage_api_key" | "manage_autoscaling" | "manage_behavioral_analytics" | "manage_ccr" | "manage_data_frame_transforms" | "manage_data_stream_global_retention" | "manage_enrich" | "manage_ilm" | "manage_index_templates" | "manage_inference" | "manage_ingest_pipelines" | "manage_logstash_pipelines" | "manage_ml" | "manage_oidc" | "manage_own_api_key" | "manage_pipeline" | "manage_rollup" | "manage_saml" | "manage_search_application" | "manage_search_query_rules" | "manage_search_synonyms" | "manage_security" | "manage_service_account" | "manage_slm" | "manage_token" | "manage_transform" | "manage_user_profile" | "manage_watcher" | "monitor" | "monitor_data_frame_transforms" | "monitor_data_stream_global_retention" | "monitor_enrich" | "monitor_inference" | "monitor_ml" | "monitor_rollup" | "monitor_snapshot" | "monitor_stats" | "monitor_text_structure" | "monitor_transform" | "monitor_watcher" | "none" | "post_behavioral_analytics_event" | "read_ccr" | "read_fleet_secrets" | "read_ilm" | "read_pipeline" | "read_security" | "read_slm" | "transport_client" | "write_connector_secrets" | "write_fleet_secrets")[]): A list of the cluster privileges that you want to check. -
index
(Optional, { names, privileges, allow_restricted_indices }[])
-
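For instance, a sketch checking one cluster privilege and a set of index privileges against a placeholder index pattern:
const check = await client.security.hasPrivileges({
  cluster: ['monitor'],
  index: [{ names: ['logs-*'], privileges: ['read', 'view_index_metadata'] }]
})
console.log(check.has_all_requested)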
has_privileges_user_profile
editCheck user profile privileges.
Determine whether the users associated with the specified user profile IDs have all the requested privileges.
client.security.hasPrivilegesUserProfile({ uids, privileges })
Arguments
edit-
Request (object):
-
uids
(string[]): A list of profile IDs. The privileges are checked for associated users of the profiles. -
privileges
({ application, cluster, index })
-
invalidate_api_key
editInvalidate API keys.
This API invalidates API keys created by the create API key or grant API key APIs.
Invalidated API keys fail authentication, but they can still be viewed using the get API key information and query API key information APIs, for at least the configured retention period, until they are automatically deleted.
The manage_api_key
privilege allows deleting any API keys.
The manage_own_api_key
privilege only allows deleting API keys that are owned by the user.
In addition, with the manage_own_api_key
privilege, an invalidation request must be issued in one of the three formats:
- Set the parameter owner=true
.
- Or, set both username
and realm_name
to match the user’s identity.
- Or, if the request is issued by an API key, that is to say an API key invalidates itself, specify its ID in the ids
field.
client.security.invalidateApiKey({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string) -
ids
(Optional, string[]): A list of API key ids. This parameter cannot be used with any ofname
,realm_name
, orusername
. -
name
(Optional, string): An API key name. This parameter cannot be used with any ofids
,realm_name
orusername
. -
owner
(Optional, boolean): Can be used to query API keys owned by the currently authenticated user. Therealm_name
orusername
parameters cannot be specified when this parameter is set totrue
as they are assumed to be the currently authenticated ones. -
realm_name
(Optional, string): The name of an authentication realm. This parameter cannot be used with eitherids
orname
, or whenowner
flag is set totrue
. -
username
(Optional, string): The username of a user. This parameter cannot be used with eitherids
orname
, or whenowner
flag is set totrue
.
-
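As a sketch, invalidating every API key owned by the currently authenticated user:
await client.security.invalidateApiKey({
  owner: true // realm_name and username must be omitted when owner is true
})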
invalidate_token
editInvalidate a token.
The access tokens returned by the get token API have a finite period of time for which they are valid.
After that time period, they can no longer be used.
The time period is defined by the xpack.security.authc.token.timeout
setting.
The refresh tokens returned by the get token API are only valid for 24 hours. They can also be used exactly once. If you want to invalidate one or more access or refresh tokens immediately, use this invalidate token API.
client.security.invalidateToken({ ... })
Arguments
edit-
Request (object):
-
token
(Optional, string) -
refresh_token
(Optional, string) -
realm_name
(Optional, string) -
username
(Optional, string)
-
oidc_authenticate
editExchanges an OpenID Connect authentication response message for an Elasticsearch access token and refresh token pair.
client.security.oidcAuthenticate()
oidc_logout
editInvalidates a refresh token and access token that were generated from the OpenID Connect authenticate API.
client.security.oidcLogout()
oidc_prepare_authentication
editCreates an OAuth 2.0 authentication request as a URL string.
client.security.oidcPrepareAuthentication()
put_privileges
editCreate or update application privileges.
client.security.putPrivileges({ ... })
Arguments
edit-
Request (object):
-
privileges
(Optional, Record<string, Record<string, { actions, application, name, metadata }>>) -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
put_role
editCreate or update roles.
The role management APIs are generally the preferred way to manage roles in the native realm, rather than using file-based role management. The create or update roles API cannot update roles that are defined in roles files. File-based role management is not available in Elastic Serverless.
client.security.putRole({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the role. -
applications
(Optional, { application, privileges, resources }[]): A list of application privilege entries. -
cluster
(Optional, Enum("all" | "cancel_task" | "create_snapshot" | "cross_cluster_replication" | "cross_cluster_search" | "delegate_pki" | "grant_api_key" | "manage" | "manage_api_key" | "manage_autoscaling" | "manage_behavioral_analytics" | "manage_ccr" | "manage_data_frame_transforms" | "manage_data_stream_global_retention" | "manage_enrich" | "manage_ilm" | "manage_index_templates" | "manage_inference" | "manage_ingest_pipelines" | "manage_logstash_pipelines" | "manage_ml" | "manage_oidc" | "manage_own_api_key" | "manage_pipeline" | "manage_rollup" | "manage_saml" | "manage_search_application" | "manage_search_query_rules" | "manage_search_synonyms" | "manage_security" | "manage_service_account" | "manage_slm" | "manage_token" | "manage_transform" | "manage_user_profile" | "manage_watcher" | "monitor" | "monitor_data_frame_transforms" | "monitor_data_stream_global_retention" | "monitor_enrich" | "monitor_inference" | "monitor_ml" | "monitor_rollup" | "monitor_snapshot" | "monitor_stats" | "monitor_text_structure" | "monitor_transform" | "monitor_watcher" | "none" | "post_behavioral_analytics_event" | "read_ccr" | "read_fleet_secrets" | "read_ilm" | "read_pipeline" | "read_security" | "read_slm" | "transport_client" | "write_connector_secrets" | "write_fleet_secrets")[]): A list of cluster privileges. These privileges define the cluster-level actions for users with this role. -
global
(Optional, Record<string, User-defined value>): An object defining global privileges. A global privilege is a form of cluster privilege that is request-aware. Support for global privileges is currently limited to the management of application privileges. -
indices
(Optional, { field_security, names, privileges, query, allow_restricted_indices }[]): A list of indices permissions entries. -
remote_indices
(Optional, { clusters, field_security, names, privileges, query, allow_restricted_indices }[]): A list of remote indices permissions entries. -
remote_cluster
(Optional, { clusters, privileges }[]): A list of remote cluster permissions entries. -
metadata
(Optional, Record<string, User-defined value>): Optional metadata. Within the metadata object, keys that begin with an underscore (_
) are reserved for system use. -
run_as
(Optional, string[]): A list of users that the owners of this role can impersonate. Note: in Serverless, the run-as feature is disabled. For API compatibility, you can still specify an emptyrun_as
field, but a non-empty list will be rejected. -
description
(Optional, string): Optional description of the role descriptor -
transient_metadata
(Optional, Record<string, User-defined value>): Indicates roles that might be incompatible with the current cluster license, specifically roles with document and field level security. When the cluster license doesn’t allow certain features for a given role, this parameter is updated dynamically to list the incompatible features. Ifenabled
isfalse
, the role is ignored, but is still listed in the response from the authenticate API. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
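A minimal sketch of a role that grants read access to a placeholder index pattern:
await client.security.putRole({
  name: 'logs_reader', // placeholder role name
  cluster: ['monitor'],
  indices: [{ names: ['logs-*'], privileges: ['read'] }]
})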
put_role_mapping
editCreate or update role mappings.
Role mappings define which roles are assigned to each user. Each mapping has rules that identify users and a list of roles that are granted to those users. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The create or update role mappings API cannot update role mappings that are defined in role mapping files.
This API does not create roles. Rather, it maps users to existing roles. Roles can be created by using the create or update roles API or roles files.
client.security.putRoleMapping({ name })
Arguments
edit-
Request (object):
-
name
(string): Role-mapping name -
enabled
(Optional, boolean) -
metadata
(Optional, Record<string, User-defined value>) -
roles
(Optional, string[]) -
role_templates
(Optional, { format, template }[]) -
rules
(Optional, { any, all, field, except }) -
run_as
(Optional, string[]) -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
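For example, a sketch that maps all users from a hypothetical ldap1 realm to the placeholder logs_reader role:
await client.security.putRoleMapping({
  name: 'ldap-logs-readers', // placeholder mapping name
  roles: ['logs_reader'], // must already exist; this API does not create roles
  enabled: true,
  rules: { field: { 'realm.name': 'ldap1' } } // hypothetical realm name
})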
put_user
editCreate or update users.
A password is required for adding a new user but is optional when updating an existing user. To change a user’s password without updating any other fields, use the change password API.
client.security.putUser({ username })
Arguments
edit-
Request (object):
-
username
(string): The username of the User -
email
(Optional, string | null) -
full_name
(Optional, string | null) -
metadata
(Optional, Record<string, User-defined value>) -
password
(Optional, string) -
password_hash
(Optional, string) -
roles
(Optional, string[]) -
enabled
(Optional, boolean) -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
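A minimal sketch creating a native realm user; the credentials are placeholders:
await client.security.putUser({
  username: 'jacknich', // placeholder
  password: 'l0ng-r4nd0m-p@ssw0rd', // placeholder; required only when creating
  roles: ['logs_reader'],
  full_name: 'Jack Nicholson'
})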
query_api_keys
editFind API keys with a query.
Get a paginated list of API keys and their information. You can optionally filter the results with a query.
client.security.queryApiKeys({ ... })
Arguments
edit-
Request (object):
-
aggregations
(Optional, Record<string, { aggregations, meta, cardinality, composite, date_range, filter, filters, missing, range, terms, value_count }>): Any aggregations to run over the corpus of returned API keys. Aggregations and queries work together. Aggregations are computed only on the API keys that match the query. This supports only a subset of aggregation types, namely:terms
,range
,date_range
,missing
,cardinality
,value_count
,composite
,filter
, andfilters
. Additionally, aggregations only run over the same subset of fields that query works with. -
query
(Optional, { bool, exists, ids, match, match_all, prefix, range, simple_query_string, term, terms, wildcard }): A query to filter which API keys to return. If the query parameter is missing, it is equivalent to amatch_all
query. The query supports a subset of query types, includingmatch_all
,bool
,term
,terms
,match
,ids
,prefix
,wildcard
,exists
,range
, andsimple_query_string
. You can query the following public information associated with an API key:id
,type
,name
,creation
,expiration
,invalidated
,invalidation
,username
,realm
, andmetadata
. -
from
(Optional, number): Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use thesearch_after
parameter. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): Other thanid
, all public fields of an API key are eligible for sorting. In addition, sort can also be applied to the_doc
field to sort by index order. -
size
(Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): Search after definition -
with_limited_by
(Optional, boolean): Return the snapshot of the owner user’s role descriptors associated with the API key. An API key’s actual permission is the intersection of its assigned role descriptors and the owner user’s role descriptors. -
with_profile_uid
(Optional, boolean): Determines whether to also retrieve the profile uid for the API key owner principal, if it exists. -
typed_keys
(Optional, boolean): Determines whether aggregation names are prefixed by their respective types in the response.
-
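For example, a sketch that pages through keys matching a placeholder name prefix, newest first:
const result = await client.security.queryApiKeys({
  query: { prefix: { name: 'my-app-' } }, // placeholder name prefix
  sort: [{ creation: { order: 'desc' } }],
  size: 25
})
console.log(result.total, result.api_keys.length)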
query_role
editFind roles with a query.
Get roles in a paginated manner. You can optionally filter the results with a query.
client.security.queryRole({ ... })
Arguments
edit-
Request (object):
-
query
(Optional, { bool, exists, ids, match, match_all, prefix, range, simple_query_string, term, terms, wildcard }): A query to filter which roles to return. If the query parameter is missing, it is equivalent to amatch_all
query. The query supports a subset of query types, includingmatch_all
,bool
,term
,terms
,match
,ids
,prefix
,wildcard
,exists
,range
, andsimple_query_string
. You can query the following information associated with roles:name
,description
,metadata
,applications.application
,applications.privileges
,applications.resources
. -
from
(Optional, number): Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use thesearch_after
parameter. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): All public fields of a role are eligible for sorting. In addition, sort can also be applied to the_doc
field to sort by index order. -
size
(Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): Search after definition
-
query_user
editFind users with a query.
Get information for users in a paginated manner. You can optionally filter the results with a query.
client.security.queryUser({ ... })
Arguments
edit-
Request (object):
-
query
(Optional, { ids, bool, exists, match, match_all, prefix, range, simple_query_string, term, terms, wildcard }): A query to filter which users to return. If the query parameter is missing, it is equivalent to amatch_all
query. The query supports a subset of query types, includingmatch_all
,bool
,term
,terms
,match
,ids
,prefix
,wildcard
,exists
,range
, andsimple_query_string
. You can query the following information associated with users:username
,roles
,enabled
-
from
(Optional, number): Starting document offset. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use thesearch_after
parameter. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): Fields eligible for sorting are: username, roles, enabled. In addition, sort can also be applied to the_doc
field to sort by index order. -
size
(Optional, number): The number of hits to return. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): Search after definition -
with_profile_uid
(Optional, boolean): If true will return the User Profile ID for the users in the query result, if any.
-
saml_authenticate
editAuthenticate SAML.
Submits a SAML response message to Elasticsearch for consumption.
client.security.samlAuthenticate({ content, ids })
Arguments
edit-
Request (object):
-
content
(string): The SAML response as it was sent by the user’s browser, usually a Base64 encoded XML document. -
ids
(string | string[]): A JSON array with all the valid SAML Request Ids that the caller of the API has for the current user. -
realm
(Optional, string): The name of the realm that should authenticate the SAML response. Useful in cases where many SAML realms are defined.
-
saml_complete_logout
editLogout of SAML completely.
Verifies the logout response sent from the SAML IdP.
client.security.samlCompleteLogout({ realm, ids })
Arguments
edit-
Request (object):
-
realm
(string): The name of the SAML realm in Elasticsearch for which the configuration is used to verify the logout response. -
ids
(string | string[]): A JSON array with all the valid SAML Request Ids that the caller of the API has for the current user. -
query_string
(Optional, string): If the SAML IdP sends the logout response with the HTTP-Redirect binding, this field must be set to the query string of the redirect URI. -
content
(Optional, string): If the SAML IdP sends the logout response with the HTTP-Post binding, this field must be set to the value of the SAMLResponse form parameter from the logout response.
-
saml_invalidate
editInvalidate SAML.
Submits a SAML LogoutRequest message to Elasticsearch for consumption.
client.security.samlInvalidate({ query_string })
Arguments
edit-
Request (object):
-
query_string
(string): The query part of the URL that the user was redirected to by the SAML IdP to initiate the Single Logout. This query should include a single parameter named SAMLRequest that contains a SAML logout request that is deflated and Base64 encoded. If the SAML IdP has signed the logout request, the URL should include two extra parameters named SigAlg and Signature that contain the algorithm used for the signature and the signature value itself. In order for Elasticsearch to be able to verify the IdP’s signature, the value of the query_string field must be an exact match to the string provided by the browser. The client application must not attempt to parse or process the string in any way. -
acs
(Optional, string): The Assertion Consumer Service URL that matches the one of the SAML realm in Elasticsearch that should be used. You must specify either this parameter or the realm parameter. -
realm
(Optional, string): The name of the SAML realm in Elasticsearch whose configuration should be used. You must specify either this parameter or the acs parameter.
-
saml_logout
editLogout of SAML.
Submits a request to invalidate an access token and refresh token.
client.security.samlLogout({ token })
Arguments
edit-
Request (object):
-
token
(string): The access token that was returned as a response to calling the SAML authenticate API. Alternatively, the most recent token that was received after refreshing the original one by using a refresh_token. -
refresh_token
(Optional, string): The refresh token that was returned as a response to calling the SAML authenticate API. Alternatively, the most recent refresh token that was received after refreshing the original access token.
-
saml_prepare_authentication
editPrepare SAML authentication.
Creates a SAML authentication request (<AuthnRequest>
) as a URL string, based on the configuration of the respective SAML realm in Elasticsearch.
client.security.samlPrepareAuthentication({ ... })
Arguments
edit-
Request (object):
-
acs
(Optional, string): The Assertion Consumer Service URL that matches the one of the SAML realms in Elasticsearch. The realm is used to generate the authentication request. You must specify either this parameter or the realm parameter. -
realm
(Optional, string): The name of the SAML realm in Elasticsearch for which the configuration is used to generate the authentication request. You must specify either this parameter or the acs parameter. -
relay_state
(Optional, string): A string that will be included in the redirect URL that this API returns as the RelayState query parameter. If the Authentication Request is signed, this value is used as part of the signature computation.
-
saml_service_provider_metadata
editCreate SAML service provider metadata.
Generate SAML metadata for a SAML 2.0 Service Provider.
client.security.samlServiceProviderMetadata({ realm_name })
Arguments
edit-
Request (object):
-
realm_name
(string): The name of the SAML realm in Elasticsearch.
-
suggest_user_profiles
editSuggest a user profile.
Get suggestions for user profiles that match specified search criteria.
client.security.suggestUserProfiles({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): Query string used to match name-related fields in user profile documents. Name-related fields are the user’susername
,full_name
, andemail
. -
size
(Optional, number): Number of profiles to return. -
data
(Optional, string | string[]): List of filters for thedata
field of the profile document. To return all content usedata=*
. To return a subset of content usedata=<key>
to retrieve content nested under the specified<key>
. By default returns nodata
content. -
hint
(Optional, { uids, labels }): Extra search criteria to improve relevance of the suggestion result. Profiles matching the specified hint are ranked higher in the response. A profile that does not match the hint is not excluded from the response as long as it matches thename
field query.
-
update_api_key
editUpdate an API key.
Updates attributes of an existing API key.
Users can only update API keys that they created or that were granted to them.
Use this API to update API keys created by the create API key or grant API key APIs.
If you need to apply the same update to many API keys, you can use the bulk update API keys API to reduce overhead.
It’s not possible to update expired API keys, or API keys that have been invalidated by the invalidate API key API.
This API supports updates to an API key’s access scope and metadata.
The access scope of an API key is derived from the role_descriptors
you specify in the request, and a snapshot of the owner user’s permissions at the time of the request.
The snapshot of the owner’s permissions is updated automatically on every call.
If you don’t specify role_descriptors
in the request, a call to this API might still change the API key’s access scope.
This change can occur if the owner user’s permissions have changed since the API key was created or last modified.
To update another user’s API key, use the run_as
feature to submit a request on behalf of another user.
IMPORTANT: It’s not possible to use an API key as the authentication credential for this API.
To update an API key, the owner user’s credentials are required.
client.security.updateApiKey({ id })
Arguments
edit-
Request (object):
-
id
(string): The ID of the API key to update. -
role_descriptors
(Optional, Record<string, { cluster, indices, remote_indices, remote_cluster, global, applications, metadata, run_as, description, restriction, transient_metadata }>): An array of role descriptors for this API key. This parameter is optional. When it is not specified or is an empty array, then the API key will have a point in time snapshot of permissions of the authenticated user. If you supply role descriptors then the resultant permissions would be an intersection of API keys permissions and authenticated user’s permissions thereby limiting the access scope for API keys. The structure of role descriptor is the same as the request for create role API. For more details, see create or update roles API. -
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with _ are reserved for system usage. -
expiration
(Optional, string | -1 | 0): Expiration time for the API key.
-
update_cross_cluster_api_key
editUpdate a cross-cluster API key.
Update the attributes of an existing cross-cluster API key, which is used for API key based remote cluster access.
client.security.updateCrossClusterApiKey({ id, access })
Arguments
edit-
Request (object):
-
id
(string): The ID of the cross-cluster API key to update. -
access
({ replication, search }): The access to be granted to this API key. The access is composed of permissions for cross-cluster search and cross-cluster replication. At least one of them must be specified. When specified, the new access assignment fully replaces the previously assigned access. -
expiration
(Optional, string | -1 | 0): Expiration time for the API key. By default, API keys never expire. This property can be omitted to leave the value unchanged. -
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with_
are reserved for system usage. When specified, this information fully replaces metadata previously associated with the API key.
-
update_settings
editUpdate settings for the security system index.
client.security.updateSettings()
update_user_profile_data
editUpdate user profile data.
Update specific data for the user profile that is associated with a unique ID.
client.security.updateUserProfileData({ uid })
Arguments
edit-
Request (object):
-
uid
(string): A unique identifier for the user profile. -
labels
(Optional, Record<string, User-defined value>): Searchable data that you want to associate with the user profile. This field supports a nested data structure. -
data
(Optional, Record<string, User-defined value>): Non-searchable data that you want to associate with the user profile. This field supports a nested data structure. -
if_seq_no
(Optional, number): Only perform the operation if the document has this sequence number. -
if_primary_term
(Optional, number): Only perform the operation if the document has this primary term. -
refresh
(Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false do nothing with refreshes.
-
shutdown
editdelete_node
editRemoves a node from the shutdown list. Designed for indirect use by ECE/ESS and ECK. Direct use is not supported.
client.shutdown.deleteNode({ node_id })
Arguments
edit-
Request (object):
-
node_id
(string): The node id of node to be removed from the shutdown state -
master_timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_node
editRetrieve status of a node or nodes that are currently marked as shutting down. Designed for indirect use by ECE/ESS and ECK. Direct use is not supported.
client.shutdown.getNode({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): The node id, or list of node ids, for which to retrieve the shutdown status -
master_timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
put_node
editAdds a node to be shut down. Designed for indirect use by ECE/ESS and ECK. Direct use is not supported.
client.shutdown.putNode({ node_id, type, reason })
Arguments
edit-
Request (object):
-
node_id
(string): The node id of node to be shut down -
type
(Enum("restart" | "remove" | "replace")): Valid values are restart, remove, or replace. Use restart when you need to temporarily shut down a node to perform an upgrade, make configuration changes, or perform other maintenance. Because the node is expected to rejoin the cluster, data is not migrated off of the node. Use remove when you need to permanently remove a node from the cluster. The node is not marked ready for shutdown until data is migrated off of the node Use replace to do a 1:1 replacement of a node with another node. Certain allocation decisions will be ignored (such as disk watermarks) in the interest of true replacement of the source node with the target node. During a replace-type shutdown, rollover and index creation may result in unassigned shards, and shrink may fail until the replacement is complete. -
reason
(string): A human-readable reason that the node is being shut down. This field provides information for other cluster operators; it does not affect the shut down process. -
allocation_delay
(Optional, string): Only valid if type is restart. Controls how long Elasticsearch will wait for the node to restart and join the cluster before reassigning its shards to other nodes. This works the same as delaying allocation with the index.unassigned.node_left.delayed_timeout setting. If you specify both a restart allocation delay and an index-level allocation delay, the longer of the two is used. -
target_node_name
(Optional, string): Only valid if type is replace. Specifies the name of the node that is replacing the node being shut down. Shards from the shut down node are only allowed to be allocated to the target node, and no other data will be allocated to the target node. During relocation of data certain allocation rules are ignored, such as disk watermarks or user attribute filtering rules. -
master_timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
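A sketch marking a node for a maintenance restart; the node id is a placeholder:
await client.shutdown.putNode({
  node_id: 'USpTGYaBSIKbgSUJR2Z9lg', // placeholder node id
  type: 'restart', // data stays on the node while it restarts
  reason: 'Routine maintenance window',
  allocation_delay: '20m' // wait this long before reassigning its shards
})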
simulate
editingest
editSimulates running ingest with example documents.
client.simulate.ingest()
slm
editdelete_lifecycle
editDeletes an existing snapshot lifecycle policy.
client.slm.deleteLifecycle({ policy_id })
Arguments
edit-
Request (object):
-
policy_id
(string): The id of the snapshot lifecycle policy to remove
-
execute_lifecycle
editImmediately creates a snapshot according to the lifecycle policy, without waiting for the scheduled time.
client.slm.executeLifecycle({ policy_id })
Arguments
edit-
Request (object):
-
policy_id
(string): The id of the snapshot lifecycle policy to be executed
-
execute_retention
editDeletes any snapshots that are expired according to the policy’s retention rules.
client.slm.executeRetention()
get_lifecycle
editRetrieves one or more snapshot lifecycle policy definitions and information about the latest snapshot attempts.
client.slm.getLifecycle({ ... })
Arguments
edit-
Request (object):
-
policy_id
(Optional, string | string[]): List of snapshot lifecycle policies to retrieve
-
get_stats
editReturns global and policy-level statistics about actions taken by snapshot lifecycle management.
client.slm.getStats()
get_status
editRetrieves the status of snapshot lifecycle management (SLM).
client.slm.getStatus()
put_lifecycle
editCreates or updates a snapshot lifecycle policy.
client.slm.putLifecycle({ policy_id })
Arguments
edit-
Request (object):
-
policy_id
(string): ID for the snapshot lifecycle policy you want to create or update. -
config
(Optional, { ignore_unavailable, indices, include_global_state, feature_states, metadata, partial }): Configuration for each snapshot created by the policy. -
name
(Optional, string): Name automatically assigned to each snapshot created by the policy. Date math is supported. To prevent conflicting snapshot names, a UUID is automatically appended to each snapshot name. -
repository
(Optional, string): Repository used to store snapshots created by this policy. This repository must exist prior to the policy’s creation. You can create a repository using the snapshot repository API. -
retention
(Optional, { expire_after, max_count, min_count }): Retention rules used to retain and delete snapshots created by the policy. -
schedule
(Optional, string): Periodic or absolute schedule at which the policy creates snapshots. SLM applies schedule changes immediately. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
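For example, a sketch of a daily policy; the repository and index pattern are placeholders:
await client.slm.putLifecycle({
  policy_id: 'daily-snapshots',
  schedule: '0 30 1 * * ?', // 1:30am UTC every day
  name: '<daily-snap-{now/d}>', // date math plus an automatic UUID suffix
  repository: 'my_repository', // placeholder; must already exist
  config: { indices: ['data-*'], ignore_unavailable: false },
  retention: { expire_after: '30d', min_count: 5, max_count: 50 }
})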
start
editTurns on snapshot lifecycle management (SLM).
client.slm.start()
stop
editTurns off snapshot lifecycle management (SLM).
client.slm.stop()
snapshot
editcleanup_repository
editTriggers the review of a snapshot repository’s contents and deletes any stale data not referenced by existing snapshots.
client.snapshot.cleanupRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string): Snapshot repository to clean up. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
timeout
(Optional, string | -1 | 0): Period to wait for a response.
-
clone
editClones indices from one snapshot into another snapshot in the same repository.
client.snapshot.clone({ repository, snapshot, target_snapshot, indices })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
snapshot
(string): The name of the snapshot to clone from -
target_snapshot
(string): The name of the cloned snapshot to create -
indices
(string) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0)
-
create
editCreates a snapshot in a repository.
client.snapshot.create({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): Repository for the snapshot. -
snapshot
(string): Name of the snapshot. Must be unique in the repository. -
ignore_unavailable
(Optional, boolean): Iftrue
, the request ignores data streams and indices inindices
that are missing or closed. Iffalse
, the request returns an error for any data stream or index that is missing or closed. -
include_global_state
(Optional, boolean): Iftrue
, the current cluster state is included in the snapshot. The cluster state includes persistent cluster settings, composable index templates, legacy index templates, ingest pipelines, and ILM policies. It also includes data stored in system indices, such as Watches and task records (configurable viafeature_states
). -
indices
(Optional, string | string[]): Data streams and indices to include in the snapshot. Supports multi-target syntax. Includes all data streams and indices by default. -
feature_states
(Optional, string[]): Feature states to include in the snapshot. Each feature state includes one or more system indices containing related data. You can view a list of eligible features using the get features API. Ifinclude_global_state
istrue
, all current feature states are included by default. Ifinclude_global_state
isfalse
, no feature states are included by default. -
metadata
(Optional, Record<string, User-defined value>): Optional metadata for the snapshot. May have any contents. Must be less than 1024 bytes. This map is not automatically generated by Elasticsearch. -
partial
(Optional, boolean): Iftrue
, allows restoring a partial snapshot of indices with unavailable shards. Only shards that were successfully included in the snapshot will be restored. All missing shards will be recreated as empty. Iffalse
, the entire restore operation will fail if one or more indices included in the snapshot do not have all primary shards available. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_completion
(Optional, boolean): Iftrue
, the request returns a response when the snapshot is complete. Iffalse
, the request returns a response when the snapshot initializes.
-
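A minimal sketch that snapshots two placeholder indices and blocks until the snapshot completes:
await client.snapshot.create({
  repository: 'my_repository', // placeholder repository name
  snapshot: 'snapshot_1', // must be unique within the repository
  indices: ['index_1', 'index_2'], // placeholders
  wait_for_completion: true
})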
create_repository
editCreates a repository.
client.snapshot.createRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout -
verify
(Optional, boolean): Whether to verify the repository after creation
-
delete
editDeletes one or more snapshots.
client.snapshot.delete({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
snapshot
(string): A list of snapshot names -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
delete_repository
editDeletes a repository.
client.snapshot.deleteRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string | string[]): Name of the snapshot repository to unregister. Wildcard (*
) patterns are supported. -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
get
editReturns information about a snapshot.
client.snapshot.get({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): List of snapshot repository names used to limit the request. Wildcard (*) expressions are supported. -
snapshot
(string | string[]): List of snapshot names to retrieve. Also accepts wildcards (*). To get information about all snapshots in a registered repository, use a wildcard (*) or _all. To get information about any snapshots that are currently running, use _current.
-
ignore_unavailable
(Optional, boolean): If false, the request returns an error for any snapshots that are unavailable. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
verbose
(Optional, boolean): If true, returns additional information about each snapshot such as the version of Elasticsearch which took the snapshot, the start and end times of the snapshot, and the number of shards snapshotted. -
index_details
(Optional, boolean): If true, returns additional information about each index in the snapshot comprising the number of shards in the index, the total size of the index in bytes, and the maximum number of segments per shard in the index. Defaults to false, meaning that this information is omitted. -
index_names
(Optional, boolean): If true, returns the name of each index in each snapshot. -
include_repository
(Optional, boolean): If true, returns the repository name in each snapshot. -
sort
(Optional, Enum("start_time" | "duration" | "name" | "index_count" | "repository" | "shard_count" | "failed_shard_count")): Allows setting a sort order for the result. Defaults to start_time, i.e. sorting by snapshot start time stamp. -
size
(Optional, number): Maximum number of snapshots to return. Defaults to 0 which means return all that match the request without limit. -
order
(Optional, Enum("asc" | "desc")): Sort order. Valid values are asc for ascending and desc for descending order. Defaults to asc, meaning ascending order. -
after
(Optional, string): Offset identifier to start pagination from as returned by the next field in the response body. -
offset
(Optional, number): Numeric offset to start pagination from based on the snapshots matching this request. Using a non-zero value for this parameter is mutually exclusive with using the after parameter. Defaults to 0. -
from_sort_value
(Optional, string): Value of the current sort column at which to start retrieval. Can either be a string snapshot- or repository name when sorting by snapshot or repository name, a millisecond time value or a number when sorting by index- or shard count. -
slm_policy_filter
(Optional, string): Filter snapshots by a list of SLM policy names that snapshots belong to. Also accepts wildcards (*) and combinations of wildcards followed by exclude patterns starting with -. To include snapshots not created by an SLM policy you can use the special pattern _none that will match all snapshots without an SLM policy.
-
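For instance, a sketch that pages through all snapshots of a placeholder repository, ten at a time, sorted by start time:
const res = await client.snapshot.get({
  repository: 'my_repository', // placeholder
  snapshot: '*',
  sort: 'start_time',
  size: 10
})
// if present, res.next can be passed back as the `after` parameter to fetch the next page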
get_repository
editReturns information about a repository.
client.snapshot.getRepository({ ... })
Arguments
edit-
Request (object):
-
repository
(Optional, string | string[]): A list of repository names -
local
(Optional, boolean): Return local information, do not retrieve the state from master node (default: false) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
repository_analyze
editAnalyzes a repository for correctness and performance.
client.snapshot.repositoryAnalyze()
restore
editRestores a snapshot.
client.snapshot.restore({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
snapshot
(string): A snapshot name -
feature_states
(Optional, string[]) -
ignore_index_settings
(Optional, string[]) -
ignore_unavailable
(Optional, boolean) -
include_aliases
(Optional, boolean) -
include_global_state
(Optional, boolean) -
index_settings
(Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }) -
indices
(Optional, string | string[]) -
partial
(Optional, boolean) -
rename_pattern
(Optional, string) -
rename_replacement
(Optional, string) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
wait_for_completion
(Optional, boolean): Should this request wait until the operation has completed before returning
-
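For example, a minimal sketch (the repository, snapshot, and index names are hypothetical) that restores two indices under new names without restoring the global cluster state:
// Restore index_1 and index_2 as restored_index_1 and restored_index_2.
const res = await client.snapshot.restore({
  repository: 'my_repository',
  snapshot: 'snapshot_1',
  indices: ['index_1', 'index_2'],
  include_global_state: false,
  rename_pattern: '(.+)',
  rename_replacement: 'restored_$1',
  wait_for_completion: true
})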
status
editReturns information about the status of a snapshot.
client.snapshot.status({ ... })
Arguments
edit-
Request (object):
-
repository
(Optional, string): A repository name -
snapshot
(Optional, string | string[]): A list of snapshot names -
ignore_unavailable
(Optional, boolean): Whether to ignore unavailable snapshots, defaults to false which means a SnapshotMissingException is thrown -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
verify_repository
editVerifies a repository.
client.snapshot.verifyRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
sql
editclear_cursor
editClear an SQL search cursor.
client.sql.clearCursor({ cursor })
Arguments
edit-
Request (object):
-
cursor
(string): Cursor to clear.
-
delete_async
editDelete an async SQL search. Delete an async SQL search or a stored synchronous SQL search. If the search is still running, the API cancels it.
client.sql.deleteAsync({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the search.
-
get_async
editGet async SQL search results. Get the current status and available results for an async SQL search or stored synchronous SQL search.
client.sql.getAsync({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the search. -
delimiter
(Optional, string): Separator for CSV results. The API only supports this parameter for CSV responses. -
format
(Optional, string): Format for the response. You must specify a format using this parameter or the Accept HTTP header. If you specify both, the API uses this parameter. -
keep_alive
(Optional, string | -1 | 0): Retention period for the search and its results. Defaults to the keep_alive period for the original SQL search. -
wait_for_completion_timeout
(Optional, string | -1 | 0): Period to wait for complete results. Defaults to no timeout, meaning the request waits for complete search results.
-
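To show how these parameters fit together, here is a hedged sketch (the index name is hypothetical): submit an SQL search that is allowed to become async, then retrieve its results by ID.
// Submit the search; if it does not finish within 2 seconds it becomes async.
const submitted = await client.sql.query({
  query: 'SELECT * FROM my_index ORDER BY "@timestamp" DESC',
  wait_for_completion_timeout: '2s',
  keep_on_completion: true
})
if (submitted.is_running) {
  // Poll for the stored results, waiting up to 30 seconds for completion.
  const results = await client.sql.getAsync({
    id: submitted.id,
    wait_for_completion_timeout: '30s'
  })
}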
get_async_status
editGet the async SQL search status. Get the current status of an async SQL search or a stored synchronous SQL search.
client.sql.getAsyncStatus({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the search.
-
query
editGet SQL search results. Run an SQL request.
client.sql.query({ ... })
Arguments
edit-
Request (object):
-
catalog
(Optional, string): Default catalog (cluster) for queries. If unspecified, the queries execute on the data in the local cluster only. -
columnar
(Optional, boolean): If true, the results are returned in a columnar fashion: one row represents all the values of a certain column from the current page of results. -
cursor
(Optional, string): Cursor used to retrieve a set of paginated results. If you specify a cursor, the API only uses the columnar and time_zone request body parameters. It ignores other request body parameters. -
fetch_size
(Optional, number): The maximum number of rows (or entries) to return in one response -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Elasticsearch query DSL for additional filtering. -
query
(Optional, string): SQL query to run. -
request_timeout
(Optional, string | -1 | 0): The timeout before the request fails. -
page_timeout
(Optional, string | -1 | 0): The timeout before a pagination request fails. -
time_zone
(Optional, string): ISO-8601 time zone ID for the search. -
field_multi_value_leniency
(Optional, boolean): If false (the default), the API throws an exception when it encounters multiple values for a field. If true, the API is lenient and returns the first value from the list, with no guarantee of which that will be (typically the first in natural ascending order). -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name. -
wait_for_completion_timeout
(Optional, string | -1 | 0): Period to wait for complete results. Defaults to no timeout, meaning the request waits for complete search results. If the search doesn’t finish within this period, the search becomes async. -
params
(Optional, Record<string, User-defined value>): Values for parameters in the query. -
keep_alive
(Optional, string | -1 | 0): Retention period for an async or saved synchronous search. -
keep_on_completion
(Optional, boolean): If true, Elasticsearch stores synchronous searches if you also specify the wait_for_completion_timeout parameter. If false, Elasticsearch only stores async searches that don’t finish before the wait_for_completion_timeout. -
index_using_frozen
(Optional, boolean): If true, the search can run on frozen indices. Defaults to false. -
format
(Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile")): Format for the response.
-
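For example, a minimal sketch (the index and column names are hypothetical) that runs a query and pages through the results with the returned cursor:
// First page of up to 100 rows.
let response = await client.sql.query({
  query: 'SELECT first_name, salary FROM employees',
  fetch_size: 100
})
while (response.cursor) {
  // On cursor requests, only `cursor` and `time_zone` are honored;
  // other body parameters are ignored. If you stop paging early,
  // release resources with client.sql.clearCursor({ cursor }).
  response = await client.sql.query({ cursor: response.cursor })
}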
translate
editTranslate SQL into Elasticsearch queries. Translate an SQL search into a search API request containing Query DSL.
client.sql.translate({ query })
Arguments
edit-
Request (object):
-
query
(string): SQL query to run. -
fetch_size
(Optional, number): The maximum number of rows (or entries) to return in one response. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Elasticsearch query DSL for additional filtering. -
time_zone
(Optional, string): ISO-8601 time zone ID for the search.
-
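For instance, this small sketch (the index name is hypothetical) shows the Query DSL that an SQL search compiles to, without running it:
const dsl = await client.sql.translate({
  query: 'SELECT first_name FROM employees WHERE salary > 50000',
  fetch_size: 10
})
// `dsl` is a search API request body containing Query DSL.
console.log(dsl)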
ssl
editcertificates
editGet SSL certificates.
Get information about the X.509 certificates that are used to encrypt communications in the cluster. The API returns a list that includes certificates from all TLS contexts including:
- Settings for transport and HTTP interfaces
- TLS settings that are used within authentication realms
- TLS settings for remote monitoring exporters
The list includes certificates that are used for configuring trust, such as those configured in the xpack.security.transport.ssl.truststore
and xpack.security.transport.ssl.certificate_authorities
settings.
It also includes certificates that are used for configuring server identity, such as the xpack.security.http.ssl.keystore and xpack.security.http.ssl.certificate settings.
The list does not include certificates that are sourced from the default SSL context of the Java Runtime Environment (JRE), even if those certificates are in use within Elasticsearch.
When a PKCS#11 token is configured as the truststore of the JRE, the API returns all the certificates that are included in the PKCS#11 token irrespective of whether these are used in the Elasticsearch TLS configuration.
If Elasticsearch is configured to use a keystore or truststore, the API output includes all certificates in that store, even though some of the certificates might not be in active use within the cluster.
client.ssl.certificates()
synonyms
editdelete_synonym
editDelete a synonym set.
client.synonyms.deleteSynonym({ id })
Arguments
edit-
Request (object):
-
id
(string): The id of the synonyms set to be deleted
-
delete_synonym_rule
editDelete a synonym rule. Delete a synonym rule from a synonym set.
client.synonyms.deleteSynonymRule({ set_id, rule_id })
Arguments
edit-
Request (object):
-
set_id
(string): The id of the synonym set to be updated -
rule_id
(string): The id of the synonym rule to be deleted
-
get_synonym
editGet a synonym set.
client.synonyms.getSynonym({ id })
Arguments
edit-
Request (object):
-
id
(string): "The id of the synonyms set to be retrieved -
from
(Optional, number): Starting offset for query rules to be retrieved -
size
(Optional, number): specifies a max number of query rules to retrieve
-
get_synonym_rule
editGet a synonym rule. Get a synonym rule from a synonym set.
client.synonyms.getSynonymRule({ set_id, rule_id })
Arguments
edit-
Request (object):
-
set_id
(string): The id of the synonym set to retrieve the synonym rule from -
rule_id
(string): The id of the synonym rule to retrieve
-
get_synonyms_sets
editGet all synonym sets. Get a summary of all defined synonym sets.
client.synonyms.getSynonymsSets({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): Starting offset -
size
(Optional, number): Specifies a max number of results to get
-
put_synonym
editCreate or update a synonym set. Synonyms sets are limited to a maximum of 10,000 synonym rules per set. If you need to manage more synonym rules, you can create multiple synonym sets.
client.synonyms.putSynonym({ id, synonyms_set })
Arguments
edit-
Request (object):
-
id
(string): The id of the synonyms set to be created or updated -
synonyms_set
({ id, synonyms } | { id, synonyms }[]): The synonym set information to update
-
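For example, a minimal sketch (the set and rule ids are hypothetical) that creates a set with two rules; a rule id is generated for any rule that omits one:
await client.synonyms.putSynonym({
  id: 'my-synonyms-set',
  synonyms_set: [
    { id: 'rule-1', synonyms: 'hello, hi, howdy' },
    { synonyms: 'laptop, notebook' } // id omitted, so one is generated
  ]
})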
put_synonym_rule
editCreate or update a synonym rule. Create or update a synonym rule in a synonym set.
client.synonyms.putSynonymRule({ set_id, rule_id, synonyms })
Arguments
edit-
Request (object):
-
set_id
(string): The id of the synonym set to be updated with the synonym rule -
rule_id
(string): The id of the synonym rule to be updated or created -
synonyms
(string)
-
tasks
editcancel
editCancels a task, if it can be cancelled through an API.
client.tasks.cancel({ ... })
Arguments
edit-
Request (object):
-
task_id
(Optional, string | number): ID of the task. -
actions
(Optional, string | string[]): List or wildcard expression of actions used to limit the request. -
nodes
(Optional, string[]): List of node IDs or names used to limit the request. -
parent_task_id
(Optional, string): Parent task ID used to limit the tasks. -
wait_for_completion
(Optional, boolean): Should the request block until the cancellation of the task and its descendant tasks is completed. Defaults to false.
-
get
editGet task information. Returns information about the tasks currently executing in the cluster.
client.tasks.get({ task_id })
Arguments
edit-
Request (object):
-
task_id
(string): ID of the task. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_completion
(Optional, boolean): If true, the request blocks until the task has completed.
-
list
editThe task management API returns information about tasks currently executing on one or more nodes in the cluster.
client.tasks.list({ ... })
Arguments
edit-
Request (object):
-
actions
(Optional, string | string[]): List or wildcard expression of actions used to limit the request. -
detailed
(Optional, boolean): If true, the response includes detailed information about the running tasks. -
group_by
(Optional, Enum("nodes" | "parents" | "none")): Key used to group tasks in the response. -
nodes
(Optional, string | string[]): List of node IDs or names used to limit returned information. -
parent_task_id
(Optional, string): Parent task ID used to limit returned information. To return all tasks, omit this parameter or use a value of -1. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_completion
(Optional, boolean): If true, the request blocks until the operation is complete.
-
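As an illustrative sketch (the task id is hypothetical), list detailed information about reindex tasks and cancel one of them:
// Group detailed reindex tasks by their parent task.
const tasks = await client.tasks.list({
  actions: '*reindex',
  detailed: true,
  group_by: 'parents'
})
// Cancel a specific task without waiting for the cancellation to finish.
await client.tasks.cancel({
  task_id: 'oTUltX4IQMOUUVeiohTt8A:12345',
  wait_for_completion: false
})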
text_structure
editfind_field_structure
editFinds the structure of a text field in an index.
client.textStructure.findFieldStructure()
find_message_structure
editFinds the structure of a list of messages. The messages must contain data that is suitable to be ingested into Elasticsearch.
client.textStructure.findMessageStructure()
find_structure
editFinds the structure of a text file. The text file must contain data that is suitable to be ingested into Elasticsearch.
client.textStructure.findStructure({ ... })
Arguments
edit-
Request (object):
-
text_files
(Optional, TJsonDocument[]) -
charset
(Optional, string): The text’s character set. It must be a character set that is supported by the JVM that Elasticsearch uses. For example, UTF-8, UTF-16LE, windows-1252, or EUC-JP. If this parameter is not specified, the structure finder chooses an appropriate character set. -
column_names
(Optional, string): If you have set format to delimited, you can specify the column names in a list. If this parameter is not specified, the structure finder uses the column names from the header row of the text. If the text does not have a header row, columns are named "column1", "column2", "column3", etc. -
delimiter
(Optional, string): If you have set format to delimited, you can specify the character used to delimit the values in each row. Only a single character is supported; the delimiter cannot have multiple characters. By default, the API considers the following possibilities: comma, tab, semi-colon, and pipe (|). In this default scenario, all rows must have the same number of fields for the delimited format to be detected. If you specify a delimiter, up to 10% of the rows can have a different number of columns than the first row. -
ecs_compatibility
(Optional, string): The mode of compatibility with ECS compliant Grok patterns (disabled or v1, default: disabled). -
explain
(Optional, boolean): If this parameter is set to true, the response includes a field named explanation, which is an array of strings that indicate how the structure finder produced its result. -
format
(Optional, string): The high level structure of the text. Valid values are ndjson, xml, delimited, and semi_structured_text. By default, the API chooses the format. In this default scenario, all rows must have the same number of fields for a delimited format to be detected. If the format is set to delimited and the delimiter is not set, however, the API tolerates up to 5% of rows that have a different number of columns than the first row. -
grok_pattern
(Optional, string): If you have set format to semi_structured_text, you can specify a Grok pattern that is used to extract fields from every message in the text. The name of the timestamp field in the Grok pattern must match what is specified in the timestamp_field parameter. If that parameter is not specified, the name of the timestamp field in the Grok pattern must match "timestamp". If grok_pattern is not specified, the structure finder creates a Grok pattern. -
has_header_row
(Optional, boolean): If you have set format to delimited, you can use this parameter to indicate whether the column names are in the first row of the text. If this parameter is not specified, the structure finder guesses based on the similarity of the first row of the text to other rows. -
line_merge_size_limit
(Optional, number): The maximum number of characters in a message when lines are merged to form messages while analyzing semi-structured text. If you have extremely long messages you may need to increase this, but be aware that this may lead to very long processing times if the way to group lines into messages is misdetected. -
lines_to_sample
(Optional, number): The number of lines to include in the structural analysis, starting from the beginning of the text. The minimum is 2. If the value of this parameter is greater than the number of lines in the text, the analysis proceeds (as long as there are at least two lines in the text) for all of the lines. -
quote
(Optional, string): If you have set format to delimited, you can specify the character used to quote the values in each row if they contain newlines or the delimiter character. Only a single character is supported. If this parameter is not specified, the default value is a double quote ("). If your delimited text format does not use quoting, a workaround is to set this argument to a character that does not appear anywhere in the sample. -
should_trim_fields
(Optional, boolean): If you have set format to delimited, you can specify whether values between delimiters should have whitespace trimmed from them. If this parameter is not specified and the delimiter is pipe (|), the default value is true. Otherwise, the default value is false. -
timeout
(Optional, string | -1 | 0): Sets the maximum amount of time that the structure analysis may take. If the analysis is still running when the timeout expires, it is aborted. -
timestamp_field
(Optional, string): Optional parameter to specify the timestamp field in the file -
timestamp_format
(Optional, string): The Java time format of the timestamp field in the text.
-
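For example, a minimal sketch (the sample documents are invented) that asks the structure finder to analyze a couple of NDJSON documents:
const structure = await client.textStructure.findStructure({
  text_files: [
    { name: 'Leviathan Wakes', author: 'James S.A. Corey', release_date: '2011-06-02' },
    { name: 'Hyperion', author: 'Dan Simmons', release_date: '1989-05-26' }
  ],
  lines_to_sample: 2,
  explain: true // include an explanation of how the result was produced
})
// The response includes the detected format and suggested field mappings.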
test_grok_pattern
editTests a Grok pattern on some text.
client.textStructure.testGrokPattern({ grok_pattern, text })
Arguments
edit-
Request (object):
-
grok_pattern
(string): Grok pattern to run on the text. -
text
(string[]): Lines of text to run the Grok pattern on. -
ecs_compatibility
(Optional, string): The mode of compatibility with ECS compliant Grok patterns (disabled or v1, default: disabled).
-
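For example, a small sketch using standard Grok patterns on an invented log line:
const result = await client.textStructure.testGrokPattern({
  grok_pattern: '%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}',
  text: ['2024-05-01T12:00:00Z INFO service started']
})
// Each entry in the response reports whether the line matched
// and which fields were extracted.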
transform
editdelete_transform
editDelete a transform. Deletes a transform.
client.transform.deleteTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
force
(Optional, boolean): If this value is false, the transform must be stopped before it can be deleted. If true, the transform is deleted regardless of its current state. -
delete_dest_index
(Optional, boolean): If this value is true, the destination index is deleted together with the transform. If false, the destination index will not be deleted -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_node_stats
editRetrieves transform usage information for transform nodes.
client.transform.getNodeStats()
get_transform
editGet transforms. Retrieves configuration information for transforms.
client.transform.getTransform({ ... })
Arguments
edit-
Request (object):
-
transform_id
(Optional, string | string[]): Identifier for the transform. It can be a transform identifier or a wildcard expression. You can get information for all transforms by using _all, by specifying * as the <transform_id>, or by omitting the <transform_id>. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of transforms. -
size
(Optional, number): Specifies the maximum number of transforms to obtain. -
exclude_generated
(Optional, boolean): Excludes fields that were automatically added when creating the transform. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
-
get_transform_stats
editGet transform stats. Retrieves usage information for transforms.
client.transform.getTransformStats({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string | string[]): Identifier for the transform. It can be a transform identifier or a wildcard expression. You can get information for all transforms by using _all, by specifying * as the <transform_id>, or by omitting the <transform_id>. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of transforms. -
size
(Optional, number): Specifies the maximum number of transforms to obtain. -
timeout
(Optional, string | -1 | 0): Controls the time to wait for the stats.
-
preview_transform
editPreview a transform. Generates a preview of the results that you will get when you create a transform with the same configuration.
It returns a maximum of 100 results. The calculations are based on all the current data in the source index. It also generates a list of mappings and settings for the destination index. These values are determined based on the field types of the source index and the transform aggregations.
client.transform.previewTransform({ ... })
Arguments
edit-
Request (object):
-
transform_id
(Optional, string): Identifier for the transform to preview. If you specify this path parameter, you cannot provide transform configuration details in the request body. -
dest
(Optional, { index, op_type, pipeline, routing, version_type }): The destination for the transform. -
description
(Optional, string): Free text description of the transform. -
frequency
(Optional, string | -1 | 0): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. -
pivot
(Optional, { aggregations, group_by }): The pivot method transforms the data by aggregating and grouping it. These objects define the group by fields and the aggregation to reduce the data. -
source
(Optional, { index, query, remote, size, slice, sort, _source, runtime_mappings }): The source of the data for the transform. -
settings
(Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended }): Defines optional transform settings. -
sync
(Optional, { time }): Defines the properties transforms require to run continuously. -
retention_policy
(Optional, { time }): Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. -
latest
(Optional, { sort, unique_key }): The latest method transforms the data by finding the latest document for each unique key. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
put_transform
editCreate a transform. Creates a transform.
A transform copies data from source indices, transforms it, and persists it into an entity-centric destination index. You can also think of the destination index as a two-dimensional tabular data structure (known as a data frame). The ID for each document in the data frame is generated from a hash of the entity, so there is a unique row per entity.
You must choose either the latest or pivot method for your transform; you cannot use both in a single transform. If you choose to use the pivot method for your transform, the entities are defined by the set of group_by fields in the pivot object. If you choose to use the latest method, the entities are defined by the unique_key field values in the latest object.
You must have create_index, index, and read privileges on the destination index and read and view_index_metadata privileges on the source indices. When Elasticsearch security features are enabled, the transform remembers which roles the user that created it had at the time of creation and uses those same roles. If those roles do not have the required privileges on the source and destination indices, the transform fails when it attempts unauthorized operations.
You must use Kibana or this API to create a transform. Do not add a transform directly into any .transform-internal* indices using the Elasticsearch index API. If Elasticsearch security features are enabled, do not give users any privileges on .transform-internal* indices. If you used transforms prior to 7.5, also do not give users any privileges on .data-frame-internal* indices.
client.transform.putTransform({ transform_id, dest, source })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It has a 64 character limit and must start and end with alphanumeric characters. -
dest
({ index, op_type, pipeline, routing, version_type }): The destination for the transform. -
source
({ index, query, remote, size, slice, sort, _source, runtime_mappings }): The source of the data for the transform. -
description
(Optional, string): Free text description of the transform. -
frequency
(Optional, string | -1 | 0): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. -
latest
(Optional, { sort, unique_key }): The latest method transforms the data by finding the latest document for each unique key. -
_meta
(Optional, Record<string, User-defined value>): Defines optional transform metadata. -
pivot
(Optional, { aggregations, group_by }): The pivot method transforms the data by aggregating and grouping it. These objects define the group by fields and the aggregation to reduce the data. -
retention_policy
(Optional, { time }): Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. -
settings
(Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended }): Defines optional transform settings. -
sync
(Optional, { time }): Defines the properties transforms require to run continuously. -
defer_validation
(Optional, boolean): When the transform is created, a series of validations occur to ensure its success. For example, there is a check for the existence of the source indices and a check that the destination index is not part of the source index pattern. You can use this parameter to skip the checks, for example when the source index does not exist until after the transform is created. The validations are always run when you start the transform, however, with the exception of privilege checks. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
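To make the required pieces concrete, here is a hedged sketch (the index, field, and transform names are hypothetical) that creates a continuous pivot transform and then starts it with the start API documented below:
await client.transform.putTransform({
  transform_id: 'ecommerce-customer-transform',
  source: { index: 'kibana_sample_data_ecommerce' },
  dest: { index: 'ecommerce-customers' },
  pivot: {
    group_by: { customer_id: { terms: { field: 'customer_id' } } },
    aggregations: { total_spent: { sum: { field: 'taxful_total_price' } } }
  },
  // `sync` makes the transform continuous; omit it for a batch transform.
  sync: { time: { field: 'order_date', delay: '60s' } },
  frequency: '5m'
})
await client.transform.startTransform({ transform_id: 'ecommerce-customer-transform' })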
reset_transform
editReset a transform.
Resets a transform.
Before you can reset it, you must stop it; alternatively, use the force
query parameter.
If the destination index was created by the transform, it is deleted.
client.transform.resetTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It has a 64 character limit and must start and end with alphanumeric characters. -
force
(Optional, boolean): If this value is true, the transform is reset regardless of its current state. If it’s false, the transform must be stopped before it can be reset.
-
schedule_now_transform
editSchedule a transform to start now. Instantly runs a transform to process data.
If you call the _schedule_now API on a transform, it processes the new data instantly, without waiting for the configured frequency interval. After the _schedule_now API is called, the transform is processed again at now + frequency unless the _schedule_now API is called again in the meantime.
client.transform.scheduleNowTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
timeout
(Optional, string | -1 | 0): Controls the time to wait for the scheduling to take place.
-
start_transform
editStart a transform. Starts a transform.
When you start a transform, it creates the destination index if it does not already exist. The number_of_shards is set to 1 and the auto_expand_replicas is set to 0-1. If it is a pivot transform, it deduces the mapping definitions for the destination index from the source indices and the transform aggregations. If fields in the destination index are derived from scripts (as in the case of scripted_metric or bucket_script aggregations), the transform uses dynamic mappings unless an index template exists. If it is a latest transform, it does not deduce mapping definitions; it uses dynamic mappings. To use explicit mappings, create the destination index before you start the transform. Alternatively, you can create an index template, though it does not affect the deduced mappings in a pivot transform.
When the transform starts, a series of validations occur to ensure its success. If you deferred validation when you created the transform, they occur when you start the transform—with the exception of privilege checks. When Elasticsearch security features are enabled, the transform remembers which roles the user that created it had at the time of creation and uses those same roles. If those roles do not have the required privileges on the source and destination indices, the transform fails when it attempts unauthorized operations.
client.transform.startTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
from
(Optional, string): Restricts the set of transformed entities to those changed after this time. Relative times like now-30d are supported. Only applicable for continuous transforms.
-
stop_transform
editStop transforms. Stops one or more transforms.
client.transform.stopTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. To stop multiple transforms, use a list or a wildcard expression. To stop all transforms, use _all or * as the identifier. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If it is true, the API returns a successful acknowledgement message when there are no matches. When there are only partial matches, the API stops the appropriate transforms. If it is false, the request returns a 404 status code when there are no matches or only partial matches. -
force
(Optional, boolean): If it is true, the API forcefully stops the transforms. -
timeout
(Optional, string | -1 | 0): Period to wait for a response when wait_for_completion is true. If no response is received before the timeout expires, the request returns a timeout exception. However, the request continues processing and eventually moves the transform to a STOPPED state. -
wait_for_checkpoint
(Optional, boolean): If it is true, the transform does not completely stop until the current checkpoint is completed. If it is false, the transform stops as soon as possible. -
wait_for_completion
(Optional, boolean): If it is true, the API blocks until the indexer state completely stops. If it is false, the API returns immediately and the indexer is stopped asynchronously in the background.
-
update_transform
editUpdate a transform. Updates certain properties of a transform.
All updated properties except description do not take effect until after the transform starts the next checkpoint, thus there is data consistency in each checkpoint. To use this API, you must have read and view_index_metadata privileges for the source indices. You must also have index and read privileges for the destination index. When Elasticsearch security features are enabled, the transform remembers which roles the user who updated it had at the time of update and runs with those privileges.
client.transform.updateTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
dest
(Optional, { index, op_type, pipeline, routing, version_type }): The destination for the transform. -
description
(Optional, string): Free text description of the transform. -
frequency
(Optional, string | -1 | 0): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. -
_meta
(Optional, Record<string, User-defined value>): Defines optional transform metadata. -
source
(Optional, { index, query, remote, size, slice, sort, _source, runtime_mappings }): The source of the data for the transform. -
settings
(Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended }): Defines optional transform settings. -
sync
(Optional, { time }): Defines the properties transforms require to run continuously. -
retention_policy
(Optional, { time } | null): Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. -
defer_validation
(Optional, boolean): When true, deferrable validations are not run. This behavior may be desired if the source index does not exist until after the transform is created. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
upgrade_transforms
editUpgrades all transforms. This API identifies transforms that have a legacy configuration format and upgrades them to the latest version. It also cleans up the internal data structures that store the transform state and checkpoints. The upgrade does not affect the source and destination indices. The upgrade also does not affect the roles that transforms use when Elasticsearch security features are enabled; the role used to read source data and write to the destination index remains unchanged.
client.transform.upgradeTransforms({ ... })
Arguments
edit-
Request (object):
-
dry_run
(Optional, boolean): When true, the request checks for updates but does not run them. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
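For example, a cautious sketch that previews the upgrade before applying it:
// Check what would change without modifying anything.
const plan = await client.transform.upgradeTransforms({ dry_run: true })
// If the plan looks right, run the upgrade for real.
await client.transform.upgradeTransforms({ dry_run: false, timeout: '60s' })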
watcher
editack_watch
editAcknowledges a watch, manually throttling the execution of the watch’s actions.
client.watcher.ackWatch({ watch_id })
Arguments
edit-
Request (object):
-
watch_id
(string): Watch ID -
action_id
(Optional, string | string[]): A list of the action ids to be acked
-
activate_watch
editActivates a currently inactive watch.
client.watcher.activateWatch({ watch_id })
Arguments
edit-
Request (object):
-
watch_id
(string): Watch ID
-
deactivate_watch
editDeactivates a currently active watch.
client.watcher.deactivateWatch({ watch_id })
Arguments
edit-
Request (object):
-
watch_id
(string): Watch ID
-
delete_watch
editRemoves a watch from Watcher.
client.watcher.deleteWatch({ id })
Arguments
edit-
Request (object):
-
id
(string): Watch ID
-
execute_watch
editThis API can be used to force execution of the watch outside of its triggering logic or to simulate the watch execution for debugging purposes. For testing and debugging purposes, you also have fine-grained control on how the watch runs. You can execute the watch without executing all of its actions or alternatively by simulating them. You can also force execution by ignoring the watch condition and control whether a watch record would be written to the watch history after execution.
client.watcher.executeWatch({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the watch. -
action_modes
(Optional, Record<string, Enum("simulate" | "force_simulate" | "execute" | "force_execute" | "skip")>): Determines how to handle the watch actions as part of the watch execution. -
alternative_input
(Optional, Record<string, User-defined value>): When present, the watch uses this object as a payload instead of executing its own input. -
ignore_condition
(Optional, boolean): When set to true, the watch execution uses the always condition. This can also be specified as an HTTP parameter. -
record_execution
(Optional, boolean): When set to true, the watch record representing the watch execution result is persisted to the .watcher-history index for the current time. In addition, the status of the watch is updated, possibly throttling subsequent executions. This can also be specified as an HTTP parameter. -
simulated_actions
(Optional, { actions, all, use_all }) -
trigger_data
(Optional, { scheduled_time, triggered_time }): This structure is parsed as the data of the trigger event that will be used during the watch execution -
watch
(Optional, { actions, condition, input, metadata, status, throttle_period, throttle_period_in_millis, transform, trigger }): When present, this watch is used instead of the one specified in the request. This watch is not persisted to the index and record_execution cannot be set. -
debug
(Optional, boolean): Defines whether the watch runs in debug mode.
-
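For example, a minimal sketch (the watch id is hypothetical) that runs a watch on demand, ignoring its condition and simulating all of its actions so nothing is actually sent:
const record = await client.watcher.executeWatch({
  id: 'cluster_health_watch',
  ignore_condition: true,
  action_modes: { _all: 'force_simulate' },
  record_execution: false // do not write this run to the watch history
})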
get_settings
editRetrieve settings for the watcher system index.
client.watcher.getSettings()
get_watch
editRetrieves a watch by its ID.
client.watcher.getWatch({ id })
Arguments
edit-
Request (object):
-
id
(string): Watch ID
-
put_watch
editCreates a new watch, or updates an existing one.
client.watcher.putWatch({ id })
Arguments
edit-
Request (object):
-
id
(string): Watch ID -
actions
(Optional, Record<string, { add_backing_index, remove_backing_index }>) -
condition
(Optional, { always, array_compare, compare, never, script }) -
input
(Optional, { chain, http, search, simple }) -
metadata
(Optional, Record<string, User-defined value>) -
throttle_period
(Optional, string) -
transform
(Optional, { chain, script, search }) -
trigger
(Optional, { schedule }) -
active
(Optional, boolean): Specify whether the watch is active or inactive by default -
if_primary_term
(Optional, number): only update the watch if the last operation that has changed the watch has the specified primary term -
if_seq_no
(Optional, number): only update the watch if the last operation that has changed the watch has the specified sequence number -
version
(Optional, number): Explicit version number for concurrency control
-
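To show how the pieces combine, here is a hedged sketch (the watch id, index pattern, and field names are hypothetical) of a watch that searches for error logs every ten minutes and logs a message when any are found:
await client.watcher.putWatch({
  id: 'log_error_watch',
  trigger: { schedule: { interval: '10m' } },
  input: {
    search: {
      request: {
        indices: ['logs-*'],
        body: { query: { match: { level: 'error' } } }
      }
    }
  },
  condition: { compare: { 'ctx.payload.hits.total': { gt: 0 } } },
  actions: {
    log_errors: { logging: { text: 'Found {{ctx.payload.hits.total}} errors' } }
  }
})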
query_watches
editRetrieves stored watches.
client.watcher.queryWatches({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): The offset from the first result to fetch. Needs to be non-negative. -
size
(Optional, number): The number of hits to return. Needs to be non-negative. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Optional query to filter the watches to be returned. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): Optional sort definition. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): Optional search_after value for pagination, using the sort values of the last hit.
-
start
editStarts Watcher if it is not already running.
client.watcher.start()
stats
editRetrieves the current Watcher metrics.
client.watcher.stats({ ... })
Arguments
edit-
Request (object):
-
metric
(Optional, Enum("_all" | "queued_watches" | "current_watches" | "pending_watches") | Enum("_all" | "queued_watches" | "current_watches" | "pending_watches")[]): Defines which additional metrics are included in the response. -
emit_stacktraces
(Optional, boolean): Defines whether stack traces are generated for each watch that is running.
-
stop
editStops Watcher if it is running.
client.watcher.stop()
update_settings
editUpdate settings for the watcher system index.
client.watcher.updateSettings()
xpack
editinfo
editProvides general information about the installed X-Pack features.
client.xpack.info({ ... })
Arguments
edit-
Request (object):
-
categories
(Optional, Enum("build" | "features" | "license")[]): A list of the information categories to include in the response. For example,build,license,features
. -
accept_enterprise
(Optional, boolean): If this parameter is used, it must be set to true -
human
(Optional, boolean): Defines whether additional human-readable information is included in the response. In particular, it adds descriptions and a tag line.
-
usage
editThis API provides information about which features are currently enabled and available under the current license, as well as some usage statistics.
client.xpack.usage({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-