Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not.
The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster
API to attempt to reconnect to remote clusters.
For example with GET _resolve/cluster
or GET _resolve/cluster/*:*
.
The connected
field in the response will indicate whether it was successful.
If a connection was (re-)established, this will also cause the remote/info
endpoint to now indicate a connected status.
client.indices.resolveCluster({ ... })
Arguments
- Request (object):
  - name (Optional, string | string[]): A list of names or index patterns for the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax. Index and cluster exclusions (e.g., -cluster1:*) are also supported. If no index expression is specified, information about all remote clusters configured on the local cluster is returned without doing any index matching.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
  - ignore_throttled (Optional, boolean): If true, concrete, expanded, or aliased indices are ignored when frozen. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
  - timeout (Optional, string | -1 | 0): The maximum time to wait for remote clusters to respond. If a remote cluster does not respond within this timeout period, the API response will show the cluster as not connected and include an error message that the request timed out. The default timeout is unset and the query can take as long as the networking layer is configured to wait for remote clusters that are not responding (typically 30 seconds).
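For example, a minimal sketch (assuming an already configured client) that checks whether the local cluster can reach a remote; the remote alias cluster_one and the index pattern are illustrative only:

const response = await client.indices.resolveCluster({
  name: 'cluster_one:my-index-*', // hypothetical remote alias and index pattern
  timeout: '5s' // fail fast instead of waiting for the network-layer default
})
// Each response key is a cluster alias; the connected field reports whether it was reached
for (const [cluster, info] of Object.entries(response)) {
  console.log(cluster, info.connected)
}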
resolve_index
Resolve indices. Resolve the names and/or index patterns for indices, aliases, and data streams. Multiple patterns and remote clusters are supported.
client.indices.resolveIndex({ name })
Arguments
- Request (object):
  - name (string | string[]): Comma-separated name(s) or index pattern(s) of the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
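A short usage sketch; the local index pattern and the remote alias cluster_one are examples only:

const response = await client.indices.resolveIndex({
  name: 'my-index-*,cluster_one:logs-*', // local pattern plus a hypothetical remote pattern
  expand_wildcards: 'open'
})
// The response groups matches into indices, aliases, and data streams
console.log(response.indices, response.aliases, response.data_streams)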
rollover
Roll over to a new index. TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.
The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.
Roll over a data stream
If you roll over a data stream, the API creates a new write index for the stream. The stream’s previous write index becomes a regular backing index. A rollover also increments the data stream’s generation.
Roll over an index alias with a write index
Prior to Elasticsearch 7.9, you’d typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.
If an index alias points to multiple indices, one of the indices must be a write index.
The rollover API creates a new write index for the alias with is_write_index
set to true
.
The API also sets is_write_index
to false
for the previous write index.
Roll over an index alias with one index
If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.
A rollover creates a new index and is subject to the wait_for_active_shards
setting.
Increment index names for an alias
When you roll over an index alias, you can specify a name for the new index.
If you don’t specify a name and the current index ends with -
and a number, such as my-index-000001
or my-index-3
, the new index name increments that number.
For example, if you roll over an alias with a current index of my-index-000001
, the rollover creates a new index named my-index-000002
.
This number is always six characters and zero-padded, regardless of the previous index’s name.
If you use an index alias for time series data, you can use date math in the index name to track the rollover date.
For example, you can create an alias that points to an index named <my-index-{now/d}-000001>
.
If you create the index on May 6, 2099, the index’s name is my-index-2099.05.06-000001
.
If you roll over the alias on May 7, 2099, the new index’s name is my-index-2099.05.07-000002
.
client.indices.rollover({ alias })
Arguments
- Request (object):
  - alias (string): Name of the data stream or index alias to roll over.
  - new_index (Optional, string): Name of the index to create. Supports date math. Data streams do not support this parameter.
  - aliases (Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the target index. Data streams do not support this parameter.
  - conditions (Optional, { min_age, max_age, max_age_millis, min_docs, max_docs, max_size, max_size_bytes, min_size, min_size_bytes, max_primary_shard_size, max_primary_shard_size_bytes, min_primary_shard_size, min_primary_shard_size_bytes, max_primary_shard_docs, min_primary_shard_docs }): Conditions for the rollover. If specified, Elasticsearch only performs the rollover if the current index satisfies these conditions. If this parameter is not specified, Elasticsearch performs the rollover unconditionally. If conditions are specified, at least one of them must be a max_* condition. The index will roll over if any max_* condition is satisfied and all min_* conditions are satisfied.
  - mappings (Optional, { all_field, date_detection, dynamic, dynamic_date_formats, dynamic_templates, _field_names, index_field, _meta, numeric_detection, properties, _routing, _size, _source, runtime, enabled, subobjects, _data_stream_timestamp }): Mapping for fields in the index. If specified, this mapping can include field names, field data types, and mapping parameters.
  - settings (Optional, Record<string, User-defined value>): Configuration options for the index. Data streams do not support this parameter.
  - dry_run (Optional, boolean): If true, checks whether the current index satisfies the specified conditions but does not perform a rollover.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
  - lazy (Optional, boolean): If set to true, the rollover action will only mark a data stream to signal that it needs to be rolled over at the next write. Only allowed on data streams.
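As a sketch, a conditional rollover of a hypothetical data stream named my-data-stream, which only rolls over when at least one max_* condition is met:

const response = await client.indices.rollover({
  alias: 'my-data-stream', // data stream or index alias to roll over
  conditions: {
    max_age: '7d',                  // roll over if the write index is older than 7 days
    max_primary_shard_size: '50gb'  // or if any primary shard exceeds 50 GB
  }
})
console.log(response.rolled_over, response.old_index, response.new_index)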
segments
Get index segments. Get low-level information about the Lucene segments in index shards. For data streams, the API returns information about the stream’s backing indices.
client.indices.segments({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - verbose (Optional, boolean): If true, the request returns a verbose response.
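For example, a minimal call against a hypothetical index (verbose returns additional segment detail):

const response = await client.indices.segments({
  index: 'my-index-000001', // illustrative index name
  verbose: true
})
// Per-shard segment details live under indices[<name>].shards
console.log(response.indices['my-index-000001'].shards)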
shard_stores
Get index shard stores. Get store information about replica shards in one or more indices. For data streams, the API retrieves store information for the stream’s backing indices.
The index shard stores API returns the following information:
- The node on which each replica shard exists.
- The allocation ID for each replica shard.
- A unique ID for each replica shard.
- Any errors encountered while opening the shard index or from an earlier failure.
By default, the API returns store information only for primary shards that are unassigned or have one or more unassigned replica shards.
client.indices.shardStores({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases used to limit the request.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams.
  - ignore_unavailable (Optional, boolean): If true, missing or closed indices are not included in the response.
  - status (Optional, Enum("green" | "yellow" | "red" | "all") | Enum("green" | "yellow" | "red" | "all")[]): List of shard health statuses used to limit the request.
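A quick sketch that limits the report to shards of indices with yellow or red health (the index name is illustrative):

const response = await client.indices.shardStores({
  index: 'my-index-000001',
  status: ['yellow', 'red'] // only report problem shards
})
console.log(response.indices)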
shrink
Shrink an index. Shrink an index into a new index with fewer primary shards.
Before you can shrink an index:
- The index must be read-only.
- A copy of every shard in the index must reside on the same node.
- The index must have a green health status.
To make shard allocation easier, we recommend you also remove the index’s replica shards. You can later re-add replica shards as part of the shrink operation.
The requested number of primary shards in the target index must be a factor of the number of shards in the source index. For example, an index with 8 primary shards can be shrunk into 4, 2, or 1 primary shards, and an index with 15 primary shards can be shrunk into 5, 3, or 1. If the number of shards in the index is a prime number, it can only be shrunk into a single primary shard. Before shrinking, a (primary or replica) copy of every shard in the index must be present on the same node.
The current write index on a data stream cannot be shrunk. In order to shrink the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be shrunk.
A shrink operation:
- Creates a new target index with the same definition as the source index, but with a smaller number of primary shards.
- Hard-links segments from the source index into the target index. If the file system does not support hard-linking, then all segments are copied into the new index, which is a much more time consuming process. Also if using multiple data paths, shards on different data paths require a full copy of segment files if they are not on the same disk since hardlinks do not work across disks.
- Recovers the target index as though it were a closed index which had just been re-opened. Recovers shards to the index.routing.allocation.initial_recovery._id index setting.
Indices can only be shrunk if they satisfy the following requirements:
- The target index must not exist.
- The source index must have more primary shards than the target index.
- The number of primary shards in the target index must be a factor of the number of primary shards in the source index.
- The index must not contain more than 2,147,483,519 documents in total across all shards that will be shrunk into a single shard on the target index as this is the maximum number of docs that can fit into a single shard.
- The node handling the shrink process must have sufficient free disk space to accommodate a second copy of the existing index.
client.indices.shrink({ index, target })
Arguments
- Request (object):
  - index (string): Name of the source index to shrink.
  - target (string): Name of the target index to create.
  - aliases (Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): The key is the alias name. Index alias names support date math.
  - settings (Optional, Record<string, User-defined value>): Configuration options for the target index.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
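Putting the prerequisites and the shrink call together, a hedged sketch using the update index settings API (client.indices.putSettings); the index names and shard counts are illustrative:

// 1. Make the source read-only and drop replicas so every shard copy can sit on one node
//    (in practice you would typically also set index.routing.allocation.require._name)
await client.indices.putSettings({
  index: 'my-source-index',
  settings: {
    'index.number_of_replicas': 0,
    'index.blocks.write': true
  }
})
// 2. Shrink 8 primary shards down to 2 (2 is a factor of 8)
await client.indices.shrink({
  index: 'my-source-index',
  target: 'my-shrunk-index',
  settings: {
    'index.number_of_shards': 2,
    'index.number_of_replicas': 1 // re-add replicas on the target
  }
})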
simulate_index_template
Simulate an index. Get the index configuration that would be applied to the specified index from an existing index template.
client.indices.simulateIndexTemplate({ name })
Arguments
- Request (object):
  - name (string): Name of the index to simulate.
  - create (Optional, boolean): Whether the index template we optionally defined in the body should only be dry-run added if new or can also replace an existing one.
  - cause (Optional, string): User defined reason for dry-run creating the new template for simulation purposes.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - include_defaults (Optional, boolean): If true, returns all relevant default configurations for the index template.
simulate_template
Simulate an index template. Get the index configuration that would be applied by a particular index template.
client.indices.simulateTemplate({ ... })
Arguments
- Request (object):
  - name (Optional, string): Name of the index template to simulate. To test a template configuration before you add it to the cluster, omit this parameter and specify the template configuration in the request body.
  - allow_auto_create (Optional, boolean): This setting overrides the value of the action.auto_create_index cluster setting. If set to true in a template, then indices can be automatically created using that template even if auto-creation of indices is disabled via actions.auto_create_index. If set to false, then indices or data streams matching the template must always be explicitly created, and may never be automatically created.
  - index_patterns (Optional, string | string[]): Array of wildcard (*) expressions used to match the names of data streams and indices during creation.
  - composed_of (Optional, string[]): An ordered list of component template names. Component templates are merged in the order specified, meaning that the last component template specified has the highest precedence.
  - template (Optional, { aliases, mappings, settings, lifecycle }): Template to be applied. It may optionally include an aliases, mappings, or settings configuration.
  - data_stream (Optional, { hidden, allow_custom_routing }): If this object is included, the template is used to create data streams and their backing indices. Supports an empty object. Data streams require a matching index template with a data_stream object.
  - priority (Optional, number): Priority to determine index template precedence when a new data stream or index is created. The index template with the highest priority is chosen. If no priority is specified the template is treated as though it is of priority 0 (lowest priority). This number is not automatically generated by Elasticsearch.
  - version (Optional, number): Version number used to manage index templates externally. This number is not automatically generated by Elasticsearch.
  - _meta (Optional, Record<string, User-defined value>): Optional user metadata about the index template. May have any contents. This map is not automatically generated by Elasticsearch.
  - ignore_missing_component_templates (Optional, string[]): The configuration option ignore_missing_component_templates can be used when an index template references a component template that might not exist.
  - deprecated (Optional, boolean): Marks this index template as deprecated. When creating or updating a non-deprecated index template that uses deprecated components, Elasticsearch will emit a deprecation warning.
  - create (Optional, boolean): If true, the template passed in the body is only used if no existing templates match the same index patterns. If false, the simulation uses the template with the highest priority. Note that the template is not permanently added or updated in either case; it is only used for the simulation.
  - cause (Optional, string): User defined reason for dry-run creating the new template for simulation purposes.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - include_defaults (Optional, boolean): If true, returns all relevant default configurations for the index template.
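For instance, a sketch that previews what an ad-hoc template would produce without storing it; the pattern and settings are examples only:

const response = await client.indices.simulateTemplate({
  index_patterns: ['my-logs-*'], // hypothetical pattern
  template: {
    settings: { 'index.number_of_shards': 1 }
  },
  priority: 10
})
// The resolved configuration that would be applied, plus any overlapping templates if reported
console.log(response.template, response.overlapping)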
split
Split an index. Split an index into a new index with more primary shards. Before you can split an index:
- The index must be read-only.
- The cluster health status must be green.
You can make an index read-only with the following request using the add index block API:
PUT /my_source_index/_block/write
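With this client, the equivalent call is client.indices.addBlock (index name as in the example above):

await client.indices.addBlock({ index: 'my_source_index', block: 'write' })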
The current write index on a data stream cannot be split. In order to split the current write index, the data stream must first be rolled over so that a new write index is created and then the previous write index can be split.
The number of times the index can be split (and the number of shards that each original shard can be split into) is determined by the index.number_of_routing_shards
setting.
The number of routing shards specifies the hashing space that is used internally to distribute documents across shards with consistent hashing.
For instance, a 5 shard index with number_of_routing_shards
set to 30 (5 x 2 x 3) could be split by a factor of 2 or 3.
A split operation:
- Creates a new target index with the same definition as the source index, but with a larger number of primary shards.
- Hard-links segments from the source index into the target index. If the file system doesn’t support hard-linking, all segments are copied into the new index, which is a much more time consuming process.
- Hashes all documents again, after low level files are created, to delete documents that belong to a different shard.
- Recovers the target index as though it were a closed index which had just been re-opened.
Indices can only be split if they satisfy the following requirements:
- The target index must not exist.
- The source index must have fewer primary shards than the target index.
- The number of primary shards in the target index must be a multiple of the number of primary shards in the source index.
- The node handling the split process must have sufficient free disk space to accommodate a second copy of the existing index.
client.indices.split({ index, target })
Arguments
- Request (object):
  - index (string): Name of the source index to split.
  - target (string): Name of the target index to create.
  - aliases (Optional, Record<string, { filter, index_routing, is_hidden, is_write_index, routing, search_routing }>): Aliases for the resulting index.
  - settings (Optional, Record<string, User-defined value>): Configuration options for the target index.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
  - wait_for_active_shards (Optional, number | Enum("all" | "index-setting")): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
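A hedged sketch, assuming a 5-shard source index with index.number_of_routing_shards set to 30 and writes already blocked (see the add index block example above); the index names are illustrative:

await client.indices.split({
  index: 'my-source-index',   // 5 primary shards, number_of_routing_shards: 30 (illustrative)
  target: 'my-split-index',
  settings: {
    'index.number_of_shards': 15 // 5 x 3, a split factor allowed by the routing-shard setting
  }
})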
stats
Get index statistics. For data streams, the API retrieves statistics for the stream’s backing indices.
By default, the returned statistics are index-level with primaries
and total
aggregations.
primaries
are the values for only the primary shards.
total
are the accumulated values for both primary and replica shards.
To get shard-level statistics, set the level
parameter to shards
.
When moving to another node, the shard-level statistics for a shard are cleared. Although the shard is no longer part of the node, that node retains any node-level statistics to which the shard contributed.
client.indices.stats({ ... })
Arguments
- Request (object):
  - metric (Optional, string | string[]): Limit the information returned to the specific metrics.
  - index (Optional, string | string[]): A list of index names; use _all or empty string to perform the operation on all indices.
  - completion_fields (Optional, string | string[]): List or wildcard expressions of fields to include in fielddata and suggest statistics.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden.
  - fielddata_fields (Optional, string | string[]): List or wildcard expressions of fields to include in fielddata statistics.
  - fields (Optional, string | string[]): List or wildcard expressions of fields to include in the statistics.
  - forbid_closed_indices (Optional, boolean): If true, statistics are not collected from closed indices.
  - groups (Optional, string | string[]): List of search groups to include in the search statistics.
  - include_segment_file_sizes (Optional, boolean): If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested).
  - include_unloaded_segments (Optional, boolean): If true, the response includes information from segments that are not loaded into memory.
  - level (Optional, Enum("cluster" | "indices" | "shards")): Indicates whether statistics are aggregated at the cluster, index, or shard level.
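For example, shard-level docs and store statistics for a hypothetical data stream:

const response = await client.indices.stats({
  index: 'my-data-stream',   // illustrative target
  metric: ['docs', 'store'], // limit the returned metrics
  level: 'shards'            // break the numbers down per shard
})
console.log(response._all.primaries, response._all.total)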
unfreeze
Unfreeze an index. When a frozen index is unfrozen, the index goes through the normal recovery process and becomes writeable again.
client.indices.unfreeze({ index })
Arguments
- Request (object):
  - index (string): Identifier for the index.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
  - wait_for_active_shards (Optional, string): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
update_aliases
Create or update an alias. Adds a data stream or index to an alias.
client.indices.updateAliases({ ... })
Arguments
- Request (object):
  - actions (Optional, { add_backing_index, remove_backing_index }[]): Actions to perform.
  - master_timeout (Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
  - timeout (Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
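As a sketch, an atomic alias swap using the standard add and remove actions of the aliases API (the abbreviated type above does not list these variants); the index and alias names are illustrative:

await client.indices.updateAliases({
  actions: [
    { remove: { index: 'my-index-000001', alias: 'my-alias' } },
    { add: { index: 'my-index-000002', alias: 'my-alias', is_write_index: true } }
  ]
})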
validate_query
Validate a query. Validates a query without running it.
client.indices.validateQuery({ ... })
Arguments
- Request (object):
  - index (Optional, string | string[]): List of data streams, indices, and aliases to search. Supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.
  - query (Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Query in the Lucene query string syntax.
  - allow_no_indices (Optional, boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
  - all_shards (Optional, boolean): If true, the validation is executed on all shards instead of one random shard per index.
  - analyzer (Optional, string): Analyzer to use for the query string. This parameter can only be used when the q query string parameter is specified.
  - analyze_wildcard (Optional, boolean): If true, wildcard and prefix queries are analyzed.
  - default_operator (Optional, Enum("and" | "or")): The default operator for query string query: AND or OR.
  - df (Optional, string): Field to use as default where no field prefix is given in the query string. This parameter can only be used when the q query string parameter is specified.
  - expand_wildcards (Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
  - explain (Optional, boolean): If true, the response returns detailed information if an error has occurred.
  - ignore_unavailable (Optional, boolean): If false, the request returns an error if it targets a missing or closed index.
  - lenient (Optional, boolean): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
  - rewrite (Optional, boolean): If true, returns a more detailed explanation showing the actual Lucene query that will be executed.
  - q (Optional, string): Query in the Lucene query string syntax.
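For example, validating a query body with explain enabled; the index name, field, and the deliberately malformed query string are illustrative:

const response = await client.indices.validateQuery({
  index: 'my-index-000001',
  explain: true,
  query: {
    query_string: { query: 'user.id:kimchy AND [broken' } // deliberately malformed syntax
  }
})
console.log(response.valid) // false for the malformed query above
if (!response.valid) console.log(response.explanations)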
inference
chat_completion_unified
Perform chat completion inference
client.inference.chatCompletionUnified({ inference_id })
Arguments
- Request (object):
  - inference_id (string): The inference Id.
  - chat_completion_request (Optional, { messages, model, max_completion_tokens, stop, temperature, tool_choice, tools, top_p })
  - timeout (Optional, string | -1 | 0): Specifies the amount of time to wait for the inference request to complete.
completion
Perform completion inference on the service
client.inference.completion({ inference_id, input })
Arguments
- Request (object):
  - inference_id (string): The inference Id.
  - input (string | string[]): Inference input. Either a string or an array of strings.
  - task_settings (Optional, User-defined value): Optional task settings.
  - timeout (Optional, string | -1 | 0): Specifies the amount of time to wait for the inference request to complete.
delete
Delete an inference endpoint
client.inference.delete({ inference_id })
Arguments
- Request (object):
  - inference_id (string): The inference identifier.
  - task_type (Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion" | "chat_completion")): The task type.
  - dry_run (Optional, boolean): When true, the endpoint is not deleted and a list of ingest processors which reference this endpoint is returned.
  - force (Optional, boolean): When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields.
get
Get an inference endpoint
client.inference.get({ ... })
Arguments
- Request (object):
  - task_type (Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion" | "chat_completion")): The task type.
  - inference_id (Optional, string): The inference Id.
inference
Perform inference on the service.
This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API.
For details about using this API with a service, such as Amazon Bedrock, Anthropic, or HuggingFace, refer to the service-specific documentation.
info The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
client.inference.inference({ inference_id, input })
Arguments
- Request (object):
  - inference_id (string): The unique identifier for the inference endpoint.
  - input (string | string[]): The text on which you want to perform the inference task. It can be a single string or an array. NOTE: Inference endpoints for the completion task type currently only support a single string as input.
  - task_type (Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion" | "chat_completion")): The type of inference task that the model performs.
  - query (Optional, string): The query input, which is required only for the rerank task. It is not required for other tasks.
  - task_settings (Optional, User-defined value): Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.
  - timeout (Optional, string | -1 | 0): The amount of time to wait for the inference request to complete.
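A minimal sketch that runs a sparse embedding task against a hypothetical endpoint ID created beforehand:

const response = await client.inference.inference({
  inference_id: 'my-elser-endpoint', // hypothetical endpoint ID
  task_type: 'sparse_embedding',
  input: ['The quick brown fox jumps over the lazy dog']
})
// For a sparse_embedding endpoint the results are returned under response.sparse_embedding
console.log(response.sparse_embedding)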
put
Create an inference endpoint.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
client.inference.put({ inference_id })
Arguments
- Request (object):
  - inference_id (string): The inference Id.
  - task_type (Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion" | "chat_completion")): The task type.
  - inference_config (Optional, { chunking_settings, service, service_settings, task_settings })
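As a hedged sketch only: creating a text embedding endpoint through the generic put API. The service name, model_id, and api_key fields shown inside service_settings are placeholders and vary by service; the service-specific helpers below (put_openai, put_cohere, and so on) document the exact settings for each service.

await client.inference.put({
  inference_id: 'my-embedding-endpoint', // arbitrary endpoint ID
  task_type: 'text_embedding',
  inference_config: {
    service: 'openai', // any supported service name
    service_settings: {
      api_key: process.env.OPENAI_API_KEY, // placeholder credential
      model_id: 'text-embedding-3-small'   // placeholder model; field names vary by service
    }
  }
})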
put_alibabacloud
Create an AlibabaCloud AI Search inference endpoint.
Create an inference endpoint to perform an inference task with the alibabacloud-ai-search
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putAlibabacloud({ task_type, alibabacloud_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("completion" | "rerank" | "space_embedding" | "text_embedding")): The type of the inference task that the model will perform.
  - alibabacloud_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("alibabacloud-ai-search")): The type of service supported for the specified task type. In this case, alibabacloud-ai-search.
  - service_settings ({ api_key, host, rate_limit, service_id, workspace }): Settings used to install the inference model. These settings are specific to the alibabacloud-ai-search service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { input_type, return_token }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_amazonbedrock
Create an Amazon Bedrock inference endpoint.
Creates an inference endpoint to perform an inference task with the amazonbedrock
service.
info You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putAmazonbedrock({ task_type, amazonbedrock_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("completion" | "text_embedding")): The type of the inference task that the model will perform.
  - amazonbedrock_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("amazonbedrock")): The type of service supported for the specified task type. In this case, amazonbedrock.
  - service_settings ({ access_key, model, provider, region, rate_limit, secret_key }): Settings used to install the inference model. These settings are specific to the amazonbedrock service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { max_new_tokens, temperature, top_k, top_p }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_anthropic
Create an Anthropic inference endpoint.
Create an inference endpoint to perform an inference task with the anthropic
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putAnthropic({ task_type, anthropic_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("completion")): The task type. The only valid task type for the model to perform is completion.
  - anthropic_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("anthropic")): The type of service supported for the specified task type. In this case, anthropic.
  - service_settings ({ api_key, model_id, rate_limit }): Settings used to install the inference model. These settings are specific to the anthropic service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { max_tokens, temperature, top_k, top_p }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_azureaistudio
Create an Azure AI Studio inference endpoint.
Create an inference endpoint to perform an inference task with the azureaistudio
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putAzureaistudio({ task_type, azureaistudio_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("completion" | "text_embedding")): The type of the inference task that the model will perform.
  - azureaistudio_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("azureaistudio")): The type of service supported for the specified task type. In this case, azureaistudio.
  - service_settings ({ api_key, endpoint_type, target, provider, rate_limit }): Settings used to install the inference model. These settings are specific to the azureaistudio service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { do_sample, max_new_tokens, temperature, top_p, user }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_azureopenai
Create an Azure OpenAI inference endpoint.
Create an inference endpoint to perform an inference task with the azureopenai
service.
The list of chat completion models that you can choose from in your Azure OpenAI deployment includes:
- [GPT-4 and GPT-4 Turbo models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-4-and-gpt-4-turbo-models)
- [GPT-3.5](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#gpt-35)
The list of embeddings models that you can choose from in your deployment can be found in the [Azure models documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#embeddings).
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putAzureopenai({ task_type, azureopenai_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("completion" | "text_embedding")): The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.
  - azureopenai_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("azureopenai")): The type of service supported for the specified task type. In this case, azureopenai.
  - service_settings ({ api_key, api_version, deployment_id, entra_id, rate_limit, resource_name }): Settings used to install the inference model. These settings are specific to the azureopenai service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { user }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_cohere
Create a Cohere inference endpoint.
Create an inference endpoint to perform an inference task with the cohere
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putCohere({ task_type, cohere_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("completion" | "rerank" | "text_embedding")): The type of the inference task that the model will perform.
  - cohere_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("cohere")): The type of service supported for the specified task type. In this case, cohere.
  - service_settings ({ api_key, embedding_type, model_id, rate_limit, similarity }): Settings used to install the inference model. These settings are specific to the cohere service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { input_type, return_documents, top_n, truncate }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_elasticsearch
Create an Elasticsearch inference endpoint.
Create an inference endpoint to perform an inference task with the elasticsearch
service.
info Your Elasticsearch deployment contains preconfigured ELSER and E5 inference endpoints. You only need to create the endpoints using the API if you want to customize the settings.
If you use the ELSER or the E5 model through the elasticsearch
service, the API request will automatically download and deploy the model if it isn’t downloaded yet.
info You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout, while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putElasticsearch({ task_type, elasticsearch_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("rerank" | "sparse_embedding" | "text_embedding")): The type of the inference task that the model will perform.
  - elasticsearch_inference_id (string): The unique identifier of the inference endpoint. It must not match the model_id.
  - service (Enum("elasticsearch")): The type of service supported for the specified task type. In this case, elasticsearch.
  - service_settings ({ adaptive_allocations, deployment_id, model_id, num_allocations, num_threads }): Settings used to install the inference model. These settings are specific to the elasticsearch service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { return_documents }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_elser
Create an ELSER inference endpoint.
Create an inference endpoint to perform an inference task with the elser
service.
You can also deploy ELSER by using the Elasticsearch inference integration.
info Your Elasticsearch deployment contains a preconfigured ELSER inference endpoint. You only need to create the endpoint using the API if you want to customize the settings.
The API request will automatically download and deploy the ELSER model if it isn’t already downloaded.
info You might see a 502 bad gateway error in the response when using the Kibana Console. This error usually just reflects a timeout, while the model downloads in the background. You can check the download progress in the Machine Learning UI. If using the Python client, you can set the timeout parameter to a higher value.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putElser({ task_type, elser_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("sparse_embedding")): The type of the inference task that the model will perform.
  - elser_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("elser")): The type of service supported for the specified task type. In this case, elser.
  - service_settings ({ adaptive_allocations, num_allocations, num_threads }): Settings used to install the inference model. These settings are specific to the elser service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
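As an example, a sketch that deploys ELSER with one allocation and a single thread; the endpoint ID is arbitrary:

await client.inference.putElser({
  task_type: 'sparse_embedding',
  elser_inference_id: 'my-elser-endpoint', // arbitrary endpoint ID
  service: 'elser',
  service_settings: {
    num_allocations: 1, // model allocations to deploy
    num_threads: 1      // threads per allocation
  }
})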
put_googleaistudio
Create a Google AI Studio inference endpoint.
Create an inference endpoint to perform an inference task with the googleaistudio
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putGoogleaistudio({ task_type, googleaistudio_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("completion" | "text_embedding")): The type of the inference task that the model will perform.
  - googleaistudio_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("googleaistudio")): The type of service supported for the specified task type. In this case, googleaistudio.
  - service_settings ({ api_key, model_id, rate_limit }): Settings used to install the inference model. These settings are specific to the googleaistudio service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
put_googlevertexai
Create a Google Vertex AI inference endpoint.
Create an inference endpoint to perform an inference task with the googlevertexai
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putGooglevertexai({ task_type, googlevertexai_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("rerank" | "text_embedding")): The type of the inference task that the model will perform.
  - googlevertexai_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("googlevertexai")): The type of service supported for the specified task type. In this case, googlevertexai.
  - service_settings ({ location, model_id, project_id, rate_limit, service_account_json }): Settings used to install the inference model. These settings are specific to the googlevertexai service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
  - task_settings (Optional, { auto_truncate, top_n }): Settings to configure the inference task. These settings are specific to the task type you specified.
put_hugging_face
Create a Hugging Face inference endpoint.
Create an inference endpoint to perform an inference task with the hugging_face
service.
You must first create an inference endpoint on the Hugging Face endpoint page to get an endpoint URL.
Select the model you want to use on the new endpoint creation page (for example intfloat/e5-small-v2
), then select the sentence embeddings task under the advanced configuration section.
Create the endpoint and copy the URL after the endpoint initialization has been finished.
The following models are recommended for the Hugging Face service:
- all-MiniLM-L6-v2
- all-MiniLM-L12-v2
- all-mpnet-base-v2
- e5-base-v2
- e5-small-v2
- multilingual-e5-base
- multilingual-e5-small
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putHuggingFace({ task_type, huggingface_inference_id, service, service_settings })
Arguments
- Request (object):
  - task_type (Enum("text_embedding")): The type of the inference task that the model will perform.
  - huggingface_inference_id (string): The unique identifier of the inference endpoint.
  - service (Enum("hugging_face")): The type of service supported for the specified task type. In this case, hugging_face.
  - service_settings ({ api_key, rate_limit, url }): Settings used to install the inference model. These settings are specific to the hugging_face service.
  - chunking_settings (Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
put_jinaai
Create a JinaAI inference endpoint.
Create an inference endpoint to perform an inference task with the jinaai
service.
To review the available rerank
models, refer to https://jina.ai/reranker.
To review the available text_embedding
models, refer to https://jina.ai/embeddings/.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putJinaai({ task_type, jinaai_inference_id, service, service_settings })
Arguments
edit-
Request (object):
-
task_type
(Enum("rerank" | "text_embedding")): The type of the inference task that the model will perform. -
jinaai_inference_id
(string): The unique identifier of the inference endpoint. -
service
(Enum("jinaai")): The type of service supported for the specified task type. In this case,jinaai
. -
service_settings
({ api_key, model_id, rate_limit, similarity }): Settings used to install the inference model. These settings are specific to thejinaai
service. -
chunking_settings
(Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object. -
task_settings
(Optional, { return_documents, task, top_n }): Settings to configure the inference task. These settings are specific to the task type you specified.
-
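For example, a minimal sketch of creating a JinaAI text embedding endpoint could look like this; the API key and model ID are placeholders.
const response = await client.inference.putJinaai({
  task_type: 'text_embedding',
  jinaai_inference_id: 'my-jinaai-endpoint', // hypothetical endpoint ID
  service: 'jinaai',
  service_settings: {
    api_key: 'jina-xxxx',            // placeholder API key
    model_id: 'jina-embeddings-v3'   // placeholder model, pick one from https://jina.ai/embeddings/
  }
})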
put_mistral
editCreate a Mistral inference endpoint.
Create an inference endpoint to perform an inference task with the mistral
service.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putMistral({ task_type, mistral_inference_id, service, service_settings })
Arguments
edit-
Request (object):
-
task_type
(Enum("text_embedding")): The task type. The only valid task type for the model to perform istext_embedding
. -
mistral_inference_id
(string): The unique identifier of the inference endpoint. -
service
(Enum("mistral")): The type of service supported for the specified task type. In this case,mistral
. -
service_settings
({ api_key, max_input_tokens, model, rate_limit }): Settings used to install the inference model. These settings are specific to themistral
service. -
chunking_settings
(Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object.
-
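A minimal sketch of creating a Mistral text embedding endpoint might look like the following; the API key and model name are placeholders.
const response = await client.inference.putMistral({
  task_type: 'text_embedding',
  mistral_inference_id: 'my-mistral-endpoint', // hypothetical endpoint ID
  service: 'mistral',
  service_settings: {
    api_key: 'mistral-xxxx',  // placeholder API key
    model: 'mistral-embed'    // placeholder model name
  }
})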
put_openai
editCreate an OpenAI inference endpoint.
Create an inference endpoint to perform an inference task with the openai
service or openai
compatible APIs.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putOpenai({ task_type, openai_inference_id, service, service_settings })
Arguments
edit-
Request (object):
-
task_type
(Enum("chat_completion" | "completion" | "text_embedding")): The type of the inference task that the model will perform. NOTE: Thechat_completion
task type only supports streaming and only through the _stream API. -
openai_inference_id
(string): The unique identifier of the inference endpoint. -
service
(Enum("openai")): The type of service supported for the specified task type. In this case,openai
. -
service_settings
({ api_key, dimensions, model_id, organization_id, rate_limit, url }): Settings used to install the inference model. These settings are specific to theopenai
service. -
chunking_settings
(Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object. -
task_settings
(Optional, { user }): Settings to configure the inference task. These settings are specific to the task type you specified.
-
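As an example only, creating an OpenAI text embedding endpoint could be sketched as follows; the API key and model ID are placeholders.
const response = await client.inference.putOpenai({
  task_type: 'text_embedding',
  openai_inference_id: 'my-openai-endpoint', // hypothetical endpoint ID
  service: 'openai',
  service_settings: {
    api_key: 'sk-xxxx',                  // placeholder API key
    model_id: 'text-embedding-3-small'   // placeholder model ID
  }
})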
put_voyageai
editCreate a VoyageAI inference endpoint.
Create an inference endpoint to perform an inference task with the voyageai
service.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putVoyageai({ task_type, voyageai_inference_id, service, service_settings })
Arguments
edit-
Request (object):
-
task_type
(Enum("text_embedding" | "rerank")): The type of the inference task that the model will perform. -
voyageai_inference_id
(string): The unique identifier of the inference endpoint. -
service
(Enum("voyageai")): The type of service supported for the specified task type. In this case,voyageai
. -
service_settings
({ dimensions, model_id, rate_limit, embedding_type }): Settings used to install the inference model. These settings are specific to thevoyageai
service. -
chunking_settings
(Optional, { max_chunk_size, overlap, sentence_overlap, strategy }): The chunking configuration object. -
task_settings
(Optional, { input_type, return_documents, top_k, truncation }): Settings to configure the inference task. These settings are specific to the task type you specified.
-
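A minimal sketch of creating a VoyageAI text embedding endpoint might look like this; the model ID is a placeholder.
const response = await client.inference.putVoyageai({
  task_type: 'text_embedding',
  voyageai_inference_id: 'my-voyageai-endpoint', // hypothetical endpoint ID
  service: 'voyageai',
  service_settings: {
    model_id: 'voyage-3-lite' // placeholder model ID
  }
})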
put_watsonx
editCreate a Watsonx inference endpoint.
Create an inference endpoint to perform an inference task with the watsonxai
service.
You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai
inference service.
You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.
When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running.
After creating the endpoint, wait for the model deployment to complete before using it.
To verify the deployment status, use the get trained model statistics API.
Look for "state": "fully_allocated"
in the response and ensure that the "allocation_count"
matches the "target_allocation_count"
.
Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
client.inference.putWatsonx({ task_type, watsonx_inference_id, service, service_settings })
Arguments
edit-
Request (object):
-
task_type
(Enum("text_embedding")): The task type. The only valid task type for the model to perform istext_embedding
. -
watsonx_inference_id
(string): The unique identifier of the inference endpoint. -
service
(Enum("watsonxai")): The type of service supported for the specified task type. In this case,watsonxai
. -
service_settings
({ api_key, api_version, model_id, project_id, rate_limit, url }): Settings used to install the inference model. These settings are specific to thewatsonxai
service.
-
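For illustration, a sketch of creating a Watsonx text embedding endpoint could look like the following; every service setting shown here is a placeholder value.
const response = await client.inference.putWatsonx({
  task_type: 'text_embedding',
  watsonx_inference_id: 'my-watsonx-endpoint', // hypothetical endpoint ID
  service: 'watsonxai',
  service_settings: {
    api_key: 'watsonx-xxxx',                    // placeholder API key
    url: 'https://us-south.ml.cloud.ibm.com',   // placeholder deployment URL
    api_version: '2024-05-02',                  // placeholder API version date
    project_id: 'my-watsonx-project',           // placeholder project ID
    model_id: 'ibm/slate-30m-english-rtrvr'     // placeholder model ID
  }
})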
rerank
editPerform reranking inference on the service
client.inference.rerank({ inference_id, query, input })
Arguments
edit-
Request (object):
-
inference_id
(string): The unique identifier for the inference endpoint. -
query
(string): Query input. -
input
(string | string[]): The text on which you want to perform the inference task. It can be a single string or an array.
-
info Inference endpoints for the completion task type currently only support a single string as input. -
task_settings
(Optional, User-defined value): Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. -
timeout
(Optional, string | -1 | 0): The amount of time to wait for the inference request to complete.
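For example, a minimal reranking request against an existing rerank endpoint (the endpoint ID below is a placeholder) might look like:
const response = await client.inference.rerank({
  inference_id: 'my-rerank-endpoint', // placeholder: an existing rerank inference endpoint
  query: 'Which planet is known as the red planet?',
  input: [
    'Mars is often called the red planet.',
    'Venus is the second planet from the Sun.'
  ]
})
console.log(response)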
sparse_embedding
editPerform sparse embedding inference on the service
client.inference.sparseEmbedding({ inference_id, input })
Arguments
edit-
Request (object):
-
inference_id
(string): The inference Id -
input
(string | string[]): Inference input. Either a string or an array of strings. -
task_settings
(Optional, User-defined value): Optional task settings -
timeout
(Optional, string | -1 | 0): Specifies the amount of time to wait for the inference request to complete.
-
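A minimal sparse embedding request could be sketched as follows; the endpoint ID is a placeholder for an existing sparse_embedding endpoint (for example, one backed by ELSER).
const response = await client.inference.sparseEmbedding({
  inference_id: 'my-elser-endpoint', // placeholder endpoint ID
  input: 'The quick brown fox jumps over the lazy dog'
})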
stream_completion
editPerform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type.
The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
This API requires the monitor_inference
cluster privilege (the built-in inference_admin
and inference_user
roles grant this privilege). You must use a client that supports streaming.
client.inference.streamCompletion({ inference_id, input })
Arguments
edit-
Request (object):
-
inference_id
(string): The unique identifier for the inference endpoint. -
input
(string | string[]): The text on which you want to perform the inference task. It can be a single string or an array.
-
Inference endpoints for the completion task type currently only support a single string as input.
-
task_settings
(Optional, User-defined value): Optional task settings
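As a rough sketch, a streaming completion request might be consumed as shown below. The endpoint ID is a placeholder, and the second argument uses the client's transport-level asStream option so the server-sent events can be read incrementally; adjust the consumption loop to your runtime.
const events = await client.inference.streamCompletion({
  inference_id: 'my-completion-endpoint', // placeholder: an existing completion endpoint
  input: 'Explain what an ingest pipeline is in one sentence.'
}, { asStream: true }) // assumption: read the raw response body as a stream
for await (const chunk of events) {
  process.stdout.write(chunk.toString()) // each chunk carries one or more server-sent events
}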
text_embedding
editPerform text embedding inference on the service
client.inference.textEmbedding({ inference_id, input })
Arguments
edit-
Request (object):
-
inference_id
(string): The inference Id -
input
(string | string[]): Inference input. Either a string or an array of strings. -
task_settings
(Optional, User-defined value): Optional task settings -
timeout
(Optional, string | -1 | 0): Specifies the amount of time to wait for the inference request to complete.
-
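For example, a minimal text embedding request against an existing text_embedding endpoint (placeholder ID below) might look like:
const response = await client.inference.textEmbedding({
  inference_id: 'my-embedding-endpoint', // placeholder endpoint ID
  input: ['first passage to embed', 'second passage to embed']
})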
update
editUpdate an inference endpoint.
Modify task_settings
, secrets (within service_settings
), or num_allocations
for an inference endpoint, depending on the specific endpoint service and task_type
.
The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
client.inference.update({ inference_id })
Arguments
edit-
Request (object):
-
inference_id
(string): The unique identifier of the inference endpoint. -
task_type
(Optional, Enum("sparse_embedding" | "text_embedding" | "rerank" | "completion" | "chat_completion")): The type of inference task that the model performs. -
inference_config
(Optional, { chunking_settings, service, service_settings, task_settings })
-
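As an illustration, rotating a secret on an existing endpoint could be sketched as follows; the endpoint ID and API key are placeholders, and which settings can be updated depends on the service and task type.
const response = await client.inference.update({
  inference_id: 'my-openai-endpoint', // placeholder: an existing inference endpoint
  inference_config: {
    service_settings: {
      api_key: 'sk-new-key' // placeholder: the new secret to store for the endpoint
    }
  }
})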
ingest
editdelete_geoip_database
editDelete GeoIP database configurations.
Delete one or more IP geolocation database configurations.
client.ingest.deleteGeoipDatabase({ id })
Arguments
edit-
Request (object):
-
id
(string | string[]): A list of geoip database configurations to delete -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_ip_location_database
editDelete IP geolocation database configurations.
client.ingest.deleteIpLocationDatabase({ id })
Arguments
edit-
Request (object):
-
id
(string | string[]): A list of IP location database configurations. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of-1
indicates that the request should never time out. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. A value of-1
indicates that the request should never time out.
-
delete_pipeline
editDelete pipelines. Delete one or more ingest pipelines.
client.ingest.deletePipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): Pipeline ID or wildcard expression of pipeline IDs used to limit the request. To delete all ingest pipelines in a cluster, use a value of*
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
geo_ip_stats
editGet GeoIP statistics. Get download statistics for GeoIP2 databases that are used with the GeoIP processor.
client.ingest.geoIpStats()
get_geoip_database
editGet GeoIP database configurations.
Get information about one or more IP geolocation database configurations.
client.ingest.getGeoipDatabase({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string | string[]): A list of database configuration IDs to retrieve. Wildcard (*
) expressions are supported. To get all database configurations, omit this parameter or use*
.
-
get_ip_location_database
editGet IP geolocation database configurations.
client.ingest.getIpLocationDatabase({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string | string[]): List of database configuration IDs to retrieve. Wildcard (*
) expressions are supported. To get all database configurations, omit this parameter or use*
. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of-1
indicates that the request should never time out.
-
get_pipeline
editGet pipelines.
Get information about one or more ingest pipelines. This API returns a local reference of the pipeline.
client.ingest.getPipeline({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): List of pipeline IDs to retrieve. Wildcard (*
) expressions are supported. To get all ingest pipelines, omit this parameter or use*
. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
summary
(Optional, boolean): Return pipelines without their definitions (default: false)
-
processor_grok
editRun a grok processor. Extract structured fields out of a single text field within a document. You must choose which field to extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular expression that supports aliased expressions that can be reused.
client.ingest.processorGrok()
put_geoip_database
editCreate or update a GeoIP database configuration.
Refer to the create or update IP geolocation database configuration API.
client.ingest.putGeoipDatabase({ id, name, maxmind })
Arguments
edit-
Request (object):
-
id
(string): ID of the database configuration to create or update. -
name
(string): The provider-assigned name of the IP geolocation database to download. -
maxmind
({ account_id }): The configuration necessary to identify which IP geolocation provider to use to download the database, as well as any provider-specific configuration necessary for such downloading. At present, the only supported provider is maxmind, and the maxmind provider requires that an account_id (string) is configured. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
put_ip_location_database
editCreate or update an IP geolocation database configuration.
client.ingest.putIpLocationDatabase({ id })
Arguments
edit-
Request (object):
-
id
(string): The database configuration identifier. -
configuration
(Optional, { name, maxmind, ipinfo }) -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of-1
indicates that the request should never time out. -
timeout
(Optional, string | -1 | 0): The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response indicates that it was not completely acknowledged. A value of-1
indicates that the request should never time out.
-
put_pipeline
editCreate or update a pipeline. Changes made using this API take effect immediately.
client.ingest.putPipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): ID of the ingest pipeline to create or update. -
_meta
(Optional, Record<string, User-defined value>): Optional metadata about the ingest pipeline. May have any contents. This map is not automatically generated by Elasticsearch. -
description
(Optional, string): Description of the ingest pipeline. -
on_failure
(Optional, { append, attachment, bytes, circle, community_id, convert, csv, date, date_index_name, dissect, dot_expander, drop, enrich, fail, fingerprint, foreach, ip_location, geo_grid, geoip, grok, gsub, html_strip, inference, join, json, kv, lowercase, network_direction, pipeline, redact, registered_domain, remove, rename, reroute, script, set, set_security_user, sort, split, terminate, trim, uppercase, urldecode, uri_parts, user_agent }[]): Processors to run immediately after a processor failure. Each processor supports a processor-levelon_failure
value. If a processor without anon_failure
value fails, Elasticsearch uses this pipeline-level parameter as a fallback. The processors in this parameter run sequentially in the order specified. Elasticsearch will not attempt to run the pipeline’s remaining processors. -
processors
(Optional, { append, attachment, bytes, circle, community_id, convert, csv, date, date_index_name, dissect, dot_expander, drop, enrich, fail, fingerprint, foreach, ip_location, geo_grid, geoip, grok, gsub, html_strip, inference, join, json, kv, lowercase, network_direction, pipeline, redact, registered_domain, remove, rename, reroute, script, set, set_security_user, sort, split, terminate, trim, uppercase, urldecode, uri_parts, user_agent }[]): Processors used to perform transformations on documents before indexing. Processors run sequentially in the order specified. -
version
(Optional, number): Version number used by external systems to track ingest pipelines. This parameter is intended for external systems only. Elasticsearch does not use or validate pipeline version numbers. -
deprecated
(Optional, boolean): Marks this ingest pipeline as deprecated. When a deprecated ingest pipeline is referenced as the default or final pipeline when creating or updating a non-deprecated index template, Elasticsearch will emit a deprecation warning. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
if_version
(Optional, number): Required version for optimistic concurrency control for pipeline updates
-
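For example, a small pipeline that lowercases a field and tags each document could be created as follows; the pipeline ID and field names are placeholders.
const response = await client.ingest.putPipeline({
  id: 'my-pipeline', // placeholder pipeline ID
  description: 'Lowercase user.name and tag processed documents',
  processors: [
    { lowercase: { field: 'user.name' } },
    { set: { field: 'event.tag', value: 'processed' } }
  ],
  on_failure: [
    { set: { field: 'error.note', value: 'pipeline failed' } }
  ]
})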
simulate
editSimulate a pipeline.
Run an ingest pipeline against a set of provided documents. You can either specify an existing pipeline to use with the provided documents or supply a pipeline definition in the body of the request.
client.ingest.simulate({ docs })
Arguments
edit-
Request (object):
-
docs
({ _id, _index, _source }[]): Sample documents to test in the pipeline. -
id
(Optional, string): The pipeline to test. If you don’t specify apipeline
in the request body, this parameter is required. -
pipeline
(Optional, { description, on_failure, processors, version, deprecated, _meta }): The pipeline to test. If you don’t specify thepipeline
request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter. -
verbose
(Optional, boolean): Iftrue
, the response includes output data for each processor in the executed pipeline.
-
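A minimal simulation of an inline pipeline definition against a sample document might look like the following; the index, document, and field names are placeholders.
const response = await client.ingest.simulate({
  docs: [
    { _index: 'my-index', _id: '1', _source: { user: { name: 'Alice' } } }
  ],
  pipeline: {
    processors: [{ lowercase: { field: 'user.name' } }]
  },
  verbose: true // include per-processor output in the response
})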
license
editdelete
editDelete the license.
When the license expires, your subscription level reverts to Basic.
If the operator privileges feature is enabled, only operator users can use this API.
client.license.delete({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get
editGet license information.
Get information about your Elastic license including its type, its status, when it was issued, and when it expires.
info If the master node is generating a new cluster state, the get license API may return a 404 Not Found response. If you receive an unexpected 404 response after cluster startup, wait a short period and retry the request.
client.license.get({ ... })
Arguments
edit-
Request (object):
-
accept_enterprise
(Optional, boolean): Iftrue
, this parameter returns enterprise for Enterprise license types. Iffalse
, this parameter returns platinum for both platinum and enterprise license types. This behavior is maintained for backwards compatibility. This parameter is deprecated and will always be set to true in 8.x. -
local
(Optional, boolean): Specifies whether to retrieve local information. The default value isfalse
, which means the information is retrieved from the master node.
-
get_basic_status
editGet the basic license status.
client.license.getBasicStatus()
get_trial_status
editGet the trial status.
client.license.getTrialStatus()
post
editUpdate the license.
You can update your license at runtime without shutting down your nodes. License updates take effect immediately. If the license you are installing does not support all of the features that were available with your previous license, however, you are notified in the response. You must then re-submit the API request with the acknowledge parameter set to true.
If Elasticsearch security features are enabled and you are installing a gold or higher license, you must enable TLS on the transport networking layer before you install the license. If the operator privileges feature is enabled, only operator users can use this API.
client.license.post({ ... })
Arguments
edit-
Request (object):
-
license
(Optional, { expiry_date_in_millis, issue_date_in_millis, start_date_in_millis, issued_to, issuer, max_nodes, max_resource_units, signature, type, uid }) -
licenses
(Optional, { expiry_date_in_millis, issue_date_in_millis, start_date_in_millis, issued_to, issuer, max_nodes, max_resource_units, signature, type, uid }[]): A sequence of one or more JSON documents containing the license information. -
acknowledge
(Optional, boolean): Specifies whether you acknowledge the license changes. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
post_start_basic
editStart a basic license.
Start an indefinite basic license, which gives access to all the basic features.
In order to start a basic license, you must not currently have a basic license.
If the basic license does not support all of the features that are available with your current license, however, you are notified in the response.
You must then re-submit the API request with the acknowledge
parameter set to true
.
To check the status of your basic license, use the get basic license API.
client.license.postStartBasic({ ... })
Arguments
edit-
Request (object):
-
acknowledge
(Optional, boolean): whether the user has acknowledged acknowledge messages (default: false) -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
post_start_trial
editStart a trial. Start a 30-day trial, which gives access to all subscription features.
You are allowed to start a trial only if your cluster has not already activated a trial for the current major product version. For example, if you have already activated a trial for v8.0, you cannot start a new trial until v9.0. You can, however, request an extended trial at https://www.elastic.co/trialextension.
To check the status of your trial, use the get trial status API.
client.license.postStartTrial({ ... })
Arguments
edit-
Request (object):
-
acknowledge
(Optional, boolean): whether the user has acknowledged acknowledge messages (default: false) -
type_query_string
(Optional, string) -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
logstash
editdelete_pipeline
editDelete a Logstash pipeline. Delete a pipeline that is used for Logstash Central Management. If the request succeeds, you receive an empty response with an appropriate status code.
client.logstash.deletePipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): An identifier for the pipeline.
-
get_pipeline
editGet Logstash pipelines. Get pipelines that are used for Logstash Central Management.
client.logstash.getPipeline({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string | string[]): A list of pipeline identifiers.
-
put_pipeline
editCreate or update a Logstash pipeline.
Create a pipeline that is used for Logstash Central Management. If the specified pipeline exists, it is replaced.
client.logstash.putPipeline({ id })
Arguments
edit-
Request (object):
-
id
(string): An identifier for the pipeline. -
pipeline
(Optional, { description, on_failure, processors, version, deprecated, _meta })
-
migration
editdeprecations
editGet deprecation information. Get information about different cluster, node, and index level settings that use deprecated features that will be removed or changed in the next major version.
This API is designed for indirect use by the Upgrade Assistant. We strongly recommend that you use the Upgrade Assistant.
client.migration.deprecations({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string): Comma-separated list of data streams or indices to check. Wildcard (*) expressions are supported.
-
get_feature_upgrade_status
editGet feature migration information. Version upgrades sometimes require changes to how features store configuration information and data in system indices. Check which features need to be migrated and the status of any migrations that are in progress.
This API is designed for indirect use by the Upgrade Assistant. We strongly recommend that you use the Upgrade Assistant.
client.migration.getFeatureUpgradeStatus()
post_feature_upgrade
editStart the feature migration. Version upgrades sometimes require changes to how features store configuration information and data in system indices. This API starts the automatic migration process.
Some functionality might be temporarily unavailable during the migration process.
The API is designed for indirect use by the Upgrade Assistant. We strongly recommend you use the Upgrade Assistant.
client.migration.postFeatureUpgrade()
ml
editclear_trained_model_deployment_cache
editClear trained model deployment cache.
Cache will be cleared on all nodes where the trained model is assigned. A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
client.ml.clearTrainedModelDeploymentCache({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model.
-
close_job
editClose anomaly detection jobs.
A job can be opened and closed multiple times throughout its lifecycle. A closed job cannot receive data or perform analysis operations, but you can still explore and navigate results. When you close a job, it runs housekeeping tasks such as pruning the model history, flushing buffers, calculating final results and persisting the model snapshots. Depending upon the size of the job, it could take several minutes to close and the equivalent time to re-open. After it is closed, the job has a minimal overhead on the cluster except for maintaining its meta data. Therefore it is a best practice to close jobs that are no longer required to process data. If you close an anomaly detection job whose datafeed is running, the request first tries to stop the datafeed. This behavior is equivalent to calling stop datafeed API with the same timeout and force parameters as the close job request. When a datafeed that has a specified end date stops, it automatically closes its associated job.
client.ml.closeJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. You can close multiple anomaly detection jobs in a single API request by using a group name, a list of jobs, or a wildcard expression. You can close all jobs by using_all
or by specifying*
as the job identifier. -
allow_no_match
(Optional, boolean): Refer to the description for theallow_no_match
query parameter. -
force
(Optional, boolean): Refer to the description for theforce
query parameter. -
timeout
(Optional, string | -1 | 0): Refer to the description for thetimeout
query parameter.
-
delete_calendar
editDelete a calendar.
Remove all scheduled events from a calendar, then delete it.
client.ml.deleteCalendar({ calendar_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar.
-
delete_calendar_event
editDelete events from a calendar.
client.ml.deleteCalendarEvent({ calendar_id, event_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
event_id
(string): Identifier for the scheduled event. You can obtain this identifier by using the get calendar events API.
-
delete_calendar_job
editDelete anomaly jobs from a calendar.
client.ml.deleteCalendarJob({ calendar_id, job_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
job_id
(string | string[]): An identifier for the anomaly detection jobs. It can be a job identifier, a group name, or a list of jobs or groups.
-
delete_data_frame_analytics
editDelete a data frame analytics job.
client.ml.deleteDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. -
force
(Optional, boolean): Iftrue
, it deletes a job that is not stopped; this method is quicker than stopping and deleting the job. -
timeout
(Optional, string | -1 | 0): The time to wait for the job to be deleted.
-
delete_datafeed
editDelete a datafeed.
client.ml.deleteDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
force
(Optional, boolean): Use to forcefully delete a started datafeed; this method is quicker than stopping and deleting the datafeed.
-
delete_expired_data
editDelete expired ML data.
Delete all job results, model snapshots and forecast data that have exceeded
their retention days period. Machine learning state documents that are not
associated with any job are also deleted.
You can limit the request to a single or set of anomaly detection jobs by
using a job identifier, a group name, a list of jobs, or a
wildcard expression. You can delete expired data for all anomaly detection
jobs by using _all
, by specifying *
as the <job_id>
, or by omitting the
<job_id>
.
client.ml.deleteExpiredData({ ... })
Arguments
edit-
Request (object):
-
job_id
(Optional, string): Identifier for an anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. -
requests_per_second
(Optional, float): The desired requests per second for the deletion processes. The default behavior is no throttling. -
timeout
(Optional, string | -1 | 0): How long can the underlying delete processes run until they are canceled.
-
delete_filter
editDelete a filter.
If an anomaly detection job references the filter, you cannot delete the filter. You must update or delete the job before you can delete the filter.
client.ml.deleteFilter({ filter_id })
Arguments
edit-
Request (object):
-
filter_id
(string): A string that uniquely identifies a filter.
-
delete_forecast
editDelete forecasts from a job.
By default, forecasts are retained for 14 days. You can specify a
different retention period with the expires_in
parameter in the forecast
jobs API. The delete forecast API enables you to delete one or more
forecasts before they expire.
client.ml.deleteForecast({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
forecast_id
(Optional, string): A list of forecast identifiers. If you do not specify this optional parameter or if you specify_all
or*
the API deletes all forecasts from the job. -
allow_no_forecasts
(Optional, boolean): Specifies whether an error occurs when there are no forecasts. In particular, if this parameter is set tofalse
and there are no forecasts associated with the job, attempts to delete all forecasts return an error. -
timeout
(Optional, string | -1 | 0): Specifies the period of time to wait for the completion of the delete operation. When this period of time elapses, the API fails and returns an error.
-
delete_job
editDelete an anomaly detection job.
All job configuration, model state and results are deleted. It is not currently possible to delete multiple jobs using wildcards or a comma separated list. If you delete a job that has a datafeed, the request first tries to delete the datafeed. This behavior is equivalent to calling the delete datafeed API with the same timeout and force parameters as the delete job request.
client.ml.deleteJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
force
(Optional, boolean): Use to forcefully delete an opened job; this method is quicker than closing and deleting the job. -
delete_user_annotations
(Optional, boolean): Specifies whether annotations that have been added by the user should be deleted along with any auto-generated annotations when the job is reset. -
wait_for_completion
(Optional, boolean): Specifies whether the request should return immediately or wait until the job deletion completes.
-
delete_model_snapshot
editDelete a model snapshot.
You cannot delete the active model snapshot. To delete that snapshot, first
revert to a different one. To identify the active model snapshot, refer to
the model_snapshot_id
in the results from the get jobs API.
client.ml.deleteModelSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): Identifier for the model snapshot.
-
delete_trained_model
editDelete an unreferenced trained model.
The request deletes a trained inference model that is not referenced by an ingest pipeline.
client.ml.deleteTrainedModel({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
force
(Optional, boolean): Forcefully deletes a trained model that is referenced by ingest pipelines or has a started deployment. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
delete_trained_model_alias
editDelete a trained model alias.
This API deletes an existing model alias that refers to a trained model. If
the model alias is missing or refers to a model other than the one identified
by the model_id
, this API returns an error.
client.ml.deleteTrainedModelAlias({ model_alias, model_id })
Arguments
edit-
Request (object):
-
model_alias
(string): The model alias to delete. -
model_id
(string): The trained model ID to which the model alias refers.
-
estimate_model_memory
editEstimate job model memory usage.
Make an estimation of the memory usage for an anomaly detection job model. The estimate is based on analysis configuration details for the job and cardinality estimates for the fields it references.
client.ml.estimateModelMemory({ ... })
Arguments
edit-
Request (object):
-
analysis_config
(Optional, { bucket_span, categorization_analyzer, categorization_field_name, categorization_filters, detectors, influencers, latency, model_prune_window, multivariate_by_fields, per_partition_categorization, summary_count_field_name }): For a list of the properties that you can specify in theanalysis_config
component of the body of this API. -
max_bucket_cardinality
(Optional, Record<string, number>): Estimates of the highest cardinality in a single bucket that is observed for influencer fields over the time period that the job analyzes data. To produce a good answer, values must be provided for all influencer fields. Providing values for fields that are not listed asinfluencers
has no effect on the estimation. -
overall_cardinality
(Optional, Record<string, number>): Estimates of the cardinality that is observed for fields over the whole time period that the job analyzes data. To produce a good answer, values must be provided for fields referenced in theby_field_name
,over_field_name
andpartition_field_name
of any detectors. Providing values for other fields has no effect on the estimation. It can be omitted from the request if no detectors have aby_field_name
,over_field_name
orpartition_field_name
.
-
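For illustration, a sketch of estimating model memory for a simple job configuration could look like this; the analysis configuration and cardinality figures are hypothetical.
const response = await client.ml.estimateModelMemory({
  analysis_config: {
    bucket_span: '15m',
    detectors: [{ function: 'sum', field_name: 'bytes', by_field_name: 'status_code' }],
    influencers: ['clientip']
  },
  overall_cardinality: { status_code: 10 },   // assumed overall cardinality of status_code
  max_bucket_cardinality: { clientip: 50 }    // assumed highest per-bucket cardinality of clientip
})
console.log(response)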
evaluate_data_frame
editEvaluate data frame analytics.
The API packages together commonly used evaluation metrics for various types of machine learning features. This has been designed for use on indexes created by data frame analytics. Evaluation requires both a ground truth field and an analytics result field to be present.
client.ml.evaluateDataFrame({ evaluation, index })
Arguments
edit-
Request (object):
-
evaluation
({ classification, outlier_detection, regression }): Defines the type of evaluation you want to perform. -
index
(string): Defines theindex
in which the evaluation will be performed. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): A query clause that retrieves a subset of data from the source index.
-
explain_data_frame_analytics
editExplain data frame analytics config.
This API provides explanations for a data frame analytics config that either exists already or one that has not been created yet. The following explanations are provided:
- which fields are included or not in the analysis and why
- how much memory is estimated to be required
The estimate can be used when deciding the appropriate value for model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation.
client.ml.explainDataFrameAnalytics({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
source
(Optional, { index, query, runtime_mappings, _source }): The configuration of how to source the analysis data. It requires an index. Optionally, query and _source may be specified. -
dest
(Optional, { index, results_field }): The destination configuration, consisting of index and optionally results_field (ml by default). -
analysis
(Optional, { classification, outlier_detection, regression }): The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression. -
description
(Optional, string): A description of the job. -
model_memory_limit
(Optional, string): The approximate maximum amount of memory resources that are permitted for analytical processing. If yourelasticsearch.yml
file contains anxpack.ml.max_model_memory_limit
setting, an error occurs when you try to create data frame analytics jobs that havemodel_memory_limit
values greater than that setting. -
max_num_threads
(Optional, number): The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself. -
analyzed_fields
(Optional, { includes, excludes }): Specify includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis. -
allow_lazy_start
(Optional, boolean): Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
-
flush_job
editForce buffered data to be processed. The flush jobs API is only applicable when sending data for analysis using the post data API. Depending on the content of the buffer, then it might additionally calculate new results. Both flush and close operations are similar, however the flush is more efficient if you are expecting to send more data for analysis. When flushing, the job remains open and is available to continue analyzing data. A close operation additionally prunes and persists the model state to disk and the job must be opened again before analyzing further data.
client.ml.flushJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
advance_time
(Optional, string | Unit): Refer to the description for theadvance_time
query parameter. -
calc_interim
(Optional, boolean): Refer to the description for thecalc_interim
query parameter. -
end
(Optional, string | Unit): Refer to the description for theend
query parameter. -
skip_time
(Optional, string | Unit): Refer to the description for theskip_time
query parameter. -
start
(Optional, string | Unit): Refer to the description for thestart
query parameter.
-
forecast
editPredict future behavior of a time series.
Forecasts are not supported for jobs that perform population analysis; an
error occurs if you try to create a forecast for a job that has an
over_field_name
in its configuration. Forecasts predict future behavior
based on historical data.
client.ml.forecast({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. The job must be open when you create a forecast; otherwise, an error occurs. -
duration
(Optional, string | -1 | 0): Refer to the description for theduration
query parameter. -
expires_in
(Optional, string | -1 | 0): Refer to the description for theexpires_in
query parameter. -
max_model_memory
(Optional, string): Refer to the description for themax_model_memory
query parameter.
-
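As an example, requesting a three-day forecast for an open job might look like the following; the job ID is a placeholder.
const response = await client.ml.forecast({
  job_id: 'my-anomaly-job', // placeholder: the job must be open
  duration: '3d',           // forecast three days beyond the last record analyzed
  expires_in: '7d'          // keep the forecast results for seven days
})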
get_buckets
editGet anomaly detection job results for buckets. The API presents a chronological view of the records, grouped by bucket.
client.ml.getBuckets({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
timestamp
(Optional, string | Unit): The timestamp of a single bucket result. If you do not specify this parameter, the API returns information about all buckets. -
anomaly_score
(Optional, number): Refer to the description for theanomaly_score
query parameter. -
desc
(Optional, boolean): Refer to the description for thedesc
query parameter. -
end
(Optional, string | Unit): Refer to the description for theend
query parameter. -
exclude_interim
(Optional, boolean): Refer to the description for theexclude_interim
query parameter. -
expand
(Optional, boolean): Refer to the description for theexpand
query parameter. -
page
(Optional, { from, size }) -
sort
(Optional, string): Refer to the description for thesort
query parameter. -
start
(Optional, string | Unit): Refer to the description for thestart
query parameter. -
from
(Optional, number): Skips the specified number of buckets. -
size
(Optional, number): Specifies the maximum number of buckets to obtain.
-
get_calendar_events
editGet info about events in calendars.
client.ml.getCalendarEvents({ calendar_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. You can get information for multiple calendars by using a list of ids or a wildcard expression. You can get information for all calendars by using_all
or*
or by omitting the calendar identifier. -
end
(Optional, string | Unit): Specifies to get events with timestamps earlier than this time. -
from
(Optional, number): Skips the specified number of events. -
job_id
(Optional, string): Specifies to get events for a specific anomaly detection job identifier or job group. It must be used with a calendar identifier of_all
or*
. -
size
(Optional, number): Specifies the maximum number of events to obtain. -
start
(Optional, string | Unit): Specifies to get events with timestamps after this time.
-
get_calendars
editGet calendar configuration info.
client.ml.getCalendars({ ... })
Arguments
edit-
Request (object):
-
calendar_id
(Optional, string): A string that uniquely identifies a calendar. You can get information for multiple calendars by using a list of ids or a wildcard expression. You can get information for all calendars by using_all
or*
or by omitting the calendar identifier. -
page
(Optional, { from, size }): This object is supported only when you omit the calendar identifier. -
from
(Optional, number): Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier. -
size
(Optional, number): Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.
-
get_categories
editGet anomaly detection job results for categories.
client.ml.getCategories({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
category_id
(Optional, string): Identifier for the category, which is unique in the job. If you specify neither the category ID nor the partition_field_value, the API returns information about all categories. If you specify only the partition_field_value, it returns information about all categories for the specified partition. -
page
(Optional, { from, size }): Configures pagination. This parameter has thefrom
andsize
properties. -
from
(Optional, number): Skips the specified number of categories. -
partition_field_value
(Optional, string): Only return categories for the specified partition. -
size
(Optional, number): Specifies the maximum number of categories to obtain.
-
get_data_frame_analytics
editGet data frame analytics job configuration info. You can get information for multiple data frame analytics jobs in a single API request by using a list of data frame analytics jobs or a wildcard expression.
client.ml.getDataFrameAnalytics({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of data frame analytics jobs. -
size
(Optional, number): Specifies the maximum number of data frame analytics jobs to obtain. -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
get_data_frame_analytics_stats
editGet data frame analytics jobs usage info.
client.ml.getDataFrameAnalyticsStats({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
from
(Optional, number): Skips the specified number of data frame analytics jobs. -
size
(Optional, number): Specifies the maximum number of data frame analytics jobs to obtain. -
verbose
(Optional, boolean): Defines whether the stats response should be verbose.
get_datafeed_stats
editGet datafeeds usage info.
You can get statistics for multiple datafeeds in a single API request by
using a list of datafeeds or a wildcard expression. You can
get statistics for all datafeeds by using _all
, by specifying *
as the
<feed_id>
, or by omitting the <feed_id>
. If the datafeed is stopped, the
only information you receive is the datafeed_id
and the state
.
This API returns a maximum of 10,000 datafeeds.
client.ml.getDatafeedStats({ ... })
Arguments
edit-
Request (object):
-
datafeed_id
(Optional, string | string[]): Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no datafeeds that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
get_datafeeds
editGet datafeeds configuration info.
You can get information for multiple datafeeds in a single API request by
using a list of datafeeds or a wildcard expression. You can
get information for all datafeeds by using _all
, by specifying *
as the
<feed_id>
, or by omitting the <feed_id>
.
This API returns a maximum of 10,000 datafeeds.
client.ml.getDatafeeds({ ... })
Arguments
edit-
Request (object):
-
datafeed_id
(Optional, string | string[]): Identifier for the datafeed. It can be a datafeed identifier or a wildcard expression. If you do not specify one of these options, the API returns information about all datafeeds. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no datafeeds that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
get_filters
editGet filters. You can get a single filter or all filters.
client.ml.getFilters({ ... })
Arguments
edit-
Request (object):
-
filter_id
(Optional, string | string[]): A string that uniquely identifies a filter. -
from
(Optional, number): Skips the specified number of filters. -
size
(Optional, number): Specifies the maximum number of filters to obtain.
-
get_influencers
editGet anomaly detection job results for influencers.
Influencers are the entities that have contributed to, or are to blame for,
the anomalies. Influencer results are available only if an
influencer_field_name
is specified in the job configuration.
client.ml.getInfluencers({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
page
(Optional, { from, size }): Configures pagination. This parameter has thefrom
andsize
properties. -
desc
(Optional, boolean): If true, the results are sorted in descending order. -
end
(Optional, string | Unit): Returns influencers with timestamps earlier than this time. The default value means it is unset and results are not limited to specific timestamps. -
exclude_interim
(Optional, boolean): If true, the output excludes interim results. By default, interim results are included. -
influencer_score
(Optional, number): Returns influencers with anomaly scores greater than or equal to this value. -
from
(Optional, number): Skips the specified number of influencers. -
size
(Optional, number): Specifies the maximum number of influencers to obtain. -
sort
(Optional, string): Specifies the sort field for the requested influencers. By default, the influencers are sorted by theinfluencer_score
value. -
start
(Optional, string | Unit): Returns influencers with timestamps after this time. The default value means it is unset and results are not limited to specific timestamps.
-
get_job_stats
editGet anomaly detection jobs usage info.
client.ml.getJobStats({ ... })
Arguments
edit-
Request (object):
-
job_id
(Optional, string): Identifier for the anomaly detection job. It can be a job identifier, a group name, a list of jobs, or a wildcard expression. If you do not specify one of these options, the API returns information for all anomaly detection jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
get_jobs
editGet anomaly detection jobs configuration info.
You can get information for multiple anomaly detection jobs in a single API request by using a group name, a list of jobs, or a wildcard expression. You can get information for all anomaly detection jobs by using _all, by specifying * as the <job_id>, or by omitting the <job_id>.
client.ml.getJobs({ ... })
Arguments
edit-
Request (object):
-
job_id
(Optional, string | string[]): Identifier for the anomaly detection job. It can be a job identifier, a group name, or a wildcard expression. If you do not specify one of these options, the API returns information for all anomaly detection jobs. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
get_memory_stats
editGet machine learning memory usage info. Get information about how machine learning jobs and trained models are using memory, on each node, both within the JVM heap, and natively, outside of the JVM.
client.ml.getMemoryStats({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string): The names of particular nodes in the cluster to target. For example,nodeId1,nodeId2
orml:true
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_model_snapshot_upgrade_stats
editGet anomaly detection job model snapshot upgrade usage info.
client.ml.getModelSnapshotUpgradeStats({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a list or a wildcard expression. You can get all snapshots by using_all
, by specifying*
as the snapshot ID, or by omitting the snapshot ID. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
get_model_snapshots
editGet model snapshots info.
client.ml.getModelSnapshots({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(Optional, string): A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a list or a wildcard expression. You can get all snapshots by using_all
, by specifying*
as the snapshot ID, or by omitting the snapshot ID. -
desc
(Optional, boolean): Refer to the description for thedesc
query parameter. -
end
(Optional, string | Unit): Refer to the description for theend
query parameter. -
page
(Optional, { from, size }) -
sort
(Optional, string): Refer to the description for thesort
query parameter. -
start
(Optional, string | Unit): Refer to the description for thestart
query parameter. -
from
(Optional, number): Skips the specified number of snapshots. -
size
(Optional, number): Specifies the maximum number of snapshots to obtain.
-
get_overall_buckets
editGet overall bucket results.
Retrieves overall bucket results that summarize the bucket results of multiple anomaly detection jobs.
The overall_score
is calculated by combining the scores of all the
buckets within the overall bucket span. First, the maximum
anomaly_score
per anomaly detection job in the overall bucket is
calculated. Then the top_n
of those scores are averaged to result in
the overall_score
. This means that you can fine-tune the
overall_score
so that it is more or less sensitive to the number of
jobs that detect an anomaly at the same time. For example, if you set
top_n
to 1
, the overall_score
is the maximum bucket score in the
overall bucket. Alternatively, if you set top_n
to the number of jobs,
the overall_score
is high only when all jobs detect anomalies in that
overall bucket. If you set the bucket_span
parameter (to a value
greater than its default), the overall_score
is the maximum
overall_score
of the overall buckets that have a span equal to the
jobs' largest bucket span.
client.ml.getOverallBuckets({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. It can be a job identifier, a group name, a list of jobs or groups, or a wildcard expression. You can summarize the bucket results for all anomaly detection jobs by using _all or by specifying * as the <job_id>. -
allow_no_match
(Optional, boolean): Refer to the description for the allow_no_match query parameter. -
bucket_span
(Optional, string | -1 | 0): Refer to the description for the bucket_span query parameter. -
end
(Optional, string | Unit): Refer to the description for the end query parameter. -
exclude_interim
(Optional, boolean): Refer to the description for the exclude_interim query parameter. -
overall_score
(Optional, number | string): Refer to the description for the overall_score query parameter. -
start
(Optional, string | Unit): Refer to the description for the start query parameter. -
top_n
(Optional, number): Refer to the description for the top_n query parameter.
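For instance, a hedged sketch that summarizes overall buckets across a hypothetical group of jobs and only reports buckets where at least two jobs agree:
const response = await client.ml.getOverallBuckets({
  job_id: 'ops-*', // hypothetical wildcard over job IDs
  top_n: 2,
  overall_score: 50,
  exclude_interim: true
});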
get_records
editGet anomaly records for an anomaly detection job. Records contain the detailed analytical results. They describe the anomalous activity that has been identified in the input data based on the detector configuration. There can be many anomaly records depending on the characteristics and size of the input data. In practice, there are often too many to be able to manually process them. The machine learning features therefore perform a sophisticated aggregation of the anomaly records into buckets. The number of record results depends on the number of anomalies found in each bucket, which relates to the number of time series being modeled and the number of detectors.
client.ml.getRecords({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
desc
(Optional, boolean): Refer to the description for thedesc
query parameter. -
end
(Optional, string | Unit): Refer to the description for theend
query parameter. -
exclude_interim
(Optional, boolean): Refer to the description for theexclude_interim
query parameter. -
page
(Optional, { from, size }) -
record_score
(Optional, number): Refer to the description for therecord_score
query parameter. -
sort
(Optional, string): Refer to the description for thesort
query parameter. -
start
(Optional, string | Unit): Refer to the description for thestart
query parameter. -
from
(Optional, number): Skips the specified number of records. -
size
(Optional, number): Specifies the maximum number of records to obtain.
-
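A minimal sketch, assuming a job named it-ops, that returns only high-scoring, finalized records:
const response = await client.ml.getRecords({
  job_id: 'it-ops', // hypothetical job ID
  record_score: 80,
  exclude_interim: true,
  sort: 'record_score',
  desc: true,
  page: { from: 0, size: 25 }
});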
get_trained_models
editGet trained model configuration info.
client.ml.getTrainedModels({ ... })
Arguments
edit-
Request (object):
-
model_id
(Optional, string | string[]): The unique identifier of the trained model or a model alias. You can get information for multiple trained models in a single API request by using a list of model IDs or a wildcard expression. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no models that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, it returns an empty array when there are no matches and the subset of results when there are partial matches. -
decompress_definition
(Optional, boolean): Specifies whether the included model definition should be returned as a JSON map (true) or in a custom compressed format (false). -
exclude_generated
(Optional, boolean): Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster. -
from
(Optional, number): Skips the specified number of models. -
include
(Optional, Enum("definition" | "feature_importance_baseline" | "hyperparameters" | "total_feature_importance" | "definition_status")): A comma delimited string of optional fields to include in the response body. -
include_model_definition
(Optional, boolean): This parameter is deprecated. Use include=definition instead. -
size
(Optional, number): Specifies the maximum number of models to obtain. -
tags
(Optional, string | string[]): A comma delimited string of tags. A trained model can have many tags, or none. When supplied, only trained models that contain all the supplied tags are returned.
get_trained_models_stats
editGet trained models usage info. You can get usage information for multiple trained models in a single API request by using a list of model IDs or a wildcard expression.
client.ml.getTrainedModelsStats({ ... })
Arguments
edit-
Request (object):
-
model_id
(Optional, string | string[]): The unique identifier of the trained model or a model alias. It can be a list or a wildcard expression. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no models that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, it returns an empty array when there are no matches and the subset of results when there are partial matches. -
from
(Optional, number): Skips the specified number of models. -
size
(Optional, number): Specifies the maximum number of models to obtain.
infer_trained_model
editEvaluate a trained model.
client.ml.inferTrainedModel({ model_id, docs })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
docs
(Record<string, User-defined value>[]): An array of objects to pass to the model for inference. The objects should contain fields matching your configured trained model input. Typically, for NLP models, the field name is text_field
. Currently, for NLP models, only a single value is allowed. -
inference_config
(Optional, { regression, classification, text_classification, zero_shot_classification, fill_mask, ner, pass_through, text_embedding, text_expansion, question_answering }): The inference configuration updates to apply on the API call -
timeout
(Optional, string | -1 | 0): Controls the amount of time to wait for inference results.
-
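As an illustration only, assuming an NLP model with the ID shown is deployed, inference on a single document might look like this:
const response = await client.ml.inferTrainedModel({
  model_id: 'my-text-classifier', // hypothetical model ID
  docs: [{ text_field: 'This shipment arrived two weeks late.' }],
  timeout: '30s'
});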
info
editGet machine learning information. Get defaults and limits used by machine learning. This endpoint is designed to be used by a user interface that needs to fully understand machine learning configurations where some options are not specified, meaning that the defaults should be used. This endpoint may be used to find out what those defaults are. It also provides information about the maximum size of machine learning jobs that could run in the current cluster configuration.
client.ml.info()
open_job
editOpen anomaly detection jobs.
An anomaly detection job must be opened to be ready to receive and analyze data. It can be opened and closed multiple times throughout its lifecycle. When you open a new job, it starts with an empty model. When you open an existing job, the most recent model state is automatically loaded. The job is ready to resume its analysis from where it left off, once new data is received.
client.ml.openJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
timeout
(Optional, string | -1 | 0): Refer to the description for thetimeout
query parameter.
-
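For example, a sketch that opens a hypothetical job and waits up to a minute for it to be ready:
await client.ml.openJob({
  job_id: 'it-ops', // hypothetical job ID
  timeout: '1m'
});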
post_calendar_events
editAdd scheduled events to the calendar.
client.ml.postCalendarEvents({ calendar_id, events })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
events
({ calendar_id, event_id, description, end_time, start_time }[]): A list of one or more scheduled events. The event’s start and end times can be specified as integer milliseconds since the epoch or as a string in ISO 8601 format.
-
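A sketch, assuming a calendar named maintenance-windows already exists, that adds one scheduled event using epoch milliseconds:
await client.ml.postCalendarEvents({
  calendar_id: 'maintenance-windows', // hypothetical calendar ID
  events: [
    {
      description: 'Quarterly patching window',
      start_time: 1735689600000, // 2025-01-01T00:00:00Z
      end_time: 1735776000000    // 2025-01-02T00:00:00Z
    }
  ]
});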
post_data
editSend data to an anomaly detection job for analysis.
For each job, data can be accepted from only a single connection at a time. It is not currently possible to post data to multiple jobs using wildcards or a list.
client.ml.postData({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. The job must have a state of open to receive and process the data. -
data
(Optional, TData[]) -
reset_end
(Optional, string | Unit): Specifies the end of the bucket resetting range. -
reset_start
(Optional, string | Unit): Specifies the start of the bucket resetting range.
-
preview_data_frame_analytics
editPreview features used by data frame analytics. Preview the extracted features used by a data frame analytics config.
client.ml.previewDataFrameAnalytics({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the data frame analytics job. -
config
(Optional, { source, analysis, model_memory_limit, max_num_threads, analyzed_fields }): A data frame analytics config as described in create data frame analytics jobs. Note thatid
anddest
don’t need to be provided in the context of this API.
-
preview_datafeed
editPreview a datafeed. This API returns the first "page" of search results from a datafeed. You can preview an existing datafeed or provide configuration details for a datafeed and anomaly detection job in the API. The preview shows the structure of the data that will be passed to the anomaly detection engine. IMPORTANT: When Elasticsearch security features are enabled, the preview uses the credentials of the user that called the API. However, when the datafeed starts it uses the roles of the last user that created or updated the datafeed. To get a preview that accurately reflects the behavior of the datafeed, use the appropriate credentials. You can also use secondary authorization headers to supply the credentials.
client.ml.previewDatafeed({ ... })
Arguments
edit-
Request (object):
-
datafeed_id
(Optional, string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. NOTE: If you use this path parameter, you cannot provide datafeed or anomaly detection job configuration details in the request body. -
datafeed_config
(Optional, { aggregations, chunking_config, datafeed_id, delayed_data_check_config, frequency, indices, indices_options, job_id, max_empty_searches, query, query_delay, runtime_mappings, script_fields, scroll_size }): The datafeed definition to preview. -
job_config
(Optional, { allow_lazy_open, analysis_config, analysis_limits, background_persist_interval, custom_settings, daily_model_snapshot_retention_after_days, data_description, datafeed_config, description, groups, job_id, job_type, model_plot_config, model_snapshot_retention_days, renormalization_window_days, results_index_name, results_retention_days }): The configuration details for the anomaly detection job that is associated with the datafeed. If thedatafeed_config
object does not include ajob_id
that references an existing anomaly detection job, you must supply thisjob_config
object. If you include both ajob_id
and ajob_config
, the latter information is used. You cannot specify ajob_config
object unless you also supply adatafeed_config
object. -
start
(Optional, string | Unit): The start time from which the datafeed preview should begin. -
end
(Optional, string | Unit): The end time at which the datafeed preview should stop.
-
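For example, a minimal sketch that previews an existing datafeed (the datafeed ID is hypothetical) over a bounded time range:
const preview = await client.ml.previewDatafeed({
  datafeed_id: 'datafeed-it-ops', // hypothetical datafeed ID
  start: '2025-01-01T00:00:00Z',
  end: '2025-01-02T00:00:00Z'
});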
put_calendar
editCreate a calendar.
client.ml.putCalendar({ calendar_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
job_ids
(Optional, string[]): An array of anomaly detection job identifiers. -
description
(Optional, string): A description of the calendar.
-
put_calendar_job
editAdd anomaly detection job to calendar.
client.ml.putCalendarJob({ calendar_id, job_id })
Arguments
edit-
Request (object):
-
calendar_id
(string): A string that uniquely identifies a calendar. -
job_id
(string | string[]): An identifier for the anomaly detection jobs. It can be a job identifier, a group name, or a list of jobs or groups.
-
put_data_frame_analytics
editCreate a data frame analytics job.
This API creates a data frame analytics job that performs an analysis on the
source indices and stores the outcome in a destination index.
By default, the query used in the source configuration is {"match_all": {}}
.
If the destination index does not exist, it is created automatically when you start the job.
If you supply only a subset of the regression or classification parameters, hyperparameter optimization occurs. It determines a value for each of the undefined parameters.
client.ml.putDataFrameAnalytics({ id, analysis, dest, source })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
analysis
({ classification, outlier_detection, regression }): The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression. -
dest
({ index, results_field }): The destination configuration. -
source
({ index, query, runtime_mappings, _source }): The configuration of how to source the analysis data. -
allow_lazy_start
(Optional, boolean): Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. If set tofalse
and a machine learning node with capacity to run the job cannot be immediately found, the API returns an error. If set totrue
, the API does not return an error; the job waits in thestarting
state until sufficient machine learning node capacity is available. This behavior is also affected by the cluster-widexpack.ml.max_lazy_ml_nodes
setting. -
analyzed_fields
(Optional, { includes, excludes }): Specifiesincludes
and/orexcludes
patterns to select which fields will be included in the analysis. The patterns specified inexcludes
are applied last, thereforeexcludes
takes precedence. In other words, if the same field is specified in bothincludes
andexcludes
, then the field will not be included in the analysis. Ifanalyzed_fields
is not set, only the relevant fields will be included. For example, all the numeric fields for outlier detection. The supported fields vary for each type of analysis. Outlier detection requires numeric orboolean
data to analyze. The algorithms don’t support missing values therefore fields that have data types other than numeric or boolean are ignored. Documents where included fields contain missing values, null values, or an array are also ignored. Therefore thedest
index may contain documents that don’t have an outlier score. Regression supports fields that are numeric,boolean
,text
,keyword
, andip
data types. It is also tolerant of missing values. Fields that are supported are included in the analysis, other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in thedest
index that don’t contain a results field are not included in the regression analysis. Classification supports fields that are numeric,boolean
,text
,keyword
, andip
data types. It is also tolerant of missing values. Fields that are supported are included in the analysis, other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in thedest
index that don’t contain a results field are not included in the classification analysis. Classification analysis can be improved by mapping ordinal variable values to a single number. For example, in case of age ranges, you can model the values as0-14 = 0
,15-24 = 1
,25-34 = 2
, and so on. -
description
(Optional, string): A description of the job. -
max_num_threads
(Optional, number): The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself. -
_meta
(Optional, Record<string, User-defined value>) -
model_memory_limit
(Optional, string): The approximate maximum amount of memory resources that are permitted for analytical processing. If yourelasticsearch.yml
file contains anxpack.ml.max_model_memory_limit
setting, an error occurs when you try to create data frame analytics jobs that havemodel_memory_limit
values greater than that setting. -
headers
(Optional, Record<string, string | string[]>) -
version
(Optional, string)
-
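As a hedged sketch, assuming a source index named ecommerce exists, an outlier detection job could be created like this:
await client.ml.putDataFrameAnalytics({
  id: 'ecommerce-outliers', // hypothetical job ID
  source: { index: 'ecommerce' }, // hypothetical source index
  dest: { index: 'ecommerce-outliers-results' }, // created automatically when the job starts
  analysis: { outlier_detection: {} },
  model_memory_limit: '50mb'
});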
put_datafeed
editCreate a datafeed.
Datafeeds retrieve data from Elasticsearch for analysis by an anomaly detection job.
You can associate only one datafeed with each anomaly detection job.
The datafeed contains a query that runs at a defined interval (frequency).
If you are concerned about delayed data, you can add a delay (query_delay) at each interval.
By default, the datafeed uses the following query: {"match_all": {"boost": 1}}.
When Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had
at the time of creation and runs the query using those same roles. If you provide secondary authorization headers,
those credentials are used instead.
You must use Kibana, this API, or the create anomaly detection jobs API to create a datafeed. Do not add a datafeed
directly to the .ml-config
index. Do not give users write
privileges on the .ml-config
index.
client.ml.putDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data. -
chunking_config
(Optional, { mode, time_span }): Datafeeds might be required to search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks are calculated; it is an advanced configuration option. -
delayed_data_check_config
(Optional, { check_window, enabled }): Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that thequery_delay
is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds. -
frequency
(Optional, string | -1 | 0): The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. Whenfrequency
is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation. -
indices
(Optional, string | string[]): An array of index names. Wildcards are supported. If any of the indices are in remote clusters, the machine learning nodes must have theremote_cluster_client
role. -
indices_options
(Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, ignore_throttled }): Specifies index expansion options that are used during search -
job_id
(Optional, string): Identifier for the anomaly detection job. -
max_empty_searches
(Optional, number): If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops and closes the associated job after this many real-time searches return no documents. In other words, it stops afterfrequency
timesmax_empty_searches
of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped. By default, it is not set. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. -
query_delay
(Optional, string | -1 | 0): The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between60s
and120s
. This randomness improves the query performance when there are multiple jobs running on the same node. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Specifies runtime fields for the datafeed search. -
script_fields
(Optional, Record<string, { script, ignore_failure }>): Specifies scripts that evaluate custom expressions and returns script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields. -
scroll_size
(Optional, number): The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value ofindex.max_result_window
, which is 10,000 by default. -
headers
(Optional, Record<string, string | string[]>) -
allow_no_indices
(Optional, boolean): If true, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the_all
string or when no indices are specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values. -
ignore_throttled
(Optional, boolean): If true, concrete, expanded, or aliased indices are ignored when frozen. -
ignore_unavailable
(Optional, boolean): If true, unavailable indices (missing or closed) are ignored.
-
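A minimal sketch, assuming an anomaly detection job it-ops and a source index server-metrics, both hypothetical:
await client.ml.putDatafeed({
  datafeed_id: 'datafeed-it-ops', // hypothetical datafeed ID
  job_id: 'it-ops',
  indices: ['server-metrics'],
  query: { match_all: {} },
  scroll_size: 1000
});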
put_filter
editCreate a filter.
A filter contains a list of strings. It can be used by one or more anomaly detection jobs.
Specifically, filters are referenced in the custom_rules
property of detector configuration objects.
client.ml.putFilter({ filter_id })
Arguments
edit-
Request (object):
-
filter_id
(string): A string that uniquely identifies a filter. -
description
(Optional, string): A description of the filter. -
items
(Optional, string[]): The items of the filter. A wildcard*
can be used at the beginning or the end of an item. Up to 10000 items are allowed in each filter.
-
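For example, a sketch that creates a filter of trusted domains; the filter ID and items are illustrative:
await client.ml.putFilter({
  filter_id: 'safe-domains', // hypothetical filter ID
  description: 'Domains excluded from anomaly reporting',
  items: ['example.com', '*.internal.example.org']
});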
put_job
editCreate an anomaly detection job.
If you include a datafeed_config
, you must have read index privileges on the source index.
If you include a datafeed_config
but do not provide a query, the datafeed uses {"match_all": {"boost": 1}}
.
client.ml.putJob({ job_id, analysis_config, data_description })
Arguments
edit-
Request (object):
-
job_id
(string): The identifier for the anomaly detection job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
analysis_config
({ bucket_span, categorization_analyzer, categorization_field_name, categorization_filters, detectors, influencers, latency, model_prune_window, multivariate_by_fields, per_partition_categorization, summary_count_field_name }): Specifies how to analyze the data. After you create a job, you cannot change the analysis configuration; all the properties are informational. -
data_description
({ format, time_field, time_format, field_delimiter }): Defines the format of the input data when you send data to the job by using the post data API. Note that when you configure a datafeed, these properties are automatically set. When data is received via the post data API, it is not stored in Elasticsearch. Only the results for anomaly detection are retained. -
allow_lazy_open
(Optional, boolean): Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node. By default, if a machine learning node with capacity to run the job cannot immediately be found, the open anomaly detection jobs API returns an error. However, this is also subject to the cluster-widexpack.ml.max_lazy_ml_nodes
setting. If this option is set to true, the open anomaly detection jobs API does not return an error and the job waits in the opening state until sufficient machine learning node capacity is available. -
analysis_limits
(Optional, { categorization_examples_limit, model_memory_limit }): Limits can be applied for the resources required to hold the mathematical models in memory. These limits are approximate and can be set per job. They do not control the memory used by other processes, for example the Elasticsearch Java processes. -
background_persist_interval
(Optional, string | -1 | 0): Advanced configuration option. The time between each periodic persistence of the model. The default value is a randomized value between 3 to 4 hours, which avoids all jobs persisting at exactly the same time. The smallest allowed value is 1 hour. For very large models (several GB), persistence could take 10-20 minutes, so do not set thebackground_persist_interval
value too low. -
custom_settings
(Optional, User-defined value): Advanced configuration option. Contains custom meta data about the job. -
daily_model_snapshot_retention_after_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job. Valid values range from 0 tomodel_snapshot_retention_days
. -
datafeed_config
(Optional, { aggregations, chunking_config, datafeed_id, delayed_data_check_config, frequency, indices, indices_options, job_id, max_empty_searches, query, query_delay, runtime_mappings, script_fields, scroll_size }): Defines a datafeed for the anomaly detection job. If Elasticsearch security features are enabled, your datafeed remembers which roles the user who created it had at the time of creation and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead. -
description
(Optional, string): A description of the job. -
groups
(Optional, string[]): A list of job groups. A job can belong to no groups or many. -
model_plot_config
(Optional, { annotations_enabled, enabled, terms }): This advanced configuration option stores model information along with the results. It provides a more detailed view into anomaly detection. If you enable model plot it can add considerable overhead to the performance of the system; it is not feasible for jobs with many entities. Model plot provides a simplified and indicative view of the model and its bounds. It does not display complex features such as multivariate correlations or multimodal data. As such, anomalies may occasionally be reported which cannot be seen in the model plot. Model plot config can be configured when the job is created or updated later. It must be disabled if performance issues are experienced. -
model_snapshot_retention_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. By default, snapshots ten days older than the newest snapshot are deleted. -
renormalization_window_days
(Optional, number): Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket spans. -
results_index_name
(Optional, string): A text string that affects the name of the machine learning results index. By default, the job generates an index named.ml-anomalies-shared
. -
results_retention_days
(Optional, number): Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever. -
allow_no_indices
(Optional, boolean): Iftrue
, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the_all
string or when no indices are specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values. Valid values are:
-
-
all
: Match any data stream or index, including hidden ones. -
closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed. -
hidden
: Match hidden data streams and hidden indices. Must be combined withopen
,closed
, or both. -
none
: Wildcard patterns are not accepted. -
open
: Match open, non-hidden indices. Also matches any non-hidden data stream.-
ignore_throttled
(Optional, boolean): Iftrue
, concrete, expanded or aliased indices are ignored when frozen. -
ignore_unavailable
(Optional, boolean): Iftrue
, unavailable indices (missing or closed) are ignored.
-
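A hedged sketch of a simple count-based job; the job ID, bucket span, and time field are assumptions for illustration:
await client.ml.putJob({
  job_id: 'it-ops', // hypothetical job ID
  analysis_config: {
    bucket_span: '15m',
    detectors: [{ function: 'count' }]
  },
  data_description: { time_field: '@timestamp' }
});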
put_trained_model
editCreate a trained model. Enables you to supply a trained model that is not created by data frame analytics.
client.ml.putTrainedModel({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
compressed_definition
(Optional, string): The compressed (GZipped and Base64 encoded) inference definition of the model. If compressed_definition is specified, then definition cannot be specified. -
definition
(Optional, { preprocessors, trained_model }): The inference definition for the model. If definition is specified, then compressed_definition cannot be specified. -
description
(Optional, string): A human-readable description of the inference trained model. -
inference_config
(Optional, { regression, classification, text_classification, zero_shot_classification, fill_mask, learning_to_rank, ner, pass_through, text_embedding, text_expansion, question_answering }): The default configuration for inference. This can be either a regression or classification configuration. It must match the underlying definition.trained_model’s target_type. For pre-packaged models such as ELSER the config is not required. -
input
(Optional, { field_names }): The input field names for the model definition. -
metadata
(Optional, User-defined value): An object map that contains metadata about the model. -
model_type
(Optional, Enum("tree_ensemble" | "lang_ident" | "pytorch")): The model type. -
model_size_bytes
(Optional, number): The estimated memory usage in bytes to keep the trained model in memory. This property is supported only if defer_definition_decompression is true or the model definition is not supplied. -
platform_architecture
(Optional, string): The platform architecture (if applicable) of the trained model. If the model only works on one platform, because it is heavily optimized for a particular processor architecture and OS combination, then this field specifies which. The format of the string must match the platform identifiers used by Elasticsearch, so it must be one of linux-x86_64
,linux-aarch64
,darwin-x86_64
,darwin-aarch64
, orwindows-x86_64
. For portable models (those that work independent of processor architecture or OS features), leave this field unset. -
tags
(Optional, string[]): An array of tags to organize the model. -
prefix_strings
(Optional, { ingest, search }): Optional prefix strings applied at inference -
defer_definition_decompression
(Optional, boolean): If set totrue
and acompressed_definition
is provided, the request defers definition decompression and skips relevant validations. -
wait_for_completion
(Optional, boolean): Whether to wait for all child operations (e.g. model download) to complete.
-
put_trained_model_alias
editCreate or update a trained model alias. A trained model alias is a logical name used to reference a single trained model. You can use aliases instead of trained model identifiers to make it easier to reference your models. For example, you can use aliases in inference aggregations and processors. An alias must be unique and refer to only a single trained model. However, you can have multiple aliases for each trained model. If you use this API to update an alias such that it references a different trained model ID and the model uses a different type of data frame analytics, an error occurs. For example, this situation occurs if you have a trained model for regression analysis and a trained model for classification analysis; you cannot reassign an alias from one type of trained model to another. If you use this API to update an alias and there are very few input fields in common between the old and new trained models for the model alias, the API returns a warning.
client.ml.putTrainedModelAlias({ model_alias, model_id })
Arguments
edit-
Request (object):
-
model_alias
(string): The alias to create or update. This value cannot end in numbers. -
model_id
(string): The identifier for the trained model that the alias refers to. -
reassign
(Optional, boolean): Specifies whether the alias gets reassigned to the specified trained model if it is already assigned to a different model. If the alias is already assigned and this parameter is false, the API returns an error.
-
put_trained_model_definition_part
editCreate part of a trained model definition.
client.ml.putTrainedModelDefinitionPart({ model_id, part, definition, total_definition_length, total_parts })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
part
(number): The definition part number. When the definition is loaded for inference the definition parts are streamed in the order of their part number. The first part must be0
and the final part must betotal_parts - 1
. -
definition
(string): The definition part for the model. Must be a base64 encoded string. -
total_definition_length
(number): The total uncompressed definition length in bytes. Not base64 encoded. -
total_parts
(number): The total number of parts that will be uploaded. Must be greater than 0.
-
put_trained_model_vocabulary
editCreate a trained model vocabulary.
This API is supported only for natural language processing (NLP) models.
The vocabulary is stored in the index as described in inference_config.*.vocabulary
of the trained model definition.
client.ml.putTrainedModelVocabulary({ model_id, vocabulary })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
vocabulary
(string[]): The model vocabulary, which must not be empty. -
merges
(Optional, string[]): The optional model merges if required by the tokenizer. -
scores
(Optional, number[]): The optional vocabulary value scores if required by the tokenizer.
-
reset_job
editReset an anomaly detection job. All model state and results are deleted. The job is ready to start over as if it had just been created. It is not currently possible to reset multiple jobs using wildcards or a comma separated list.
client.ml.resetJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): The ID of the job to reset. -
wait_for_completion
(Optional, boolean): Specifies whether the request should wait until the operation has completed before returning. -
delete_user_annotations
(Optional, boolean): Specifies whether annotations that have been added by the user should be deleted along with any auto-generated annotations when the job is reset.
-
revert_model_snapshot
editRevert to a snapshot. The machine learning features react quickly to anomalous input, learning new behaviors in data. Highly anomalous input increases the variance in the models whilst the system learns whether this is a new step-change in behavior or a one-off event. In the case where this anomalous input is known to be a one-off, then it might be appropriate to reset the model state to a time before this event. For example, you might consider reverting to a saved snapshot after Black Friday or a critical system failure.
client.ml.revertModelSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): You can specifyempty
as the <snapshot_id>. Reverting to the empty snapshot means the anomaly detection job starts learning a new model from scratch when it is started. -
delete_intervening_results
(Optional, boolean): Refer to the description for thedelete_intervening_results
query parameter.
-
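As a sketch, assuming a snapshot ID obtained from the get model snapshots API, you might revert and discard any results gathered since that snapshot:
await client.ml.revertModelSnapshot({
  job_id: 'it-ops', // hypothetical job ID
  snapshot_id: '1575402236', // hypothetical snapshot ID
  delete_intervening_results: true
});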
set_upgrade_mode
editSet upgrade_mode for ML indices. Sets a cluster wide upgrade_mode setting that prepares machine learning indices for an upgrade. When upgrading your cluster, in some circumstances you must restart your nodes and reindex your machine learning indices. In those circumstances, there must be no machine learning jobs running. You can close the machine learning jobs, do the upgrade, then open all the jobs again. Alternatively, you can use this API to temporarily halt tasks associated with the jobs and datafeeds and prevent new jobs from opening. You can also use this API during upgrades that do not require you to reindex your machine learning indices, though stopping jobs is not a requirement in that case. You can see the current value for the upgrade_mode setting by using the get machine learning info API.
client.ml.setUpgradeMode({ ... })
Arguments
edit-
Request (object):
-
enabled
(Optional, boolean): Whentrue
, it enablesupgrade_mode
which temporarily halts all job and datafeed tasks and prohibits new job and datafeed tasks from starting. -
timeout
(Optional, string | -1 | 0): The time to wait for the request to be completed.
-
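For example, a sketch that pauses all machine learning tasks before a cluster upgrade and re-enables them afterwards:
// Before the upgrade
await client.ml.setUpgradeMode({ enabled: true, timeout: '10m' });
// ... perform the upgrade ...
// After the upgrade
await client.ml.setUpgradeMode({ enabled: false });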
start_data_frame_analytics
editStart a data frame analytics job.
A data frame analytics job can be started and stopped multiple times
throughout its lifecycle.
If the destination index does not exist, it is created automatically the
first time you start the data frame analytics job. The
index.number_of_shards
and index.number_of_replicas
settings for the
destination index are copied from the source index. If there are multiple
source indices, the destination index copies the highest setting values. The
mappings for the destination index are also copied from the source indices.
If there are any mapping conflicts, the job fails to start.
If the destination index exists, it is used as is. You can therefore set up
the destination index in advance with custom settings and mappings.
client.ml.startDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
timeout
(Optional, string | -1 | 0): Controls the amount of time to wait until the data frame analytics job starts.
-
start_datafeed
editStart datafeeds.
A datafeed must be started in order to retrieve data from Elasticsearch. A datafeed can be started and stopped multiple times throughout its lifecycle.
Before you can start a datafeed, the anomaly detection job must be open. Otherwise, an error occurs.
If you restart a stopped datafeed, it continues processing input data from the next millisecond after it was stopped. If new data was indexed for that exact millisecond between stopping and starting, it will be ignored.
When Elasticsearch security features are enabled, your datafeed remembers which roles the last user to create or update it had at the time of creation or update and runs the query using those same roles. If you provided secondary authorization headers when you created or updated the datafeed, those credentials are used instead.
client.ml.startDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
end
(Optional, string | Unit): Refer to the description for theend
query parameter. -
start
(Optional, string | Unit): Refer to the description for thestart
query parameter. -
timeout
(Optional, string | -1 | 0): Refer to the description for thetimeout
query parameter.
-
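A minimal sketch that starts a hypothetical datafeed in real time from a given point onward:
await client.ml.startDatafeed({
  datafeed_id: 'datafeed-it-ops', // hypothetical datafeed ID
  start: '2025-01-01T00:00:00Z'
});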
start_trained_model_deployment
editStart a trained model deployment. It allocates the model to every machine learning node.
client.ml.startTrainedModelDeployment({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. Currently, only PyTorch models are supported. -
adaptive_allocations
(Optional, { enabled, min_number_of_allocations, max_number_of_allocations }): Adaptive allocations configuration. When enabled, the number of allocations is set based on the current load. If adaptive_allocations is enabled, do not set the number of allocations manually. -
cache_size
(Optional, number | string): The inference cache size (in memory outside the JVM heap) per node for the model. The default value is the same size as themodel_size_bytes
. To disable the cache,0b
can be provided. -
deployment_id
(Optional, string): A unique identifier for the deployment of the model. -
number_of_allocations
(Optional, number): The number of model allocations on each node where the model is deployed. All allocations on a node share the same copy of the model in memory but use a separate set of threads to evaluate the model. Increasing this value generally increases the throughput. If this setting is greater than the number of hardware threads it will automatically be changed to a value less than the number of hardware threads. If adaptive_allocations is enabled, do not set this value, because it’s automatically set. -
priority
(Optional, Enum("normal" | "low")): The deployment priority. -
queue_capacity
(Optional, number): Specifies the number of inference requests that are allowed in the queue. After the number of requests exceeds this value, new requests are rejected with a 429 error. -
threads_per_allocation
(Optional, number): Sets the number of threads used by each model allocation during inference. This generally increases the inference speed. The inference process is a compute-bound process; any number greater than the number of available hardware threads on the machine does not increase the inference speed. If this setting is greater than the number of hardware threads it will automatically be changed to a value less than the number of hardware threads. -
timeout
(Optional, string | -1 | 0): Specifies the amount of time to wait for the model to deploy. -
wait_for
(Optional, Enum("started" | "starting" | "fully_allocated")): Specifies the allocation status to wait for before returning.
-
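For example, a sketch that deploys a hypothetical PyTorch model with two allocations and waits until it is fully started:
await client.ml.startTrainedModelDeployment({
  model_id: 'my-text-classifier', // hypothetical model ID
  deployment_id: 'my-text-classifier-prod',
  number_of_allocations: 2,
  threads_per_allocation: 1,
  wait_for: 'started',
  timeout: '2m'
});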
stop_data_frame_analytics
editStop data frame analytics jobs. A data frame analytics job can be started and stopped multiple times throughout its lifecycle.
client.ml.stopDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches. -
force
(Optional, boolean): If true, the data frame analytics job is stopped forcefully. -
timeout
(Optional, string | -1 | 0): Controls the amount of time to wait until the data frame analytics job stops. Defaults to 20 seconds.
stop_datafeed
editStop datafeeds. A datafeed that is stopped ceases to retrieve data from Elasticsearch. A datafeed can be started and stopped multiple times throughout its lifecycle.
client.ml.stopDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): Identifier for the datafeed. You can stop multiple datafeeds in a single API request by using a comma-separated list of datafeeds or a wildcard expression. You can close all datafeeds by using_all
or by specifying*
as the identifier. -
allow_no_match
(Optional, boolean): Refer to the description for theallow_no_match
query parameter. -
force
(Optional, boolean): Refer to the description for theforce
query parameter. -
timeout
(Optional, string | -1 | 0): Refer to the description for thetimeout
query parameter.
-
stop_trained_model_deployment
editStop a trained model deployment.
client.ml.stopTrainedModelDeployment({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no deployments that match; contains the_all
string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. By default, it returns an empty array when there are no matches and the subset of results when there are partial matches. Iffalse
, the request returns a 404 status code when there are no matches or only partial matches. -
force
(Optional, boolean): Forcefully stops the deployment, even if it is used by ingest pipelines. You can’t use these pipelines until you restart the model deployment.
-
update_data_frame_analytics
editUpdate a data frame analytics job.
client.ml.updateDataFrameAnalytics({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
description
(Optional, string): A description of the job. -
model_memory_limit
(Optional, string): The approximate maximum amount of memory resources that are permitted for analytical processing. If yourelasticsearch.yml
file contains anxpack.ml.max_model_memory_limit
setting, an error occurs when you try to create data frame analytics jobs that havemodel_memory_limit
values greater than that setting. -
max_num_threads
(Optional, number): The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself. -
allow_lazy_start
(Optional, boolean): Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
-
update_datafeed
editUpdate a datafeed. You must stop and start the datafeed for the changes to be applied. When Elasticsearch security features are enabled, your datafeed remembers which roles the user who updated it had at the time of the update and runs the query using those same roles. If you provide secondary authorization headers, those credentials are used instead.
client.ml.updateDatafeed({ datafeed_id })
Arguments
edit-
Request (object):
-
datafeed_id
(string): A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters. -
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data. -
chunking_config
(Optional, { mode, time_span }): Datafeeds might search over long time periods, for several months or years. This search is split into time chunks in order to ensure the load on Elasticsearch is managed. Chunking configuration controls how the size of these time chunks are calculated; it is an advanced configuration option. -
delayed_data_check_config
(Optional, { check_window, enabled }): Specifies whether the datafeed checks for missing data and the size of the window. The datafeed can optionally search over indices that have already been read in an effort to determine whether any data has subsequently been added to the index. If missing data is found, it is a good indication that the query_delay
is set too low and the data is being indexed after the datafeed has passed that moment in time. This check runs only on real-time datafeeds. -
frequency
(Optional, string | -1 | 0): The interval at which scheduled queries are made while the datafeed runs in real time. The default value is either the bucket span for short bucket spans, or, for longer bucket spans, a sensible fraction of the bucket span. When frequency
is shorter than the bucket span, interim results for the last (partial) bucket are written then eventually overwritten by the full bucket results. If the datafeed uses aggregations, this value must be divisible by the interval of the date histogram aggregation. -
indices
(Optional, string[]): An array of index names. Wildcards are supported. If any of the indices are in remote clusters, the machine learning nodes must have the remote_cluster_client
role. -
indices_options
(Optional, { allow_no_indices, expand_wildcards, ignore_unavailable, ignore_throttled }): Specifies index expansion options that are used during search. -
job_id
(Optional, string) -
max_empty_searches
(Optional, number): If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops and closes the associated job after this many real-time searches return no documents. In other words, it stops after frequency
times max_empty_searches
of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped. By default, it is not set. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. Note that if you change the query, the analyzed data is also changed. Therefore, the time required to learn might be long and the understandability of the results is unpredictable. If you want to make significant changes to the source data, it is recommended that you clone the job and datafeed and make the amendments in the clone. Let both run in parallel and close one when you are satisfied with the results of the job. -
query_delay
(Optional, string | -1 | 0): The number of seconds behind real time that data is queried. For example, if data from 10:04 a.m. might not be searchable in Elasticsearch until 10:06 a.m., set this property to 120 seconds. The default value is randomly selected between 60s
and 120s
. This randomness improves the query performance when there are multiple jobs running on the same node. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): Specifies runtime fields for the datafeed search. -
script_fields
(Optional, Record<string, { script, ignore_failure }>): Specifies scripts that evaluate custom expressions and returns script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields. -
scroll_size
(Optional, number): The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window
. -
allow_no_indices
(Optional, boolean): If true
, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the _all
string or when no indices are specified. -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports a list of values. Valid values are:
-
-
all
: Match any data stream or index, including hidden ones. -
closed
: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed. -
hidden
: Match hidden data streams and hidden indices. Must be combined with open
, closed
, or both. -
none
: Wildcard patterns are not accepted. -
open
: Match open, non-hidden indices. Also matches any non-hidden data stream.-
ignore_throttled
(Optional, boolean): If true
, concrete, expanded or aliased indices are ignored when frozen. -
ignore_unavailable
(Optional, boolean): If true
, unavailable indices (missing or closed) are ignored.
-
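As a minimal sketch (the datafeed ID and values are hypothetical); remember that the datafeed must be stopped and started again before the update takes effect:
const response = await client.ml.updateDatafeed({
  datafeed_id: 'datafeed-my-job', // hypothetical datafeed identifier
  frequency: '150s',              // query more often while running in real time
  scroll_size: 1000
});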
update_filter
editUpdate a filter. Updates the description of a filter, adds items, or removes items from the list.
client.ml.updateFilter({ filter_id })
Arguments
edit-
Request (object):
-
filter_id
(string): A string that uniquely identifies a filter. -
add_items
(Optional, string[]): The items to add to the filter. -
description
(Optional, string): A description for the filter. -
remove_items
(Optional, string[]): The items to remove from the filter.
-
update_job
editUpdate an anomaly detection job. Updates certain properties of an anomaly detection job.
client.ml.updateJob({ job_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the job. -
allow_lazy_open
(Optional, boolean): Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node. If false
and a machine learning node with capacity to run the job cannot immediately be found, the open anomaly detection jobs API returns an error. However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes
setting. If this option is set to true
, the open anomaly detection jobs API does not return an error and the job waits in the opening state until sufficient machine learning node capacity is available. -
analysis_limits
(Optional, { model_memory_limit }) -
background_persist_interval
(Optional, string | -1 | 0): Advanced configuration option. The time between each periodic persistence of the model. The default value is a randomized value between 3 to 4 hours, which avoids all jobs persisting at exactly the same time. The smallest allowed value is 1 hour. For very large models (several GB), persistence could take 10-20 minutes, so do not set the value too low. If the job is open when you make the update, you must stop the datafeed, close the job, then reopen the job and restart the datafeed for the changes to take effect. -
custom_settings
(Optional, Record<string, User-defined value>): Advanced configuration option. Contains custom meta data about the job. For example, it can contain custom URL information as shown in Adding custom URLs to machine learning results. -
categorization_filters
(Optional, string[]) -
description
(Optional, string): A description of the job. -
model_plot_config
(Optional, { annotations_enabled, enabled, terms }) -
model_prune_window
(Optional, string | -1 | 0) -
daily_model_snapshot_retention_after_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job. Valid values range from 0 to model_snapshot_retention_days
. For jobs created before version 7.8.0, the default value matches model_snapshot_retention_days
. -
model_snapshot_retention_days
(Optional, number): Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. -
renormalization_window_days
(Optional, number): Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. -
results_retention_days
(Optional, number): Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. -
groups
(Optional, string[]): A list of job groups. A job can belong to no groups or many. -
detectors
(Optional, { detector_index, description, custom_rules }[]): An array of detector update objects. -
per_partition_categorization
(Optional, { enabled, stop_on_warn }): Settings related to how categorization interacts with partition fields.
-
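For example, a minimal sketch that changes a few of the updatable properties (the job ID, group names, and values are hypothetical):
const response = await client.ml.updateJob({
  job_id: 'my-anomaly-job',     // hypothetical job identifier
  description: 'Tuned retention settings',
  results_retention_days: 60,   // prune results older than 60 days
  groups: ['web', 'production'] // hypothetical job groups
});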
update_model_snapshot
editUpdate a snapshot. Updates certain properties of a snapshot.
client.ml.updateModelSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): Identifier for the model snapshot. -
description
(Optional, string): A description of the model snapshot. -
retain
(Optional, boolean): If true
, this snapshot will not be deleted during automatic cleanup of snapshots older than model_snapshot_retention_days
. However, this snapshot will be deleted when the job is deleted.
-
update_trained_model_deployment
editUpdate a trained model deployment.
client.ml.updateTrainedModelDeployment({ model_id })
Arguments
edit-
Request (object):
-
model_id
(string): The unique identifier of the trained model. Currently, only PyTorch models are supported. -
number_of_allocations
(Optional, number): The number of model allocations on each node where the model is deployed. All allocations on a node share the same copy of the model in memory but use a separate set of threads to evaluate the model. Increasing this value generally increases the throughput. If this setting is greater than the number of hardware threads it will automatically be changed to a value less than the number of hardware threads. If adaptive_allocations is enabled, do not set this value, because it’s automatically set. -
adaptive_allocations
(Optional, { enabled, min_number_of_allocations, max_number_of_allocations }): Adaptive allocations configuration. When enabled, the number of allocations is set based on the current load. If adaptive_allocations is enabled, do not set the number of allocations manually.
-
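A minimal sketch of scaling a deployment up (the model ID and allocation count are hypothetical); use adaptive_allocations instead if you want the allocation count managed automatically:
const response = await client.ml.updateTrainedModelDeployment({
  model_id: 'my-pytorch-model', // hypothetical model identifier
  number_of_allocations: 4      // do not set this when adaptive_allocations is enabled
});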
upgrade_job_snapshot
editUpgrade a snapshot. Upgrade an anomaly detection model snapshot to the latest major version. Over time, older snapshot formats are deprecated and removed. Anomaly detection jobs support only snapshots that are from the current or previous major version. This API provides a means to upgrade a snapshot to the current major version. This aids in preparing the cluster for an upgrade to the next major version. Only one snapshot per anomaly detection job can be upgraded at a time and the upgraded snapshot cannot be the current snapshot of the anomaly detection job.
client.ml.upgradeJobSnapshot({ job_id, snapshot_id })
Arguments
edit-
Request (object):
-
job_id
(string): Identifier for the anomaly detection job. -
snapshot_id
(string): A numerical character string that uniquely identifies the model snapshot. -
wait_for_completion
(Optional, boolean): When true, the API won’t respond until the upgrade is complete. Otherwise, it responds as soon as the upgrade task is assigned to a node. -
timeout
(Optional, string | -1 | 0): Controls the time to wait for the request to complete.
-
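For example, a minimal sketch that blocks until the upgrade finishes (the job and snapshot IDs are hypothetical):
const response = await client.ml.upgradeJobSnapshot({
  job_id: 'my-anomaly-job',  // hypothetical job identifier
  snapshot_id: '1575402236', // hypothetical snapshot identifier
  wait_for_completion: true,
  timeout: '30m'
});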
nodes
editclear_repositories_metering_archive
editClear the archived repositories metering. Clear the archived repositories metering information in the cluster.
client.nodes.clearRepositoriesMeteringArchive({ node_id, max_archive_version })
Arguments
edit-
Request (object):
-
node_id
(string | string[]): List of node IDs or names used to limit returned information. -
max_archive_version
(number): Specifies the maximum archive_version
to be cleared from the archive.
-
get_repositories_metering_info
editGet cluster repositories metering. Get repositories metering information for a cluster. This API exposes monotonically non-decreasing counters and it is expected that clients would durably store the information needed to compute aggregations over a period of time. Additionally, the information exposed by this API is volatile, meaning that it will not be present after node restarts.
client.nodes.getRepositoriesMeteringInfo({ node_id })
Arguments
edit-
Request (object):
-
node_id
(string | string[]): List of node IDs or names used to limit returned information. All the nodes selective options are explained [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster.html#cluster-nodes).
-
hot_threads
editGet the hot threads for nodes. Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.
client.nodes.hotThreads({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node IDs or names used to limit returned information. -
ignore_idle_threads
(Optional, boolean): If true, known idle threads (e.g. waiting in a socket select, or to get a task from an empty queue) are filtered out. -
interval
(Optional, string | -1 | 0): The interval to do the second sampling of threads. -
snapshots
(Optional, number): Number of samples of thread stacktrace. -
threads
(Optional, number): Specifies the number of hot threads to provide information for. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
type
(Optional, Enum("cpu" | "wait" | "block" | "gpu" | "mem")): The type to sample. -
sort
(Optional, Enum("cpu" | "wait" | "block" | "gpu" | "mem")): The sort order for cpu type (default: total)
-
info
editGet node information.
By default, the API returns all attributes and core settings for cluster nodes.
client.nodes.info({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node IDs or names used to limit returned information. -
metric
(Optional, string | string[]): Limits the information returned to the specific metrics. Supports a list, such as http,ingest. -
flat_settings
(Optional, boolean): If true, returns settings in flat format. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
reload_secure_settings
editReload the keystore on nodes in the cluster.
Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable. That is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.
When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.
client.nodes.reloadSecureSettings({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): The names of particular nodes in the cluster to target. -
secure_settings_password
(Optional, string): The password for the Elasticsearch keystore. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
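A minimal sketch, assuming a password-protected keystore that uses the same password on every node (the password value is a placeholder):
const response = await client.nodes.reloadSecureSettings({
  secure_settings_password: 'keystore-password' // omit this if the keystore is only obfuscated
});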
stats
editGet node statistics. Get statistics for nodes in a cluster. By default, all stats are returned. You can limit the returned information by using metrics.
client.nodes.stats({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): List of node IDs or names used to limit returned information. -
metric
(Optional, string | string[]): Limit the information returned to the specified metrics -
index_metric
(Optional, string | string[]): Limit the information returned for indices metric to the specific index metrics. It can be used only if indices (or all) metric is specified. -
completion_fields
(Optional, string | string[]): List or wildcard expressions of fields to include in fielddata and suggest statistics. -
fielddata_fields
(Optional, string | string[]): List or wildcard expressions of fields to include in fielddata statistics. -
fields
(Optional, string | string[]): List or wildcard expressions of fields to include in the statistics. -
groups
(Optional, boolean): List of search groups to include in the search statistics. -
include_segment_file_sizes
(Optional, boolean): If true, the call reports the aggregated disk usage of each one of the Lucene index files (only applies if segment stats are requested). -
level
(Optional, Enum("cluster" | "indices" | "shards")): Indicates whether statistics are aggregated at the cluster, index, or shard level. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
types
(Optional, string[]): A list of document types for the indexing index metric. -
include_unloaded_segments
(Optional, boolean): If true
, the response includes information from segments that are not loaded into memory.
-
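For example, a minimal sketch that limits the response to JVM and indices statistics for one node (the node name is hypothetical):
const response = await client.nodes.stats({
  node_id: 'node-1',          // hypothetical node name; omit to target all nodes
  metric: ['jvm', 'indices'],
  index_metric: 'search',     // only valid because the indices metric is requested
  level: 'indices'
});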
usage
editGet feature usage information.
client.nodes.usage({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): A list of node IDs or names to limit the returned information; use _local
to return information from the node you’re connecting to, leave empty to get information from all nodes -
metric
(Optional, string | string[]): Limits the information returned to the specific metrics. A list of the following options: _all
, rest_actions
. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
query_rules
editdelete_rule
editDelete a query rule. Delete a query rule within a query ruleset. This is a destructive action that is only recoverable by re-adding the same rule with the create or update query rule API.
client.queryRules.deleteRule({ ruleset_id, rule_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset containing the rule to delete -
rule_id
(string): The unique identifier of the query rule within the specified ruleset to delete
-
delete_ruleset
editDelete a query ruleset. Remove a query ruleset and its associated data. This is a destructive action that is not recoverable.
client.queryRules.deleteRuleset({ ruleset_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset to delete
-
get_rule
editGet a query rule. Get details about a query rule within a query ruleset.
client.queryRules.getRule({ ruleset_id, rule_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset containing the rule to retrieve -
rule_id
(string): The unique identifier of the query rule within the specified ruleset to retrieve
-
get_ruleset
editGet a query ruleset. Get details about a query ruleset.
client.queryRules.getRuleset({ ruleset_id })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset
-
list_rulesets
editGet all query rulesets. Get summarized information about the query rulesets.
client.queryRules.listRulesets({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): The offset from the first result to fetch. -
size
(Optional, number): The maximum number of results to retrieve.
-
put_rule
editCreate or update a query rule. Create or update a query rule within a query ruleset.
Due to limitations within pinned queries, you can only pin documents using ids or docs, but cannot use both in a single rule. It is advised to use one or the other in query rulesets, to avoid errors. Additionally, pinned queries have a maximum limit of 100 pinned hits. If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
client.queryRules.putRule({ ruleset_id, rule_id, type, criteria, actions })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset containing the rule to be created or updated. -
rule_id
(string): The unique identifier of the query rule within the specified ruleset to be created or updated. -
type
(Enum("pinned" | "exclude")): The type of rule. -
criteria
({ type, metadata, values } | { type, metadata, values }[]): The criteria that must be met for the rule to be applied. If multiple criteria are specified for a rule, all criteria must be met for the rule to be applied. -
actions
({ ids, docs }): The actions to take when the rule is matched. The format of this action depends on the rule type. -
priority
(Optional, number)
-
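A minimal sketch of a pinned rule (the ruleset ID, rule ID, metadata key, and document IDs are hypothetical):
const response = await client.queryRules.putRule({
  ruleset_id: 'my-ruleset', // hypothetical ruleset identifier
  rule_id: 'pin-docs',      // hypothetical rule identifier
  type: 'pinned',
  criteria: [{ type: 'contains', metadata: 'user_query', values: ['pugs'] }],
  actions: { ids: ['id1', 'id2'] } // pin by ids or by docs, never both in one rule
});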
put_ruleset
editCreate or update a query ruleset.
There is a limit of 100 rules per ruleset.
This limit can be increased by using the xpack.applications.rules.max_rules_per_ruleset
cluster setting.
Due to limitations within pinned queries, you can only select documents using ids
or docs
, but cannot use both in a single rule.
It is advised to use one or the other in query rulesets, to avoid errors.
Additionally, pinned queries have a maximum limit of 100 pinned hits.
If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
client.queryRules.putRuleset({ ruleset_id, rules })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset to be created or updated. -
rules
({ rule_id, type, criteria, actions, priority } | { rule_id, type, criteria, actions, priority }[])
-
test
editTest a query ruleset. Evaluate match criteria against a query ruleset to identify the rules that would match that criteria.
client.queryRules.test({ ruleset_id, match_criteria })
Arguments
edit-
Request (object):
-
ruleset_id
(string): The unique identifier of the query ruleset to be created or updated -
match_criteria
(Record<string, User-defined value>): The match criteria to apply to rules in the given query ruleset. Match criteria should match the keys defined in the criteria.metadata
field of the rule.
-
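For example, a minimal sketch (the ruleset ID and criteria key are hypothetical; the key must match a criteria.metadata value defined in the ruleset):
const response = await client.queryRules.test({
  ruleset_id: 'my-ruleset',              // hypothetical ruleset identifier
  match_criteria: { user_query: 'pugs' } // hypothetical metadata key and value
});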
rollup
editdelete_job
editDelete a rollup job.
A job must be stopped before it can be deleted. If you attempt to delete a started job, an error occurs. Similarly, if you attempt to delete a nonexistent job, an exception occurs.
When you delete a job, you remove only the process that is actively monitoring and rolling up data. The API does not delete any previously rolled up data. This is by design; a user may wish to roll up a static data set. Because the data set is static, after it has been fully rolled up there is no need to keep the indexing rollup job around (as there will be no new data). Thus the job can be deleted, leaving behind the rolled up data for analysis. If you wish to also remove the rollup data and the rollup index contains the data for only a single job, you can delete the whole rollup index. If the rollup index stores data from several jobs, you must issue a delete-by-query that targets the rollup job’s identifier in the rollup index. For example:
POST my_rollup_index/_delete_by_query { "query": { "term": { "_rollup.id": "the_rollup_job_id" } } }
client.rollup.deleteJob({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the job.
-
get_jobs
editGet rollup job information. Get the configuration, stats, and status of rollup jobs.
This API returns only active (both STARTED
and STOPPED
) jobs.
If a job was created, ran for a while, then was deleted, the API does not return any details about it.
For details about a historical rollup job, the rollup capabilities API may be more useful.
client.rollup.getJobs({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Identifier for the rollup job. If it is _all
or omitted, the API returns all rollup jobs.
-
get_rollup_caps
editGet the rollup job capabilities. Get the capabilities of any rollup jobs that have been configured for a specific index or index pattern.
This API is useful because a rollup job is often configured to rollup only a subset of fields from the source index. Furthermore, only certain aggregations can be configured for various fields, leading to a limited subset of functionality depending on that configuration. This API enables you to inspect an index and determine:
- Does this index have associated rollup data somewhere in the cluster?
- If yes to the first question, what fields were rolled up, what aggregations can be performed, and where does the data live?
client.rollup.getRollupCaps({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): Index, indices or index-pattern to return rollup capabilities for. _all
may be used to fetch rollup capabilities from all jobs.
-
get_rollup_index_caps
editGet the rollup index capabilities. Get the rollup capabilities of all jobs inside of a rollup index. A single rollup index may store the data for multiple rollup jobs and may have a variety of capabilities depending on those jobs. This API enables you to determine:
- What jobs are stored in an index (or indices specified via a pattern)?
- What target indices were rolled up, what fields were used in those rollups, and what aggregations can be performed on each job?
client.rollup.getRollupIndexCaps({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): Data stream or index to check for rollup capabilities. Wildcard (*
) expressions are supported.
-
put_job
editCreate a rollup job.
From 8.15.0, calling this API in a cluster with no rollup usage will fail with a message about the deprecation and planned removal of rollup features. A cluster needs to contain either a rollup job or a rollup index in order for this API to be allowed to run.
The rollup job configuration contains all the details about how the job should run, when it indexes documents, and what future queries will be able to run against the rollup index.
There are three main sections to the job configuration: the logistical details about the job (for example, the cron schedule), the fields that are used for grouping, and what metrics to collect for each group.
Jobs are created in a STOPPED
state. You can start them with the start rollup jobs API.
client.rollup.putJob({ id, cron, groups, index_pattern, page_size, rollup_index })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the rollup job. This can be any alphanumeric string and uniquely identifies the data that is associated with the rollup job. The ID is persistent; it is stored with the rolled up data. If you create a job, let it run for a while, then delete the job, the data that the job rolled up is still associated with this job ID. You cannot create a new job with the same ID since that could lead to problems with mismatched job configurations. -
cron
(string): A cron string which defines the intervals when the rollup job should be executed. When the interval triggers, the indexer attempts to rollup the data in the index pattern. The cron pattern is unrelated to the time interval of the data being rolled up. For example, you may wish to create hourly rollups of your documents but to only run the indexer on a daily basis at midnight, as defined by the cron. The cron pattern is defined just like a Watcher cron schedule. -
groups
({ date_histogram, histogram, terms }): Defines the grouping fields and aggregations that are defined for this rollup job. These fields will then be available later for aggregating into buckets. These aggs and fields can be used in any combination. Think of the groups configuration as defining a set of tools that can later be used in aggregations to partition the data. Unlike raw data, we have to think ahead to which fields and aggregations might be used. Rollups provide enough flexibility that you simply need to determine which fields are needed, not in what order they are needed. -
index_pattern
(string): The index or index pattern to roll up. Supports wildcard-style patterns (logstash-*
). The job attempts to rollup the entire index or index-pattern. -
page_size
(number): The number of bucket results that are processed on each iteration of the rollup indexer. A larger value tends to execute faster, but requires more memory during processing. This value has no effect on how the data is rolled up; it is merely used for tweaking the speed or memory cost of the indexer. -
rollup_index
(string): The index that contains the rollup results. The index can be shared with other rollup jobs. The data is stored so that it doesn’t interfere with unrelated jobs. -
metrics
(Optional, { field, metrics }[]): Defines the metrics to collect for each grouping tuple. By default, only the doc_counts are collected for each group. To make rollup useful, you will often add metrics like averages, mins, maxes, etc. Metrics are defined on a per-field basis and for each field you configure which metric should be collected. -
timeout
(Optional, string | -1 | 0): Time to wait for the request to complete. -
headers
(Optional, Record<string, string | string[]>)
-
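A minimal sketch of a job configuration (the job ID, index names, schedule, and fields are hypothetical):
const response = await client.rollup.putJob({
  id: 'sensor-rollup',           // hypothetical job identifier
  index_pattern: 'sensor-*',     // source indices to roll up
  rollup_index: 'sensor_rollup',
  cron: '0 0 0 * * ?',           // hypothetical schedule: run the indexer daily at midnight
  page_size: 1000,
  groups: {
    date_histogram: { field: 'timestamp', fixed_interval: '1h' },
    terms: { fields: ['node'] }
  },
  metrics: [{ field: 'temperature', metrics: ['min', 'max', 'avg'] }]
});
The job is created in the STOPPED state; use the start rollup jobs API to begin indexing.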
rollup_search
editSearch rolled-up data. The rollup search endpoint is needed because, internally, rolled-up documents utilize a different document structure than the original data. It rewrites standard Query DSL into a format that matches the rollup documents then takes the response and rewrites it back to what a client would expect given the original query.
The request body supports a subset of features from the regular search API. The following functionality is not available:
size
: Because rollups work on pre-aggregated data, no search hits can be returned and so size must be set to zero or omitted entirely.
highlighter
, suggestors
, post_filter
, profile
, explain
: These are similarly disallowed.
Searching both historical rollup and non-rollup data
The rollup search API has the capability to search across both "live" non-rollup data and the aggregated rollup data. This is done by simply adding the live indices to the URI. For example:
GET sensor-1,sensor_rollup/_rollup_search { "size": 0, "aggregations": { "max_temperature": { "max": { "field": "temperature" } } } }
The rollup search endpoint does two things when the search runs:
- The original request is sent to the non-rollup index unaltered.
- A rewritten version of the original request is sent to the rollup index.
When the two responses are received, the endpoint rewrites the rollup response and merges the two together. During the merging process, if there is any overlap in buckets between the two responses, the buckets from the non-rollup index are used.
client.rollup.rollupSearch({ index })
Arguments
edit-
Request (object):
-
index
(string | string[]): A list of data streams and indices used to limit the request. This parameter has the following rules:
-
-
At least one data stream, index, or wildcard expression must be specified. This target can include a rollup or non-rollup index. For data streams, the stream’s backing indices can only serve as non-rollup indices. Omitting the parameter or using
_all
is not permitted. - Multiple non-rollup indices may be specified.
- Only one rollup index may be specified. If more than one are supplied, an exception occurs.
-
Wildcard expressions (
*
) may be used. If they match more than one rollup index, an exception occurs. However, you can use an expression to match multiple non-rollup indices or data streams.-
aggregations
(Optional, Record<string, { aggregations, meta, adjacency_matrix, auto_date_histogram, avg, avg_bucket, boxplot, bucket_script, bucket_selector, bucket_sort, bucket_count_ks_test, bucket_correlation, cardinality, categorize_text, children, composite, cumulative_cardinality, cumulative_sum, date_histogram, date_range, derivative, diversified_sampler, extended_stats, extended_stats_bucket, frequent_item_sets, filter, filters, geo_bounds, geo_centroid, geo_distance, geohash_grid, geo_line, geotile_grid, geohex_grid, global, histogram, ip_range, ip_prefix, inference, line, matrix_stats, max, max_bucket, median_absolute_deviation, min, min_bucket, missing, moving_avg, moving_percentiles, moving_fn, multi_terms, nested, normalize, parent, percentile_ranks, percentiles, percentiles_bucket, range, rare_terms, rate, reverse_nested, random_sampler, sampler, scripted_metric, serial_diff, significant_terms, significant_text, stats, stats_bucket, string_stats, sum, sum_bucket, terms, time_series, top_hits, t_test, top_metrics, value_count, weighted_avg, variable_width_histogram }>): Specifies aggregations. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): Specifies a DSL query that is subject to some limitations. -
size
(Optional, number): Must be zero if set, as rollups work on pre-aggregated data. -
rest_total_hits_as_int
(Optional, boolean): Indicates whether hits.total should be rendered as an integer or an object in the rest search response -
typed_keys
(Optional, boolean): Specify whether aggregation and suggester names should be prefixed by their respective types in the response
-
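The equivalent call with the client, mirroring the example request above (the index names are hypothetical):
const response = await client.rollup.rollupSearch({
  index: ['sensor-1', 'sensor_rollup'], // one live index plus one rollup index
  size: 0,                              // rollups cannot return search hits
  aggregations: {
    max_temperature: { max: { field: 'temperature' } }
  }
});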
start_job
editStart rollup jobs. If you try to start a job that does not exist, an exception occurs. If you try to start a job that is already started, nothing happens.
client.rollup.startJob({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the rollup job.
-
stop_job
editStop rollup jobs. If you try to stop a job that does not exist, an exception occurs. If you try to stop a job that is already stopped, nothing happens.
Since only a stopped job can be deleted, it can be useful to block the API until the indexer has fully stopped.
This is accomplished with the wait_for_completion
query parameter, and optionally a timeout. For example:
POST _rollup/job/sensor/_stop?wait_for_completion=true&timeout=10s
The parameter blocks the API call from returning until either the job has moved to STOPPED or the specified time has elapsed. If the specified time elapses without the job moving to STOPPED, a timeout exception occurs.
client.rollup.stopJob({ id })
Arguments
edit-
Request (object):
-
id
(string): Identifier for the rollup job. -
timeout
(Optional, string | -1 | 0): If wait_for_completion
is true
, the API blocks for (at maximum) the specified duration while waiting for the job to stop. If more than timeout
time has passed, the API throws a timeout exception. NOTE: Even if a timeout occurs, the stop request is still processing and eventually moves the job to STOPPED. The timeout simply means the API call itself timed out while waiting for the status change. -
wait_for_completion
(Optional, boolean): If set to true
, causes the API to block until the indexer state completely stops. If set to false
, the API returns immediately and the indexer is stopped asynchronously in the background.
-
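The equivalent blocking call with the client (the job ID is hypothetical):
const response = await client.rollup.stopJob({
  id: 'sensor-rollup',       // hypothetical job identifier
  wait_for_completion: true, // block until the indexer reaches STOPPED
  timeout: '10s'
});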
search_application
editdelete
editDelete a search application.
Remove a search application and its associated alias. Indices attached to the search application are not removed.
client.searchApplication.delete({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application to delete.
-
delete_behavioral_analytics
editDelete a behavioral analytics collection. The associated data stream is also deleted.
client.searchApplication.deleteBehavioralAnalytics({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the analytics collection to be deleted
-
get
editGet search application details.
client.searchApplication.get({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application
-
get_behavioral_analytics
editGet behavioral analytics collections.
client.searchApplication.getBehavioralAnalytics({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string[]): A list of analytics collections to limit the returned information
-
list
editGet search applications. Get information about search applications.
client.searchApplication.list({ ... })
Arguments
edit-
Request (object):
-
q
(Optional, string): Query in the Lucene query string syntax. -
from
(Optional, number): Starting offset. -
size
(Optional, number): Specifies a max number of results to get.
-
post_behavioral_analytics_event
editCreate a behavioral analytics collection event.
client.searchApplication.postBehavioralAnalyticsEvent({ collection_name, event_type })
Arguments
edit-
Request (object):
-
collection_name
(string): The name of the behavioral analytics collection. -
event_type
(Enum("page_view" | "search" | "search_click")): The analytics event type. -
payload
(Optional, User-defined value) -
debug
(Optional, boolean): Whether the response type has to include more details
-
put
editCreate or update a search application.
client.searchApplication.put({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application to be created or updated. -
search_application
(Optional, { indices, analytics_collection_name, template }) -
create
(Optional, boolean): If true
, this request cannot replace or update existing Search Applications.
-
put_behavioral_analytics
editCreate a behavioral analytics collection.
client.searchApplication.putBehavioralAnalytics({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the analytics collection to be created or updated.
-
render_query
editRender a search application query.
Generate an Elasticsearch query using the specified query parameters and the search template associated with the search application or a default template if none is specified.
If a parameter used in the search template is not specified in params
, the parameter’s default value will be used.
The API returns the specific Elasticsearch query that would be generated and run by calling the search application search API.
You must have read
privileges on the backing alias of the search application.
client.searchApplication.renderQuery({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application to render the query for. -
params
(Optional, Record<string, User-defined value>)
-
search
editRun a search application search. Generate and run an Elasticsearch query that uses the specified query parameters and the search template associated with the search application or a default template. Unspecified template parameters are assigned their default values if applicable.
client.searchApplication.search({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the search application to be searched. -
params
(Optional, Record<string, User-defined value>): Query parameters specific to this request, which will override any defaults specified in the template. -
typed_keys
(Optional, boolean): Determines whether aggregation names are prefixed by their respective types in the response.
-
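For example, a minimal sketch (the application name and template parameters are hypothetical; valid parameter names depend on the application’s search template):
const response = await client.searchApplication.search({
  name: 'my-search-app',                     // hypothetical application name
  params: { query_string: 'pugs', size: 10 } // hypothetical template parameters
});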
searchable_snapshots
editcache_stats
editGet cache statistics. Get statistics about the shared cache for partially mounted indices.
client.searchableSnapshots.cacheStats({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): The names of the nodes in the cluster to target. -
master_timeout
(Optional, string | -1 | 0)
-
clear_cache
editClear the cache. Clear indices and data streams from the shared cache for partially mounted indices.
client.searchableSnapshots.clearCache({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams, indices, and aliases to clear from the cache. It supports wildcards (*
). -
expand_wildcards
(Optional, Enum("all" | "open" | "closed" | "hidden" | "none") | Enum("all" | "open" | "closed" | "hidden" | "none")[]): Whether to expand wildcard expression to concrete indices that are open, closed or both. -
allow_no_indices
(Optional, boolean): Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes _all
string or when no indices have been specified) -
ignore_unavailable
(Optional, boolean): Whether specified concrete indices should be ignored when unavailable (missing or closed)
-
mount
editMount a snapshot. Mount a snapshot as a searchable snapshot index. Do not use this API for snapshots managed by index lifecycle management (ILM). Manually mounting ILM-managed snapshots can interfere with ILM processes.
client.searchableSnapshots.mount({ repository, snapshot, index })
Arguments
edit-
Request (object):
-
repository
(string): The name of the repository containing the snapshot of the index to mount. -
snapshot
(string): The name of the snapshot of the index to mount. -
index
(string): The name of the index contained in the snapshot whose data is to be mounted. If no renamed_index
is specified, this name will also be used to create the new index. -
renamed_index
(Optional, string): The name of the index that will be created. -
index_settings
(Optional, Record<string, User-defined value>): The settings that should be added to the index when it is mounted. -
ignore_index_settings
(Optional, string[]): The names of settings that should be removed from the index when it is mounted. -
master_timeout
(Optional, string | -1 | 0): The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1
. -
wait_for_completion
(Optional, boolean): If true, the request blocks until the operation is complete. -
storage
(Optional, string): The mount option for the searchable snapshot index.
-
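A minimal sketch of mounting an index as a partially mounted (shared cache) searchable snapshot (the repository, snapshot, and index names are hypothetical):
const response = await client.searchableSnapshots.mount({
  repository: 'my_repository', // hypothetical repository name
  snapshot: 'my_snapshot',     // hypothetical snapshot name
  index: 'my_index',           // index inside the snapshot
  renamed_index: 'my_index_mounted',
  storage: 'shared_cache',     // use 'full_copy' for a fully mounted index
  wait_for_completion: true
});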
stats
editGet searchable snapshot statistics.
client.searchableSnapshots.stats({ ... })
Arguments
edit-
Request (object):
-
index
(Optional, string | string[]): A list of data streams and indices to retrieve statistics for. -
level
(Optional, Enum("cluster" | "indices" | "shards")): Return stats aggregated at cluster, index or shard level
-
security
editactivate_user_profile
editActivate a user profile.
Create or update a user profile on behalf of another user.
The user profile feature is designed only for use by Kibana and Elastic’s Observability, Enterprise Search, and Elastic Security solutions.
Individual users and external applications should not call this API directly.
The calling application must have either an access_token
or a combination of username
and password
for the user that the profile document is intended for.
Elastic reserves the right to change or remove this feature in future releases without prior notice.
This API creates or updates a profile document for end users with information that is extracted from the user’s authentication object including username
, full_name,
roles
, and the authentication realm.
For example, in the JWT access_token
case, the profile user’s username
is extracted from the JWT token claim pointed to by the claims.principal
setting of the JWT realm that authenticated the token.
When updating a profile document, the API enables the document if it was disabled.
Any updates do not change existing content for either the labels
or data
fields.
client.security.activateUserProfile({ grant_type })
Arguments
edit-
Request (object):
-
grant_type
(Enum("password" | "access_token")): The type of grant. -
access_token
(Optional, string): The user’s Elasticsearch access token or JWT. Both access
and id
JWT token types are supported and they depend on the underlying JWT realm configuration. If you specify the access_token
grant type, this parameter is required. It is not valid with other grant types. -
password
(Optional, string): The user’s password. If you specify the password
grant type, this parameter is required. It is not valid with other grant types. -
username
(Optional, string): The username that identifies the user. If you specify the password
grant type, this parameter is required. It is not valid with other grant types.
-
authenticate
editAuthenticate a user.
Authenticates a user and returns information about the authenticated user. Include the user information in a [basic auth header](https://en.wikipedia.org/wiki/Basic_access_authentication). A successful call returns a JSON structure that shows user information such as their username, the roles that are assigned to the user, any assigned metadata, and information about the realms that authenticated and authorized the user. If the user cannot be authenticated, this API returns a 401 status code.
client.security.authenticate()
bulk_delete_role
editBulk delete roles.
The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The bulk delete roles API cannot delete roles that are defined in roles files.
client.security.bulkDeleteRole({ names })
Arguments
edit-
Request (object):
-
names
(string[]): An array of role names to delete -
refresh
(Optional, Enum(true | false | "wait_for")): If true
(the default) then refresh the affected shards to make this operation visible to search, if wait_for
then wait for a refresh to make this operation visible to search, if false
then do nothing with refreshes.
-
bulk_put_role
editBulk create or update roles.
The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The bulk create or update roles API cannot update roles that are defined in roles files.
client.security.bulkPutRole({ roles })
Arguments
edit-
Request (object):
-
roles
(Record<string, { cluster, indices, remote_indices, remote_cluster, global, applications, metadata, run_as, description, restriction, transient_metadata }>): A dictionary of role name to RoleDescriptor objects to add or update -
refresh
(Optional, Enum(true | false | "wait_for")): If true
(the default) then refresh the affected shards to make this operation visible to search, if wait_for
then wait for a refresh to make this operation visible to search, if false
then do nothing with refreshes.
-
bulk_update_api_keys
editBulk update API keys. Update the attributes for multiple API keys.
It is not possible to use an API key as the authentication credential for this API. To update API keys, the owner user’s credentials are required.
This API is similar to the update API key API but enables you to apply the same update to multiple API keys in one API call. This operation can greatly improve performance over making individual updates.
It is not possible to update expired or invalidated API keys.
This API supports updates to API key access scope, metadata and expiration.
The access scope of each API key is derived from the role_descriptors
you specify in the request and a snapshot of the owner user’s permissions at the time of the request.
The snapshot of the owner’s permissions is updated automatically on every call.
If you don’t specify role_descriptors
in the request, a call to this API might still change an API key’s access scope. This change can occur if the owner user’s permissions have changed since the API key was created or last modified.
A successful request returns a JSON structure that contains the IDs of all updated API keys, the IDs of API keys that already had the requested changes and did not require an update, and error details for any failed update.
client.security.bulkUpdateApiKeys({ ids })
Arguments
edit-
Request (object):
-
ids
(string | string[]): The API key identifiers. -
expiration
(Optional, string | -1 | 0): Expiration time for the API keys. By default, API keys never expire. This property can be omitted to leave the value unchanged. -
metadata
(Optional, Record<string, User-defined value>): Arbitrary nested metadata to associate with the API keys. Within the metadata
object, top-level keys beginning with an underscore (_
) are reserved for system usage. Any information specified with this parameter fully replaces metadata previously associated with the API key. -
role_descriptors
(Optional, Record<string, { cluster, indices, remote_indices, remote_cluster, global, applications, metadata, run_as, description, restriction, transient_metadata }>): The role descriptors to assign to the API keys. An API key’s effective permissions are an intersection of its assigned privileges and the point-in-time snapshot of permissions of the owner user. You can assign new privileges by specifying them in this parameter. To remove assigned privileges, supply the role_descriptors
parameter as an empty object {}
. If an API key has no assigned privileges, it inherits the owner user’s full permissions. The snapshot of the owner’s permissions is always updated, whether or not you supply the role_descriptors
parameter. The structure of a role descriptor is the same as the request for the create API keys API.
-
change_password
editChange passwords.
Change the passwords of users in the native realm and built-in users.
client.security.changePassword({ ... })
Arguments
edit-
Request (object):
-
username
(Optional, string): The user whose password you want to change. If you do not specify this parameter, the password is changed for the current user. -
password
(Optional, string): The new password value. Passwords must be at least 6 characters long. -
password_hash
(Optional, string): A hash of the new password value. This must be produced using the same hashing algorithm as has been configured for password storage. For more details, see the explanation of the xpack.security.authc.password_hashing.algorithm
setting. -
refresh
(Optional, Enum(true | false | "wait_for")): If true
(the default) then refresh the affected shards to make this operation visible to search, if wait_for
then wait for a refresh to make this operation visible to search, if false
then do nothing with refreshes.
-
clear_api_key_cache
editClear the API key cache.
Evict a subset of all entries from the API key cache. The cache is also automatically cleared on state changes of the security index.
client.security.clearApiKeyCache({ ids })
Arguments
edit-
Request (object):
-
ids
(string | string[]): List of API key IDs to evict from the API key cache. To evict all API keys, use *
. Does not support other wildcard patterns.
-
clear_cached_privileges
editClear the privileges cache.
Evict privileges from the native application privilege cache. The cache is also automatically cleared for applications that have their privileges updated.
client.security.clearCachedPrivileges({ application })
Arguments
edit-
Request (object):
-
application
(string): A list of applications. To clear all applications, use an asterisk (*
). It does not support other wildcard patterns.
-
clear_cached_realms
editClear the user cache.
Evict users from the user cache. You can completely clear the cache or evict specific users.
User credentials are cached in memory on each node to avoid connecting to a remote authentication service or hitting the disk for every incoming request. There are realm settings that you can use to configure the user cache. For more information, refer to the documentation about controlling the user cache.
client.security.clearCachedRealms({ realms })
Arguments
edit-
Request (object):
-
realms
(string | string[]): A list of realms. To clear all realms, use an asterisk (*
). It does not support other wildcard patterns. -
usernames
(Optional, string[]): A list of the users to clear from the cache. If you do not specify this parameter, the API evicts all users from the user cache.
-
clear_cached_roles
editClear the roles cache.
Evict roles from the native role cache.
client.security.clearCachedRoles({ name })
Arguments
edit-
Request (object):
-
name
(string | string[]): A list of roles to evict from the role cache. To evict all roles, use an asterisk (*
). It does not support other wildcard patterns.
-
clear_cached_service_tokens
editClear service account token caches.
Evict a subset of all entries from the service account token caches.
Two separate caches exist for service account tokens: one cache for tokens backed by the service_tokens
file, and another for tokens backed by the .security
index.
This API clears matching entries from both caches.
The cache for service account tokens backed by the .security
index is cleared automatically on state changes of the security index.
The cache for tokens backed by the service_tokens
file is cleared automatically on file changes.
client.security.clearCachedServiceTokens({ namespace, service, name })
Arguments
edit-
Request (object):
-
namespace
(string): The namespace, which is a top-level grouping of service accounts. -
service
(string): The name of the service, which must be unique within its namespace. -
name
(string | string[]): A list of token names to evict from the service account token caches. Use a wildcard (*
) to evict all tokens that belong to a service account. It does not support other wildcard patterns.
-
create_api_key
editCreate an API key.
Create an API key for access without requiring basic authentication.
If the credential that is used to authenticate this request is an API key, the derived API key cannot have any privileges. If you specify privileges, the API returns an error.
A successful request returns a JSON structure that contains the API key, its unique id, and its name. If applicable, it also returns expiration information for the API key in milliseconds.
By default, API keys never expire. You can specify expiration information when you create the API keys.
The API keys are created by the Elasticsearch API key service, which is automatically enabled. To configure or turn off the API key service, refer to API key service setting documentation.
client.security.createApiKey({ ... })
Arguments
edit-
Request (object):
-
expiration
(Optional, string | -1 | 0): The expiration time for the API key. By default, API keys never expire. -
name
(Optional, string): A name for the API key. -
role_descriptors
(Optional, Record<string, { cluster, indices, remote_indices, remote_cluster, global, applications, metadata, run_as, description, restriction, transient_metadata }>): An array of role descriptors for this API key. When it is not specified or it is an empty array, the API key will have a point in time snapshot of permissions of the authenticated user. If you supply role descriptors, the resultant permissions are an intersection of API keys permissions and the authenticated user’s permissions thereby limiting the access scope for API keys. The structure of role descriptor is the same as the request for the create role API. For more details, refer to the create or update roles API.
-
Due to the way in which this permission intersection is calculated, it is not possible to create an API key that is a child of another API key, unless the derived key is created without any privileges.
In this case, you must explicitly specify a role descriptor with no privileges.
The derived API key can be used for authentication; it will not have authority to call Elasticsearch APIs.
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key. It supports nested data structure. Within the metadata object, keys beginning with _
are reserved for system usage.
refresh
(Optional, Enum(true | false | "wait_for")): If true
(the default) then refresh the affected shards to make this operation visible to search, if wait_for
then wait for a refresh to make this operation visible to search, if false
then do nothing with refreshes.
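For illustration, the following is a minimal sketch of creating a restricted, short-lived key with this client inside an async context. The connection details, credentials, key name, and role descriptor are placeholders, not values taken from this reference.
const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'https://localhost:9200',                      // placeholder endpoint
  auth: { username: 'elastic', password: 'changeme' }  // placeholder credentials (not an API key, so the new key may carry privileges)
})
const key = await client.security.createApiKey({
  name: 'ingest-only-key',                             // hypothetical key name
  expiration: '1d',                                    // the key becomes invalid after one day
  role_descriptors: {
    'ingest-only': {                                   // hypothetical role descriptor limiting the key's scope
      indices: [{ names: ['logs-*'], privileges: ['create_doc'] }]
    }
  }
})
console.log(key.encoded)                               // base64 value usable in an "Authorization: ApiKey <encoded>" header
The resulting permissions are the intersection of this role descriptor and the creating user’s own permissions, as described above.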
create_cross_cluster_api_key
editCreate a cross-cluster API key.
Create an API key of the cross_cluster
type for the API key based remote cluster access.
A cross_cluster
API key cannot be used to authenticate through the REST interface.
To authenticate this request you must use a credential that is not an API key. Even if you use an API key that has the required privilege, the API returns an error.
Cross-cluster API keys are created by the Elasticsearch API key service, which is automatically enabled.
Unlike REST API keys, a cross-cluster API key does not capture permissions of the authenticated user. The API key’s effective permission is exactly as specified with the access
property.
A successful request returns a JSON structure that contains the API key, its unique ID, and its name. If applicable, it also returns expiration information for the API key in milliseconds.
By default, API keys never expire. You can specify expiration information when you create the API keys.
Cross-cluster API keys can only be updated with the update cross-cluster API key API. Attempting to update them with the update REST API key API or the bulk update REST API keys API will result in an error.
client.security.createCrossClusterApiKey({ access, name })
Arguments
edit-
Request (object):
-
access
({ replication, search }): The access to be granted to this API key. The access is composed of permissions for cross-cluster search and cross-cluster replication. At least one of them must be specified.
-
No explicit privileges should be specified for either search or replication access.
The creation process automatically converts the access specification to a role descriptor which has relevant privileges assigned accordingly.
name
(string): Specifies the name for this API key.
expiration
(Optional, string | -1 | 0): Expiration time for the API key.
By default, API keys never expire.
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key.
It supports nested data structure.
Within the metadata object, keys beginning with _
are reserved for system usage.
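As a sketch (reusing a configured client instance as in the earlier example; the key name and index pattern are hypothetical), granting cross-cluster search access might look like:
const resp = await client.security.createCrossClusterApiKey({
  name: 'remote-search-key',               // hypothetical key name
  access: {
    search: [{ names: ['logs-*'] }]        // allow cross-cluster search on matching indices only
  },
  expiration: '30d'
})
console.log(resp.encoded)                  // value to configure on the querying cluster's remote connection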
create_service_token
editCreate a service account token.
Create a service accounts token for access without requiring basic authentication.
Service account tokens never expire. You must actively delete them if they are no longer needed.
client.security.createServiceToken({ namespace, service })
Arguments
edit-
Request (object):
-
namespace
(string): The name of the namespace, which is a top-level grouping of service accounts. -
service
(string): The name of the service. -
name
(Optional, string): The name for the service account token. If omitted, a random name will be generated.
-
Token names must be at least one and no more than 256 characters.
They can contain alphanumeric characters (a-z, A-Z, 0-9), dashes (-
), and underscores (_
), but cannot begin with an underscore.
Token names must be unique in the context of the associated service account.
They must also be globally unique with their fully qualified names, which are comprised of the service account principal and token name, such as <namespace>/<service>/<token-name>
.
refresh
(Optional, Enum(true | false | "wait_for")): If true
then refresh the affected shards to make this operation visible to search, if wait_for
(the default) then wait for a refresh to make this operation visible to search, if false
then do nothing with refreshes.
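For example, a sketch of creating a token for the elastic/fleet-server service account (the token name is hypothetical; a configured client instance is assumed, as in the earlier examples):
const resp = await client.security.createServiceToken({
  namespace: 'elastic',
  service: 'fleet-server',
  name: 'my-fleet-token'        // hypothetical; omit to have a random name generated
})
console.log(resp.token.value)   // bearer value to present in an "Authorization: Bearer <value>" header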
delegate_pki
editDelegate PKI authentication.
This API implements the exchange of an X509Certificate chain for an Elasticsearch access token.
The certificate chain is validated, according to RFC 5280, by sequentially considering the trust configuration of every installed PKI realm that has delegation.enabled
set to true
.
A successfully trusted client certificate is also subject to the validation of the subject distinguished name according to the username_pattern
of the respective realm.
This API is called by smart and trusted proxies, such as Kibana, which terminate the user’s TLS session but still want to authenticate the user by using a PKI realm, as if the user connected directly to Elasticsearch.
The association between the subject public key in the target certificate and the corresponding private key is not validated. This is part of the TLS authentication process and it is delegated to the proxy that calls this API. The proxy is trusted to have performed the TLS authentication and this API translates that authentication into an Elasticsearch access token.
client.security.delegatePki({ x509_certificate_chain })
Arguments
edit-
Request (object):
-
x509_certificate_chain
(string[]): The X509Certificate chain, which is represented as an ordered string array. Each string in the array is the base64 encoding (Section 4 of RFC 4648, not base64url) of the certificate’s DER encoding.
-
The first element is the target certificate that contains the subject distinguished name that is requesting access. This may be followed by additional certificates; each subsequent certificate is used to certify the previous one.
delete_privileges
editDelete application privileges.
To use this API, you must have one of the following privileges:
-
The
manage_security
cluster privilege (or a greater privilege such asall
). - The "Manage Application Privileges" global privilege for the application being referenced in the request.
client.security.deletePrivileges({ application, name })
Arguments
edit-
Request (object):
-
application
(string): The name of the application. Application privileges are always associated with exactly one application. -
name
(string | string[]): The name of the privilege. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_role
editDelete roles.
Delete roles in the native realm. The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The delete roles API cannot remove roles that are defined in roles files.
client.security.deleteRole({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the role. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_role_mapping
editDelete role mappings.
Role mappings define which roles are assigned to each user. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The delete role mappings API cannot remove role mappings that are defined in role mapping files.
client.security.deleteRoleMapping({ name })
Arguments
edit-
Request (object):
-
name
(string): The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_service_token
editDelete service account tokens.
Delete service account tokens for a service in a specified namespace.
client.security.deleteServiceToken({ namespace, service, name })
Arguments
edit-
Request (object):
-
namespace
(string): The namespace, which is a top-level grouping of service accounts. -
service
(string): The service name. -
name
(string): The name of the service account token. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
then refresh the affected shards to make this operation visible to search, ifwait_for
(the default) then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
delete_user
editDelete users.
Delete users from the native realm.
client.security.deleteUser({ username })
Arguments
edit-
Request (object):
-
username
(string): An identifier for the user. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
disable_user
editDisable users.
Disable users in the native realm. By default, when you create users, they are enabled. You can use this API to revoke a user’s access to Elasticsearch.
client.security.disableUser({ username })
Arguments
edit-
Request (object):
-
username
(string): An identifier for the user. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
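A sketch of revoking and later restoring access for a native-realm user (the username is hypothetical; a configured client instance is assumed):
await client.security.disableUser({ username: 'jdoe' })   // the user can no longer authenticate
// ...later, restore access with the enable users API:
await client.security.enableUser({ username: 'jdoe' })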
disable_user_profile
editDisable a user profile.
Disable user profiles so that they are not visible in user profile searches.
The user profile feature is designed only for use by Kibana and Elastic’s Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.
When you activate a user profile, it’s automatically enabled and visible in user profile searches. You can use the disable user profile API to disable a user profile so it’s not visible in these searches. To re-enable a disabled user profile, use the enable user profile API.
client.security.disableUserProfile({ uid })
Arguments
edit-
Request (object):
-
uid
(string): Unique identifier for the user profile. -
refresh
(Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.
-
enable_user
editEnable users.
Enable users in the native realm. By default, when you create users, they are enabled.
client.security.enableUser({ username })
Arguments
edit-
Request (object):
-
username
(string): An identifier for the user. -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
enable_user_profile
editEnable a user profile.
Enable user profiles to make them visible in user profile searches.
The user profile feature is designed only for use by Kibana and Elastic’s Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.
When you activate a user profile, it’s automatically enabled and visible in user profile searches. If you later disable the user profile, you can use the enable user profile API to make the profile visible in these searches again.
client.security.enableUserProfile({ uid })
Arguments
edit-
Request (object):
-
uid
(string): A unique identifier for the user profile. -
refresh
(Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, nothing is done with refreshes.
-
enroll_kibana
editEnroll Kibana.
Enable a Kibana instance to configure itself for communication with a secured Elasticsearch cluster.
This API is currently intended for internal use only by Kibana. Kibana uses this API internally to configure itself for communications with an Elasticsearch cluster that already has security features enabled.
client.security.enrollKibana()
enroll_node
editEnroll a node.
Enroll a new node to allow it to join an existing cluster with security features enabled.
The response contains all the necessary information for the joining node to bootstrap discovery and security related settings so that it can successfully join the cluster. The response contains key and certificate material that allows the caller to generate valid signed certificates for the HTTP layer of all nodes in the cluster.
client.security.enrollNode()
get_api_key
editGet API key information.
Retrieves information for one or more API keys.
NOTE: If you have only the manage_own_api_key
privilege, this API returns only the API keys that you own.
If you have read_security
, manage_api_key
or greater privileges (including manage_security
), this API returns all API keys regardless of ownership.
client.security.getApiKey({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): An API key id. This parameter cannot be used with any ofname
,realm_name
orusername
. -
name
(Optional, string): An API key name. This parameter cannot be used with any ofid
,realm_name
orusername
. It supports prefix search with wildcard. -
owner
(Optional, boolean): A boolean flag that can be used to query API keys owned by the currently authenticated user. Therealm_name
orusername
parameters cannot be specified when this parameter is set totrue
as they are assumed to be the currently authenticated ones. -
realm_name
(Optional, string): The name of an authentication realm. This parameter cannot be used with eitherid
orname
or whenowner
flag is set totrue
. -
username
(Optional, string): The username of a user. This parameter cannot be used with eitherid
orname
or whenowner
flag is set totrue
. -
with_limited_by
(Optional, boolean): Return the snapshot of the owner user’s role descriptors associated with the API key. An API key’s actual permission is the intersection of its assigned role descriptors and the owner user’s role descriptors. -
active_only
(Optional, boolean): A boolean flag that can be used to query API keys that are currently active. An API key is considered active if it is neither invalidated, nor expired at query time. You can specify this together with other parameters such asowner
orname
. Ifactive_only
is false, the response will include both active and inactive (expired or invalidated) keys. -
with_profile_uid
(Optional, boolean): Determines whether to also retrieve the profile UID for the API key owner principal, if it exists.
-
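For example, a sketch of listing only the active keys owned by the authenticated user (assuming a configured client instance as in the earlier examples):
const resp = await client.security.getApiKey({
  owner: true,        // restrict to keys owned by the current user
  active_only: true   // skip expired and invalidated keys
})
for (const key of resp.api_keys) {
  console.log(key.id, key.name, key.expiration)
}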
get_builtin_privileges
editGet builtin privileges.
Get the list of cluster privileges and index privileges that are available in this version of Elasticsearch.
client.security.getBuiltinPrivileges()
get_privileges
editGet application privileges.
To use this API, you must have one of the following privileges:
-
The
read_security
cluster privilege (or a greater privilege such asmanage_security
orall
). - The "Manage Application Privileges" global privilege for the application being referenced in the request.
client.security.getPrivileges({ ... })
Arguments
edit-
Request (object):
-
application
(Optional, string): The name of the application. Application privileges are always associated with exactly one application. If you do not specify this parameter, the API returns information about all privileges for all applications. -
name
(Optional, string | string[]): The name of the privilege. If you do not specify this parameter, the API returns information about all privileges for the requested application.
-
get_role
editGet roles.
Get roles in the native realm. The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The get roles API cannot retrieve roles that are defined in roles files.
client.security.getRole({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): The name of the role. You can specify multiple roles as a list. If you do not specify this parameter, the API returns information about all roles.
-
get_role_mapping
editGet role mappings.
Role mappings define which roles are assigned to each user. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The get role mappings API cannot retrieve role mappings that are defined in role mapping files.
client.security.getRoleMapping({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string | string[]): The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way. You can specify multiple mapping names as a list. If you do not specify this parameter, the API returns information about all role mappings.
-
get_service_accounts
editGet service accounts.
Get a list of service accounts that match the provided path parameters.
Currently, only the elastic/fleet-server
service account is available.
client.security.getServiceAccounts({ ... })
Arguments
edit-
Request (object):
-
namespace
(Optional, string): The name of the namespace. Omit this parameter to retrieve information about all service accounts. If you omit this parameter, you must also omit theservice
parameter. -
service
(Optional, string): The service name. Omit this parameter to retrieve information about all service accounts that belong to the specifiednamespace
.
-
get_service_credentials
editGet service account credentials.
To use this API, you must have at least the read_security
cluster privilege (or a greater privilege such as manage_service_account
or manage_security
).
The response includes service account tokens that were created with the create service account tokens API as well as file-backed tokens from all nodes of the cluster.
For tokens backed by the service_tokens
file, the API collects them from all nodes of the cluster.
Tokens with the same name from different nodes are assumed to be the same token and are only counted once towards the total number of service tokens.
client.security.getServiceCredentials({ namespace, service })
Arguments
edit-
Request (object):
-
namespace
(string): The name of the namespace. -
service
(string): The service name.
-
get_settings
editGet security index settings.
Get the user-configurable settings for the security internal index (.security
and associated indices).
Only a subset of the index settings (those that are user-configurable) will be shown.
This includes:
-
index.auto_expand_replicas
-
index.number_of_replicas
client.security.getSettings({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_token
editGet a token.
Create a bearer token for access without requiring basic authentication.
The tokens are created by the Elasticsearch Token Service, which is automatically enabled when you configure TLS on the HTTP interface.
Alternatively, you can explicitly enable the xpack.security.authc.token.enabled
setting.
When you are running in production mode, a bootstrap check prevents you from enabling the token service unless you also enable TLS on the HTTP interface.
The get token API takes the same parameters as a typical OAuth 2.0 token API except for the use of a JSON request body.
A successful get token API call returns a JSON structure that contains the access token, the amount of time (in seconds) until the token expires, the type, and the scope, if available.
The tokens returned by the get token API are valid only for a finite period of time; after that period, they can no longer be used.
That time period is defined by the xpack.security.authc.token.timeout
setting.
If you want to invalidate a token immediately, you can do so by using the invalidate token API.
client.security.getToken({ ... })
Arguments
edit-
Request (object):
-
grant_type
(Optional, Enum("password" | "client_credentials" | "_kerberos" | "refresh_token")): The type of grant. Supported grant types are:password
,_kerberos
,client_credentials
, andrefresh_token
. -
scope
(Optional, string): The scope of the token. Currently tokens are only issued for a scope of FULL regardless of the value sent with the request. -
password
(Optional, string): The user’s password. If you specify thepassword
grant type, this parameter is required. This parameter is not valid with any other supported grant type. -
kerberos_ticket
(Optional, string): The base64 encoded kerberos ticket. If you specify the_kerberos
grant type, this parameter is required. This parameter is not valid with any other supported grant type. -
refresh_token
(Optional, string): The string that was returned when you created the token, which enables you to extend its life. If you specify therefresh_token
grant type, this parameter is required. This parameter is not valid with any other supported grant type. -
username
(Optional, string): The username that identifies the user. If you specify thepassword
grant type, this parameter is required. This parameter is not valid with any other supported grant type.
-
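As a sketch, exchanging a username and password for an access token and refresh token (the user and password are placeholders; a configured client instance is assumed):
const resp = await client.security.getToken({
  grant_type: 'password',
  username: 'jdoe',       // hypothetical native-realm user
  password: 'changeme'    // placeholder password
})
console.log(resp.access_token, resp.expires_in)   // bearer token and its lifetime in seconds
// resp.refresh_token can later be exchanged with grant_type: 'refresh_token' to obtain a new access token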
get_user
editGet users.
Get information about users in the native realm and built-in users.
client.security.getUser({ ... })
Arguments
edit-
Request (object):
-
username
(Optional, string | string[]): An identifier for the user. You can specify multiple usernames as a list. If you omit this parameter, the API retrieves information about all users. -
with_profile_uid
(Optional, boolean): Determines whether to retrieve the user profile UID, if it exists, for the users.
-
get_user_privileges
editGet user privileges.
Get the security privileges for the logged in user. All users can use this API, but only to determine their own privileges. To check the privileges of other users, you must use the run as feature. To check whether a user has a specific list of privileges, use the has privileges API.
client.security.getUserPrivileges({ ... })
Arguments
edit-
Request (object):
-
application
(Optional, string): The name of the application. Application privileges are always associated with exactly one application. If you do not specify this parameter, the API returns information about all privileges for all applications. -
priviledge
(Optional, string): The name of the privilege. If you do not specify this parameter, the API returns information about all privileges for the requested application. -
username
(Optional, string | null)
-
get_user_profile
editGet a user profile.
Get a user’s profile using the unique profile ID.
The user profile feature is designed only for use by Kibana and Elastic’s Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.
client.security.getUserProfile({ uid })
Arguments
edit-
Request (object):
-
uid
(string | string[]): A unique identifier for the user profile. -
data
(Optional, string | string[]): A list of filters for thedata
field of the profile document. To return all content usedata=*
. To return a subset of content usedata=<key>
to retrieve content nested under the specified<key>
. By default returns nodata
content.
-
grant_api_key
editGrant an API key.
Create an API key on behalf of another user. This API is similar to the create API keys API, but it creates the API key for a user that is different from the user that runs the API. The caller must have authentication credentials for the user on whose behalf the API key will be created. It is not possible to use this API to create an API key without that user’s credentials. The supported user authentication credential types are:
- username and password
- Elasticsearch access tokens
- JWTs
The user for whom the authentication credentials are provided can optionally "run as" (impersonate) another user. In this case, the API key will be created on behalf of the impersonated user.
This API is intended to be used by applications that need to create and manage API keys for end users, but cannot guarantee that those users have permission to create API keys on their own behalf. The API keys are created by the Elasticsearch API key service, which is automatically enabled.
A successful grant API key API call returns a JSON structure that contains the API key, its unique id, and its name. If applicable, it also returns expiration information for the API key in milliseconds.
By default, API keys never expire. You can specify expiration information when you create the API keys.
client.security.grantApiKey({ api_key, grant_type })
Arguments
edit-
Request (object):
-
api_key
({ name, expiration, role_descriptors, metadata }): The API key. -
grant_type
(Enum("access_token" | "password")): The type of grant. Supported grant types are:access_token
,password
. -
access_token
(Optional, string): The user’s access token. If you specify theaccess_token
grant type, this parameter is required. It is not valid with other grant types. -
username
(Optional, string): The user name that identifies the user. If you specify thepassword
grant type, this parameter is required. It is not valid with other grant types. -
password
(Optional, string): The user’s password. If you specify thepassword
grant type, this parameter is required. It is not valid with other grant types. -
run_as
(Optional, string): The name of the user to be impersonated.
-
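For example, a sketch of creating a key on behalf of an end user whose password the application holds (all names and credentials are placeholders; a configured client instance is assumed):
const resp = await client.security.grantApiKey({
  grant_type: 'password',
  username: 'jdoe',                      // hypothetical end user the key is created for
  password: 'changeme',                  // placeholder credential belonging to that user
  api_key: { name: 'jdoe-dashboard-key', expiration: '7d' }
})
console.log(resp.encoded)                // hand this back to the end user's application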
has_privileges
editCheck user privileges.
Determine whether the specified user has a specified list of privileges. All users can use this API, but only to determine their own privileges. To check the privileges of other users, you must use the run as feature.
client.security.hasPrivileges({ ... })
Arguments
edit-
Request (object):
-
user
(Optional, string): Username -
application
(Optional, { application, privileges, resources }[]) -
cluster
(Optional, Enum("all" | "cancel_task" | "create_snapshot" | "cross_cluster_replication" | "cross_cluster_search" | "delegate_pki" | "grant_api_key" | "manage" | "manage_api_key" | "manage_autoscaling" | "manage_behavioral_analytics" | "manage_ccr" | "manage_data_frame_transforms" | "manage_data_stream_global_retention" | "manage_enrich" | "manage_ilm" | "manage_index_templates" | "manage_inference" | "manage_ingest_pipelines" | "manage_logstash_pipelines" | "manage_ml" | "manage_oidc" | "manage_own_api_key" | "manage_pipeline" | "manage_rollup" | "manage_saml" | "manage_search_application" | "manage_search_query_rules" | "manage_search_synonyms" | "manage_security" | "manage_service_account" | "manage_slm" | "manage_token" | "manage_transform" | "manage_user_profile" | "manage_watcher" | "monitor" | "monitor_data_frame_transforms" | "monitor_data_stream_global_retention" | "monitor_enrich" | "monitor_inference" | "monitor_ml" | "monitor_rollup" | "monitor_snapshot" | "monitor_stats" | "monitor_text_structure" | "monitor_transform" | "monitor_watcher" | "none" | "post_behavioral_analytics_event" | "read_ccr" | "read_fleet_secrets" | "read_ilm" | "read_pipeline" | "read_security" | "read_slm" | "transport_client" | "write_connector_secrets" | "write_fleet_secrets")[]): A list of the cluster privileges that you want to check. -
index
(Optional, { names, privileges, allow_restricted_indices }[])
-
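A sketch of checking the calling user’s own privileges (the index pattern is illustrative; a configured client instance is assumed):
const resp = await client.security.hasPrivileges({
  cluster: ['monitor'],
  index: [{ names: ['logs-*'], privileges: ['read', 'view_index_metadata'] }]
})
console.log(resp.has_all_requested)   // true only if every requested privilege is held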
has_privileges_user_profile
editCheck user profile privileges.
Determine whether the users associated with the specified user profile IDs have all the requested privileges.
The user profile feature is designed only for use by Kibana and Elastic’s Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.
client.security.hasPrivilegesUserProfile({ uids, privileges })
Arguments
edit-
Request (object):
-
uids
(string[]): A list of profile IDs. The privileges are checked for associated users of the profiles. -
privileges
({ application, cluster, index }): An object containing all the privileges to be checked.
-
invalidate_api_key
editInvalidate API keys.
This API invalidates API keys created by the create API key or grant API key APIs. Invalidated API keys fail authentication, but they can still be viewed using the get API key information and query API key information APIs, for at least the configured retention period, until they are automatically deleted.
To use this API, you must have at least the manage_security
, manage_api_key
, or manage_own_api_key
cluster privileges.
The manage_security
privilege allows deleting any API key, including both REST and cross cluster API keys.
The manage_api_key
privilege allows deleting any REST API key, but not cross cluster API keys.
The manage_own_api_key
only allows deleting REST API keys that are owned by the user.
In addition, with the manage_own_api_key
privilege, an invalidation request must be issued in one of the three formats:
-
Set the parameter
owner=true
. -
Or, set both
username
andrealm_name
to match the user’s identity. -
Or, if the request is issued by an API key (that is, an API key invalidates itself), specify its ID in the
ids
field.
client.security.invalidateApiKey({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string) -
ids
(Optional, string[]): A list of API key ids. This parameter cannot be used with any ofname
,realm_name
, orusername
. -
name
(Optional, string): An API key name. This parameter cannot be used with any ofids
,realm_name
orusername
. -
owner
(Optional, boolean): Query API keys owned by the currently authenticated user. Therealm_name
orusername
parameters cannot be specified when this parameter is set totrue
as they are assumed to be the currently authenticated ones.
-
At least one of ids
, name
, username
, and realm_name
must be specified if owner
is false
.
realm_name
(Optional, string): The name of an authentication realm.
This parameter cannot be used with either ids
or name
, or when owner
flag is set to true
.
username
(Optional, string): The username of a user.
This parameter cannot be used with either ids
or name
or when owner
flag is set to true
.
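For example, a sketch of invalidating a key by its ID (the ID shown is a placeholder, such as one returned by the create or get API key APIs; a configured client instance is assumed):
const resp = await client.security.invalidateApiKey({
  ids: ['VuaCfGcBCdbkQm-e5aOx']   // placeholder API key id
})
console.log(resp.invalidated_api_keys, resp.error_count)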
invalidate_token
editInvalidate a token.
The access tokens returned by the get token API have a finite period of time for which they are valid.
After that time period, they can no longer be used.
The time period is defined by the xpack.security.authc.token.timeout
setting.
The refresh tokens returned by the get token API are only valid for 24 hours. They can also be used exactly once. If you want to invalidate one or more access or refresh tokens immediately, use this invalidate token API.
While all parameters are optional, at least one of them is required.
More specifically, either one of token
or refresh_token
parameters is required.
If none of these two are specified, then realm_name
and/or username
need to be specified.
client.security.invalidateToken({ ... })
Arguments
edit-
Request (object):
-
token
(Optional, string): An access token. This parameter cannot be used if any ofrefresh_token
,realm_name
, orusername
are used. -
refresh_token
(Optional, string): A refresh token. This parameter cannot be used if any of token
,realm_name
, orusername
are used. -
realm_name
(Optional, string): The name of an authentication realm. This parameter cannot be used with eitherrefresh_token
ortoken
. -
username
(Optional, string): The username of a user. This parameter cannot be used with eitherrefresh_token
ortoken
.
-
oidc_authenticate
editAuthenticate OpenID Connect.
Exchange an OpenID Connect authentication response message for an Elasticsearch internal access token and refresh token that can be subsequently used for authentication.
Elasticsearch exposes all the necessary OpenID Connect related functionality with the OpenID Connect APIs. These APIs are used internally by Kibana in order to provide OpenID Connect based authentication, but can also be used by other, custom web applications or other clients.
client.security.oidcAuthenticate({ nonce, redirect_uri, state })
Arguments
edit-
Request (object):
-
nonce
(string): Associate a client session with an ID token and mitigate replay attacks. This value needs to be the same as the one that was provided to the/_security/oidc/prepare
API or the one that was generated by Elasticsearch and included in the response to that call. -
redirect_uri
(string): The URL to which the OpenID Connect Provider redirected the User Agent in response to an authentication request after a successful authentication. This URL must be provided as-is (URL encoded), taken from the body of the response or as the value of a location header in the response from the OpenID Connect Provider. -
state
(string): Maintain state between the authentication request and the response. This value needs to be the same as the one that was provided to the/_security/oidc/prepare
API or the one that was generated by Elasticsearch and included in the response to that call. -
realm
(Optional, string): The name of the OpenID Connect realm. This property is useful in cases where multiple realms are defined.
-
oidc_logout
editLogout of OpenID Connect.
Invalidate an access token and a refresh token that were generated as a response to the /_security/oidc/authenticate
API.
If the OpenID Connect authentication realm in Elasticsearch is accordingly configured, the response to this call will contain a URI pointing to the end session endpoint of the OpenID Connect Provider in order to perform single logout.
Elasticsearch exposes all the necessary OpenID Connect related functionality with the OpenID Connect APIs. These APIs are used internally by Kibana in order to provide OpenID Connect based authentication, but can also be used by other, custom web applications or other clients.
client.security.oidcLogout({ token })
Arguments
edit-
Request (object):
-
token
(string): The access token to be invalidated. -
refresh_token
(Optional, string): The refresh token to be invalidated.
-
oidc_prepare_authentication
editPrepare OpenID connect authentication.
Create an oAuth 2.0 authentication request as a URL string based on the configuration of the OpenID Connect authentication realm in Elasticsearch.
The response of this API is a URL pointing to the Authorization Endpoint of the configured OpenID Connect Provider, which can be used to redirect the browser of the user in order to continue the authentication process.
Elasticsearch exposes all the necessary OpenID Connect related functionality with the OpenID Connect APIs. These APIs are used internally by Kibana in order to provide OpenID Connect based authentication, but can also be used by other, custom web applications or other clients.
client.security.oidcPrepareAuthentication({ ... })
Arguments
edit-
Request (object):
-
iss
(Optional, string): In the case of a third party initiated single sign on, this is the issuer identifier for the OP that the RP is to send the authentication request to. It cannot be specified when realm is specified. One of realm or iss is required. -
login_hint
(Optional, string): In the case of a third party initiated single sign on, it is a string value that is included in the authentication request as the login_hint parameter. This parameter is not valid when realm is specified. -
nonce
(Optional, string): The value used to associate a client session with an ID token and to mitigate replay attacks. If the caller of the API does not provide a value, Elasticsearch will generate one with sufficient entropy and return it in the response. -
realm
(Optional, string): The name of the OpenID Connect realm in Elasticsearch the configuration of which should be used in order to generate the authentication request. It cannot be specified when iss is specified. One of realm or iss is required. -
state
(Optional, string): The value used to maintain state between the authentication request and the response, typically used as a Cross-Site Request Forgery mitigation. If the caller of the API does not provide a value, Elasticsearch will generate one with sufficient entropy and return it in the response.
-
put_privileges
editCreate or update application privileges.
To use this API, you must have one of the following privileges:
-
The
manage_security
cluster privilege (or a greater privilege such asall
). - The "Manage Application Privileges" global privilege for the application being referenced in the request.
Application names are formed from a prefix, with an optional suffix, and must conform to the following rules:
- The prefix must begin with a lowercase ASCII letter.
- The prefix must contain only ASCII letters or digits.
- The prefix must be at least 3 characters long.
-
If the suffix exists, it must begin with either a dash
-
or_
. -
The suffix cannot contain any of the following characters:
\
,/
,*
,?
,"
,<
,>
,|
,,
,*
. - No part of the name can contain whitespace.
Privilege names must begin with a lowercase ASCII letter and must contain only ASCII letters and digits along with the characters _
, -
, and .
.
Action names can contain any number of printable ASCII characters and must contain at least one of the following characters: /
, *
, :
.
client.security.putPrivileges({ ... })
Arguments
edit-
Request (object):
-
privileges
(Optional, Record<string, Record<string, { allocate, delete, downsample, freeze, forcemerge, migrate, readonly, rollover, set_priority, searchable_snapshot, shrink, unfollow, wait_for_snapshot }>>) -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
put_role
editCreate or update roles.
The role management APIs are generally the preferred way to manage roles in the native realm, rather than using file-based role management. The create or update roles API cannot update roles that are defined in roles files. File-based role management is not available in Elastic Serverless.
client.security.putRole({ name })
Arguments
edit-
Request (object):
-
name
(string): The name of the role. -
applications
(Optional, { application, privileges, resources }[]): A list of application privilege entries. -
cluster
(Optional, Enum("all" | "cancel_task" | "create_snapshot" | "cross_cluster_replication" | "cross_cluster_search" | "delegate_pki" | "grant_api_key" | "manage" | "manage_api_key" | "manage_autoscaling" | "manage_behavioral_analytics" | "manage_ccr" | "manage_data_frame_transforms" | "manage_data_stream_global_retention" | "manage_enrich" | "manage_ilm" | "manage_index_templates" | "manage_inference" | "manage_ingest_pipelines" | "manage_logstash_pipelines" | "manage_ml" | "manage_oidc" | "manage_own_api_key" | "manage_pipeline" | "manage_rollup" | "manage_saml" | "manage_search_application" | "manage_search_query_rules" | "manage_search_synonyms" | "manage_security" | "manage_service_account" | "manage_slm" | "manage_token" | "manage_transform" | "manage_user_profile" | "manage_watcher" | "monitor" | "monitor_data_frame_transforms" | "monitor_data_stream_global_retention" | "monitor_enrich" | "monitor_inference" | "monitor_ml" | "monitor_rollup" | "monitor_snapshot" | "monitor_stats" | "monitor_text_structure" | "monitor_transform" | "monitor_watcher" | "none" | "post_behavioral_analytics_event" | "read_ccr" | "read_fleet_secrets" | "read_ilm" | "read_pipeline" | "read_security" | "read_slm" | "transport_client" | "write_connector_secrets" | "write_fleet_secrets")[]): A list of cluster privileges. These privileges define the cluster-level actions for users with this role. -
global
(Optional, Record<string, User-defined value>): An object defining global privileges. A global privilege is a form of cluster privilege that is request-aware. Support for global privileges is currently limited to the management of application privileges. -
indices
(Optional, { field_security, names, privileges, query, allow_restricted_indices }[]): A list of indices permissions entries. -
remote_indices
(Optional, { clusters, field_security, names, privileges, query, allow_restricted_indices }[]): A list of remote indices permissions entries.
-
Remote indices are effective for remote clusters configured with the API key based model.
They have no effect for remote clusters configured with the certificate based model.
remote_cluster
(Optional, { clusters, privileges }[]): A list of remote cluster permissions entries.
metadata
(Optional, Record<string, User-defined value>): Optional metadata. Within the metadata object, keys that begin with an underscore (_
) are reserved for system use.
run_as
(Optional, string[]): A list of users that the owners of this role can impersonate. Note: in Serverless, the run-as feature is disabled. For API compatibility, you can still specify an empty run_as
field, but a non-empty list will be rejected.
description
(Optional, string): Optional description of the role descriptor
transient_metadata
(Optional, Record<string, User-defined value>): Indicates roles that might be incompatible with the current cluster license, specifically roles with document and field level security. When the cluster license doesn’t allow certain features for a given role, this parameter is updated dynamically to list the incompatible features. If enabled
is false
, the role is ignored, but is still listed in the response from the authenticate API.
refresh
(Optional, Enum(true | false | "wait_for")): If true
(the default) then refresh the affected shards to make this operation visible to search, if wait_for
then wait for a refresh to make this operation visible to search, if false
then do nothing with refreshes.
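As a sketch, creating a read-only role over an illustrative index pattern (the role name and metadata are hypothetical; a configured client instance is assumed):
await client.security.putRole({
  name: 'logs-reader',                     // hypothetical role name
  cluster: ['monitor'],
  indices: [{ names: ['logs-*'], privileges: ['read', 'view_index_metadata'] }],
  metadata: { owner: 'platform-team' }     // illustrative metadata only
})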
put_role_mapping
editCreate or update role mappings.
Role mappings define which roles are assigned to each user. Each mapping has rules that identify users and a list of roles that are granted to those users. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The create or update role mappings API cannot update role mappings that are defined in role mapping files.
This API does not create roles. Rather, it maps users to existing roles. Roles can be created by using the create or update roles API or roles files.
Role templates
The most common use for role mappings is to create a mapping from a known value on the user to a fixed role name.
For example, all users in the cn=admin,dc=example,dc=com
LDAP group should be given the superuser role in Elasticsearch.
The roles
field is used for this purpose.
For more complex needs, it is possible to use Mustache templates to dynamically determine the names of the roles that should be granted to the user.
The role_templates
field is used for this purpose.
To use role templates successfully, the relevant scripting feature must be enabled. Otherwise, all attempts to create a role mapping with role templates fail.
All of the user fields that are available in the role mapping rules are also available in the role templates. Thus it is possible to assign a user to a role that reflects their username, their groups, or the name of the realm to which they authenticated.
By default a template is evaluated to produce a single string that is the name of the role which should be assigned to the user. If the format of the template is set to "json" then the template is expected to produce a JSON string or an array of JSON strings for the role names.
client.security.putRoleMapping({ name })
Arguments
edit-
Request (object):
-
name
(string): The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way. -
enabled
(Optional, boolean): Mappings that haveenabled
set tofalse
are ignored when role mapping is performed. -
metadata
(Optional, Record<string, User-defined value>): Additional metadata that helps define which roles are assigned to each user. Within the metadata object, keys beginning with_
are reserved for system usage. -
roles
(Optional, string[]): A list of role names that are granted to the users that match the role mapping rules. Exactly one ofroles
orrole_templates
must be specified. -
role_templates
(Optional, { format, template }[]): A list of Mustache templates that will be evaluated to determine the role names that should be granted to the users that match the role mapping rules. Exactly one of roles
orrole_templates
must be specified. -
rules
(Optional, { any, all, field, except }): The rules that determine which users should be matched by the mapping. A rule is a logical condition that is expressed by using a JSON DSL. -
run_as
(Optional, string[]) -
refresh
(Optional, Enum(true | false | "wait_for")): Iftrue
(the default) then refresh the affected shards to make this operation visible to search, ifwait_for
then wait for a refresh to make this operation visible to search, iffalse
then do nothing with refreshes.
-
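Continuing the LDAP example above, a sketch of mapping members of that group to the superuser role (the mapping name is hypothetical; a configured client instance is assumed):
await client.security.putRoleMapping({
  name: 'ldap-admins',                                         // hypothetical mapping name
  enabled: true,
  roles: ['superuser'],
  rules: { field: { groups: 'cn=admin,dc=example,dc=com' } }   // match users whose groups include this DN
})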
put_user
editCreate or update users.
Add and update users in the native realm. A password is required for adding a new user but is optional when updating an existing user. To change a user’s password without updating any other fields, use the change password API.
client.security.putUser({ username })
Arguments
edit-
Request (object):
-
username
(string): An identifier for the user.
-
Usernames must be at least 1 and no more than 507 characters.
They can contain alphanumeric characters (a-z, A-Z, 0-9), spaces, punctuation, and printable symbols in the Basic Latin (ASCII) block.
Leading or trailing whitespace is not allowed.
email
(Optional, string | null): The email of the user.
full_name
(Optional, string | null): The full name of the user.
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the user.
password
(Optional, string): The user’s password.
Passwords must be at least 6 characters long.
When adding a user, one of password
or password_hash
is required.
When updating an existing user, the password is optional, so that other fields on the user (such as their roles) may be updated without modifying the user’s password
password_hash
(Optional, string): A hash of the user’s password.
This must be produced using the same hashing algorithm as has been configured for password storage.
For more details, see the explanation of the xpack.security.authc.password_hashing.algorithm
setting in the user cache and password hash algorithm documentation.
Using this parameter allows the client to pre-hash the password for performance and/or confidentiality reasons.
The password
parameter and the password_hash
parameter cannot be used in the same request.
roles
(Optional, string[]): A set of roles the user has.
The roles determine the user’s access permissions.
To create a user without any roles, specify an empty list ([]
).
enabled
(Optional, boolean): Specifies whether the user is enabled.
refresh
(Optional, Enum(true | false | "wait_for")): Valid values are true
, false
, and wait_for
.
These values have the same meaning as in the index API, but the default value for this API is true.
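For example, a sketch of creating a native-realm user (all values are placeholders; a configured client instance is assumed):
await client.security.putUser({
  username: 'jdoe',
  password: 'a-long-placeholder-password',   // must be at least 6 characters
  roles: ['logs-reader'],                    // e.g. a role created with the create or update roles API
  full_name: 'Jane Doe',
  email: 'jdoe@example.com'
})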
query_api_keys
editFind API keys with a query.
Get a paginated list of API keys and their information. You can optionally filter the results with a query.
To use this API, you must have at least the manage_own_api_key
or the read_security
cluster privileges.
If you have only the manage_own_api_key
privilege, this API returns only the API keys that you own.
If you have the read_security
, manage_api_key
, or greater privileges (including manage_security
), this API returns all API keys regardless of ownership.
client.security.queryApiKeys({ ... })
Arguments
edit-
Request (object):
-
aggregations
(Optional, Record<string, { aggregations, meta, cardinality, composite, date_range, filter, filters, missing, range, terms, value_count }>): Any aggregations to run over the corpus of returned API keys. Aggregations and queries work together. Aggregations are computed only on the API keys that match the query. This supports only a subset of aggregation types, namely:terms
,range
,date_range
,missing
,cardinality
,value_count
,composite
,filter
, andfilters
. Additionally, aggregations only run over the same subset of fields that query works with. -
query
(Optional, { bool, exists, ids, match, match_all, prefix, range, simple_query_string, term, terms, wildcard }): A query to filter which API keys to return. If the query parameter is missing, it is equivalent to amatch_all
query. The query supports a subset of query types, includingmatch_all
,bool
,term
,terms
,match
,ids
,prefix
,wildcard
,exists
,range
, andsimple_query_string
. You can query the following public information associated with an API key:id
,type
,name
,creation
,expiration
,invalidated
,invalidation
,username
,realm
, andmetadata
.
-
The queryable string values associated with API keys are internally mapped as keywords.
Consequently, if no analyzer
parameter is specified for a match
query, then the provided match query string is interpreted as a single keyword value.
Such a match query is hence equivalent to a term
query.
from
(Optional, number): The starting document offset.
It must not be negative.
By default, you cannot page through more than 10,000 hits using the from
and size
parameters.
To page through more hits, use the search_after
parameter.
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): The sort definition.
Other than id
, all public fields of an API key are eligible for sorting.
In addition, sort can also be applied to the _doc
field to sort by index order.
size
(Optional, number): The number of hits to return.
It must not be negative.
The size
parameter can be set to 0
, in which case no API key matches are returned, only the aggregation results.
By default, you cannot page through more than 10,000 hits using the from
and size
parameters.
To page through more hits, use the search_after
parameter.
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): The search after definition.
with_limited_by
(Optional, boolean): Return the snapshot of the owner user’s role descriptors associated with the API key.
An API key’s actual permission is the intersection of its assigned role descriptors and the owner user’s role descriptors (effectively limited by it).
An API key cannot retrieve any API key’s limited-by role descriptors (including itself) unless it has manage_api_key
or higher privileges.
with_profile_uid
(Optional, boolean): Determines whether to also retrieve the profile UID for the API key owner principal.
If it exists, the profile UID is returned under the profile_uid
response field for each API key.
typed_keys
(Optional, boolean): Determines whether aggregation names are prefixed by their respective types in the response.
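As a sketch, paging through the most recently created keys that have not been invalidated (the sort field and page size are illustrative; a configured client instance is assumed):
const resp = await client.security.queryApiKeys({
  query: { term: { invalidated: false } },   // exclude invalidated keys
  sort: [{ creation: { order: 'desc' } }],   // newest keys first
  size: 20
})
console.log(resp.total, resp.api_keys.map(k => k.name))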
query_role
editFind roles with a query.
Get roles in a paginated manner. The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The query roles API does not retrieve roles that are defined in roles files, nor built-in ones. You can optionally filter the results with a query. Also, the results can be paginated and sorted.
client.security.queryRole({ ... })
Arguments
edit-
Request (object):
-
query
(Optional, { bool, exists, ids, match, match_all, prefix, range, simple_query_string, term, terms, wildcard }): A query to filter which roles to return. If the query parameter is missing, it is equivalent to amatch_all
query. The query supports a subset of query types, includingmatch_all
,bool
,term
,terms
,match
,ids
,prefix
,wildcard
,exists
,range
, andsimple_query_string
. You can query the following information associated with roles:name
,description
,metadata
,applications.application
,applications.privileges
, andapplications.resources
. -
from
(Optional, number): The starting document offset. It must not be negative. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): The sort definition. All public fields of a role are eligible for sorting. In addition, sort can also be applied to the _doc
field to sort by index order. -
size
(Optional, number): The number of hits to return. It must not be negative. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): The search after definition.
-
query_user
editFind users with a query.
Get information for users in a paginated manner. You can optionally filter the results with a query.
As opposed to the get user API, built-in users are excluded from the result. This API is only for native users.
client.security.queryUser({ ... })
Arguments
edit-
Request (object):
-
query
(Optional, { ids, bool, exists, match, match_all, prefix, range, simple_query_string, term, terms, wildcard }): A query to filter which users to return. If the query parameter is missing, it is equivalent to amatch_all
query. The query supports a subset of query types, includingmatch_all
,bool
,term
,terms
,match
,ids
,prefix
,wildcard
,exists
,range
, andsimple_query_string
. You can query the following information associated with user:username
,roles
,enabled
,full_name
, andemail
. -
from
(Optional, number): The starting document offset. It must not be negative. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): The sort definition. Fields eligible for sorting are:username
,roles
,enabled
. In addition, sort can also be applied to the_doc
field to sort by index order. -
size
(Optional, number): The number of hits to return. It must not be negative. By default, you cannot page through more than 10,000 hits using thefrom
andsize
parameters. To page through more hits, use thesearch_after
parameter. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): The search after definition -
with_profile_uid
(Optional, boolean): Determines whether to retrieve the user profile UID, if it exists, for the users.
-
saml_authenticate
editAuthenticate SAML.
Submit a SAML response message to Elasticsearch for consumption.
This API is intended for use by custom web applications other than Kibana. If you are using Kibana, refer to the documentation for configuring SAML single-sign-on on the Elastic Stack.
The SAML message that is submitted can be:
- A response to a SAML authentication request that was previously created using the SAML prepare authentication API.
- An unsolicited SAML message in the case of an IdP-initiated single sign-on (SSO) flow.
In either case, the SAML message needs to be a base64 encoded XML document with a root element of <Response>
.
After successful validation, Elasticsearch responds with an Elasticsearch internal access token and refresh token that can be subsequently used for authentication. This API endpoint essentially exchanges SAML responses that indicate successful authentication in the IdP for Elasticsearch access and refresh tokens, which can be used for authentication against Elasticsearch.
client.security.samlAuthenticate({ content, ids })
Arguments
edit-
Request (object):
-
content
(string): The SAML response as it was sent by the user’s browser, usually a Base64 encoded XML document. -
ids
(string | string[]): A JSON array with all the valid SAML Request Ids that the caller of the API has for the current user. -
realm
(Optional, string): The name of the realm that should authenticate the SAML response. Useful in cases where many SAML realms are defined.
-
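As a rough sketch of exchanging an IdP response for Elasticsearch tokens (the Base64 content variable, request ID, and realm name below are placeholders):
const tokens = await client.security.samlAuthenticate({
  content: samlResponseBase64,            // the SAMLResponse form parameter sent by the browser
  ids: ['request-id-from-prepare-step'],  // IDs previously returned by samlPrepareAuthentication
  realm: 'saml1'                          // optional; hypothetical realm name
})
// tokens.access_token and tokens.refresh_token can then be used to authenticate against Elasticsearch.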
saml_complete_logout
editLogout of SAML completely.
Verifies the logout response sent from the SAML IdP.
This API is intended for use by custom web applications other than Kibana. If you are using Kibana, refer to the documentation for configuring SAML single-sign-on on the Elastic Stack.
The SAML IdP may send a logout response back to the SP after handling the SP-initiated SAML Single Logout. This API verifies the response by ensuring the content is relevant and validating its signature. An empty response is returned if the verification process is successful. The response can be sent by the IdP with either the HTTP-Redirect or the HTTP-Post binding. The caller of this API must prepare the request accordingly so that this API can handle either of them.
client.security.samlCompleteLogout({ realm, ids })
Arguments
edit-
Request (object):
-
realm
(string): The name of the SAML realm in Elasticsearch for which the configuration is used to verify the logout response. -
ids
(string | string[]): A JSON array with all the valid SAML Request Ids that the caller of the API has for the current user. -
query_string
(Optional, string): If the SAML IdP sends the logout response with the HTTP-Redirect binding, this field must be set to the query string of the redirect URI. -
content
(Optional, string): If the SAML IdP sends the logout response with the HTTP-Post binding, this field must be set to the value of the SAMLResponse form parameter from the logout response.
-
saml_invalidate
editInvalidate SAML.
Submit a SAML LogoutRequest message to Elasticsearch for consumption.
This API is intended for use by custom web applications other than Kibana. If you are using Kibana, refer to the documentation for configuring SAML single-sign-on on the Elastic Stack.
The logout request comes from the SAML IdP during an IdP initiated Single Logout.
The custom web application can use this API to have Elasticsearch process the LogoutRequest
.
After successful validation of the request, Elasticsearch invalidates the access token and refresh token that corresponds to that specific SAML principal and provides a URL that contains a SAML LogoutResponse message.
Thus the user can be redirected back to their IdP.
client.security.samlInvalidate({ query_string })
Arguments
edit-
Request (object):
-
query_string
(string): The query part of the URL that the user was redirected to by the SAML IdP to initiate the Single Logout. This query should include a single parameter named SAMLRequest that contains a SAML logout request that is deflated and Base64 encoded. If the SAML IdP has signed the logout request, the URL should include two extra parameters named SigAlg and Signature that contain the algorithm used for the signature and the signature value itself. In order for Elasticsearch to be able to verify the IdP’s signature, the value of the query_string field must be an exact match to the string provided by the browser. The client application must not attempt to parse or process the string in any way. -
acs
(Optional, string): The Assertion Consumer Service URL that matches the one of the SAML realm in Elasticsearch that should be used. You must specify either this parameter or the realm parameter. -
realm
(Optional, string): The name of the SAML realm in Elasticsearch the configuration. You must specify either this parameter or theacs
parameter.
-
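A minimal sketch of handling an IdP-initiated logout request (the query string and realm name are placeholders; the string must be passed through exactly as the browser provided it):
const result = await client.security.samlInvalidate({
  // Everything after the '?' in the URL the IdP redirected the user to, unmodified.
  query_string: 'SAMLRequest=fZHLasMwEEX3...&SigAlg=...&Signature=...',
  realm: 'saml1' // or specify acs instead of realm
})
// result.redirect contains a URL carrying the SAML LogoutResponse to send the user back to the IdP.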
saml_logout
editLogout of SAML.
Submits a request to invalidate an access token and refresh token.
This API is intended for use by custom web applications other than Kibana. If you are using Kibana, refer to the documentation for configuring SAML single-sign-on on the Elastic Stack.
This API invalidates the tokens that were generated for a user by the SAML authenticate API. If the SAML realm in Elasticsearch is configured accordingly and the SAML IdP supports this, the Elasticsearch response contains a URL to redirect the user to the IdP that contains a SAML logout request (starting an SP-initiated SAML Single Logout).
client.security.samlLogout({ token })
Arguments
edit-
Request (object):
-
token
(string): The access token that was returned as a response to calling the SAML authenticate API. Alternatively, the most recent token that was received after refreshing the original one by using a refresh_token. -
refresh_token
(Optional, string): The refresh token that was returned as a response to calling the SAML authenticate API. Alternatively, the most recent refresh token that was received after refreshing the original access token.
-
saml_prepare_authentication
editPrepare SAML authentication.
Create a SAML authentication request (<AuthnRequest>
) as a URL string based on the configuration of the respective SAML realm in Elasticsearch.
This API is intended for use by custom web applications other than Kibana. If you are using Kibana, refer to the documentation for configuring SAML single-sign-on on the Elastic Stack.
This API returns a URL pointing to the SAML Identity Provider.
You can use the URL to redirect the browser of the user in order to continue the authentication process.
The URL includes a single parameter named SAMLRequest
, which contains a SAML Authentication request that is deflated and Base64 encoded.
If the configuration dictates that SAML authentication requests should be signed, the URL has two extra parameters named SigAlg
and Signature
.
These parameters contain the algorithm used for the signature and the signature value itself.
It also returns a random string that uniquely identifies this SAML Authentication request.
The caller of this API needs to store this identifier as it needs to be used in a following step of the authentication process.
client.security.samlPrepareAuthentication({ ... })
Arguments
edit-
Request (object):
-
acs
(Optional, string): The Assertion Consumer Service URL that matches the one of the SAML realms in Elasticsearch. The realm is used to generate the authentication request. You must specify either this parameter or the realm parameter. -
realm
(Optional, string): The name of the SAML realm in Elasticsearch for which the configuration is used to generate the authentication request. You must specify either this parameter or the acs parameter. -
relay_state
(Optional, string): A string that will be included in the redirect URL that this API returns as the RelayState query parameter. If the authentication request is signed, this value is used as part of the signature computation.
-
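A quick sketch of starting SP-initiated SSO (the realm name is hypothetical); keep the returned identifier for the later samlAuthenticate call:
const prepared = await client.security.samlPrepareAuthentication({
  realm: 'saml1' // or pass acs instead of realm
})
// prepared.redirect is the IdP URL to send the browser to.
// prepared.id uniquely identifies this authentication request and must be stored for samlAuthenticate.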
saml_service_provider_metadata
editCreate SAML service provider metadata.
Generate SAML metadata for a SAML 2.0 Service Provider.
The SAML 2.0 specification provides a mechanism for Service Providers to describe their capabilities and configuration using a metadata file. This API generates Service Provider metadata based on the configuration of a SAML realm in Elasticsearch.
client.security.samlServiceProviderMetadata({ realm_name })
Arguments
edit-
Request (object):
-
realm_name
(string): The name of the SAML realm in Elasticsearch.
-
suggest_user_profiles
editSuggest a user profile.
Get suggestions for user profiles that match specified search criteria.
The user profile feature is designed only for use by Kibana and Elastic’s Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.
client.security.suggestUserProfiles({ ... })
Arguments
edit-
Request (object):
-
name
(Optional, string): A query string used to match name-related fields in user profile documents. Name-related fields are the user’s username, full_name, and email. -
size
(Optional, number): The number of profiles to return. -
data
(Optional, string | string[]): A list of filters for the data field of the profile document. To return all content, use data=*. To return a subset of content, use data=<key> to retrieve content nested under the specified <key>. By default, the API returns no data content. It is an error to specify data as both the query parameter and the request body field. -
hint
(Optional, { uids, labels }): Extra search criteria to improve the relevance of the suggestion result. Profiles matching the specified hint are ranked higher in the response. Profiles that don’t match the hint aren’t excluded from the response, as long as the profile matches the name field query.
-
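A sketch of how a Kibana-style consumer might call it (the name prefix, data key, and hint labels are illustrative assumptions):
const suggestions = await client.security.suggestUserProfiles({
  name: 'jac',                     // matched against username, full_name, and email
  size: 10,
  data: 'app1.settings',           // hypothetical key; returns only data nested under it
  hint: { labels: { direction: ['north', 'east'] } } // hypothetical labels used to boost ranking
})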
update_api_key
editUpdate an API key.
Update attributes of an existing API key. This API supports updates to an API key’s access scope, expiration, and metadata.
To use this API, you must have at least the manage_own_api_key
cluster privilege.
Users can only update API keys that they created or that were granted to them.
To update another user’s API key, use the run_as
feature to submit a request on behalf of another user.
It’s not possible to use an API key as the authentication credential for this API. The owner user’s credentials are required.
Use this API to update API keys created by the create API key or grant API Key APIs. If you need to apply the same update to many API keys, you can use the bulk update API keys API to reduce overhead. It’s not possible to update expired API keys or API keys that have been invalidated by the invalidate API key API.
The access scope of an API key is derived from the role_descriptors
you specify in the request and a snapshot of the owner user’s permissions at the time of the request.
The snapshot of the owner’s permissions is updated automatically on every call.
If you don’t specify role_descriptors
in the request, a call to this API might still change the API key’s access scope.
This change can occur if the owner user’s permissions have changed since the API key was created or last modified.
client.security.updateApiKey({ id })
Arguments
edit-
Request (object):
-
id
(string): The ID of the API key to update. -
role_descriptors
(Optional, Record<string, { cluster, indices, remote_indices, remote_cluster, global, applications, metadata, run_as, description, restriction, transient_metadata }>): The role descriptors to assign to this API key. The API key’s effective permissions are an intersection of its assigned privileges and a point-in-time snapshot of the owner user’s permissions. You can assign new privileges by specifying them in this parameter. To remove assigned privileges, supply an empty role_descriptors parameter, that is to say, an empty object ({}). If an API key has no assigned privileges, it inherits the owner user’s full permissions. The snapshot of the owner’s permissions is always updated, whether you supply the role_descriptors parameter or not. The structure of a role descriptor is the same as the request for the create API keys API. -
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key. It supports a nested data structure. Within the metadata object, keys beginning with _ are reserved for system usage. When specified, this value fully replaces the metadata previously associated with the API key. -
expiration
(Optional, string | -1 | 0): The expiration time for the API key. By default, API keys never expire. This property can be omitted to leave the expiration unchanged.
-
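For instance, a hedged sketch that narrows an existing key’s scope and replaces its metadata (the key ID, role name, index pattern, and metadata values are placeholders):
await client.security.updateApiKey({
  id: 'VuaCfGcBCdbkQm-e5aOx',               // hypothetical API key ID
  role_descriptors: {
    'read-only-role': {
      indices: [{ names: ['logs-*'], privileges: ['read'] }]
    }
  },
  metadata: { environment: 'production' },  // fully replaces any previous metadata
  expiration: '30d'
})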
update_cross_cluster_api_key
editUpdate a cross-cluster API key.
Update the attributes of an existing cross-cluster API key, which is used for API key based remote cluster access.
To use this API, you must have at least the manage_security
cluster privilege.
Users can only update API keys that they created.
To update another user’s API key, use the run_as
feature to submit a request on behalf of another user.
It’s not possible to use an API key as the authentication credential for this API. To update an API key, the owner user’s credentials are required.
It’s not possible to update expired API keys, or API keys that have been invalidated by the invalidate API key API.
This API supports updates to an API key’s access scope, metadata, and expiration.
The owner user’s information, such as the username
and realm
, is also updated automatically on every call.
This API cannot update REST API keys, which should be updated by either the update API key or bulk update API keys API.
client.security.updateCrossClusterApiKey({ id, access })
Arguments
edit-
Request (object):
-
id
(string): The ID of the cross-cluster API key to update. -
access
({ replication, search }): The access to be granted to this API key. The access is composed of permissions for cross cluster search and cross cluster replication. At least one of them must be specified. When specified, the new access assignment fully replaces the previously assigned access. -
expiration
(Optional, string | -1 | 0): The expiration time for the API key. By default, API keys never expire. This property can be omitted to leave the value unchanged. -
metadata
(Optional, Record<string, User-defined value>): Arbitrary metadata that you want to associate with the API key. It supports a nested data structure. Within the metadata object, keys beginning with _ are reserved for system usage. When specified, this information fully replaces the metadata previously associated with the API key.
-
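A sketch under the same assumptions (placeholder key ID and index pattern), granting search-only access:
await client.security.updateCrossClusterApiKey({
  id: 'VuaCfGcBCdbkQm-e5aOx',        // hypothetical cross-cluster API key ID
  access: {
    search: [{ names: ['logs-*'] }]  // replaces the previously assigned access entirely
  },
  metadata: { team: 'platform' }
})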
update_settings
editUpdate security index settings.
Update the user-configurable settings for the security internal index (.security
and associated indices). Only a subset of settings are allowed to be modified. This includes index.auto_expand_replicas
and index.number_of_replicas
.
If index.auto_expand_replicas
is set, index.number_of_replicas
will be ignored during updates.
If a specific index is not in use on the system and settings are provided for it, the request will be rejected. This API does not yet support configuring the settings for indices before they are in use.
client.security.updateSettings({ ... })
Arguments
edit-
Request (object):
-
security
(Optional, { index }): Settings for the index used for most security configuration, including native realm users and roles configured with the API. -
security-profile
(Optional, { index }): Settings for the index used to store profile information. -
security-tokens
(Optional, { index }): Settings for the index used to store tokens. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
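As a sketch of adjusting replica settings for the security indices (the setting values are examples only, and they assume the settings shown are within the allowed subset):
await client.security.updateSettings({
  security: { index: { auto_expand_replicas: '0-all' } },
  'security-tokens': { index: { number_of_replicas: 1 } }
})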
update_user_profile_data
editUpdate user profile data.
Update specific data for the user profile that is associated with a unique ID.
The user profile feature is designed only for use by Kibana and Elastic’s Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.
To use this API, you must have one of the following privileges:
-
The
manage_user_profile
cluster privilege. -
The
update_profile_data
global privilege for the namespaces that are referenced in the request.
This API updates the labels
and data
fields of an existing user profile document with JSON objects.
New keys and their values are added to the profile document and conflicting keys are replaced by data that’s included in the request.
For both labels and data, content is namespaced by the top-level fields.
The update_profile_data
global privilege grants privileges for updating only the allowed namespaces.
client.security.updateUserProfileData({ uid })
Arguments
edit-
Request (object):
-
uid
(string): A unique identifier for the user profile. -
labels
(Optional, Record<string, User-defined value>): Searchable data that you want to associate with the user profile. This field supports a nested data structure. Within the labels object, top-level keys cannot begin with an underscore (_) or contain a period (.). -
data
(Optional, Record<string, User-defined value>): Non-searchable data that you want to associate with the user profile. This field supports a nested data structure. Within the data object, top-level keys cannot begin with an underscore (_) or contain a period (.). The data object is not searchable, but can be retrieved with the get user profile API. -
if_seq_no
(Optional, number): Only perform the operation if the document has this sequence number. -
if_primary_term
(Optional, number): Only perform the operation if the document has this primary term. -
refresh
(Optional, Enum(true | false | "wait_for")): If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, nothing is done with refreshes.
-
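A sketch of namespaced profile updates (the UID and the app1 namespace are hypothetical):
await client.security.updateUserProfileData({
  uid: 'u_P_0BMHgaOK3p7k-PFWUCbw9dQ-UFjt01oWJ_Dp2PmPc_0', // hypothetical profile UID
  labels: { app1: { tags: ['beta-tester'] } },             // searchable; namespaced by the top-level key
  data: { app1: { theme: 'dark' } }                        // non-searchable
})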
shutdown
editdelete_node
editCancel node shutdown preparations. Remove a node from the shutdown list so it can resume normal operations. You must explicitly clear the shutdown request when a node rejoins the cluster or when a node has permanently left the cluster. Shutdown requests are never removed automatically by Elasticsearch.
This feature is designed for indirect use by Elastic Cloud, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
If the operator privileges feature is enabled, you must be an operator to use this API.
client.shutdown.deleteNode({ node_id })
Arguments
edit-
Request (object):
-
node_id
(string): The node ID of the node to be removed from the shutdown state. -
master_timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_node
editGet the shutdown status.
Get information about nodes that are ready to be shut down, have shut down preparations still in progress, or have stalled. The API returns status information for each part of the shut down process.
This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
If the operator privileges feature is enabled, you must be an operator to use this API.
client.shutdown.getNode({ ... })
Arguments
edit-
Request (object):
-
node_id
(Optional, string | string[]): The node or nodes for which to retrieve the shutdown status. -
master_timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
put_node
editPrepare a node to be shut down.
This feature is designed for indirect use by Elastic Cloud, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
If you specify a node that is offline, it will be prepared for shut down when it rejoins the cluster.
If the operator privileges feature is enabled, you must be an operator to use this API.
The API migrates ongoing tasks and index shards to other nodes as needed to prepare a node to be restarted or shut down and removed from the cluster. This ensures that Elasticsearch can be stopped safely with minimal disruption to the cluster.
You must specify the type of shutdown: restart
, remove
, or replace
.
If a node is already being prepared for shutdown, you can use this API to change the shutdown type.
This API does NOT terminate the Elasticsearch process. Monitor the node shutdown status to determine when it is safe to stop Elasticsearch.
client.shutdown.putNode({ node_id, type, reason })
Arguments
edit-
Request (object):
-
node_id
(string): The node identifier. This parameter is not validated against the cluster’s active nodes. This enables you to register a node for shut down while it is offline. No error is thrown if you specify an invalid node ID. -
type
(Enum("restart" | "remove" | "replace")): Valid values are restart, remove, or replace. Use restart when you need to temporarily shut down a node to perform an upgrade, make configuration changes, or perform other maintenance. Because the node is expected to rejoin the cluster, data is not migrated off of the node. Use remove when you need to permanently remove a node from the cluster. The node is not marked ready for shutdown until data is migrated off of the node Use replace to do a 1:1 replacement of a node with another node. Certain allocation decisions will be ignored (such as disk watermarks) in the interest of true replacement of the source node with the target node. During a replace-type shutdown, rollover and index creation may result in unassigned shards, and shrink may fail until the replacement is complete. -
reason
(string): A human-readable reason that the node is being shut down. This field provides information for other cluster operators; it does not affect the shut down process. -
allocation_delay
(Optional, string): Only valid if type is restart. Controls how long Elasticsearch will wait for the node to restart and join the cluster before reassigning its shards to other nodes. This works the same as delaying allocation with the index.unassigned.node_left.delayed_timeout setting. If you specify both a restart allocation delay and an index-level allocation delay, the longer of the two is used. -
target_node_name
(Optional, string): Only valid if type is replace. Specifies the name of the node that is replacing the node being shut down. Shards from the shut down node are only allowed to be allocated to the target node, and no other data will be allocated to the target node. During relocation of data certain allocation rules are ignored, such as disk watermarks or user attribute filtering rules. -
master_timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, Enum("nanos" | "micros" | "ms" | "s" | "m" | "h" | "d")): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
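For example, a sketch of registering a permanent removal (the node ID and reason are placeholders); remember that this only prepares the node, it does not stop the process:
await client.shutdown.putNode({
  node_id: 'USpTGYaBSIKbgSUJR2Z9lg', // hypothetical node ID; not validated against active nodes
  type: 'remove',                    // data is migrated off before the node is marked ready
  reason: 'Decommissioning host after hardware failure'
})
// Poll client.shutdown.getNode({ node_id: '...' }) until the status shows the node is ready to stop.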
simulate
editingest
editSimulate data ingestion. Run ingest pipelines against a set of provided documents, optionally with substitute pipeline definitions, to simulate ingesting data into an index.
This API is meant to be used for troubleshooting or pipeline development, as it does not actually index any data into Elasticsearch.
The API runs the default and final pipeline for that index against a set of documents provided in the body of the request. If a pipeline contains a reroute processor, it follows that reroute processor to the new index, running that index’s pipelines as well, in the same way that a non-simulated ingest would. No data is indexed into Elasticsearch. Instead, the transformed document is returned, along with the list of pipelines that have been run and the name of the index where the document would have been indexed if this were not a simulation. The transformed document is validated against the mappings that would apply to this index, and any validation error is reported in the result.
This API differs from the simulate pipeline API in that you specify a single pipeline for that API, and it runs only that one pipeline. The simulate pipeline API is more useful for developing a single pipeline, while the simulate ingest API is more useful for troubleshooting the interaction of the various pipelines that get applied when ingesting into an index.
By default, the pipeline definitions that are currently in the system are used. However, you can supply substitute pipeline definitions in the body of the request. These will be used in place of the pipeline definitions that are already in the system. This can be used to replace existing pipeline definitions or to create new ones. The pipeline substitutions are used only within this request.
client.simulate.ingest({ docs })
Arguments
edit-
Request (object):
-
docs
({ _id, _index, _source }[]): Sample documents to test in the pipeline. -
index
(Optional, string): The index to simulate ingesting into. This value can be overridden by specifying an index on each document. If you specify this parameter in the request path, it is used for any documents that do not explicitly specify an index argument. -
component_template_substitutions
(Optional, Record<string, { template, version, _meta, deprecated }>): A map of component template names to substitute component template definition objects. -
index_template_substitutions
(Optional, Record<string, { index_patterns, composed_of, template, version, priority, _meta, allow_auto_create, data_stream, deprecated, ignore_missing_component_templates }>): A map of index template names to substitute index template definition objects. -
mapping_addition
(Optional, { all_field, date_detection, dynamic, dynamic_date_formats, dynamic_templates, _field_names, index_field, _meta, numeric_detection, properties, _routing, _size, _source, runtime, enabled, subobjects, _data_stream_timestamp }) -
pipeline_substitutions
(Optional, Record<string, { description, on_failure, processors, version, deprecated, _meta }>): Pipelines to test. If you don’t specify the pipeline request path parameter, this parameter is required. If you specify both this and the request path parameter, the API only uses the request path parameter. -
pipeline
(Optional, string): The pipeline to use as the default pipeline. This value can be used to override the default pipeline of the index.
-
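A sketch of trying out a substitute pipeline against sample documents without indexing anything (the index name, pipeline name, and processor are illustrative):
const result = await client.simulate.ingest({
  index: 'my-logs',                                    // hypothetical target index
  docs: [{ _source: { message: 'user=alice action=login' } }],
  pipeline_substitutions: {
    'my-logs-default': {                               // hypothetical pipeline name
      processors: [{ lowercase: { field: 'message' } }]
    }
  }
})
// The response lists, per document, the transformed source and the pipelines that ran.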
slm
editdelete_lifecycle
editDelete a policy. Delete a snapshot lifecycle policy definition. This operation prevents any future snapshots from being taken but does not cancel in-progress snapshots or remove previously-taken snapshots.
client.slm.deleteLifecycle({ policy_id })
Arguments
edit-
Request (object):
-
policy_id
(string): The id of the snapshot lifecycle policy to remove -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
execute_lifecycle
editRun a policy. Immediately create a snapshot according to the snapshot lifecycle policy without waiting for the scheduled time. The snapshot policy is normally applied according to its schedule, but you might want to manually run a policy before performing an upgrade or other maintenance.
client.slm.executeLifecycle({ policy_id })
Arguments
edit-
Request (object):
-
policy_id
(string): The id of the snapshot lifecycle policy to be executed -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
execute_retention
editRun a retention policy. Manually apply the retention policy to force immediate removal of snapshots that are expired according to the snapshot lifecycle policy retention rules. The retention policy is normally applied according to its schedule.
client.slm.executeRetention({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_lifecycle
editGet policy information. Get snapshot lifecycle policy definitions and information about the latest snapshot attempts.
client.slm.getLifecycle({ ... })
Arguments
edit-
Request (object):
-
policy_id
(Optional, string | string[]): List of snapshot lifecycle policies to retrieve -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_stats
editGet snapshot lifecycle management statistics. Get global and policy-level statistics about actions taken by snapshot lifecycle management.
client.slm.getStats({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_status
editGet the snapshot lifecycle management status.
client.slm.getStatus({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.
-
put_lifecycle
editCreate or update a policy. Create or update a snapshot lifecycle policy. If the policy already exists, this request increments the policy version. Only the latest version of a policy is stored.
client.slm.putLifecycle({ policy_id })
Arguments
edit-
Request (object):
-
policy_id
(string): The identifier for the snapshot lifecycle policy you want to create or update. -
config
(Optional, { ignore_unavailable, indices, include_global_state, feature_states, metadata, partial }): Configuration for each snapshot created by the policy. -
name
(Optional, string): Name automatically assigned to each snapshot created by the policy. Date math is supported. To prevent conflicting snapshot names, a UUID is automatically appended to each snapshot name. -
repository
(Optional, string): Repository used to store snapshots created by this policy. This repository must exist prior to the policy’s creation. You can create a repository using the snapshot repository API. -
retention
(Optional, { expire_after, max_count, min_count }): Retention rules used to retain and delete snapshots created by the policy. -
schedule
(Optional, string): Periodic or absolute schedule at which the policy creates snapshots. SLM applies schedule changes immediately. -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.
-
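For example, a sketch of a nightly policy (the schedule, repository, and retention values are illustrative, and the repository must already be registered):
await client.slm.putLifecycle({
  policy_id: 'nightly-snapshots',
  schedule: '0 30 1 * * ?',        // cron schedule: every day at 01:30
  name: '<nightly-snap-{now/d}>',  // date math; a UUID suffix is appended automatically
  repository: 'my_repository',     // hypothetical, previously registered repository
  config: { indices: ['logs-*'], include_global_state: false },
  retention: { expire_after: '30d', min_count: 5, max_count: 50 }
})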
start
editStart snapshot lifecycle management. Snapshot lifecycle management (SLM) starts automatically when a cluster is formed. Manually starting SLM is necessary only if it has been stopped using the stop SLM API.
client.slm.start({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.
-
stop
editStop snapshot lifecycle management. Stop all snapshot lifecycle management (SLM) operations and the SLM plugin. This API is useful when you are performing maintenance on a cluster and need to prevent SLM from performing any actions on your data streams or indices. Stopping SLM does not stop any snapshots that are in progress. You can manually trigger snapshots with the run snapshot lifecycle policy API even if SLM is stopped.
The API returns a response as soon as the request is acknowledged, but the plugin might continue to run until in-progress operations complete and it can be safely stopped. Use the get snapshot lifecycle management status API to see if SLM is running.
client.slm.stop({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1.
-
snapshot
editcleanup_repository
editClean up the snapshot repository. Trigger the review of the contents of a snapshot repository and delete any stale data not referenced by existing snapshots.
client.snapshot.cleanupRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string): Snapshot repository to clean up. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. -
timeout
(Optional, string | -1 | 0): Period to wait for a response.
-
clone
editClone a snapshot. Clone part or all of a snapshot into another snapshot in the same repository.
client.snapshot.clone({ repository, snapshot, target_snapshot, indices })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
snapshot
(string): The name of the snapshot to clone from -
target_snapshot
(string): The name of the cloned snapshot to create -
indices
(string) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
create
editCreate a snapshot. Take a snapshot of a cluster or of data streams and indices.
client.snapshot.create({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): Repository for the snapshot. -
snapshot
(string): Name of the snapshot. Must be unique in the repository. -
ignore_unavailable
(Optional, boolean): If true, the request ignores data streams and indices in indices that are missing or closed. If false, the request returns an error for any data stream or index that is missing or closed. -
include_global_state
(Optional, boolean): If true, the current cluster state is included in the snapshot. The cluster state includes persistent cluster settings, composable index templates, legacy index templates, ingest pipelines, and ILM policies. It also includes data stored in system indices, such as Watches and task records (configurable via feature_states). -
indices
(Optional, string | string[]): Data streams and indices to include in the snapshot. Supports multi-target syntax. Includes all data streams and indices by default. -
feature_states
(Optional, string[]): Feature states to include in the snapshot. Each feature state includes one or more system indices containing related data. You can view a list of eligible features using the get features API. If include_global_state is true, all current feature states are included by default. If include_global_state is false, no feature states are included by default. -
metadata
(Optional, Record<string, User-defined value>): Optional metadata for the snapshot. May have any contents. Must be less than 1024 bytes. This map is not automatically generated by Elasticsearch. -
partial
(Optional, boolean): If true, allows restoring a partial snapshot of indices with unavailable shards. Only shards that were successfully included in the snapshot will be restored. All missing shards will be recreated as empty. If false, the entire restore operation will fail if one or more indices included in the snapshot do not have all primary shards available. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_completion
(Optional, boolean): If true, the request returns a response when the snapshot is complete. If false, the request returns a response when the snapshot initializes.
-
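A sketch of taking a snapshot of selected data streams and indices (the repository and snapshot names are placeholders):
const snapshot = await client.snapshot.create({
  repository: 'my_repository',      // hypothetical, previously registered repository
  snapshot: 'snapshot-2024.05.01',  // must be unique within the repository
  indices: ['logs-*', 'metrics-*'],
  include_global_state: false,
  wait_for_completion: true         // resolve only once the snapshot is complete
})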
create_repository
editCreate or update a snapshot repository.
IMPORTANT: If you are migrating searchable snapshots, the repository name must be identical in the source and destination clusters.
To register a snapshot repository, the cluster’s global metadata must be writeable.
Ensure there are no cluster blocks (for example, the cluster.blocks.read_only and cluster.blocks.read_only_allow_delete settings) that prevent write access.
client.snapshot.createRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout -
verify
(Optional, boolean): Whether to verify the repository after creation
-
delete
editDelete snapshots.
client.snapshot.delete({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
snapshot
(string): A list of snapshot names -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
delete_repository
editDelete snapshot repositories. When a repository is unregistered, Elasticsearch removes only the reference to the location where the repository is storing the snapshots. The snapshots themselves are left untouched and in place.
client.snapshot.deleteRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string | string[]): Name of the snapshot repository to unregister. Wildcard (*) patterns are supported. -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
get
editGet snapshot information.
client.snapshot.get({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): List of snapshot repository names used to limit the request. Wildcard (*) expressions are supported. -
snapshot
(string | string[]): List of snapshot names to retrieve. Also accepts wildcards (*).
- To get information about all snapshots in a registered repository, use a wildcard (*) or _all.
- To get information about any snapshots that are currently running, use _current.
-
ignore_unavailable
(Optional, boolean): If false, the request returns an error for any snapshots that are unavailable. -
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
verbose
(Optional, boolean): If true, returns additional information about each snapshot such as the version of Elasticsearch which took the snapshot, the start and end times of the snapshot, and the number of shards snapshotted. -
index_details
(Optional, boolean): If true, returns additional information about each index in the snapshot comprising the number of shards in the index, the total size of the index in bytes, and the maximum number of segments per shard in the index. Defaults to false, meaning that this information is omitted. -
index_names
(Optional, boolean): If true, returns the name of each index in each snapshot. -
include_repository
(Optional, boolean): If true, returns the repository name in each snapshot. -
sort
(Optional, Enum("start_time" | "duration" | "name" | "index_count" | "repository" | "shard_count" | "failed_shard_count")): Allows setting a sort order for the result. Defaults to start_time, i.e. sorting by snapshot start time stamp. -
size
(Optional, number): Maximum number of snapshots to return. Defaults to 0 which means return all that match the request without limit. -
order
(Optional, Enum("asc" | "desc")): Sort order. Valid values are asc for ascending and desc for descending order. Defaults to asc, meaning ascending order. -
after
(Optional, string): Offset identifier to start pagination from as returned by the next field in the response body. -
offset
(Optional, number): Numeric offset to start pagination from based on the snapshots matching this request. Using a non-zero value for this parameter is mutually exclusive with using the after parameter. Defaults to 0. -
from_sort_value
(Optional, string): Value of the current sort column at which to start retrieval. Can either be a string snapshot- or repository name when sorting by snapshot or repository name, a millisecond time value or a number when sorting by index- or shard count. -
slm_policy_filter
(Optional, string): Filter snapshots by a list of SLM policy names that snapshots belong to. Also accepts wildcards (*) and combinations of wildcards followed by exclude patterns starting with -. To include snapshots not created by an SLM policy you can use the special pattern _none that will match all snapshots without an SLM policy.
-
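For instance, a sketch of listing the most recent snapshots first (the repository name is a placeholder):
const info = await client.snapshot.get({
  repository: 'my_repository', // hypothetical repository name
  snapshot: '*',               // all snapshots; use _current for running ones
  sort: 'start_time',
  order: 'desc',
  size: 10                     // 0 (the default) returns everything that matches
})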
get_repository
editGet snapshot repository information.
client.snapshot.getRepository({ ... })
Arguments
edit-
Request (object):
-
repository
(Optional, string | string[]): A list of repository names -
local
(Optional, boolean): Return local information, do not retrieve the state from master node (default: false) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
repository_analyze
editAnalyze a snapshot repository. Analyze the performance characteristics and any incorrect behaviour found in a repository.
The response exposes implementation details of the analysis which may change from version to version. The response body format is therefore not considered stable and may be different in newer versions.
There are a large number of third-party storage systems available, not all of which are suitable for use as a snapshot repository by Elasticsearch. Some storage systems behave incorrectly, or perform poorly, especially when accessed concurrently by multiple clients as the nodes of an Elasticsearch cluster do. This API performs a collection of read and write operations on your repository which are designed to detect incorrect behaviour and to measure the performance characteristics of your storage system.
The default values for the parameters are deliberately low to reduce the impact of running an analysis inadvertently and to provide a sensible starting point for your investigations.
Run your first analysis with the default parameter values to check for simple problems.
If successful, run a sequence of increasingly large analyses until you encounter a failure or you reach a blob_count
of at least 2000
, a max_blob_size
of at least 2gb
, a max_total_data_size
of at least 1tb
, and a register_operation_count
of at least 100
.
Always specify a generous timeout, possibly 1h
or longer, to allow time for each analysis to run to completion.
Perform the analyses using a multi-node cluster of a similar size to your production cluster so that it can detect any problems that only arise when the repository is accessed by many nodes at once.
If the analysis fails, Elasticsearch detected that your repository behaved unexpectedly. This usually means you are using a third-party storage system with an incorrect or incompatible implementation of the API it claims to support. If so, this storage system is not suitable for use as a snapshot repository. You will need to work with the supplier of your storage system to address the incompatibilities that Elasticsearch detects.
If the analysis is successful, the API returns details of the testing process, optionally including how long each operation took. You can use this information to determine the performance of your storage system. If any operation fails or returns an incorrect result, the API returns an error. If the API returns an error, it may not have removed all the data it wrote to the repository. The error will indicate the location of any leftover data and this path is also recorded in the Elasticsearch logs. You should verify that this location has been cleaned up correctly. If there is still leftover data at the specified location, you should manually remove it.
If the connection from your client to Elasticsearch is closed while the client is waiting for the result of the analysis, the test is cancelled. Some clients are configured to close their connection if no response is received within a certain timeout. An analysis takes a long time to complete so you might need to relax any such client-side timeouts. On cancellation the analysis attempts to clean up the data it was writing, but it may not be able to remove it all. The path to the leftover data is recorded in the Elasticsearch logs. You should verify that this location has been cleaned up correctly. If there is still leftover data at the specified location, you should manually remove it.
If the analysis is successful then it detected no incorrect behaviour, but this does not mean that correct behaviour is guaranteed. The analysis attempts to detect common bugs but it does not offer 100% coverage. Additionally, it does not test the following:
- Your repository must perform durable writes. Once a blob has been written it must remain in place until it is deleted, even after a power loss or similar disaster.
- Your repository must not suffer from silent data corruption. Once a blob has been written, its contents must remain unchanged until it is deliberately modified or deleted.
- Your repository must behave correctly even if connectivity from the cluster is disrupted. Reads and writes may fail in this case, but they must not return incorrect results.
An analysis writes a substantial amount of data to your repository and then reads it back again.
This consumes bandwidth on the network between the cluster and the repository, and storage space and I/O bandwidth on the repository itself.
You must ensure this load does not affect other users of these systems.
Analyses respect the repository settings max_snapshot_bytes_per_sec
and max_restore_bytes_per_sec
if available and the cluster setting indices.recovery.max_bytes_per_sec
which you can use to limit the bandwidth they consume.
This API is intended for exploratory use by humans. You should expect the request parameters and the response format to vary in future versions.
Different versions of Elasticsearch may perform different checks for repository compatibility, with newer versions typically being stricter than older ones. A storage system that passes repository analysis with one version of Elasticsearch may fail with a different version. This indicates it behaves incorrectly in ways that the former version did not detect. You must work with the supplier of your storage system to address the incompatibilities detected by the repository analysis API in any version of Elasticsearch.
This API may not work correctly in a mixed-version cluster.
Implementation details
This section of documentation describes how the repository analysis API works in this version of Elasticsearch, but you should expect the implementation to vary between versions. The request parameters and response format depend on details of the implementation so may also be different in newer versions.
The analysis comprises a number of blob-level tasks, as set by the blob_count
parameter and a number of compare-and-exchange operations on linearizable registers, as set by the register_operation_count
parameter.
These tasks are distributed over the data and master-eligible nodes in the cluster for execution.
For most blob-level tasks, the executing node first writes a blob to the repository and then instructs some of the other nodes in the cluster to attempt to read the data it just wrote.
The size of the blob is chosen randomly, according to the max_blob_size
and max_total_data_size
parameters.
If any of these reads fails then the repository does not implement the necessary read-after-write semantics that Elasticsearch requires.
For some blob-level tasks, the executing node will instruct some of its peers to attempt to read the data before the writing process completes. These reads are permitted to fail, but must not return partial data. If any read returns partial data then the repository does not implement the necessary atomicity semantics that Elasticsearch requires.
For some blob-level tasks, the executing node will overwrite the blob while its peers are reading it. In this case the data read may come from either the original or the overwritten blob, but the read operation must not return partial data or a mix of data from the two blobs. If any of these reads returns partial data or a mix of the two blobs then the repository does not implement the necessary atomicity semantics that Elasticsearch requires for overwrites.
The executing node will use a variety of different methods to write the blob. For instance, where applicable, it will use both single-part and multi-part uploads. Similarly, the reading nodes will use a variety of different methods to read the data back again. For instance they may read the entire blob from start to end or may read only a subset of the data.
For some blob-level tasks, the executing node will cancel the write before it is complete. In this case, it still instructs some of the other nodes in the cluster to attempt to read the blob but all of these reads must fail to find the blob.
Linearizable registers are special blobs that Elasticsearch manipulates using an atomic compare-and-exchange operation. This operation ensures correct and strongly-consistent behavior even when the blob is accessed by multiple nodes at the same time. The detailed implementation of the compare-and-exchange operation on linearizable registers varies by repository type. Repository analysis verifies that uncontended compare-and-exchange operations on a linearizable register blob always succeed. Repository analysis also verifies that contended operations either succeed or report the contention but do not return incorrect results. If an operation fails due to contention, Elasticsearch retries the operation until it succeeds. Most of the compare-and-exchange operations performed by repository analysis atomically increment a counter which is represented as an 8-byte blob. Some operations also verify the behavior on small blobs with sizes other than 8 bytes.
client.snapshot.repositoryAnalyze({ repository })
Arguments
edit-
Request (object):
-
repository
(string): The name of the repository. -
blob_count
(Optional, number): The total number of blobs to write to the repository during the test. For realistic experiments, you should set it to at least 2000. -
concurrency
(Optional, number): The number of operations to run concurrently during the test. -
detailed
(Optional, boolean): Indicates whether to return detailed results, including timing information for every operation performed during the analysis. If false, it returns only a summary of the analysis. -
early_read_node_count
(Optional, number): The number of nodes on which to perform an early read operation while writing each blob. Early read operations are only rarely performed. -
max_blob_size
(Optional, number | string): The maximum size of a blob to be written during the test. For realistic experiments, you should set it to at least 2gb. -
max_total_data_size
(Optional, number | string): An upper limit on the total size of all the blobs written during the test. For realistic experiments, you should set it to at least 1tb. -
rare_action_probability
(Optional, number): The probability of performing a rare action such as an early read, an overwrite, or an aborted write on each blob. -
rarely_abort_writes
(Optional, boolean): Indicates whether to rarely cancel writes before they complete. -
read_node_count
(Optional, number): The number of nodes on which to read a blob after writing. -
register_operation_count
(Optional, number): The minimum number of linearizable register operations to perform in total. For realistic experiments, you should set it to at least 100. -
seed
(Optional, number): The seed for the pseudo-random number generator used to generate the list of operations performed during the test. To repeat the same set of operations in multiple experiments, use the same seed in each experiment. Note that the operations are performed concurrently so might not always happen in the same order on each run. -
timeout
(Optional, string | -1 | 0): The period of time to wait for the test to complete. If no response is received before the timeout expires, the test is cancelled and returns an error.
-
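A sketch of a first, deliberately small analysis followed by the kind of larger run the guidance above recommends (the repository name is a placeholder):
// First pass with the conservative defaults, just checking for obvious problems.
await client.snapshot.repositoryAnalyze({ repository: 'my_repository' })

// A larger run along the lines recommended above; allow a generous timeout.
await client.snapshot.repositoryAnalyze({
  repository: 'my_repository',  // hypothetical repository name
  blob_count: 2000,
  max_blob_size: '2gb',
  max_total_data_size: '1tb',
  register_operation_count: 100,
  timeout: '1h'
})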
restore
editRestore a snapshot. Restore a snapshot of a cluster or data streams and indices.
You can restore a snapshot only to a running cluster with an elected master node. The snapshot repository must be registered and available to the cluster. The snapshot and cluster versions must be compatible.
To restore a snapshot, the cluster’s global metadata must be writable. Ensure there aren’t any cluster blocks that prevent writes. The restore operation ignores index blocks.
Before you restore a data stream, ensure the cluster contains a matching index template with data streams enabled. To check, use the index management feature in Kibana or the get index template API:
GET _index_template/*?filter_path=index_templates.name,index_templates.index_template.index_patterns,index_templates.index_template.data_stream
If no such template exists, you can create one or restore a cluster state that contains one. Without a matching index template, a data stream can’t roll over or create backing indices.
If your snapshot contains data from App Search or Workplace Search, you must restore the Enterprise Search encryption key before you restore the snapshot.
client.snapshot.restore({ repository, snapshot })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
snapshot
(string): A snapshot name -
feature_states
(Optional, string[]) -
ignore_index_settings
(Optional, string[]) -
ignore_unavailable
(Optional, boolean) -
include_aliases
(Optional, boolean) -
include_global_state
(Optional, boolean) -
index_settings
(Optional, { index, mode, routing_path, soft_deletes, sort, number_of_shards, number_of_replicas, number_of_routing_shards, check_on_startup, codec, routing_partition_size, load_fixed_bitset_filters_eagerly, hidden, auto_expand_replicas, merge, search, refresh_interval, max_result_window, max_inner_result_window, max_rescore_window, max_docvalue_fields_search, max_script_fields, max_ngram_diff, max_shingle_diff, blocks, max_refresh_listeners, analyze, highlight, max_terms_count, max_regex_length, routing, gc_deletes, default_pipeline, final_pipeline, lifecycle, provided_name, creation_date, creation_date_string, uuid, version, verified_before_close, format, max_slices_per_scroll, translog, query_string, priority, top_metrics_max_size, analysis, settings, time_series, queries, similarity, mapping, indexing.slowlog, indexing_pressure, store }) -
indices
(Optional, string | string[]) -
partial
(Optional, boolean) -
rename_pattern
(Optional, string) -
rename_replacement
(Optional, string) -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
wait_for_completion
(Optional, boolean): Should this request wait until the operation has completed before returning
-
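For example, a minimal sketch of a restore call; the repository, snapshot, and index names are placeholders, and the rename options keep the restored indices from clashing with existing ones:
const resp = await client.snapshot.restore({
  repository: 'my_repository',        // placeholder repository name
  snapshot: 'snapshot_2099',          // placeholder snapshot name
  indices: 'logs-*',                  // restore only the matching indices
  rename_pattern: '(.+)',             // rename the restored indices
  rename_replacement: 'restored_$1',
  include_global_state: false,
  wait_for_completion: true           // wait until the restore finishes before returning
})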
status
editGet the snapshot status. Get a detailed description of the current state for each shard participating in the snapshot. Note that this API should be used only to obtain detailed shard-level information for ongoing snapshots. If this detail is not needed or you want to obtain information about one or more existing snapshots, use the get snapshot API.
Using the API to return the status of any snapshots other than currently running snapshots can be expensive. The API requires a read from the repository for each shard in each snapshot. For example, if you have 100 snapshots with 1,000 shards each, an API request that includes all snapshots will require 100,000 reads (100 snapshots x 1,000 shards).
Depending on the latency of your storage, such requests can take an extremely long time to return results. These requests can also tax machine resources and, when using cloud storage, incur high processing costs.
client.snapshot.status({ ... })
Arguments
edit-
Request (object):
-
repository
(Optional, string): A repository name -
snapshot
(Optional, string | string[]): A list of snapshot names -
ignore_unavailable
(Optional, boolean): Whether to ignore unavailable snapshots, defaults to false which means a SnapshotMissingException is thrown -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node
-
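Given the cost warning above, a sketch that scopes the request to a single named snapshot rather than every snapshot in the repository (the names are placeholders):
const status = await client.snapshot.status({
  repository: 'my_repository',   // placeholder repository name
  snapshot: 'snapshot_2099',     // limit the read to one snapshot
  ignore_unavailable: true       // skip missing snapshots instead of failing
})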
verify_repository
editVerify a snapshot repository. Check for common misconfigurations in a snapshot repository.
client.snapshot.verifyRepository({ repository })
Arguments
edit-
Request (object):
-
repository
(string): A repository name -
master_timeout
(Optional, string | -1 | 0): Explicit operation timeout for connection to master node -
timeout
(Optional, string | -1 | 0): Explicit operation timeout
-
sql
editclear_cursor
editClear an SQL search cursor.
client.sql.clearCursor({ cursor })
Arguments
edit-
Request (object):
-
cursor
(string): Cursor to clear.
-
delete_async
editDelete an async SQL search. Delete an async SQL search or a stored synchronous SQL search. If the search is still running, the API cancels it.
If the Elasticsearch security features are enabled, only the following users can use this API to delete a search:
-
Users with the
cancel_task
cluster privilege. - The user who first submitted the search.
client.sql.deleteAsync({ id })
Arguments
edit-
Request (object):
-
id
(string): The identifier for the search.
-
get_async
editGet async SQL search results. Get the current status and available results for an async SQL search or stored synchronous SQL search.
If the Elasticsearch security features are enabled, only the user who first submitted the SQL search can retrieve the search using this API.
client.sql.getAsync({ id })
Arguments
edit-
Request (object):
-
id
(string): The identifier for the search. -
delimiter
(Optional, string): The separator for CSV results. The API supports this parameter only for CSV responses. -
format
(Optional, string): The format for the response. You must specify a format using this parameter or theAccept
HTTP header. If you specify both, the API uses this parameter. -
keep_alive
(Optional, string | -1 | 0): The retention period for the search and its results. It defaults to thekeep_alive
period for the original SQL search. -
wait_for_completion_timeout
(Optional, string | -1 | 0): The period to wait for complete results. It defaults to no timeout, meaning the request waits for complete search results.
-
get_async_status
editGet the async SQL search status. Get the current status of an async SQL search or a stored synchronous SQL search.
client.sql.getAsyncStatus({ id })
Arguments
edit-
Request (object):
-
id
(string): The identifier for the search.
-
query
editGet SQL search results. Run an SQL request.
client.sql.query({ ... })
Arguments
edit-
Request (object):
-
allow_partial_search_results
(Optional, boolean): Iftrue
, the response has partial results when there are shard request timeouts or shard failures. Iffalse
, the API returns an error with no partial results. -
catalog
(Optional, string): The default catalog (cluster) for queries. If unspecified, the queries execute on the data in the local cluster only. -
columnar
(Optional, boolean): Iftrue
, the results are in a columnar fashion: one row represents all the values of a certain column from the current page of results. The API supports this parameter only for CBOR, JSON, SMILE, and YAML responses. -
cursor
(Optional, string): The cursor used to retrieve a set of paginated results. If you specify a cursor, the API only uses thecolumnar
andtime_zone
request body parameters. It ignores other request body parameters. -
fetch_size
(Optional, number): The maximum number of rows (or entries) to return in one response. -
field_multi_value_leniency
(Optional, boolean): Iffalse
, the API returns an exception when encountering multiple values for a field. Iftrue
, the API is lenient and returns the first value from the array with no guarantee of consistent results. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The Elasticsearch query DSL for additional filtering. -
index_using_frozen
(Optional, boolean): Iftrue
, the search can run on frozen indices. -
keep_alive
(Optional, string | -1 | 0): The retention period for an async or saved synchronous search. -
keep_on_completion
(Optional, boolean): Iftrue
, Elasticsearch stores synchronous searches if you also specify thewait_for_completion_timeout
parameter. Iffalse
, Elasticsearch only stores async searches that don’t finish before thewait_for_completion_timeout
. -
page_timeout
(Optional, string | -1 | 0): The minimum retention period for the scroll cursor. After this time period, a pagination request might fail because the scroll cursor is no longer available. Subsequent scroll requests prolong the lifetime of the scroll cursor by the duration ofpage_timeout
in the scroll request. -
params
(Optional, Record<string, User-defined value>): The values for parameters in the query. -
query
(Optional, string): The SQL query to run. -
request_timeout
(Optional, string | -1 | 0): The timeout before the request fails. -
runtime_mappings
(Optional, Record<string, { fields, fetch_fields, format, input_field, target_field, target_index, script, type }>): One or more runtime fields for the search request. These fields take precedence over mapped fields with the same name. -
time_zone
(Optional, string): The ISO-8601 time zone ID for the search. -
wait_for_completion_timeout
(Optional, string | -1 | 0): The period to wait for complete results. It defaults to no timeout, meaning the request waits for complete search results. If the search doesn’t finish within this period, the search becomes async.
-
To save a synchronous search, you must specify this parameter and the keep_on_completion
parameter.
-
format
(Optional, Enum("csv" | "json" | "tsv" | "txt" | "yaml" | "cbor" | "smile")): The format for the response.
You can also specify a format using the Accept
HTTP header.
If you specify both this parameter and the Accept
HTTP header, this parameter takes precedence.
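As an illustration, a sketch of a paginated SQL search; the index and field names are placeholders and fetch_size keeps each page small:
const page = await client.sql.query({
  query: 'SELECT client_ip, COUNT(*) AS events FROM "my-index" GROUP BY client_ip ORDER BY events DESC',
  fetch_size: 25,        // rows per page
  format: 'json',
  time_zone: 'UTC'
})
// if the response includes a cursor, pass it back via the cursor parameter to fetch the next page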
translate
editTranslate SQL into Elasticsearch queries.
Translate an SQL search into a search API request containing Query DSL.
It accepts the same request body parameters as the SQL search API, excluding cursor
.
client.sql.translate({ query })
Arguments
edit-
Request (object):
-
query
(string): The SQL query to run. -
fetch_size
(Optional, number): The maximum number of rows (or entries) to return in one response. -
filter
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): The Elasticsearch query DSL for additional filtering. -
time_zone
(Optional, string): The ISO-8601 time zone ID for the search.
-
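For example, a quick sketch that shows the Query DSL Elasticsearch would run for an SQL statement (the index and field names are placeholders):
const dsl = await client.sql.translate({
  query: 'SELECT name, price FROM "products" WHERE price > 100 ORDER BY price DESC',
  fetch_size: 10
})
console.log(JSON.stringify(dsl, null, 2))   // the equivalent search API request body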
ssl
editcertificates
editGet SSL certificates.
Get information about the X.509 certificates that are used to encrypt communications in the cluster. The API returns a list that includes certificates from all TLS contexts including:
- Settings for transport and HTTP interfaces
- TLS settings that are used within authentication realms
- TLS settings for remote monitoring exporters
The list includes certificates that are used for configuring trust, such as those configured in the xpack.security.transport.ssl.truststore
and xpack.security.transport.ssl.certificate_authorities
settings.
It also includes certificates that are used for configuring server identity, such as xpack.security.http.ssl.keystore
and xpack.security.http.ssl.certificate settings
.
The list does not include certificates that are sourced from the default SSL context of the Java Runtime Environment (JRE), even if those certificates are in use within Elasticsearch.
When a PKCS#11 token is configured as the truststore of the JRE, the API returns all the certificates that are included in the PKCS#11 token irrespective of whether these are used in the Elasticsearch TLS configuration.
If Elasticsearch is configured to use a keystore or truststore, the API output includes all certificates in that store, even though some of the certificates might not be in active use within the cluster.
client.ssl.certificates()
synonyms
editdelete_synonym
editDelete a synonym set.
You can only delete a synonyms set that is not in use by any index analyzer.
Synonyms sets can be used in synonym graph token filters and synonym token filters. These synonym filters can be used as part of search analyzers.
Analyzers need to be loaded when an index is restored (such as when a node starts, or the index becomes open). Even if the analyzer is not used on any field mapping, it still needs to be loaded during the index recovery phase.
If any analyzers cannot be loaded, the index becomes unavailable and the cluster status becomes red or yellow as index shards are not available. To prevent that, synonyms sets that are used in analyzers can’t be deleted. A delete request in this case will return a 400 response code.
To remove a synonyms set, you must first remove all indices that contain analyzers using it. You can migrate an index by creating a new index that does not contain the token filter with the synonyms set, and using the reindex API to copy over the index data. Once finished, you can delete the original index. When the synonyms set is no longer used in any analyzers, you will be able to delete it.
client.synonyms.deleteSynonym({ id })
Arguments
edit-
Request (object):
-
id
(string): The synonyms set identifier to delete.
-
delete_synonym_rule
editDelete a synonym rule. Delete a synonym rule from a synonym set.
client.synonyms.deleteSynonymRule({ set_id, rule_id })
Arguments
edit-
Request (object):
-
set_id
(string): The ID of the synonym set to update. -
rule_id
(string): The ID of the synonym rule to delete.
-
get_synonym
editGet a synonym set.
client.synonyms.getSynonym({ id })
Arguments
edit-
Request (object):
-
id
(string): The synonyms set identifier to retrieve. -
from
(Optional, number): The starting offset for query rules to retrieve. -
size
(Optional, number): The max number of query rules to retrieve.
-
get_synonym_rule
editGet a synonym rule. Get a synonym rule from a synonym set.
client.synonyms.getSynonymRule({ set_id, rule_id })
Arguments
edit-
Request (object):
-
set_id
(string): The ID of the synonym set to retrieve the synonym rule from. -
rule_id
(string): The ID of the synonym rule to retrieve.
-
get_synonyms_sets
editGet all synonym sets. Get a summary of all defined synonym sets.
client.synonyms.getSynonymsSets({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): The starting offset for synonyms sets to retrieve. -
size
(Optional, number): The maximum number of synonyms sets to retrieve.
-
put_synonym
editCreate or update a synonym set. Synonyms sets are limited to a maximum of 10,000 synonym rules per set. If you need to manage more synonym rules, you can create multiple synonym sets.
When an existing synonyms set is updated, the search analyzers that use the synonyms set are reloaded automatically for all indices. This is equivalent to invoking the reload search analyzers API for all indices that use the synonyms set.
client.synonyms.putSynonym({ id, synonyms_set })
Arguments
edit-
Request (object):
-
id
(string): The ID of the synonyms set to be created or updated. -
synonyms_set
({ id, synonyms } | { id, synonyms }[]): The synonym rules definitions for the synonyms set.
-
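A minimal sketch creating a small synonyms set; the set ID and rules are purely illustrative:
await client.synonyms.putSynonym({
  id: 'my-synonyms-set',                                 // placeholder set ID
  synonyms_set: [
    { id: 'greeting', synonyms: 'hello, hi, howdy' },    // rule with an explicit ID
    { synonyms: 'laptop, notebook' }                     // rule ID is generated if omitted
  ]
})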
put_synonym_rule
editCreate or update a synonym rule. Create or update a synonym rule in a synonym set.
If any of the synonym rules included is invalid, the API returns an error.
When you update a synonym rule, all analyzers using the synonyms set will be reloaded automatically to reflect the new rule.
client.synonyms.putSynonymRule({ set_id, rule_id, synonyms })
Arguments
edit-
Request (object):
-
set_id
(string): The ID of the synonym set. -
rule_id
(string): The ID of the synonym rule to be updated or created. -
synonyms
(string): The synonym rule information definition, which must be in Solr format.
-
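For example, a sketch that updates a single rule in place (the IDs are placeholders); the synonyms string must be in Solr format:
await client.synonyms.putSynonymRule({
  set_id: 'my-synonyms-set',
  rule_id: 'greeting',
  synonyms: 'hello, hi, howdy'   // Solr-format rule
})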
tasks
editcancel
editCancel a task.
The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible.
A task may continue to run for some time after it has been cancelled because it may not be able to safely stop its current activity straight away. It is also possible that Elasticsearch must complete its work on other tasks before it can process the cancellation. The get task information API will continue to list these cancelled tasks until they complete. The cancelled flag in the response indicates that the cancellation command has been processed and the task will stop as soon as possible.
To troubleshoot why a cancelled task does not complete promptly, use the get task information API with the ?detailed
parameter to identify the other tasks the system is running.
You can also use the node hot threads API to obtain detailed information about the work the system is doing instead of completing the cancelled task.
client.tasks.cancel({ ... })
Arguments
edit-
Request (object):
-
task_id
(Optional, string | number): The task identifier. -
actions
(Optional, string | string[]): A list or wildcard expression of actions that is used to limit the request. -
nodes
(Optional, string[]): A list of node IDs or names that is used to limit the request. -
parent_task_id
(Optional, string): A parent task ID that is used to limit the tasks. -
wait_for_completion
(Optional, boolean): If true, the request blocks until all found tasks are complete.
-
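As an illustration, a sketch that cancels matching tasks by action and node instead of a single task ID (the node names are placeholders):
await client.tasks.cancel({
  actions: '*reindex',            // limit the cancellation to reindex tasks
  nodes: ['node-1', 'node-2'],    // placeholder node names
  wait_for_completion: false      // return immediately; the tasks stop asynchronously
})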
get
editGet task information. Get information about a task currently running in the cluster.
The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible.
If the task identifier is not found, a 404 response code indicates that there are no resources that match the request.
client.tasks.get({ task_id })
Arguments
edit-
Request (object):
-
task_id
(string): The task identifier. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
wait_for_completion
(Optional, boolean): Iftrue
, the request blocks until the task has completed.
-
list
editGet all tasks. Get information about the tasks currently running on one or more nodes in the cluster.
The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible.
Identifying running tasks
The X-Opaque-Id header
, when provided on the HTTP request header, is returned as a header in the response as well as in the headers field of the task information.
This enables you to track certain calls or associate certain tasks with the client that started them.
For example:
curl -i -H "X-Opaque-Id: 123456" "http://localhost:9200/_tasks?group_by=parents"
The API returns the following result:
HTTP/1.1 200 OK
X-Opaque-Id: 123456
content-type: application/json; charset=UTF-8
content-length: 831
{
  "tasks" : {
    "u5lcZHqcQhu-rUoFaqDphA:45" : {
      "node" : "u5lcZHqcQhu-rUoFaqDphA",
      "id" : 45,
      "type" : "transport",
      "action" : "cluster:monitor/tasks/lists",
      "start_time_in_millis" : 1513823752749,
      "running_time_in_nanos" : 293139,
      "cancellable" : false,
      "headers" : {
        "X-Opaque-Id" : "123456"
      },
      "children" : [
        {
          "node" : "u5lcZHqcQhu-rUoFaqDphA",
          "id" : 46,
          "type" : "direct",
          "action" : "cluster:monitor/tasks/lists[n]",
          "start_time_in_millis" : 1513823752750,
          "running_time_in_nanos" : 92133,
          "cancellable" : false,
          "parent_task_id" : "u5lcZHqcQhu-rUoFaqDphA:45",
          "headers" : {
            "X-Opaque-Id" : "123456"
          }
        }
      ]
    }
  }
}
In this example, X-Opaque-Id: 123456
is the ID as a part of the response header.
The X-Opaque-Id
in the task headers
is the ID for the task that was initiated by the REST request.
The X-Opaque-Id
in the children headers
is the child task of the task that was initiated by the REST request.
client.tasks.list({ ... })
Arguments
edit-
Request (object):
-
actions
(Optional, string | string[]): A list or wildcard expression of actions used to limit the request. For example, you can use cluster:*
to retrieve all cluster-related tasks. -
detailed
(Optional, boolean): Iftrue
, the response includes detailed information about the running tasks. This information is useful to distinguish tasks from each other but is more costly to run. -
group_by
(Optional, Enum("nodes" | "parents" | "none")): A key that is used to group tasks in the response. The task lists can be grouped either by nodes or by parent tasks. -
nodes
(Optional, string | string[]): A list of node IDs or names that is used to limit the returned information. -
parent_task_id
(Optional, string): A parent task identifier that is used to limit returned information. To return all tasks, omit this parameter or use a value of-1
. If the parent task is not found, the API does not return a 404 response code. -
timeout
(Optional, string | -1 | 0): The period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its information. However, timed out nodes are included in thenode_failures
property. -
wait_for_completion
(Optional, boolean): Iftrue
, the request blocks until the operation is complete.
-
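For example, a sketch equivalent to the curl call above, returning the detailed task list grouped by parent task:
const tasks = await client.tasks.list({
  detailed: true,
  group_by: 'parents',
  actions: 'cluster:*'   // only cluster-level actions
})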
text_structure
editfind_field_structure
editFind the structure of a text field. Find the structure of a text field in an Elasticsearch index.
This API provides a starting point for extracting further information from log messages already ingested into Elasticsearch.
For example, if you have ingested data into a very simple index that has just @timestamp
and message fields, you can use this API to see what common structure exists in the message field.
The response from the API contains:
- Sample messages.
- Statistics that reveal the most common values for all fields detected within the text and basic numeric statistics for numeric fields.
- Information about the structure of the text, which is useful when you write ingest configurations to index it or similarly formatted text.
- Appropriate mappings for an Elasticsearch index, which you could use to ingest the text.
All this information can be calculated by the structure finder with no guidance. However, you can optionally override some of the decisions about the text structure by specifying one or more query parameters.
If the structure finder produces unexpected results, specify the explain
query parameter and an explanation will appear in the response.
It helps determine why the returned structure was chosen.
client.textStructure.findFieldStructure({ field, index })
Arguments
edit-
Request (object):
-
field
(string): The field that should be analyzed. -
index
(string): The name of the index that contains the analyzed field. -
column_names
(Optional, string): Ifformat
is set todelimited
, you can specify the column names in a list. If this parameter is not specified, the structure finder uses the column names from the header row of the text. If the text does not have a header row, columns are named "column1", "column2", "column3", for example. -
delimiter
(Optional, string): If you have setformat
todelimited
, you can specify the character used to delimit the values in each row. Only a single character is supported; the delimiter cannot have multiple characters. By default, the API considers the following possibilities: comma, tab, semi-colon, and pipe (|
). In this default scenario, all rows must have the same number of fields for the delimited format to be detected. If you specify a delimiter, up to 10% of the rows can have a different number of columns than the first row. -
documents_to_sample
(Optional, number): The number of documents to include in the structural analysis. The minimum value is 2. -
ecs_compatibility
(Optional, Enum("disabled" | "v1")): The mode of compatibility with ECS compliant Grok patterns. Use this parameter to specify whether to use ECS Grok patterns instead of legacy ones when the structure finder creates a Grok pattern. This setting primarily has an impact when a whole message Grok pattern such as%{CATALINALOG}
matches the input. If the structure finder identifies a common structure but has no idea of the meaning then generic field names such aspath
,ipaddress
,field1
, andfield2
are used in thegrok_pattern
output. The intention in that situation is that a user who knows the meanings will rename the fields before using them. -
explain
(Optional, boolean): Iftrue
, the response includes a field namedexplanation
, which is an array of strings that indicate how the structure finder produced its result. -
format
(Optional, Enum("delimited" | "ndjson" | "semi_structured_text" | "xml")): The high level structure of the text. By default, the API chooses the format. In this default scenario, all rows must have the same number of fields for a delimited format to be detected. If the format is set to delimited and the delimiter is not set, however, the API tolerates up to 5% of rows that have a different number of columns than the first row. -
grok_pattern
(Optional, string): If the format issemi_structured_text
, you can specify a Grok pattern that is used to extract fields from every message in the text. The name of the timestamp field in the Grok pattern must match what is specified in thetimestamp_field
parameter. If that parameter is not specified, the name of the timestamp field in the Grok pattern must match "timestamp". Ifgrok_pattern
is not specified, the structure finder creates a Grok pattern. -
quote
(Optional, string): If the format isdelimited
, you can specify the character used to quote the values in each row if they contain newlines or the delimiter character. Only a single character is supported. If this parameter is not specified, the default value is a double quote ("
). If your delimited text format does not use quoting, a workaround is to set this argument to a character that does not appear anywhere in the sample. -
should_trim_fields
(Optional, boolean): If the format isdelimited
, you can specify whether values between delimiters should have whitespace trimmed from them. If this parameter is not specified and the delimiter is pipe (|
), the default value is true. Otherwise, the default value isfalse
. -
timeout
(Optional, string | -1 | 0): The maximum amount of time that the structure analysis can take. If the analysis is still running when the timeout expires, it will be stopped. -
timestamp_field
(Optional, string): The name of the field that contains the primary timestamp of each record in the text. In particular, if the text was ingested into an index, this is the field that would be used to populate the@timestamp
field.
-
If the format is semi_structured_text
, this field must match the name of the appropriate extraction in the grok_pattern
.
Therefore, for semi-structured text, it is best not to specify this parameter unless grok_pattern
is also specified.
For structured text, if you specify this parameter, the field must exist within the text.
If this parameter is not specified, the structure finder makes a decision about which field (if any) is the primary timestamp field.
For structured text, it is not compulsory to have a timestamp in the text.
-
timestamp_format
(Optional, string): The Java time format of the timestamp field in the text.
Only a subset of Java time format letter groups are supported:
-
a
-
d
-
dd
-
EEE
-
EEEE
-
H
-
HH
-
h
-
M
-
MM
-
MMM
-
MMMM
-
mm
-
ss
-
XX
-
XXX
-
yy
-
yyyy
-
zzz
Additionally S
letter groups (fractional seconds) of length one to nine are supported providing they occur after ss
and are separated from the ss
by a period (.
), comma (,
), or colon (:
).
Spacing and punctuation are also permitted, with the exception of a question mark (?
), newline, and carriage return, together with literal text enclosed in single quotes.
For example, MM/dd HH.mm.ss,SSSSSS 'in' yyyy
is a valid override format.
One valuable use case for this parameter is when the format is semi-structured text, there are multiple timestamp formats in the text, and you know which format corresponds to the primary timestamp, but you do not want to specify the full grok_pattern
.
Another is when the timestamp format is one that the structure finder does not consider by default.
If this parameter is not specified, the structure finder chooses the best format from a built-in set.
If the special value null
is specified, the structure finder will not look for a primary timestamp in the text.
When the format is semi-structured text, this will result in the structure finder treating the text as single-line messages.
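A minimal sketch, assuming an index named 'web-logs' whose message field holds semi-structured log lines:
const structure = await client.textStructure.findFieldStructure({
  index: 'web-logs',           // placeholder index name
  field: 'message',            // the text field to analyze
  documents_to_sample: 1000,
  ecs_compatibility: 'v1',     // prefer ECS Grok patterns
  explain: true                // include the structure finder's reasoning in the response
})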
find_message_structure
editFind the structure of text messages. Find the structure of a list of text messages. The messages must contain data that is suitable to be ingested into Elasticsearch.
This API provides a starting point for ingesting data into Elasticsearch in a format that is suitable for subsequent use with other Elastic Stack functionality. Use this API rather than the find text structure API if your input text has already been split up into separate messages by some other process.
The response from the API contains:
- Sample messages.
- Statistics that reveal the most common values for all fields detected within the text and basic numeric statistics for numeric fields.
- Information about the structure of the text, which is useful when you write ingest configurations to index it or similarly formatted text. Appropriate mappings for an Elasticsearch index, which you could use to ingest the text.
All this information can be calculated by the structure finder with no guidance. However, you can optionally override some of the decisions about the text structure by specifying one or more query parameters.
If the structure finder produces unexpected results, specify the explain
query parameter and an explanation will appear in the response.
It helps determine why the returned structure was chosen.
client.textStructure.findMessageStructure({ messages })
Arguments
edit-
Request (object):
-
messages
(string[]): The list of messages you want to analyze. -
column_names
(Optional, string): If the format isdelimited
, you can specify the column names in a list. If this parameter is not specified, the structure finder uses the column names from the header row of the text. If the text does not have a header row, columns are named "column1", "column2", "column3", for example. -
delimiter
(Optional, string): If the format isdelimited
, you can specify the character used to delimit the values in each row. Only a single character is supported; the delimiter cannot have multiple characters. By default, the API considers the following possibilities: comma, tab, semi-colon, and pipe (|
). In this default scenario, all rows must have the same number of fields for the delimited format to be detected. If you specify a delimiter, up to 10% of the rows can have a different number of columns than the first row. -
ecs_compatibility
(Optional, Enum("disabled" | "v1")): The mode of compatibility with ECS compliant Grok patterns. Use this parameter to specify whether to use ECS Grok patterns instead of legacy ones when the structure finder creates a Grok pattern. This setting primarily has an impact when a whole message Grok pattern such as%{CATALINALOG}
matches the input. If the structure finder identifies a common structure but has no idea of the meaning, then generic field names such aspath
,ipaddress
,field1
, andfield2
are used in thegrok_pattern
output, with the intention that a user who knows the meanings renames these fields before using them. -
explain
(Optional, boolean): If this parameter is set to true, the response includes a field namedexplanation
, which is an array of strings that indicate how the structure finder produced its result. -
format
(Optional, Enum("delimited" | "ndjson" | "semi_structured_text" | "xml")): The high level structure of the text. By default, the API chooses the format. In this default scenario, all rows must have the same number of fields for a delimited format to be detected. If the format isdelimited
and the delimiter is not set, however, the API tolerates up to 5% of rows that have a different number of columns than the first row. -
grok_pattern
(Optional, string): If the format issemi_structured_text
, you can specify a Grok pattern that is used to extract fields from every message in the text. The name of the timestamp field in the Grok pattern must match what is specified in thetimestamp_field
parameter. If that parameter is not specified, the name of the timestamp field in the Grok pattern must match "timestamp". Ifgrok_pattern
is not specified, the structure finder creates a Grok pattern. -
quote
(Optional, string): If the format isdelimited
, you can specify the character used to quote the values in each row if they contain newlines or the delimiter character. Only a single character is supported. If this parameter is not specified, the default value is a double quote ("
). If your delimited text format does not use quoting, a workaround is to set this argument to a character that does not appear anywhere in the sample. -
should_trim_fields
(Optional, boolean): If the format isdelimited
, you can specify whether values between delimiters should have whitespace trimmed from them. If this parameter is not specified and the delimiter is pipe (|
), the default value is true. Otherwise, the default value isfalse
. -
timeout
(Optional, string | -1 | 0): The maximum amount of time that the structure analysis can take. If the analysis is still running when the timeout expires, it will be stopped. -
timestamp_field
(Optional, string): The name of the field that contains the primary timestamp of each record in the text. In particular, if the text was ingested into an index, this is the field that would be used to populate the@timestamp
field.
-
If the format is semi_structured_text
, this field must match the name of the appropriate extraction in the grok_pattern
.
Therefore, for semi-structured text, it is best not to specify this parameter unless grok_pattern
is also specified.
For structured text, if you specify this parameter, the field must exist within the text.
If this parameter is not specified, the structure finder makes a decision about which field (if any) is the primary timestamp field.
For structured text, it is not compulsory to have a timestamp in the text.
-
timestamp_format
(Optional, string): The Java time format of the timestamp field in the text.
Only a subset of Java time format letter groups are supported:
-
a
-
d
-
dd
-
EEE
-
EEEE
-
H
-
HH
-
h
-
M
-
MM
-
MMM
-
MMMM
-
mm
-
ss
-
XX
-
XXX
-
yy
-
yyyy
-
zzz
Additionally S
letter groups (fractional seconds) of length one to nine are supported providing they occur after ss
and are separated from the ss
by a period (.
), comma (,
), or colon (:
).
Spacing and punctuation are also permitted, with the exception of a question mark (?
), newline, and carriage return, together with literal text enclosed in single quotes.
For example, MM/dd HH.mm.ss,SSSSSS 'in' yyyy
is a valid override format.
One valuable use case for this parameter is when the format is semi-structured text, there are multiple timestamp formats in the text, and you know which format corresponds to the primary timestamp, but you do not want to specify the full grok_pattern
.
Another is when the timestamp format is one that the structure finder does not consider by default.
If this parameter is not specified, the structure finder chooses the best format from a built-in set.
If the special value null
is specified, the structure finder will not look for a primary timestamp in the text.
When the format is semi-structured text, this will result in the structure finder treating the text as single-line messages.
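For example, a sketch that analyzes a handful of already-split log lines (the messages are invented):
const result = await client.textStructure.findMessageStructure({
  messages: [
    '[2024-01-01T10:00:00] INFO  service started',
    '[2024-01-01T10:00:05] WARN  disk usage at 85%',
    '[2024-01-01T10:00:09] ERROR request failed: timeout'
  ],
  ecs_compatibility: 'v1'
})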
find_structure
editFind the structure of a text file. The text file must contain data that is suitable to be ingested into Elasticsearch.
This API provides a starting point for ingesting data into Elasticsearch in a format that is suitable for subsequent use with other Elastic Stack functionality. Unlike other Elasticsearch endpoints, the data that is posted to this endpoint does not need to be UTF-8 encoded and in JSON format. It must, however, be text; binary text formats are not currently supported. The size is limited to the Elasticsearch HTTP receive buffer size, which defaults to 100 Mb.
The response from the API contains:
- A couple of messages from the beginning of the text.
- Statistics that reveal the most common values for all fields detected within the text and basic numeric statistics for numeric fields.
- Information about the structure of the text, which is useful when you write ingest configurations to index it or similarly formatted text.
- Appropriate mappings for an Elasticsearch index, which you could use to ingest the text.
All this information can be calculated by the structure finder with no guidance. However, you can optionally override some of the decisions about the text structure by specifying one or more query parameters.
client.textStructure.findStructure({ ... })
Arguments
edit-
Request (object):
-
text_files
(Optional, TJsonDocument[]) -
charset
(Optional, string): The text’s character set. It must be a character set that is supported by the JVM that Elasticsearch uses. For example,UTF-8
,UTF-16LE
,windows-1252
, orEUC-JP
. If this parameter is not specified, the structure finder chooses an appropriate character set. -
column_names
(Optional, string): If you have set format todelimited
, you can specify the column names in a list. If this parameter is not specified, the structure finder uses the column names from the header row of the text. If the text does not have a header row, columns are named "column1", "column2", "column3", for example. -
delimiter
(Optional, string): If you have setformat
todelimited
, you can specify the character used to delimit the values in each row. Only a single character is supported; the delimiter cannot have multiple characters. By default, the API considers the following possibilities: comma, tab, semi-colon, and pipe (|
). In this default scenario, all rows must have the same number of fields for the delimited format to be detected. If you specify a delimiter, up to 10% of the rows can have a different number of columns than the first row. -
ecs_compatibility
(Optional, string): The mode of compatibility with ECS compliant Grok patterns. Use this parameter to specify whether to use ECS Grok patterns instead of legacy ones when the structure finder creates a Grok pattern. Valid values aredisabled
andv1
. This setting primarily has an impact when a whole message Grok pattern such as%{CATALINALOG}
matches the input. If the structure finder identifies a common structure but has no idea of the meaning, then generic field names such aspath
,ipaddress
,field1
, andfield2
are used in thegrok_pattern
output, with the intention that a user who knows the meanings renames these fields before using them. -
explain
(Optional, boolean): If this parameter is set totrue
, the response includes a field named explanation, which is an array of strings that indicate how the structure finder produced its result. If the structure finder produces unexpected results for some text, use this query parameter to help you determine why the returned structure was chosen. -
format
(Optional, string): The high level structure of the text. Valid values arendjson
,xml
,delimited
, andsemi_structured_text
. By default, the API chooses the format. In this default scenario, all rows must have the same number of fields for a delimited format to be detected. If the format is set todelimited
and the delimiter is not set, however, the API tolerates up to 5% of rows that have a different number of columns than the first row. -
grok_pattern
(Optional, string): If you have setformat
tosemi_structured_text
, you can specify a Grok pattern that is used to extract fields from every message in the text. The name of the timestamp field in the Grok pattern must match what is specified in thetimestamp_field
parameter. If that parameter is not specified, the name of the timestamp field in the Grok pattern must match "timestamp". Ifgrok_pattern
is not specified, the structure finder creates a Grok pattern. -
has_header_row
(Optional, boolean): If you have setformat
todelimited
, you can use this parameter to indicate whether the column names are in the first row of the text. If this parameter is not specified, the structure finder guesses based on the similarity of the first row of the text to other rows. -
line_merge_size_limit
(Optional, number): The maximum number of characters in a message when lines are merged to form messages while analyzing semi-structured text. If you have extremely long messages you may need to increase this, but be aware that this may lead to very long processing times if the way to group lines into messages is misdetected. -
lines_to_sample
(Optional, number): The number of lines to include in the structural analysis, starting from the beginning of the text. The minimum is 2. If the value of this parameter is greater than the number of lines in the text, the analysis proceeds (as long as there are at least two lines in the text) for all of the lines.
-
The number of lines and the variation of the lines affects the speed of the analysis.
For example, if you upload text where the first 1000 lines are all variations on the same message, the analysis will find more commonality than would be seen with a bigger sample.
If possible, however, it is more efficient to upload sample text with more variety in the first 1000 lines than to request analysis of 100000 lines to achieve some variety.
quote
(Optional, string): If you have set format
to delimited
, you can specify the character used to quote the values in each row if they contain newlines or the delimiter character.
Only a single character is supported.
If this parameter is not specified, the default value is a double quote ("
).
If your delimited text format does not use quoting, a workaround is to set this argument to a character that does not appear anywhere in the sample.
should_trim_fields
(Optional, boolean): If you have set format
to delimited
, you can specify whether values between delimiters should have whitespace trimmed from them.
If this parameter is not specified and the delimiter is pipe (|
), the default value is true
.
Otherwise, the default value is false
.
timeout
(Optional, string | -1 | 0): The maximum amount of time that the structure analysis can take.
If the analysis is still running when the timeout expires then it will be stopped.
timestamp_field
(Optional, string): The name of the field that contains the primary timestamp of each record in the text.
In particular, if the text were ingested into an index, this is the field that would be used to populate the @timestamp
field.
If the format
is semi_structured_text
, this field must match the name of the appropriate extraction in the grok_pattern
.
Therefore, for semi-structured text, it is best not to specify this parameter unless grok_pattern
is also specified.
For structured text, if you specify this parameter, the field must exist within the text.
If this parameter is not specified, the structure finder makes a decision about which field (if any) is the primary timestamp field.
For structured text, it is not compulsory to have a timestamp in the text.
-
timestamp_format
(Optional, string): The Java time format of the timestamp field in the text.
Only a subset of Java time format letter groups are supported:
-
a
-
d
-
dd
-
EEE
-
EEEE
-
H
-
HH
-
h
-
M
-
MM
-
MMM
-
MMMM
-
mm
-
ss
-
XX
-
XXX
-
yy
-
yyyy
-
zzz
Additionally S
letter groups (fractional seconds) of length one to nine are supported providing they occur after ss
and are separated from the ss
by a .
, ,
or :
.
Spacing and punctuation are also permitted with the exception of ?
, newline and carriage return, together with literal text enclosed in single quotes.
For example, MM/dd HH.mm.ss,SSSSSS 'in' yyyy
is a valid override format.
One valuable use case for this parameter is when the format is semi-structured text, there are multiple timestamp formats in the text, and you know which format corresponds to the primary timestamp, but you do not want to specify the full grok_pattern
.
Another is when the timestamp format is one that the structure finder does not consider by default.
If this parameter is not specified, the structure finder chooses the best format from a built-in set.
If the special value null
is specified the structure finder will not look for a primary timestamp in the text.
When the format is semi-structured text this will result in the structure finder treating the text as single-line messages.
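A sketch posting a few NDJSON-style documents for analysis; the documents are invented and passed as an array of JSON objects in the request body:
const analysis = await client.textStructure.findStructure({
  text_files: [
    { name: 'Leviathan Wakes', author: 'James S.A. Corey', release_date: '2011-06-02' },
    { name: 'Hyperion', author: 'Dan Simmons', release_date: '1989-05-26' },
    { name: 'Dune', author: 'Frank Herbert', release_date: '1965-06-01' }
  ]
})
console.log(analysis.mappings)   // suggested mappings for an index that could hold this data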
test_grok_pattern
editTest a Grok pattern. Test a Grok pattern on one or more lines of text. The API indicates whether the lines match the pattern together with the offsets and lengths of the matched substrings.
client.textStructure.testGrokPattern({ grok_pattern, text })
Arguments
edit-
Request (object):
-
grok_pattern
(string): The Grok pattern to run on the text. -
text
(string[]): The lines of text to run the Grok pattern on. -
ecs_compatibility
(Optional, string): The mode of compatibility with ECS compliant Grok patterns. Use this parameter to specify whether to use ECS Grok patterns instead of legacy ones when the structure finder creates a Grok pattern. Valid values aredisabled
andv1
.
-
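For example, a sketch checking whether a Grok pattern matches a sample line (both pattern and text are illustrative):
const check = await client.textStructure.testGrokPattern({
  grok_pattern: '%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}',
  text: ['2024-01-01T10:00:00 INFO service started']
})
// the response reports, per line, whether the pattern matched and the offsets and lengths of the captured fields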
transform
editdelete_transform
editDelete a transform.
client.transform.deleteTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
force
(Optional, boolean): If this value is false, the transform must be stopped before it can be deleted. If true, the transform is deleted regardless of its current state. -
delete_dest_index
(Optional, boolean): If this value is true, the destination index is deleted together with the transform. If false, the destination index will not be deleted. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
get_node_stats
editRetrieves transform usage information for transform nodes.
client.transform.getNodeStats()
get_transform
editGet transforms. Get configuration information for transforms.
client.transform.getTransform({ ... })
Arguments
edit-
Request (object):
-
transform_id
(Optional, string | string[]): Identifier for the transform. It can be a transform identifier or a wildcard expression. You can get information for all transforms by using_all
, by specifying*
as the<transform_id>
, or by omitting the<transform_id>
. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:- Contains wildcard expressions and there are no transforms that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
-
If this parameter is false, the request returns a 404 status code when
there are no matches or only partial matches.
-
from
(Optional, number): Skips the specified number of transforms. -
size
(Optional, number): Specifies the maximum number of transforms to obtain. -
exclude_generated
(Optional, boolean): Excludes fields that were automatically added when creating the
transform. This allows the configuration to be in an acceptable format to
be retrieved and then added to another cluster.
get_transform_stats
editGet transform stats.
Get usage information for transforms.
client.transform.getTransformStats({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string | string[]): Identifier for the transform. It can be a transform identifier or a wildcard expression. You can get information for all transforms by using_all
, by specifying*
as the<transform_id>
, or by omitting the<transform_id>
. -
allow_no_match
(Optional, boolean): Specifies what to do when the request:- Contains wildcard expressions and there are no transforms that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
-
If this parameter is false, the request returns a 404 status code when
there are no matches or only partial matches.
-
from
(Optional, number): Skips the specified number of transforms. -
size
(Optional, number): Specifies the maximum number of transforms to obtain. -
timeout
(Optional, string | -1 | 0): Controls the time to wait for the stats
preview_transform
editPreview a transform. Generates a preview of the results that you will get when you create a transform with the same configuration.
It returns a maximum of 100 results. The calculations are based on all the current data in the source index. It also generates a list of mappings and settings for the destination index. These values are determined based on the field types of the source index and the transform aggregations.
client.transform.previewTransform({ ... })
Arguments
edit-
Request (object):
-
transform_id
(Optional, string): Identifier for the transform to preview. If you specify this path parameter, you cannot provide transform configuration details in the request body. -
dest
(Optional, { index, op_type, pipeline, routing, version_type }): The destination for the transform. -
description
(Optional, string): Free text description of the transform. -
frequency
(Optional, string | -1 | 0): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. -
pivot
(Optional, { aggregations, group_by }): The pivot method transforms the data by aggregating and grouping it. These objects define the group by fields and the aggregation to reduce the data. -
source
(Optional, { index, query, remote, size, slice, sort, _source, runtime_mappings }): The source of the data for the transform. -
settings
(Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended }): Defines optional transform settings. -
sync
(Optional, { time }): Defines the properties transforms require to run continuously. -
retention_policy
(Optional, { time }): Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. -
latest
(Optional, { sort, unique_key }): The latest method transforms the data by finding the latest document for each unique key. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
put_transform
editCreate a transform. Creates a transform.
A transform copies data from source indices, transforms it, and persists it into an entity-centric destination index. You can also think of the destination index as a two-dimensional tabular data structure (known as a data frame). The ID for each document in the data frame is generated from a hash of the entity, so there is a unique row per entity.
You must choose either the latest or pivot method for your transform; you cannot use both in a single transform. If
you choose to use the pivot method for your transform, the entities are defined by the set of group_by
fields in
the pivot object. If you choose to use the latest method, the entities are defined by the unique_key
field values
in the latest object.
You must have create_index
, index
, and read
privileges on the destination index and read
and
view_index_metadata
privileges on the source indices. When Elasticsearch security features are enabled, the
transform remembers which roles the user that created it had at the time of creation and uses those same roles. If
those roles do not have the required privileges on the source and destination indices, the transform fails when it
attempts unauthorized operations.
You must use Kibana or this API to create a transform. Do not add a transform directly into any
.transform-internal*
indices using the Elasticsearch index API. If Elasticsearch security features are enabled, do
not give users any privileges on .transform-internal*
indices. If you used transforms prior to 7.5, also do not
give users any privileges on .data-frame-internal*
indices.
client.transform.putTransform({ transform_id, dest, source })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It has a 64 character limit and must start and end with alphanumeric characters. -
dest
({ index, op_type, pipeline, routing, version_type }): The destination for the transform. -
source
({ index, query, remote, size, slice, sort, _source, runtime_mappings }): The source of the data for the transform. -
description
(Optional, string): Free text description of the transform. -
frequency
(Optional, string | -1 | 0): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is1s
and the maximum is1h
. -
latest
(Optional, { sort, unique_key }): The latest method transforms the data by finding the latest document for each unique key. -
_meta
(Optional, Record<string, User-defined value>): Defines optional transform metadata. -
pivot
(Optional, { aggregations, group_by }): The pivot method transforms the data by aggregating and grouping it. These objects define the group by fields and the aggregation to reduce the data. -
retention_policy
(Optional, { time }): Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. -
settings
(Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended }): Defines optional transform settings. -
sync
(Optional, { time }): Defines the properties transforms require to run continuously. -
defer_validation
(Optional, boolean): When the transform is created, a series of validations occur to ensure its success. For example, there is a check for the existence of the source indices and a check that the destination index is not part of the source index pattern. You can use this parameter to skip the checks, for example when the source index does not exist until after the transform is created. The validations are always run when you start the transform, however, with the exception of privilege checks. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
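A minimal pivot-transform sketch; the source and destination index names, group-by field, and aggregation are placeholders chosen for illustration:
await client.transform.putTransform({
  transform_id: 'orders-by-customer',
  source: { index: 'orders' },                    // placeholder source index
  dest: { index: 'orders-by-customer-dest' },     // placeholder destination index
  pivot: {
    group_by: { customer_id: { terms: { field: 'customer_id' } } },
    aggregations: { total_spent: { sum: { field: 'order_total' } } }
  },
  sync: { time: { field: 'order_date', delay: '60s' } },   // run continuously on new data
  frequency: '5m',
  description: 'Total spend per customer'
})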
reset_transform
editReset a transform.
Before you can reset it, you must stop it; alternatively, use the force
query parameter.
If the destination index was created by the transform, it is deleted.
client.transform.resetTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It has a 64 character limit and must start and end with alphanumeric characters. -
force
(Optional, boolean): If this value istrue
, the transform is reset regardless of its current state. If it’sfalse
, the transform must be stopped before it can be reset. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
schedule_now_transform
editSchedule a transform to start now.
Instantly run a transform to process data.
If you run this API, the transform will process the new data instantly,
without waiting for the configured frequency interval. After the API is called,
the transform will be processed again at now + frequency
unless the API
is called again in the meantime.
client.transform.scheduleNowTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
timeout
(Optional, string | -1 | 0): Controls the time to wait for the scheduling to take place
-
start_transform
editStart a transform.
When you start a transform, it creates the destination index if it does not already exist. The number_of_shards
is
set to 1
and the auto_expand_replicas
is set to 0-1
. If it is a pivot transform, it deduces the mapping
definitions for the destination index from the source indices and the transform aggregations. If fields in the
destination index are derived from scripts (as in the case of scripted_metric
or bucket_script
aggregations),
the transform uses dynamic mappings unless an index template exists. If it is a latest transform, it does not deduce
mapping definitions; it uses dynamic mappings. To use explicit mappings, create the destination index before you
start the transform. Alternatively, you can create an index template, though it does not affect the deduced mappings
in a pivot transform.
When the transform starts, a series of validations occur to ensure its success. If you deferred validation when you created the transform, they occur when you start the transform—with the exception of privilege checks. When Elasticsearch security features are enabled, the transform remembers which roles the user that created it had at the time of creation and uses those same roles. If those roles do not have the required privileges on the source and destination indices, the transform fails when it attempts unauthorized operations.
client.transform.startTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. -
from
(Optional, string): Restricts the set of transformed entities to those changed after this time. Relative times like now-30d are supported. Only applicable for continuous transforms.
-
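And a short sketch starting the transform created in the earlier example, bounded by a timeout:
await client.transform.startTransform({
  transform_id: 'orders-by-customer',   // placeholder ID from the put_transform sketch above
  timeout: '30s'
})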
stop_transform
editStop transforms. Stops one or more transforms.
client.transform.stopTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. To stop multiple transforms, use a list or a wildcard expression. To stop all transforms, use_all
or*
as the identifier. -
allow_no_match
(Optional, boolean): Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If it is true, the API returns a successful acknowledgement message when there are no matches. When there are only partial matches, the API stops the appropriate transforms. If it is false, the request returns a 404 status code when there are no matches or only partial matches. -
force
(Optional, boolean): If it is true, the API forcefully stops the transforms. -
timeout
(Optional, string | -1 | 0): Period to wait for a response when wait_for_completion is true. If no response is received before the timeout expires, the request returns a timeout exception. However, the request continues processing and eventually moves the transform to a STOPPED state. -
wait_for_checkpoint
(Optional, boolean): If it is true, the transform does not completely stop until the current checkpoint is completed. If it is false, the transform stops as soon as possible. -
wait_for_completion
(Optional, boolean): If it is true, the API blocks until the indexer state completely stops. If it is false, the API returns immediately and the indexer is stopped asynchronously in the background.
-
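A minimal sketch of stopping transforms by wildcard; the pattern is illustrative:
// Stop all matching transforms, letting the current checkpoint finish and blocking until the indexer stops.
await client.transform.stopTransform({
  transform_id: 'ecommerce-*',
  allow_no_match: true,
  wait_for_checkpoint: true,
  wait_for_completion: true,
  timeout: '30s'
})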
update_transform
editUpdate a transform. Updates certain properties of a transform.
All updated properties except description
do not take effect until after the transform starts the next checkpoint,
which ensures data consistency within each checkpoint. To use this API, you must have read
and view_index_metadata
privileges for the source indices. You must also have index
and read
privileges for the destination index. When
Elasticsearch security features are enabled, the transform remembers which roles the user who updated it had at the
time of update and runs with those privileges.
client.transform.updateTransform({ transform_id })
Arguments
edit-
Request (object):
-
transform_id
(string): Identifier for the transform. -
dest
(Optional, { index, op_type, pipeline, routing, version_type }): The destination for the transform. -
description
(Optional, string): Free text description of the transform. -
frequency
(Optional, string | -1 | 0): The interval between checks for changes in the source indices when the transform is running continuously. Also determines the retry interval in the event of transient failures while the transform is searching or indexing. The minimum value is 1s and the maximum is 1h. -
_meta
(Optional, Record<string, User-defined value>): Defines optional transform metadata. -
source
(Optional, { index, query, remote, size, slice, sort, _source, runtime_mappings }): The source of the data for the transform. -
settings
(Optional, { align_checkpoints, dates_as_epoch_millis, deduce_mappings, docs_per_second, max_page_search_size, unattended }): Defines optional transform settings. -
sync
(Optional, { time }): Defines the properties transforms require to run continuously. -
retention_policy
(Optional, { time } | null): Defines a retention policy for the transform. Data that meets the defined criteria is deleted from the destination index. -
defer_validation
(Optional, boolean): When true, deferrable validations are not run. This behavior may be desired if the source index does not exist until after the transform is created. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
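A minimal sketch of updating a transform; the transform name, description, and retention policy fields are illustrative assumptions:
// Change the description, check interval, and retention policy of an existing transform.
await client.transform.updateTransform({
  transform_id: 'ecommerce-transform',
  description: 'Hourly rollup of e-commerce orders',
  frequency: '5m',
  retention_policy: {
    time: { field: 'order_date', max_age: '90d' } // field and max_age are example values
  }
})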
upgrade_transforms
editUpgrade all transforms.
Transforms are compatible across minor versions and between supported major versions. However, over time, the format of transform configuration information may change. This API identifies transforms that have a legacy configuration format and upgrades them to the latest version. It also cleans up the internal data structures that store the transform state and checkpoints. The upgrade does not affect the source and destination indices. The upgrade also does not affect the roles that transforms use when Elasticsearch security features are enabled; the role used to read source data and write to the destination index remains unchanged.
If a transform upgrade step fails, the upgrade stops and an error is returned about the underlying issue. Resolve the issue and re-run the process. A summary is returned when the upgrade is finished.
To ensure continuous transforms remain running during a major version upgrade of the cluster (for example, from 7.16 to 8.0), it is recommended to upgrade transforms before upgrading the cluster. You may also want to take a recent cluster backup prior to the upgrade.
client.transform.upgradeTransforms({ ... })
Arguments
edit-
Request (object):
-
dry_run
(Optional, boolean): When true, the request checks for updates but does not run them. -
timeout
(Optional, string | -1 | 0): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
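A minimal sketch that first checks for pending upgrades with dry_run and then applies them:
// Report which transforms need a configuration upgrade without changing anything.
const pending = await client.transform.upgradeTransforms({ dry_run: true, timeout: '60s' })
console.log(pending)

// Apply the upgrade.
await client.transform.upgradeTransforms({ dry_run: false, timeout: '60s' })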
watcher
editack_watch
editAcknowledge a watch. Acknowledging a watch enables you to manually throttle the execution of the watch’s actions.
The acknowledgement state of an action is stored in the status.actions.<id>.ack.state
structure.
If the specified watch is currently being executed, this API will return an error. The reason for this behavior is to prevent overwriting the watch status from a watch execution.
Acknowledging an action throttles further executions of that action until its ack.state
is reset to awaits_successful_execution
.
This happens when the condition of the watch is not met (the condition evaluates to false).
client.watcher.ackWatch({ watch_id })
Arguments
edit-
Request (object):
-
watch_id
(string): The watch identifier. -
action_id
(Optional, string | string[]): A list of the action identifiers to acknowledge. If you omit this parameter, all of the actions of the watch are acknowledged.
-
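A minimal sketch; the watch and action identifiers are hypothetical:
// Acknowledge a single action of a watch. Omit action_id to acknowledge all of its actions.
await client.watcher.ackWatch({
  watch_id: 'cluster_health_watch',
  action_id: ['email_admin']
})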
activate_watch
editActivate a watch. A watch can be either active or inactive.
client.watcher.activateWatch({ watch_id })
Arguments
edit-
Request (object):
-
watch_id
(string): The watch identifier.
-
deactivate_watch
editDeactivate a watch. A watch can be either active or inactive.
client.watcher.deactivateWatch({ watch_id })
Arguments
edit-
Request (object):
-
watch_id
(string): The watch identifier.
-
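A minimal sketch of toggling a watch's state; the watch identifier is hypothetical:
// Temporarily disable a watch, then re-enable it later.
await client.watcher.deactivateWatch({ watch_id: 'cluster_health_watch' })
await client.watcher.activateWatch({ watch_id: 'cluster_health_watch' })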
delete_watch
editDelete a watch.
When the watch is removed, the document representing the watch in the .watches
index is gone and it will never be run again.
Deleting a watch does not delete any watch execution records related to this watch from the watch history.
Deleting a watch must be done only by using this API.
Do not delete the watch directly from the .watches
index using the Elasticsearch delete document API.
When Elasticsearch security features are enabled, make sure no write privileges are granted to anyone for the .watches
index.
client.watcher.deleteWatch({ id })
Arguments
edit-
Request (object):
-
id
(string): The watch identifier.
-
execute_watch
editRun a watch. This API can be used to force execution of the watch outside of its triggering logic or to simulate the watch execution for debugging purposes.
For testing and debugging purposes, you also have fine-grained control on how the watch runs. You can run the watch without running all of its actions or alternatively by simulating them. You can also force execution by ignoring the watch condition and control whether a watch record would be written to the watch history after it runs.
You can use the run watch API to run watches that are not yet registered by specifying the watch definition inline. This serves as a great tool for testing and debugging your watches prior to adding them to Watcher.
When Elasticsearch security features are enabled on your cluster, watches are run with the privileges of the user that stored the watches.
If your user is allowed to read index a
, but not index b
, then the exact same set of rules will apply during execution of a watch.
When using the run watch API, the authorization data of the user that called the API will be used as a base, instead of the authorization data of the user who stored the watch.
client.watcher.executeWatch({ ... })
Arguments
edit-
Request (object):
-
id
(Optional, string): The watch identifier. -
action_modes
(Optional, Record<string, Enum("simulate" | "force_simulate" | "execute" | "force_execute" | "skip")>): Determines how to handle the watch actions as part of the watch execution. -
alternative_input
(Optional, Record<string, User-defined value>): When present, the watch uses this object as a payload instead of executing its own input. -
ignore_condition
(Optional, boolean): When set to true, the watch execution uses the always condition. This can also be specified as an HTTP parameter. -
record_execution
(Optional, boolean): When set to true, the watch record representing the watch execution result is persisted to the .watcher-history index for the current time. In addition, the status of the watch is updated, possibly throttling subsequent runs. This can also be specified as an HTTP parameter. -
simulated_actions
(Optional, { actions, all, use_all }) -
trigger_data
(Optional, { scheduled_time, triggered_time }): This structure is parsed as the data of the trigger event that will be used during the watch execution. -
watch
(Optional, { actions, condition, input, metadata, status, throttle_period, throttle_period_in_millis, transform, trigger }): When present, this watch is used instead of the one specified in the request. This watch is not persisted to the index and record_execution
cannot be set. -
debug
(Optional, boolean): Defines whether the watch runs in debug mode.
-
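A minimal sketch of a debugging run; the watch and action names are hypothetical:
// Run a stored watch once, ignoring its condition, simulating one action,
// and skipping the watch history record.
const result = await client.watcher.executeWatch({
  id: 'cluster_health_watch',
  ignore_condition: true,
  record_execution: false,
  action_modes: { email_admin: 'force_simulate' }
})
console.log(result)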
get_settings
editGet Watcher index settings.
Get settings for the Watcher internal index (.watches
).
Only a subset of settings is shown, for example index.auto_expand_replicas
and index.number_of_replicas
.
client.watcher.getSettings({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
-
get_watch
editGet a watch.
client.watcher.getWatch({ id })
Arguments
edit-
Request (object):
-
id
(string): The watch identifier.
-
put_watch
editCreate or update a watch.
When a watch is registered, a new document that represents the watch is added to the .watches
index and its trigger is immediately registered with the relevant trigger engine.
Typically for the schedule
trigger, the scheduler is the trigger engine.
You must use Kibana or this API to create a watch.
Do not add a watch directly to the .watches
index by using the Elasticsearch index API.
If Elasticsearch security features are enabled, do not give users write privileges on the .watches
index.
When you add a watch you can also define its initial active state by setting the active parameter.
When Elasticsearch security features are enabled, your watch can index or search only on indices for which the user that stored the watch has privileges.
If the user is able to read index a
, but not index b
, the same will apply when the watch runs.
client.watcher.putWatch({ id })
Arguments
edit-
Request (object):
-
id
(string): The identifier for the watch. -
actions
(Optional, Record<string, { add_backing_index, remove_backing_index }>): The list of actions that will be run if the condition matches. -
condition
(Optional, { always, array_compare, compare, never, script }): The condition that defines if the actions should be run. -
input
(Optional, { chain, http, search, simple }): The input that defines the input that loads the data for the watch. -
metadata
(Optional, Record<string, User-defined value>): Metadata JSON that will be copied into the history entries. -
throttle_period
(Optional, string | -1 | 0): The minimum time between actions being run. The default is 5 seconds. This default can be changed in the config file with the setting xpack.watcher.throttle.period.default_period. If both this value and the throttle_period_in_millis parameter are specified, Watcher uses the last parameter included in the request. -
throttle_period_in_millis
(Optional, Unit): Minimum time in milliseconds between actions being run. Defaults to 5000. If both this value and the throttle_period parameter are specified, Watcher uses the last parameter included in the request. -
transform
(Optional, { chain, script, search }): The transform that processes the watch payload to prepare it for the watch actions. -
trigger
(Optional, { schedule }): The trigger that defines when the watch should run. -
active
(Optional, boolean): The initial state of the watch. The default value is true, which means the watch is active by default. -
if_primary_term
(Optional, number): Only update the watch if the last operation that changed the watch has the specified primary term. -
if_seq_no
(Optional, number): Only update the watch if the last operation that changed the watch has the specified sequence number. -
version
(Optional, number): Explicit version number for concurrency control.
-
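A minimal sketch of creating a watch; the index name, query, threshold, and action are illustrative, and the trigger, input, condition, and actions objects follow Watcher's schedule, search, compare, and logging constructs rather than anything specific to this reference:
// Create an active watch that searches an example index every 10 minutes and logs when hits are found.
await client.watcher.putWatch({
  id: 'errors_watch',
  active: true,
  trigger: { schedule: { interval: '10m' } },
  input: {
    search: {
      request: {
        indices: ['logs-app'], // illustrative index name
        body: { query: { match: { level: 'error' } } }
      }
    }
  },
  condition: { compare: { 'ctx.payload.hits.total': { gt: 0 } } },
  actions: {
    log_errors: { logging: { text: 'Found {{ctx.payload.hits.total}} error documents' } }
  }
})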
query_watches
editQuery watches. Get all registered watches in a paginated manner and optionally filter watches by a query.
Note that only the _id
and metadata.*
fields are queryable or sortable.
client.watcher.queryWatches({ ... })
Arguments
edit-
Request (object):
-
from
(Optional, number): The offset from the first result to fetch. It must be non-negative. -
size
(Optional, number): The number of hits to return. It must be non-negative. -
query
(Optional, { bool, boosting, common, combined_fields, constant_score, dis_max, distance_feature, exists, function_score, fuzzy, geo_bounding_box, geo_distance, geo_grid, geo_polygon, geo_shape, has_child, has_parent, ids, intervals, knn, match, match_all, match_bool_prefix, match_none, match_phrase, match_phrase_prefix, more_like_this, multi_match, nested, parent_id, percolate, pinned, prefix, query_string, range, rank_feature, regexp, rule, script, script_score, semantic, shape, simple_query_string, span_containing, span_field_masking, span_first, span_multi, span_near, span_not, span_or, span_term, span_within, sparse_vector, term, terms, terms_set, text_expansion, weighted_tokens, wildcard, wrapper, type }): A query that filters the watches to be returned. -
sort
(Optional, string | { _score, _doc, _geo_distance, _script } | string | { _score, _doc, _geo_distance, _script }[]): One or more fields used to sort the search results. -
search_after
(Optional, number | number | string | boolean | null | User-defined value[]): Retrieve the next page of hits using a set of sort values from the previous page.
-
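A minimal sketch of paging through watches; the metadata field is an illustrative assumption:
// Fetch the first page of watches whose metadata matches, sorted by identifier.
const page = await client.watcher.queryWatches({
  from: 0,
  size: 25,
  query: { term: { 'metadata.team': 'ops' } },
  sort: ['_id']
})
console.log(page)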
start
editStart the watch service. Start the Watcher service if it is not already running.
client.watcher.start({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): Period to wait for a connection to the master node.
-
stats
editGet Watcher statistics. This API always returns basic metrics. You can retrieve more metrics by using the metric parameter.
client.watcher.stats({ ... })
Arguments
edit-
Request (object):
-
metric
(Optional, Enum("_all" | "queued_watches" | "current_watches" | "pending_watches") | Enum("_all" | "queued_watches" | "current_watches" | "pending_watches")[]): Defines which additional metrics are included in the response. -
emit_stacktraces
(Optional, boolean): Defines whether stack traces are generated for each watch that is running.
-
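A minimal sketch requesting the additional metrics:
// Include currently running and queued watches alongside the basic metrics.
const stats = await client.watcher.stats({
  metric: ['current_watches', 'queued_watches'],
  emit_stacktraces: false
})
console.log(stats)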
stop
editStop the watch service. Stop the Watcher service if it is running.
client.watcher.stop({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, set it to -1.
-
update_settings
editUpdate Watcher index settings.
Update settings for the Watcher internal index (.watches
).
Only a subset of settings can be modified.
This includes index.auto_expand_replicas
, index.number_of_replicas
, index.routing.allocation.exclude.*
,
index.routing.allocation.include.*
and index.routing.allocation.require.*
.
Modification of index.routing.allocation.include._tier_preference
is an exception and is not allowed as the
Watcher shards must always be in the data_content
tier.
client.watcher.updateSettings({ ... })
Arguments
edit-
Request (object):
-
index.auto_expand_replicas
(Optional, string) -
index.number_of_replicas
(Optional, number) -
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. -
timeout
(Optional, string | -1 | 0): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
-
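A minimal sketch; the replica range is an example value:
// Let the .watches index expand its replicas across up to four nodes.
await client.watcher.updateSettings({
  'index.auto_expand_replicas': '0-4',
  timeout: '30s'
})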
xpack
editinfo
editGet information. The information provided by the API includes:
- Build information including the build number and timestamp.
- License information about the currently installed license.
- Feature information for the features that are currently enabled and available under the current license.
client.xpack.info({ ... })
Arguments
edit-
Request (object):
-
categories
(Optional, Enum("build" | "features" | "license")[]): A list of the information categories to include in the response. For example,build,license,features
. -
accept_enterprise
(Optional, boolean): If this parameter is used, it must be set to true. -
human
(Optional, boolean): Defines whether additional human-readable information is included in the response. In particular, it adds descriptions and a tag line.
-
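A minimal sketch requesting a subset of categories:
// Fetch only build and license information, with human-readable descriptions included.
const info = await client.xpack.info({
  categories: ['build', 'license'],
  human: true
})
console.log(info)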
usage
editGet usage information. Get information about the features that are currently enabled and available under the current license. The API also provides some usage statistics.
client.xpack.usage({ ... })
Arguments
edit-
Request (object):
-
master_timeout
(Optional, string | -1 | 0): The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never time out, set it to -1.
-