Get data frame analytics jobs Added in 7.7.0
Get configuration and usage information about data frame analytics jobs.
IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
Query parameters
- allow_no_match (boolean): Whether to ignore if a wildcard expression matches no configs (this includes the _all string or when no configs have been specified).
- bytes (string): The unit in which to display byte values. Values are b, kb, mb, gb, tb, or pb.
- h (string | array[string]): Comma-separated list of column names to display.
- s (string | array[string]): Comma-separated list of column names or column aliases used to sort the response.
- time (string): Unit used to display time values. Values are nanos, micros, ms, s, m, h, or d.
curl \
--request GET http://api.example.com/_cat/ml/data_frame/analytics
id create_time type state
classifier_job_1 2020-02-12T11:49:09.594Z classification stopped
classifier_job_2 2020-02-12T11:49:14.479Z classification stopped
classifier_job_3 2020-02-12T11:49:16.928Z classification stopped
classifier_job_4 2020-02-12T11:49:19.127Z classification stopped
classifier_job_5 2020-02-12T11:49:21.349Z classification stopped
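For instance, you can restrict and order the columns with the h and s parameters listed above; a minimal sketch using the placeholder host from the examples (the column choices are illustrative):
curl \
--request GET "http://api.example.com/_cat/ml/data_frame/analytics?h=id,state,create_time&s=create_time"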
Get data frame analytics jobs Added in 7.7.0
Get configuration and usage information about data frame analytics jobs.
IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.
Path parameters
- id: The ID of the data frame analytics job to fetch.
Query parameters
- allow_no_match (boolean): Whether to ignore if a wildcard expression matches no configs (this includes the _all string or when no configs have been specified).
- bytes (string): The unit in which to display byte values. Values are b, kb, mb, gb, tb, or pb.
- h (string | array[string]): Comma-separated list of column names to display.
- s (string | array[string]): Comma-separated list of column names or column aliases used to sort the response.
- time (string): Unit used to display time values. Values are nanos, micros, ms, s, m, h, or d.
curl \
--request GET http://api.example.com/_cat/ml/data_frame/analytics/{id}
id create_time type state
classifier_job_1 2020-02-12T11:49:09.594Z classification stopped
classifier_job_2 2020-02-12T11:49:14.479Z classification stopped
classifier_job_3 2020-02-12T11:49:16.928Z classification stopped
classifier_job_4 2020-02-12T11:49:19.127Z classification stopped
classifier_job_5 2020-02-12T11:49:21.349Z classification stopped
Delete a connector Beta
Removes a connector and associated sync jobs. This is a destructive action that is not recoverable. NOTE: This action doesn’t delete any API keys, ingest pipelines, or data indices associated with the connector. These need to be removed manually.
Path parameters
- connector_id: The unique identifier of the connector to be deleted.
Query parameters
- delete_sync_jobs (boolean): A flag indicating if associated sync jobs should also be removed. Defaults to false.
- hard (boolean): A flag indicating if the connector should be hard deleted.
curl \
--request DELETE http://api.example.com/_connector/{connector_id}
{
"acknowledged": true
}
Update the connector filtering Beta
Update the draft filtering configuration of a connector and mark the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.
Path parameters
- connector_id: The unique identifier of the connector to be updated.
Body Required
- filtering (array[object])
- rules (array[object])
- advanced_snippet (object): Additional properties are allowed.
curl \
--request PUT http://api.example.com/_connector/{connector_id}/_filtering \
--header "Content-Type: application/json" \
--data '{"rules":[{"field":"file_extension","id":"exclude-txt-files","order":0,"policy":"exclude","rule":"equals","value":"txt"},{"field":"_","id":"DEFAULT","order":1,"policy":"include","rule":"regex","value":".*"}]}'
{
"rules": [
{
"field": "file_extension",
"id": "exclude-txt-files",
"order": 0,
"policy": "exclude",
"rule": "equals",
"value": "txt"
},
{
"field": "_",
"id": "DEFAULT",
"order": 1,
"policy": "include",
"rule": "regex",
"value": ".*"
}
]
}
{
"advanced_snippet": {
"value": [{
"tables": [
"users",
"orders"
],
"query": "SELECT users.id AS id, orders.order_id AS order_id FROM users JOIN orders ON users.id = orders.user_id"
}]
}
}
{
"result": "updated"
}
Update the connector draft filtering validation Technical preview
Update the draft filtering validation info for a connector.
Path parameters
- connector_id: The unique identifier of the connector to be updated.
Body Required
- validation (object): Additional properties are allowed.
curl \
--request PUT http://api.example.com/_connector/{connector_id}/_filtering/_validation \
--header "Content-Type: application/json" \
--data '{"validation":{"errors":[{"ids":["string"],"messages":["string"]}],"state":"edited"}}'
Get multiple documents Added in 1.3.0
Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.
Filter source fields
By default, the _source field is returned for every document (if stored). Use the _source and _source_include or source_exclude attributes to filter what fields are returned for a particular document. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.
Get stored fields
Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.
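As a hedged sketch of the URI defaults described above, the following request returns only field3 and field4 for both documents unless a document overrides it per-document (the index and field names are illustrative, mirroring the examples further below):
curl \
--request POST "http://api.example.com/test/_mget?_source_includes=field3,field4" \
--header "Content-Type: application/json" \
--data '{"docs":[{"_id":"1"},{"_id":"2"}]}'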
Path parameters
- index: Name of the index to retrieve documents from when ids are specified, or when a document in the docs array does not specify an index.
Query parameters
- preference (string): Specifies the node or shard the operation should be performed on. Random by default.
- realtime (boolean): If true, the request is real-time as opposed to near-real-time.
- refresh (boolean): If true, the request refreshes relevant shards before retrieving documents.
- routing (string): Custom value used to route operations to a specific shard.
- _source (boolean | string | array[string]): True or false to return the _source field or not, or a list of fields to return.
- _source_excludes (string | array[string]): A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter.
- _source_includes (string | array[string]): A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.
- stored_fields (string | array[string]): If true, retrieves the document fields stored in the index rather than the document _source.
Body Required
curl \
--request POST http://api.example.com/{index}/_mget \
--header "Content-Type: application/json" \
--data '{"docs":[{"_id":"1"},{"_id":"2"}]}'
{
"docs": [
{
"_id": "1"
},
{
"_id": "2"
}
]
}
{
"docs": [
{
"_index": "test",
"_id": "1",
"_source": false
},
{
"_index": "test",
"_id": "2",
"_source": [ "field3", "field4" ]
},
{
"_index": "test",
"_id": "3",
"_source": {
"include": [ "user" ],
"exclude": [ "user.location" ]
}
}
]
}
{
"docs": [
{
"_index": "test",
"_id": "1",
"stored_fields": [ "field1", "field2" ]
},
{
"_index": "test",
"_id": "2",
"stored_fields": [ "field3", "field4" ]
}
]
}
{
"docs": [
{
"_index": "test",
"_id": "1",
"routing": "key2"
},
{
"_index": "test",
"_id": "2"
}
]
}
Delete an async EQL search Added in 7.9.0
Delete an async EQL search or a stored synchronous EQL search. The API also deletes results for the search.
Path parameters
- id: Identifier for the search to delete. A search ID is provided in the EQL search API's response for an async search. A search ID is also provided if the request's keep_on_completion parameter is true.
curl \
--request DELETE http://api.example.com/_eql/search/{id}
Graph explore
The graph explore API enables you to extract and summarize information about the documents and terms in an Elasticsearch data stream or index.
Check indices
Check if one or more indices, index aliases, or data streams exist.
Path parameters
- index: Comma-separated list of data streams, indices, and aliases. Supports wildcards (*).
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
- flat_settings (boolean): If true, returns settings in flat format.
- include_defaults (boolean): If true, return all default settings in the response.
- local (boolean): If true, the request retrieves information from the local node only.
curl \
--request HEAD http://api.example.com/{index}
Delete an alias
Removes a data stream or index from an alias.
Path parameters
- index: Comma-separated list of data streams or indices used to limit the request. Supports wildcards (*).
- name: Comma-separated list of aliases to remove. Supports wildcards (*). To remove all aliases, use * or _all.
Query parameters
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request DELETE http://api.example.com/{index}/_aliases/{name}
Check aliases
Checks if one or more data stream or index aliases exist.
Path parameters
- name: Comma-separated list of aliases to check. Supports wildcards (*).
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
curl \
--request HEAD http://api.example.com/_alias/{name}
Refresh an index
A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
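For example, assuming a hypothetical index named my-index-000001, an index request that waits for the next refresh before returning might look like this sketch:
curl \
--request PUT "http://api.example.com/my-index-000001/_doc/1?refresh=wait_for" \
--header "Content-Type: application/json" \
--data '{"message":"hello"}'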
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
curl \
--request GET http://api.example.com/_refresh
Roll over to a new index Added in 5.0.0
TIP: It is recommended to use the index lifecycle rollover action to automate rollovers.
The rollover API creates a new index for a data stream or index alias. The API behavior depends on the rollover target.
Roll over a data stream
If you roll over a data stream, the API creates a new write index for the stream. The stream's previous write index becomes a regular backing index. A rollover also increments the data stream's generation.
Roll over an index alias with a write index
TIP: Prior to Elasticsearch 7.9, you'd typically use an index alias with a write index to manage time series data. Data streams replace this functionality, require less maintenance, and automatically integrate with data tiers.
If an index alias points to multiple indices, one of the indices must be a write index.
The rollover API creates a new write index for the alias with is_write_index set to true. The API also sets is_write_index to false for the previous write index.
Roll over an index alias with one index
If you roll over an index alias that points to only one index, the API creates a new index for the alias and removes the original index from the alias.
NOTE: A rollover creates a new index and is subject to the wait_for_active_shards setting.
Increment index names for an alias
When you roll over an index alias, you can specify a name for the new index. If you don't specify a name and the current index ends with - and a number, such as my-index-000001 or my-index-3, the new index name increments that number. For example, if you roll over an alias with a current index of my-index-000001, the rollover creates a new index named my-index-000002. This number is always six characters and zero-padded, regardless of the previous index's name.
If you use an index alias for time series data, you can use date math in the index name to track the rollover date. For example, you can create an alias that points to an index named <my-index-{now/d}-000001>. If you create the index on May 6, 2099, the index's name is my-index-2099.05.06-000001. If you roll over the alias on May 7, 2099, the new index's name is my-index-2099.05.07-000002.
Path parameters
- alias: Name of the data stream or index alias to roll over.
- new_index: Name of the index to create. Supports date math. Data streams do not support this parameter.
Query parameters
- dry_run (boolean): If true, checks whether the current index satisfies the specified conditions but does not perform a rollover.
- master_timeout (string): Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
- wait_for_active_shards (number | string): The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1).
Body
- aliases (object): Aliases for the target index. Data streams do not support this parameter.
- conditions (object): Additional properties are allowed.
- mappings (object): Additional properties are allowed.
- settings (object): Configuration options for the index. Data streams do not support this parameter.
curl \
--request POST http://api.example.com/{alias}/_rollover/{new_index} \
--header "Content-Type: application/json" \
--data '{"conditions":{"max_age":"7d","max_docs":1000,"max_primary_shard_size":"50gb","max_primary_shard_docs":"2000"}}'
{
"conditions": {
"max_age": "7d",
"max_docs": 1000,
"max_primary_shard_size": "50gb",
"max_primary_shard_docs": "2000"
}
}
{
"_shards": {},
"indices": {
"test": {
"shards": {
"0": [
{
"routing": {
"node": "zDC_RorJQCao9xf9pg3Fvw",
"state": "STARTED",
"primary": true
},
"segments": {
"_0": {
"search": true,
"version": "7.0.0",
"compound": true,
"num_docs": 1,
"committed": false,
"attributes": {},
"generation": 0,
"deleted_docs": 0,
"size_in_bytes": 3800
}
},
"num_search_segments": 1,
"num_committed_segments": 0
}
]
}
}
}
}
Validate a query
Validates a query without running it.
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
- all_shards (boolean): If true, the validation is executed on all shards instead of one random shard per index.
- analyzer (string): Analyzer to use for the query string. This parameter can only be used when the q query string parameter is specified.
- analyze_wildcard (boolean): If true, wildcard and prefix queries are analyzed.
- default_operator (string): The default operator for query string query: AND or OR. Values are and, AND, or, or OR.
- df (string): Field to use as default where no field prefix is given in the query string. This parameter can only be used when the q query string parameter is specified.
- expand_wildcards (string | array[string]): Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.
- explain (boolean): If true, the response returns detailed information if an error has occurred.
- lenient (boolean): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored.
- rewrite (boolean): If true, returns a more detailed explanation showing the actual Lucene query that will be executed.
- q (string): Query in the Lucene query string syntax.
curl \
--request GET http://api.example.com/_validate/query \
--header "Content-Type: application/json" \
--data '{"query":{}}'
curl \
--request POST http://api.example.com/_validate/query \
--header "Content-Type: application/json" \
--data '{"query":{}}'
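As a hedged illustration with a concrete body (the term query reuses the user.id example that appears elsewhere in this document), you can also request a detailed explanation of the validation:
curl \
--request POST "http://api.example.com/_validate/query?explain=true" \
--header "Content-Type: application/json" \
--data '{"query":{"term":{"user.id":"kimchy"}}}'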
Get overall bucket results Added in 6.1.0
Retrieves overall bucket results that summarize the bucket results of multiple anomaly detection jobs.
The overall_score is calculated by combining the scores of all the buckets within the overall bucket span. First, the maximum anomaly_score per anomaly detection job in the overall bucket is calculated. Then the top_n of those scores are averaged to result in the overall_score. This means that you can fine-tune the overall_score so that it is more or less sensitive to the number of jobs that detect an anomaly at the same time. For example, if you set top_n to 1, the overall_score is the maximum bucket score in the overall bucket. Alternatively, if you set top_n to the number of jobs, the overall_score is high only when all jobs detect anomalies in that overall bucket. If you set the bucket_span parameter (to a value greater than its default), the overall_score is the maximum overall_score of the overall buckets that have a span equal to the jobs' largest bucket span.
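As an illustrative sketch (the job-* wildcard and the threshold values are hypothetical), the following request averages the top two job scores per overall bucket and returns only buckets scoring at least 50:
curl \
--request GET "http://api.example.com/_ml/anomaly_detectors/job-*/results/overall_buckets?top_n=2&overall_score=50.0"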
Path parameters
- job_id: Identifier for the anomaly detection job. It can be a job identifier, a group name, a comma-separated list of jobs or groups, or a wildcard expression. You can summarize the bucket results for all anomaly detection jobs by using _all or by specifying * as the <job_id>.
Query parameters
- allow_no_match (boolean): Specifies what to do when the request:
  - Contains wildcard expressions and there are no jobs that match.
  - Contains the _all string or no identifiers and there are no matches.
  - Contains wildcard expressions and there are only partial matches.
  If true, the request returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
- bucket_span (string): The span of the overall buckets. Must be greater or equal to the largest bucket span of the specified anomaly detection jobs, which is the default value. By default, an overall bucket has a span equal to the largest bucket span of the specified anomaly detection jobs. To override that behavior, use the optional bucket_span parameter.
- end (string | number): Returns overall buckets with timestamps earlier than this time.
- exclude_interim (boolean): If true, the output excludes interim results.
- overall_score (number | string): Returns overall buckets with overall scores greater than or equal to this value.
- start (string | number): Returns overall buckets with timestamps after this time.
- top_n (number): The number of top anomaly detection job bucket scores to be used in the overall_score calculation.
Body
- allow_no_match (boolean): Refer to the description for the allow_no_match query parameter.
- bucket_span (string): A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- end (string | number): A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
- exclude_interim (boolean): Refer to the description for the exclude_interim query parameter.
- overall_score (number | string): Refer to the description for the overall_score query parameter.
- start (string | number): A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.
- top_n (number): Refer to the description for the top_n query parameter.
curl \
--request POST http://api.example.com/_ml/anomaly_detectors/{job_id}/results/overall_buckets \
--header "Content-Type: application/json" \
--data '{"allow_no_match":true,"bucket_span":"string","end":"string","exclude_interim":true,"overall_score":42.0,"top_n":42.0}'
Update a filter Added in 6.4.0
Updates the description of a filter, adds items, or removes items from the list.
Path parameters
- filter_id: A string that uniquely identifies a filter.
Body Required
- add_items (array[string]): The items to add to the filter.
- description (string): A description for the filter.
- remove_items (array[string]): The items to remove from the filter.
curl \
--request POST http://api.example.com/_ml/filters/{filter_id}/_update \
--header "Content-Type: application/json" \
--data '{"add_items":["string"],"description":"string","remove_items":["string"]}'
Get trained models usage info Added in 7.10.0
You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
Query parameters
- allow_no_match (boolean): Specifies what to do when the request:
  - Contains wildcard expressions and there are no models that match.
  - Contains the _all string or no identifiers and there are no matches.
  - Contains wildcard expressions and there are only partial matches.
  If true, it returns an empty array when there are no matches and the subset of results when there are partial matches.
- from (number): Skips the specified number of models.
- size (number): Specifies the maximum number of models to obtain.
curl \
--request GET http://api.example.com/_ml/trained_models/_stats
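For example, a sketch that pages through usage information ten models at a time (the parameter values are illustrative):
curl \
--request GET "http://api.example.com/_ml/trained_models/_stats?from=0&size=10&allow_no_match=true"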
Clear a scrolling search
Clear the search context and results for a scrolling search.
curl \
--request DELETE http://api.example.com/_search/scroll \
--header "Content-Type: application/json" \
--data '{"scroll_id":"DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="}'
{
"scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
Count search results
Get the number of documents matching a query.
The query can be provided either by using a simple query string as a parameter, or by defining Query DSL within the request body.
The query is optional. When no query is provided, the API uses match_all to count all the documents.
The count API supports multi-target syntax. You can run a single count API search across multiple data streams and indices.
The operation is broadcast across all shards. For each shard ID group, a replica is chosen and the search is run against it. This means that replicas increase the scalability of the count.
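For example, a minimal sketch that uses the q query string parameter instead of a request body (the index name is illustrative):
curl \
--request GET "http://api.example.com/my-index-000001/_count?q=user.id:kimchy"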
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
- analyzer (string): The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.
- analyze_wildcard (boolean): If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.
- default_operator (string): The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified. Values are and, AND, or, or OR.
- df (string): The field to use as a default when no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.
- expand_wildcards (string | array[string]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.
- ignore_throttled (boolean): If true, concrete, expanded, or aliased indices are ignored when frozen.
- lenient (boolean): If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.
- min_score (number): The minimum _score value that documents must have to be included in the result.
- preference (string): The node or shard the operation should be performed on. By default, it is random.
- routing (string): A custom value used to route operations to a specific shard.
- terminate_after (number): The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.
- q (string): The query in Lucene query string syntax. This parameter cannot be used with a request body.
curl \
--request POST http://api.example.com/_count \
--header "Content-Type: application/json" \
--data '{"query":{"term":{"user.id":"kimchy"}}}'
{
"query" : {
"term" : { "user.id" : "kimchy" }
}
}
{
"count": 1,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
}
}
Get the field capabilities Added in 5.4.0
Get information about the capabilities of fields among multiple indices.
For data streams, the API returns field capabilities among the stream’s backing indices.
It returns runtime fields like any other field. For example, a runtime field with a type of keyword is returned the same as any other field that belongs to the keyword family.
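For example, a minimal sketch that asks for the capabilities of the rating and title fields used in the responses below (the field names are illustrative):
curl \
--request GET "http://api.example.com/_field_caps?fields=rating,title&include_unmapped=true"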
Query parameters
- allow_no_indices (boolean): If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
- expand_wildcards (string | array[string]): The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.
- fields (string | array[string]): A comma-separated list of fields to retrieve capabilities for. Wildcard (*) expressions are supported.
- include_unmapped (boolean): If true, unmapped fields are included in the response.
- filters (string): A comma-separated list of filters to apply to the response.
- types (array[string]): A comma-separated list of field types to include. Any fields that do not match one of these types will be excluded from the results. It defaults to empty, meaning that all field types are returned.
- include_empty_fields (boolean): If false, empty fields are not included in the response.
Body
- fields (string | array[string])
- index_filter (object): An Elasticsearch Query DSL (Domain Specific Language) object that defines a query. Additional properties are allowed.
- runtime_mappings (object)
curl \
--request GET http://api.example.com/_field_caps \
--header "Content-Type: application/json" \
--data '{"index_filter":{"range":{"@timestamp":{"gte":"2018"}}}}'
{
"index_filter": {
"range": {
"@timestamp": {
"gte": "2018"
}
}
}
}
{
"indices": [ "index1", "index2", "index3", "index4", "index5" ],
"fields": {
"rating": {
"long": {
"metadata_field": false,
"searchable": true,
"aggregatable": false,
"indices": [ "index1", "index2" ],
"non_aggregatable_indices": [ "index1" ]
},
"keyword": {
"metadata_field": false,
"searchable": false,
"aggregatable": true,
"indices": [ "index3", "index4" ],
"non_searchable_indices": [ "index4" ]
}
},
"title": {
"text": {
"metadata_field": false,
"searchable": true,
"aggregatable": false
}
}
}
}
Check user privileges Added in 6.4.0
Determine whether the specified user has a specified list of privileges. All users can use this API, but only to determine their own privileges. To check the privileges of other users, you must use the run as feature.
Path parameters
- user: Username.
Body Required
- application (array[object])
- cluster (array[string]): A list of the cluster privileges that you want to check.
- index (array[object])
curl \
--request GET http://api.example.com/_security/user/{user}/_has_privileges \
--header "Content-Type: application/json" \
--data '{"application":[{"application":"string","privileges":["string"],"resources":["string"]}],"cluster":["string"],"index":[{"names":"string","privileges":["string"],"allow_restricted_indices":true}]}'
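As a concrete, hedged illustration of the body schema above (the privilege names and index are hypothetical choices, not part of the original example):
curl \
--request GET "http://api.example.com/_security/user/{user}/_has_privileges" \
--header "Content-Type: application/json" \
--data '{"cluster":["monitor","manage"],"index":[{"names":["my-index-000001"],"privileges":["read","write"]}]}'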
Get task information Technical preview
Get information about a task currently running in the cluster.
WARNING: The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible.
If the task identifier is not found, a 404 response code indicates that there are no resources that match the request.
Path parameters
- task_id: The task identifier.
Query parameters
- timeout (string): The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
- wait_for_completion (boolean): If true, the request blocks until the task has completed.
curl \
--request GET http://api.example.com/_tasks/{task_id}
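For example, a sketch that blocks until the task finishes or the timeout elapses (the task identifier placeholder is kept from the example above):
curl \
--request GET "http://api.example.com/_tasks/{task_id}?wait_for_completion=true&timeout=30s"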
Create a transform Added in 7.2.0
Creates a transform.
A transform copies data from source indices, transforms it, and persists it into an entity-centric destination index. You can also think of the destination index as a two-dimensional tabular data structure (known as a data frame). The ID for each document in the data frame is generated from a hash of the entity, so there is a unique row per entity.
You must choose either the latest or pivot method for your transform; you cannot use both in a single transform. If you choose to use the pivot method for your transform, the entities are defined by the set of group_by fields in the pivot object. If you choose to use the latest method, the entities are defined by the unique_key field values in the latest object.
You must have create_index, index, and read privileges on the destination index and read and view_index_metadata privileges on the source indices. When Elasticsearch security features are enabled, the transform remembers which roles the user that created it had at the time of creation and uses those same roles. If those roles do not have the required privileges on the source and destination indices, the transform fails when it attempts unauthorized operations.
NOTE: You must use Kibana or this API to create a transform. Do not add a transform directly into any .transform-internal* indices using the Elasticsearch index API. If Elasticsearch security features are enabled, do not give users any privileges on .transform-internal* indices. If you used transforms prior to 7.5, also do not give users any privileges on .data-frame-internal* indices.
Path parameters
- transform_id: Identifier for the transform. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It has a 64 character limit and must start and end with alphanumeric characters.
Query parameters
- defer_validation (boolean): When the transform is created, a series of validations occur to ensure its success. For example, there is a check for the existence of the source indices and a check that the destination index is not part of the source index pattern. You can use this parameter to skip the checks, for example when the source index does not exist until after the transform is created. The validations are always run when you start the transform, however, with the exception of privilege checks.
- timeout (string): Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Body Required
- dest (object): Additional properties are allowed.
- description (string): Free text description of the transform.
- frequency (string): A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- latest (object): Additional properties are allowed.
- _meta (object)
- pivot (object): Additional properties are allowed.
- retention_policy (object): Additional properties are allowed.
- settings (object): Additional properties are allowed.
- source (object): Additional properties are allowed.
- sync (object): Additional properties are allowed.
curl \
--request PUT http://api.example.com/_transform/{transform_id} \
--header "Content-Type: application/json" \
--data '{"dest":{"index":"kibana_sample_data_ecommerce_transform1","pipeline":"add_timestamp_pipeline"},"sync":{"time":{"delay":"60s","field":"order_date"}},"pivot":{"group_by":{"customer_id":{"terms":{"field":"customer_id","missing_bucket":true}}},"aggregations":{"max_price":{"max":{"field":"taxful_total_price"}}}},"source":{"index":"kibana_sample_data_ecommerce","query":{"term":{"geoip.continent_name":{"value":"Asia"}}}},"frequency":"5m","description":"Maximum priced ecommerce data by customer_id in Asia","retention_policy":{"time":{"field":"order_date","max_age":"30d"}}}'
{
"dest": {
"index": "kibana_sample_data_ecommerce_transform1",
"pipeline": "add_timestamp_pipeline"
},
"sync": {
"time": {
"delay": "60s",
"field": "order_date"
}
},
"pivot": {
"group_by": {
"customer_id": {
"terms": {
"field": "customer_id",
"missing_bucket": true
}
}
},
"aggregations": {
"max_price": {
"max": {
"field": "taxful_total_price"
}
}
}
},
"source": {
"index": "kibana_sample_data_ecommerce",
"query": {
"term": {
"geoip.continent_name": {
"value": "Asia"
}
}
}
},
"frequency": "5m",
"description": "Maximum priced ecommerce data by customer_id in Asia",
"retention_policy": {
"time": {
"field": "order_date",
"max_age": "30d"
}
}
}
{
"dest": {
"index": "kibana_sample_data_ecommerce_transform2"
},
"sync": {
"time": {
"delay": "60s",
"field": "order_date"
}
},
"latest": {
"sort": "order_date",
"unique_key": [
"customer_id"
]
},
"source": {
"index": "kibana_sample_data_ecommerce"
},
"frequency": "5m",
"description": "Latest order for each customer"
}
{
"acknowledged": true
}