Get an autoscaling policy Added in 7.11.0

GET /_autoscaling/policy/{name}

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

Path parameters

  • name string Required

    The name of the autoscaling policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
GET /_autoscaling/policy/{name}
curl \
 --request GET http://api.example.com/_autoscaling/policy/{name}
Response examples (200)
This may be a response to `GET /_autoscaling/policy/my_autoscaling_policy`.
{
   "roles": <roles>,
   "deciders": <deciders>
}




Delete an autoscaling policy Added in 7.11.0

DELETE /_autoscaling/policy/{name}

NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.

Path parameters

  • name string Required

    The name of the autoscaling policy.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_autoscaling/policy/{name}
curl \
 --request DELETE http://api.example.com/_autoscaling/policy/{name}
Response examples (200)
This may be a response to either `DELETE /_autoscaling/policy/my_autoscaling_policy` or `DELETE /_autoscaling/policy/*`.
{
  "acknowledged": true
}

Get behavioral analytics collections Technical preview

GET /_application/analytics

Responses

  • 200 application/json
    • * object Additional properties

      Additional properties are allowed.

      • event_data_stream object Required

        Additional properties are allowed.

GET /_application/analytics
curl \
 --request GET http://api.example.com/_application/analytics
Response examples (200)
A successful response from `GET _application/analytics/my*`
{
  "my_analytics_collection": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection"
      }
  },
  "my_analytics_collection2": {
      "event_data_stream": {
          "name": "behavioral_analytics-events-my_analytics_collection2"
      }
  }
}

Get index information

GET /_cat/indices/{index}

Get high-level information about indices in a cluster, including backing indices for data streams.

Use this request to get the following information for each index in a cluster:

  • shard count
  • document count
  • deleted document count
  • primary store size
  • total store size of all shards, including shard replicas

These metrics are retrieved directly from Lucene, which Elasticsearch uses internally to power indexing and search. As a result, all document counts include hidden nested documents. To get an accurate count of Elasticsearch documents, use the cat count or count APIs.

CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use an index endpoint.
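
For example, the following request (illustrative; it combines the query parameters described below) lists only yellow indices matching a pattern, restricts the output to primary-shard values, and reports sizes in megabytes:

curl \
 --request GET "http://api.example.com/_cat/indices/my-index-*?health=yellow&pri=true&bytes=mb"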

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match.

  • health string

    The health status used to limit returned indices. By default, the response includes indices of any health status.

    Values are green, GREEN, yellow, YELLOW, red, or RED.

  • include_unloaded_segments boolean

    If true, the response includes information from segments that are not loaded into memory.

  • pri boolean

    If true, the response only includes information from primary shards.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • master_timeout string

    Period to wait for a connection to the master node.

Responses

GET /_cat/indices/{index}
curl \
 --request GET http://api.example.com/_cat/indices/{index}
Response examples (200)
[
  {
    "health": "string",
    "status": "string",
    "index": "string",
    "uuid": "string",
    "pri": "string",
    "rep": "string",
    "docs.count": "string",
    "docs.deleted": "string",
    "creation.date": "string",
    "creation.date.string": "string",
    "store.size": "string",
    "pri.store.size": "string",
    "dataset.size": "string",
    "completion.size": "string",
    "pri.completion.size": "string",
    "fielddata.memory_size": "string",
    "pri.fielddata.memory_size": "string",
    "fielddata.evictions": "string",
    "pri.fielddata.evictions": "string",
    "query_cache.memory_size": "string",
    "pri.query_cache.memory_size": "string",
    "query_cache.evictions": "string",
    "pri.query_cache.evictions": "string",
    "request_cache.memory_size": "string",
    "pri.request_cache.memory_size": "string",
    "request_cache.evictions": "string",
    "pri.request_cache.evictions": "string",
    "request_cache.hit_count": "string",
    "pri.request_cache.hit_count": "string",
    "request_cache.miss_count": "string",
    "pri.request_cache.miss_count": "string",
    "flush.total": "string",
    "pri.flush.total": "string",
    "flush.total_time": "string",
    "pri.flush.total_time": "string",
    "get.current": "string",
    "pri.get.current": "string",
    "get.time": "string",
    "pri.get.time": "string",
    "get.total": "string",
    "pri.get.total": "string",
    "get.exists_time": "string",
    "pri.get.exists_time": "string",
    "get.exists_total": "string",
    "pri.get.exists_total": "string",
    "get.missing_time": "string",
    "pri.get.missing_time": "string",
    "get.missing_total": "string",
    "pri.get.missing_total": "string",
    "indexing.delete_current": "string",
    "pri.indexing.delete_current": "string",
    "indexing.delete_time": "string",
    "pri.indexing.delete_time": "string",
    "indexing.delete_total": "string",
    "pri.indexing.delete_total": "string",
    "indexing.index_current": "string",
    "pri.indexing.index_current": "string",
    "indexing.index_time": "string",
    "pri.indexing.index_time": "string",
    "indexing.index_total": "string",
    "pri.indexing.index_total": "string",
    "indexing.index_failed": "string",
    "pri.indexing.index_failed": "string",
    "merges.current": "string",
    "pri.merges.current": "string",
    "merges.current_docs": "string",
    "pri.merges.current_docs": "string",
    "merges.current_size": "string",
    "pri.merges.current_size": "string",
    "merges.total": "string",
    "pri.merges.total": "string",
    "merges.total_docs": "string",
    "pri.merges.total_docs": "string",
    "merges.total_size": "string",
    "pri.merges.total_size": "string",
    "merges.total_time": "string",
    "pri.merges.total_time": "string",
    "refresh.total": "string",
    "pri.refresh.total": "string",
    "refresh.time": "string",
    "pri.refresh.time": "string",
    "refresh.external_total": "string",
    "pri.refresh.external_total": "string",
    "refresh.external_time": "string",
    "pri.refresh.external_time": "string",
    "refresh.listeners": "string",
    "pri.refresh.listeners": "string",
    "search.fetch_current": "string",
    "pri.search.fetch_current": "string",
    "search.fetch_time": "string",
    "pri.search.fetch_time": "string",
    "search.fetch_total": "string",
    "pri.search.fetch_total": "string",
    "search.open_contexts": "string",
    "pri.search.open_contexts": "string",
    "search.query_current": "string",
    "pri.search.query_current": "string",
    "search.query_time": "string",
    "pri.search.query_time": "string",
    "search.query_total": "string",
    "pri.search.query_total": "string",
    "search.scroll_current": "string",
    "pri.search.scroll_current": "string",
    "search.scroll_time": "string",
    "pri.search.scroll_time": "string",
    "search.scroll_total": "string",
    "pri.search.scroll_total": "string",
    "segments.count": "string",
    "pri.segments.count": "string",
    "segments.memory": "string",
    "pri.segments.memory": "string",
    "segments.index_writer_memory": "string",
    "pri.segments.index_writer_memory": "string",
    "segments.version_map_memory": "string",
    "pri.segments.version_map_memory": "string",
    "segments.fixed_bitset_memory": "string",
    "pri.segments.fixed_bitset_memory": "string",
    "warmer.current": "string",
    "pri.warmer.current": "string",
    "warmer.total": "string",
    "pri.warmer.total": "string",
    "warmer.total_time": "string",
    "pri.warmer.total_time": "string",
    "suggest.current": "string",
    "pri.suggest.current": "string",
    "suggest.time": "string",
    "pri.suggest.time": "string",
    "suggest.total": "string",
    "pri.suggest.total": "string",
    "memory.total": "string",
    "pri.memory.total": "string",
    "search.throttled": "string",
    "bulk.total_operations": "string",
    "pri.bulk.total_operations": "string",
    "bulk.total_time": "string",
    "pri.bulk.total_time": "string",
    "bulk.total_size_in_bytes": "string",
    "pri.bulk.total_size_in_bytes": "string",
    "bulk.avg_time": "string",
    "pri.bulk.avg_time": "string",
    "bulk.avg_size_in_bytes": "string",
    "pri.bulk.avg_size_in_bytes": "string"
  }
]

Get master node information

GET /_cat/master

Get information about the master node, including the ID, bound IP address, and name.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.

Query parameters

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

Responses

GET /_cat/master
curl \
 --request GET http://api.example.com/_cat/master
Response examples (200)
[
  {
    "id": "string",
    "host": "string",
    "ip": "string",
    "node": "string"
  }
]

Get data frame analytics jobs Added in 7.7.0

GET /_cat/ml/data_frame/analytics

Get configuration and usage information about data frame analytics jobs.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get data frame analytics jobs statistics API.

Query parameters

  • allow_no_match boolean

    Whether to ignore if a wildcard expression matches no configs. (This includes the _all string or when no configs have been specified.)

  • bytes string

    The unit in which to display byte values

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/ml/data_frame/analytics
curl \
 --request GET http://api.example.com/_cat/ml/data_frame/analytics
Response examples (200)
[
  {
    "id": "string",
    "type": "string",
    "create_time": "string",
    "version": "string",
    "source_index": "string",
    "dest_index": "string",
    "description": "string",
    "model_memory_limit": "string",
    "state": "string",
    "failure_reason": "string",
    "progress": "string",
    "assignment_explanation": "string",
    "node.id": "string",
    "node.name": "string",
    "node.ephemeral_id": "string",
    "node.address": "string"
  }
]

Get anomaly detection jobs Added in 7.7.0

GET /_cat/ml/anomaly_detectors

Get configuration and usage information for anomaly detection jobs. This API returns a maximum of 10,000 jobs. If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage cluster privileges to use this API.

IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get anomaly detection job statistics API.
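
For example, the following request (illustrative; the h, s, and time parameters are described below, and the column names follow the response schema at the end of this section) selects a few columns, sorts by job ID, and reports times in seconds:

curl \
 --request GET "http://api.example.com/_cat/ml/anomaly_detectors?h=id,state,data.processed_records&s=id&time=s"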

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request:

    • Contains wildcard expressions and there are no jobs that match.
    • Contains the _all string or no identifiers and there are no matches.
    • Contains wildcard expressions and there are only partial matches.

    If true, the API returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.

  • bytes string

    The unit used to display byte values.

    Values are b, kb, mb, gb, tb, or pb.

  • h string | array[string]

    Comma-separated list of column names to display.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

    • id string
    • state string

      Values are closing, closed, opened, failed, or opening.

    • For open jobs only, the amount of time the job has been opened.

    • For open anomaly detection jobs only, contains messages relating to the selection of a node to run the job.

    • The number of input documents that have been processed by the anomaly detection job. This value includes documents with missing fields, since they are nonetheless analyzed. If you use datafeeds and have aggregations in your search query, the processed_record_count is the number of aggregation results processed, not the number of Elasticsearch documents.

    • The total number of fields in all the documents that have been processed by the anomaly detection job. Only fields that are specified in the detector configuration object contribute to this count. The timestamp is not included in this count.

    • The number of input documents posted to the anomaly detection job.

    • The total number of fields in input documents posted to the anomaly detection job. This count includes fields that are not used in the analysis. However, be aware that if you are using a datafeed, it extracts only the required fields from the documents it retrieves before posting them to the job.

    • The number of input documents with either a missing date field or a date that could not be parsed.

    • The number of input documents that are missing a field that the anomaly detection job is configured to analyze. Input documents with missing fields are still processed because it is possible that not all fields are missing. If you are using datafeeds or posting data to the job in JSON format, a high missing_field_count is often not an indication of data issues. It is not necessarily a cause for concern.

    • The number of input documents that have a timestamp chronologically preceding the start of the current anomaly detection bucket offset by the latency window. This information is applicable only when you provide data to the anomaly detection job by using the post data API. These out of order documents are discarded, since jobs require time series data to be in ascending chronological order.

    • The number of buckets which did not contain any data. If your data contains many empty buckets, consider increasing your bucket_span or using functions that are tolerant to gaps in data such as mean, non_null_sum or non_zero_count.

    • The number of buckets that contained few data points compared to the expected number of data points. If your data contains many sparse buckets, consider using a longer bucket_span.

    • The total number of buckets processed.

    • The timestamp of the earliest chronologically input document.

    • The timestamp of the latest chronologically input document.

    • The timestamp at which data was last analyzed, according to server time.

    • The timestamp of the last bucket that did not contain any data.

    • The timestamp of the last bucket that was considered sparse.

    • Values are ok, soft_limit, or hard_limit.

    • The upper limit for model memory usage, checked on increasing values.

    • The number of by field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • The number of over field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • The number of partition field values that were analyzed by the models. This value is cumulative for all detectors in the job.

    • The number of buckets for which new entities in incoming data were not processed due to insufficient model memory. This situation is also signified by a hard_limit: memory_status property value.

    • Values are ok or warn.

    • The number of documents that have had a field categorized.

    • The number of categories created by categorization.

    • The number of categories that match more than 1% of categorized documents.

    • The number of categories that match just one categorized document.

    • The number of categories created by categorization that will never be assigned again because another category’s definition makes it a superset of the dead category. Dead categories are a side effect of the way categorization has no prior training.

    • The number of times that categorization wanted to create a new category but couldn’t because the job had hit its model_memory_limit. This count does not track which specific categories failed to be created. Therefore you cannot use this value to determine the number of unique categories that were missed.

    • The timestamp when the model stats were gathered, according to server time.

    • The timestamp of the last record when the model stats were gathered.

    • The number of individual forecasts currently available for the job. A value of one or more indicates that forecasts exist.

    • The minimum memory usage in bytes for forecasts related to the anomaly detection job.

    • The maximum memory usage in bytes for forecasts related to the anomaly detection job.

    • The average memory usage in bytes for forecasts related to the anomaly detection job.

    • The total memory usage in bytes for forecasts related to the anomaly detection job.

    • The minimum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • The maximum number of model_forecast documents written for forecasts related to the anomaly detection job.

    • The average number of model_forecast documents written for forecasts related to the anomaly detection job.

    • The total number of model_forecast documents written for forecasts related to the anomaly detection job.

    • The minimum runtime in milliseconds for forecasts related to the anomaly detection job.

    • The maximum runtime in milliseconds for forecasts related to the anomaly detection job.

    • The average runtime in milliseconds for forecasts related to the anomaly detection job.

    • The total runtime in milliseconds for forecasts related to the anomaly detection job.

    • node.id string
    • The name of the assigned node.

    • The network address of the assigned node.

    • The number of bucket results produced by the job.

    • The sum of all bucket processing times, in milliseconds.

    • The minimum of all bucket processing times, in milliseconds.

    • The maximum of all bucket processing times, in milliseconds.

    • The exponential moving average of all bucket processing times, in milliseconds.

    • The exponential moving average of bucket processing times calculated in a one hour time window, in milliseconds.

GET /_cat/ml/anomaly_detectors
curl \
 --request GET http://api.example.com/_cat/ml/anomaly_detectors
Response examples (200)
[
  {
    "id": "string",
    "state": "closing",
    "opened_time": "string",
    "assignment_explanation": "string",
    "data.processed_records": "string",
    "data.processed_fields": "string",
    "": 42.0,
    "data.input_records": "string",
    "data.input_fields": "string",
    "data.invalid_dates": "string",
    "data.missing_fields": "string",
    "data.out_of_order_timestamps": "string",
    "data.empty_buckets": "string",
    "data.sparse_buckets": "string",
    "data.buckets": "string",
    "data.earliest_record": "string",
    "data.latest_record": "string",
    "data.last": "string",
    "data.last_empty_bucket": "string",
    "data.last_sparse_bucket": "string",
    "model.memory_status": "ok",
    "model.memory_limit": "string",
    "model.by_fields": "string",
    "model.over_fields": "string",
    "model.partition_fields": "string",
    "model.bucket_allocation_failures": "string",
    "model.categorization_status": "ok",
    "model.categorized_doc_count": "string",
    "model.total_category_count": "string",
    "model.frequent_category_count": "string",
    "model.rare_category_count": "string",
    "model.dead_category_count": "string",
    "model.failed_category_count": "string",
    "model.log_time": "string",
    "model.timestamp": "string",
    "forecasts.total": "string",
    "forecasts.memory.min": "string",
    "forecasts.memory.max": "string",
    "forecasts.memory.avg": "string",
    "forecasts.memory.total": "string",
    "forecasts.records.min": "string",
    "forecasts.records.max": "string",
    "forecasts.records.avg": "string",
    "forecasts.records.total": "string",
    "forecasts.time.min": "string",
    "forecasts.time.max": "string",
    "forecasts.time.avg": "string",
    "forecasts.time.total": "string",
    "node.id": "string",
    "node.name": "string",
    "node.ephemeral_id": "string",
    "node.address": "string",
    "buckets.count": "string",
    "buckets.time.total": "string",
    "buckets.time.min": "string",
    "buckets.time.max": "string",
    "buckets.time.exp_avg": "string",
    "buckets.time.exp_avg_hour": "string"
  }
]

Get pending task information

GET /_cat/pending_tasks

Get information about cluster-level changes that have not yet taken effect. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the pending cluster tasks API.
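
For example, the following request (illustrative) uses the time parameter described below to report queue times in milliseconds:

curl \
 --request GET "http://api.example.com/_cat/pending_tasks?time=ms"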

Query parameters

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

Responses

GET /_cat/pending_tasks
curl \
 --request GET http://api.example.com/_cat/pending_tasks
Response examples (200)
[
  {
    "insertOrder": "string",
    "timeInQueue": "string",
    "priority": "string",
    "source": "string"
  }
]

Get task information Technical preview

GET /_cat/tasks

Get information about tasks currently running in the cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.
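
For example, the following request (illustrative) uses the actions and detailed parameters described below to list only search-related tasks with extra detail:

curl \
 --request GET "http://api.example.com/_cat/tasks?actions=*search&detailed=true"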

Query parameters

  • actions array[string]

    The task action names, which are used to limit the response.

  • detailed boolean

    If true, the response includes detailed information about the running tasks.

  • nodes array[string]

    Unique node identifiers, which are used to limit the response.

  • parent_task_id string

    The parent task identifier, which is used to limit the response.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

  • wait_for_completion boolean

    If true, the request blocks until the task has completed.

Responses

GET /_cat/tasks
curl \
 --request GET http://api.example.com/_cat/tasks
Response examples (200)
[
  {
    "id": "string",
    "action": "string",
    "task_id": "string",
    "parent_task_id": "string",
    "type": "string",
    "start_time": "string",
    "timestamp": "string",
    "running_time_ns": "string",
    "running_time": "string",
    "node_id": "string",
    "ip": "string",
    "port": "string",
    "node": "string",
    "version": "string",
    "x_opaque_id": "string",
    "description": "string"
  }
]

Get thread pool statistics

GET /_cat/thread_pool/{thread_pool_patterns}

Get thread pool statistics for each node in a cluster. Returned information includes all built-in thread pools and custom thread pools. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
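
For example, the following request (illustrative; h is a common cat parameter for column selection and is not listed in this section, and the column names come from the response schema at the end of this section) inspects the built-in search and write pools:

curl \
 --request GET "http://api.example.com/_cat/thread_pool/search,write?h=node_name,name,active,queue,rejected"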

Path parameters

  • thread_pool_patterns string | array[string] Required

    A comma-separated list of thread pool names used to limit the request. Accepts wildcard expressions.

Query parameters

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • local boolean

    If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.

  • master_timeout string

    Period to wait for a connection to the master node.

Responses

GET /_cat/thread_pool/{thread_pool_patterns}
curl \
 --request GET http://api.example.com/_cat/thread_pool/{thread_pool_patterns}
Response examples (200)
[
  {
    "node_name": "string",
    "node_id": "string",
    "ephemeral_node_id": "string",
    "pid": "string",
    "host": "string",
    "ip": "string",
    "port": "string",
    "name": "string",
    "type": "string",
    "active": "string",
    "pool_size": "string",
    "queue": "string",
    "queue_size": "string",
    "rejected": "string",
    "largest": "string",
    "completed": "string",
    "core": "string",
    "max": "string",
    "size": "string",
    "keep_alive": "string"
  }
]




Get transform information Added in 7.7.0

GET /_cat/transforms/{transform_id}

Get configuration and usage information about transforms.

CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get transform statistics API.

Path parameters

  • transform_id string Required

    A transform identifier or a wildcard expression. If you do not specify one of these options, the API returns information for all transforms.

Query parameters

  • allow_no_match boolean

    Specifies what to do when the request: contains wildcard expressions and there are no transforms that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, it returns an empty transforms array when there are no matches and the subset of results when there are partial matches. If false, the request returns a 404 status code when there are no matches or only partial matches.

  • from number

    Skips the specified number of transforms.

  • h string | array[string]

    Comma-separated list of column names to display.

  • s string | array[string]

    Comma-separated list of column names or column aliases used to sort the response.

  • time string

    The unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • size number

    The maximum number of transforms to obtain.

Responses

  • 200 application/json
    • id string
    • state string

      The status of the transform. Returned values include: aborting: The transform is aborting. failed: The transform failed. For more information about the failure, check the reason field. indexing: The transform is actively processing data and creating new documents. started: The transform is running but not actively indexing data. stopped: The transform is stopped. stopping: The transform is stopping.

    • The sequence number for the checkpoint.

    • The number of documents that have been processed from the source index of the transform.

    • checkpoint_progress string | null

      The progress of the next checkpoint that is currently in progress.

    • last_search_time string | null

      The timestamp of the last search in the source indices. This field is shown only if the transform is running.

    • changes_last_detection_time string | null

      The timestamp when changes were last detected in the source indices.

    • The time the transform was created.

    • version string
    • The source indices for the transform.

    • The destination index for the transform.

    • pipeline string

      The unique identifier for the ingest pipeline.

    • The description of the transform.

    • The type of transform: batch or continuous.

    • The interval between checks for changes in the source indices when the transform is running continuously.

    • The initial page size that is used for the composite aggregation for each checkpoint.

    • The number of input documents per second.

    • reason string

      If a transform has a failed state, these details describe the reason for failure.

    • The total number of search operations on the source index for the transform.

    • The total number of search failures.

    • The total amount of search time, in milliseconds.

    • The total number of index operations done by the transform.

    • The total number of indexing failures.

    • The total time spent indexing documents, in milliseconds.

    • The number of documents that have been indexed into the destination index for the transform.

    • The total time spent deleting documents, in milliseconds.

    • The number of documents deleted from the destination index due to the retention policy for the transform.

    • The number of times the transform has been triggered by the scheduler. For example, the scheduler triggers the transform indexer to check for updates or ingest new data at an interval specified in the frequency property.

    • The number of search or bulk index operations processed. Documents are processed in batches instead of individually.

    • The total time spent processing results, in milliseconds.

    • The exponential moving average of the duration of the checkpoint, in milliseconds.

    • The exponential moving average of the number of new documents that have been indexed.

    • The exponential moving average of the number of documents that have been processed.

GET /_cat/transforms/{transform_id}
curl \
 --request GET http://api.example.com/_cat/transforms/{transform_id}
Response examples (200)
A successful response from `GET /_cat/transforms?v=true&format=json`.
[
  {
    "id" : "ecommerce_transform",
    "state" : "started",
    "checkpoint" : "1",
    "documents_processed" : "705",
    "checkpoint_progress" : "100.00",
    "changes_last_detection_time" : null
  }
]

Explain the shard allocations Added in 5.0.0

GET /_cluster/allocation/explain

Get explanations for shard allocations in the cluster. For unassigned shards, it provides an explanation for why the shard is unassigned. For assigned shards, it provides an explanation for why the shard is remaining on its current node and has not moved or rebalanced to another node. This API can be very useful when attempting to diagnose why a shard is unassigned or why a shard continues to remain on its current node when you might expect otherwise.
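
A minimal call sends no request body at all, in which case Elasticsearch picks an unassigned shard to explain (assuming the cluster currently has one); the request example below shows how to ask about a specific shard instead:

curl \
 --request GET "http://api.example.com/_cluster/allocation/explain"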

Query parameters

application/json

Body

  • current_node string

    Specifies the node ID or the name of the node to only explain a shard that is currently located on the specified node.

  • index string
  • primary boolean

    If true, returns explanation for the primary shard for the given shard ID.

  • shard number

    Specifies the ID of the shard that you would like an explanation for.

Responses

GET /_cluster/allocation/explain
curl \
 --request GET http://api.example.com/_cluster/allocation/explain \
 --header "Content-Type: application/json" \
 --data '{"index":"my-index-000001","shard":0,"primary":false,"current_node":"my-node"}'
Request example
Run `GET _cluster/allocation/explain` to get an explanation for a shard's current allocation.
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
Response examples (200)
{
  "allocate_explanation": "string",
  "allocation_delay": "string",
  "": 42.0,
  "can_allocate": "yes",
  "can_move_to_other_node": "yes",
  "can_rebalance_cluster": "yes",
  "can_rebalance_cluster_decisions": [
    {
      "decider": "string",
      "decision": "NO",
      "explanation": "string"
    }
  ],
  "can_rebalance_to_other_node": "yes",
  "can_remain_decisions": [
    {
      "decider": "string",
      "decision": "NO",
      "explanation": "string"
    }
  ],
  "can_remain_on_current_node": "yes",
  "cluster_info": {
    "nodes": {
      "additionalProperty1": {
        "node_name": "string",
        "least_available": {
          "path": "string",
          "total_bytes": 42.0,
          "used_bytes": 42.0,
          "free_bytes": 42.0,
          "free_disk_percent": 42.0,
          "used_disk_percent": 42.0
        },
        "most_available": {
          "path": "string",
          "total_bytes": 42.0,
          "used_bytes": 42.0,
          "free_bytes": 42.0,
          "free_disk_percent": 42.0,
          "used_disk_percent": 42.0
        }
      },
      "additionalProperty2": {
        "node_name": "string",
        "least_available": {
          "path": "string",
          "total_bytes": 42.0,
          "used_bytes": 42.0,
          "free_bytes": 42.0,
          "free_disk_percent": 42.0,
          "used_disk_percent": 42.0
        },
        "most_available": {
          "path": "string",
          "total_bytes": 42.0,
          "used_bytes": 42.0,
          "free_bytes": 42.0,
          "free_disk_percent": 42.0,
          "used_disk_percent": 42.0
        }
      }
    },
    "shard_sizes": {
      "additionalProperty1": 42.0,
      "additionalProperty2": 42.0
    },
    "shard_data_set_sizes": {
      "additionalProperty1": "string",
      "additionalProperty2": "string"
    },
    "shard_paths": {
      "additionalProperty1": "string",
      "additionalProperty2": "string"
    },
    "reserved_sizes": [
      {
        "node_id": "string",
        "path": "string",
        "total": 42.0,
        "shards": [
          "string"
        ]
      }
    ]
  },
  "configured_delay": "string",
  "current_node": {
    "id": "string",
    "name": "string",
    "roles": [
      "master"
    ],
    "attributes": {
      "additionalProperty1": "string",
      "additionalProperty2": "string"
    },
    "transport_address": "string",
    "weight_ranking": 42.0
  },
  "current_state": "string",
  "index": "string",
  "move_explanation": "string",
  "node_allocation_decisions": [
    {
      "deciders": [
        {
          "decider": "string",
          "decision": "NO",
          "explanation": "string"
        }
      ],
      "node_attributes": {
        "additionalProperty1": "string",
        "additionalProperty2": "string"
      },
      "node_decision": "yes",
      "node_id": "string",
      "node_name": "string",
      "roles": [
        "master"
      ],
      "store": {
        "allocation_id": "string",
        "found": true,
        "in_sync": true,
        "matching_size_in_bytes": 42.0,
        "matching_sync_id": true,
        "store_exception": "string"
      },
      "transport_address": "string",
      "weight_ranking": 42.0
    }
  ],
  "primary": true,
  "rebalance_explanation": "string",
  "remaining_delay": "string",
  "shard": 42.0,
  "unassigned_info": {
    "": "string",
    "last_allocation_status": "string",
    "reason": "INDEX_CREATED",
    "details": "string",
    "failed_allocation_attempts": 42.0,
    "delayed": true,
    "allocation_status": "string"
  },
  "note": "string"
}

Get cluster-wide settings

GET /_cluster/settings

By default, it returns only settings that have been explicitly defined.
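
To also see settings that are not explicitly defined, combine the query parameters described below, for example (illustrative):

curl \
 --request GET "http://api.example.com/_cluster/settings?include_defaults=true&flat_settings=true"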

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • include_defaults boolean

    If true, returns default cluster settings from the local node.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    • persistent object Required
      • * object Additional properties

        Additional properties are allowed.

    • transient object Required
      • * object Additional properties

        Additional properties are allowed.

    • defaults object
      • * object Additional properties

        Additional properties are allowed.

GET /_cluster/settings
curl \
 --request GET http://api.example.com/_cluster/settings
Response examples (200)
{
  "persistent": {
    "additionalProperty1": {},
    "additionalProperty2": {}
  },
  "transient": {
    "additionalProperty1": {},
    "additionalProperty2": {}
  },
  "defaults": {
    "additionalProperty1": {},
    "additionalProperty2": {}
  }
}

Reroute the cluster Added in 5.0.0

POST /_cluster/reroute

Manually change the allocation of individual shards in the cluster. For example, a shard can be moved from one node to another explicitly, an allocation can be canceled, and an unassigned shard can be explicitly allocated to a specific node.

It is important to note that after processing any reroute commands Elasticsearch will perform rebalancing as normal (respecting the values of settings such as cluster.routing.rebalance.enable) in order to remain in a balanced state. For example, if the requested allocation includes moving a shard from node1 to node2 then this may cause a shard to be moved from node2 back to node1 to even things out.

The cluster can be set to disable allocations using the cluster.routing.allocation.enable setting. If allocations are disabled then the only allocations that will be performed are explicit ones given using the reroute command, and consequent allocations due to rebalancing.

The cluster will attempt to allocate a shard a maximum of index.allocation.max_retries times in a row (defaults to 5), before giving up and leaving the shard unallocated. This scenario can be caused by structural problems such as having an analyzer which refers to a stopwords file which doesn’t exist on all nodes.

Once the problem has been corrected, allocation can be manually retried by calling the reroute API with the ?retry_failed URI query parameter, which will attempt a single retry round for these shards.
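
For example, a retry round can be requested without any reroute commands in the body (illustrative; metric=none omits the cluster state from the response, as described below):

curl \
 --request POST "http://api.example.com/_cluster/reroute?retry_failed=true&metric=none"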

Query parameters

  • dry_run boolean

    If true, then the request simulates the operation. It will calculate the result of applying the commands to the current cluster state and return the resulting cluster state after the commands (and rebalancing) have been applied; it will not actually perform the requested changes.

  • explain boolean

    If true, then the response contains an explanation of why the commands can or cannot run.

  • metric string | array[string]

    Limits the information returned to the specified metrics.

  • retry_failed boolean

    If true, then retries allocation of shards that are blocked due to too many subsequent allocation failures.

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

application/json

Body

  • commands array[object]

    Defines the commands to perform.

    • cancel object

      Additional properties are allowed.

    • move object

      Additional properties are allowed.

    • allocate_replica object

      Additional properties are allowed.

    • allocate_stale_primary object
      • index string Required
      • shard number Required
      • node string Required
      • accept_data_loss boolean Required

        If a node which has a copy of the data rejoins the cluster later on, that data will be deleted. To ensure that these implications are well-understood, this command requires the flag accept_data_loss to be explicitly set to true

    • allocate_empty_primary object (see the example request body after this list)
      • index string Required
      • shard number Required
      • node string Required
      • accept_data_loss boolean Required

        If a node which has a copy of the data rejoins the cluster later on, that data will be deleted. To ensure that these implications are well-understood, this command requires the flag accept_data_loss to be explicitly set to true
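
As referenced above, a request body using this command could look like the following sketch (illustrative index, shard, and node values):

{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "test", "shard": 0,
        "node": "node3", "accept_data_loss": true
      }
    }
  ]
}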

Responses

  • 200 application/json
    • acknowledged boolean Required
    • explanations array[object]
    • state object

      There aren't any guarantees on the output/structure of the raw cluster state. Here you will find the internal representation of the cluster, which can differ from the external representation.

      Additional properties are allowed.

POST /_cluster/reroute
curl \
 --request POST http://api.example.com/_cluster/reroute \
 --header "Content-Type: application/json" \
 --data '{"commands":[{"move":{"index":"test","shard":0,"from_node":"node1","to_node":"node2"}},{"allocate_replica":{"index":"test","shard":1,"node":"node3"}}]}'
Request example
Run `POST /_cluster/reroute?metric=none` to change the allocation of shards in a cluster.
{
  "commands": [
    {
      "move": {
        "index": "test", "shard": 0,
        "from_node": "node1", "to_node": "node2"
      }
    },
    {
      "allocate_replica": {
        "index": "test", "shard": 1,
        "node": "node3"
      }
    }
  ]
}
Response examples (200)
{
  "acknowledged": true,
  "explanations": [
    {
      "command": "string",
      "decisions": [
        {
          "decider": "string",
          "decision": "string",
          "explanation": "string"
        }
      ],
      "parameters": {
        "allow_primary": true,
        "index": "string",
        "node": "string",
        "shard": 42.0,
        "from_node": "string",
        "to_node": "string"
      }
    }
  ],
  "state": {}
}

Get the hot threads for nodes

GET /_nodes/hot_threads

Get a breakdown of the hot threads on each selected node in the cluster. The output is plain text with a breakdown of the top hot threads for each node.
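
For example, the following request (illustrative values) samples the three hottest CPU threads on each node, using the parameters described below:

curl \
 --request GET "http://api.example.com/_nodes/hot_threads?threads=3&type=cpu&interval=500ms"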

Query parameters

  • ignore_idle_threads boolean

    If true, known idle threads (e.g. waiting in a socket select, or to get a task from an empty queue) are filtered out.

  • interval string

    The interval to do the second sampling of threads.

  • snapshots number

    Number of samples of thread stacktrace.

  • threads number

    Specifies the number of hot threads to provide information for.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

  • type string

    The type to sample.

    Values are cpu, wait, block, gpu, or mem.

  • sort string

    The sort order for 'cpu' type (default: total)

    Values are cpu, wait, block, gpu, or mem.

Responses

  • 200 application/json

    Additional properties are allowed.

GET /_nodes/hot_threads
curl \
 --request GET http://api.example.com/_nodes/hot_threads
Response examples (200)
{}

Get node information Added in 1.3.0

GET /_nodes/{metric}

By default, the API returns all attributes and core settings for cluster nodes.
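
For example, the following request (illustrative) limits the output to the http and ingest metrics and returns settings in flat format:

curl \
 --request GET "http://api.example.com/_nodes/http,ingest?flat_settings=true"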

Path parameters

  • metric string | array[string] Required

    Limits the information returned to the specific metrics. Supports a comma-separated list, such as http,ingest.

Query parameters

  • flat_settings boolean

    If true, returns settings in flat format.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

GET /_nodes/{metric}
curl \
 --request GET http://api.example.com/_nodes/{metric}
Response examples (200)
An abbreviated response when requesting cluster nodes information.
{
    "_nodes": {},
    "cluster_name": "elasticsearch",
    "nodes": {
      "USpTGYaBSIKbgSUJR2Z9lg": {
        "name": "node-0",
        "transport_address": "192.168.17:9300",
        "host": "node-0.elastic.co",
        "ip": "192.168.17",
        "version": "{version}",
        "transport_version": 100000298,
        "index_version": 100000074,
        "component_versions": {
          "ml_config_version": 100000162,
          "transform_config_version": 100000096
        },
        "build_flavor": "default",
        "build_type": "{build_type}",
        "build_hash": "587409e",
        "roles": [
          "master",
          "data",
          "ingest"
        ],
        "attributes": {},
        "plugins": [
          {
            "name": "analysis-icu",
            "version": "{version}",
            "description": "The ICU Analysis plugin integrates Lucene ICU module into elasticsearch, adding ICU relates analysis components.",
            "classname": "org.elasticsearch.plugin.analysis.icu.AnalysisICUPlugin",
            "has_native_controller": false
          }
        ],
        "modules": [
          {
            "name": "lang-painless",
            "version": "{version}",
            "description": "An easy, safe and fast scripting language for Elasticsearch",
            "classname": "org.elasticsearch.painless.PainlessPlugin",
            "has_native_controller": false
          }
        ]
      }
    }
}




Reload the keystore on nodes in the cluster Added in 6.5.0

POST /_nodes/reload_secure_settings

Secure settings are stored in an on-disk keystore. Certain of these settings are reloadable. That is, you can change them on disk and reload them without restarting any nodes in the cluster. When you have updated reloadable secure settings in your keystore, you can use this API to reload those settings on each node.

When the Elasticsearch keystore is password protected and not simply obfuscated, you must provide the password for the keystore when you reload the secure settings. Reloading the settings for the whole cluster assumes that the keystores for all nodes are protected with the same password; this method is allowed only when inter-node communications are encrypted. Alternatively, you can reload the secure settings on each node by locally accessing the API and passing the node-specific Elasticsearch keystore password.
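
For example, a per-node reload could look like the following sketch (assuming the usual _nodes/{node_id} node filter applies to this endpoint, with _local selecting the node that receives the request):

curl \
 --request POST "http://api.example.com/_nodes/_local/reload_secure_settings" \
 --header "Content-Type: application/json" \
 --data '{"secure_settings_password": "keystore-password"}'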

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

application/json

Body

  • secure_settings_password string

    The password for the Elasticsearch keystore.

Responses

  • 200 application/json
    • _nodes object

      Additional properties are allowed.

      • failures array[object]
        • type string Required

          The type of error

        • reason string

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Additional properties are allowed.

        • root_cause array[object]

          Additional properties are allowed.

        • suppressed array[object]

          Additional properties are allowed.

      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
POST /_nodes/reload_secure_settings
curl \
 --request POST http://api.example.com/_nodes/reload_secure_settings \
 --header "Content-Type: application/json" \
 --data '{"secure_settings_password": "keystore-password"}'
Request example
Run `POST _nodes/reload_secure_settings` to reload the keystore on nodes in the cluster.
{
  "secure_settings_password": "keystore-password"
}
Response examples (200)
A successful response when reloading keystore on nodes in your cluster.
{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "cluster_name": "my_cluster",
  "nodes": {
    "pQHNt5rXTTWNvUgOrdynKg": {
      "name": "node-0"
    }
  }
}

Get feature usage information Added in 6.0.0

GET /_nodes/{node_id}/usage

Path parameters

  • node_id string | array[string] Required

    A comma-separated list of node IDs or names to limit the returned information. Use _local to return information from the node you're connecting to; leave empty to get information from all nodes.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    • _nodes object

      Additional properties are allowed.

      • failures array[object]
        • type string Required

          The type of error

        • reason string

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Additional properties are allowed.

        • root_cause array[object]

          Additional properties are allowed.

        • suppressed array[object]

          Additional properties are allowed.

      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      • * object Additional properties

        Additional properties are allowed.

        • rest_actions object Required
          • * number Additional properties
        • since number

          Time unit for milliseconds

        • Time unit for milliseconds

        • aggregations object Required
          • * object Additional properties

            Additional properties are allowed.

GET /_nodes/{node_id}/usage
curl \
 --request GET http://api.example.com/_nodes/{node_id}/usage
Response examples (200)
{
  "_nodes": {
    "failures": [
      {
        "type": "string",
        "reason": "string",
        "stack_trace": "string",
        "caused_by": {},
        "root_cause": [
          {}
        ],
        "suppressed": [
          {}
        ]
      }
    ],
    "total": 42.0,
    "successful": 42.0,
    "failed": 42.0
  },
  "cluster_name": "string",
  "nodes": {
    "additionalProperty1": {
      "rest_actions": {
        "additionalProperty1": 42.0,
        "additionalProperty2": 42.0
      },
      "": 42.0,
      "aggregations": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      }
    },
    "additionalProperty2": {
      "rest_actions": {
        "additionalProperty1": 42.0,
        "additionalProperty2": 42.0
      },
      "": 42.0,
      "aggregations": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      }
    }
  }
}




Get feature usage information Added in 6.0.0

GET /_nodes/{node_id}/usage/{metric}

Path parameters

  • node_id string | array[string] Required

    A comma-separated list of node IDs or names to limit the returned information. Use _local to return information from the node you're connecting to; leave empty to get information from all nodes.

  • metric string | array[string] Required

    Limits the information returned to the specific metrics. A comma-separated list of the following options: _all, rest_actions.

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    • _nodes object

      Additional properties are allowed.

      • failures array[object]
        • type string Required

          The type of error

        • reason string

          A human-readable explanation of the error, in English.

        • stack_trace string

          The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • caused_by object

          Additional properties are allowed.

        • root_cause array[object]

          Additional properties are allowed.

        • suppressed array[object]

          Additional properties are allowed.

      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      • * object Additional properties

        Additional properties are allowed.

        Hide * attributes Show * attributes object
        • rest_actions object Required
          Hide rest_actions attribute Show rest_actions attribute object
          • * number Additional properties
        • since number

          Time unit for milliseconds

        • timestamp number

          Time unit for milliseconds

        • aggregations object Required
          Hide aggregations attribute Show aggregations attribute object
          • * object Additional properties

            Additional properties are allowed.

GET /_nodes/{node_id}/usage/{metric}
curl \
 --request GET http://api.example.com/_nodes/{node_id}/usage/{metric}
Response examples (200)
{
  "_nodes": {
    "failures": [
      {
        "type": "string",
        "reason": "string",
        "stack_trace": "string",
        "caused_by": {},
        "root_cause": [
          {}
        ],
        "suppressed": [
          {}
        ]
      }
    ],
    "total": 42.0,
    "successful": 42.0,
    "failed": 42.0
  },
  "cluster_name": "string",
  "nodes": {
    "additionalProperty1": {
      "rest_actions": {
        "additionalProperty1": 42.0,
        "additionalProperty2": 42.0
      },
      "": 42.0,
      "aggregations": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      }
    },
    "additionalProperty2": {
      "rest_actions": {
        "additionalProperty1": 42.0,
        "additionalProperty2": 42.0
      },
      "": 42.0,
      "aggregations": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      }
    }
  }
}

Get the cluster health Added in 8.7.0

GET /_health_report

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of: green, unknown, yellow or red. The indicator will provide an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.
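
For automated polling, a minimal request that skips the more expensive analysis could therefore look like the following sketch (it only sets the `verbose` query parameter described below; the host is a placeholder):

curl \
 --request GET "http://api.example.com/_health_report?verbose=false"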

Query parameters

  • timeout string

    Explicit operation timeout.

  • verbose boolean

    Opt-in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

GET /_health_report
curl \
 --request GET http://api.example.com/_health_report
Response examples (200)
{
  "cluster_name": "string",
  "indicators": {
    "": {
      "status": "green",
      "symptom": "string",
      "impacts": [
        {
          "description": "string",
          "id": "string",
          "impact_areas": [
            "search"
          ],
          "severity": 42.0
        }
      ],
      "diagnosis": [
        {
          "id": "string",
          "action": "string",
          "affected_resources": {},
          "cause": "string",
          "help_url": "string"
        }
      ],
      "details": {
        "failure_streak": 42.0,
        "most_recent_failure": "string"
      }
    }
  },
  "status": "green"
}

Get the cluster health Added in 8.7.0

GET /_health_report/{feature}

Get a report with the health status of an Elasticsearch cluster. The report contains a list of indicators that compose Elasticsearch functionality.

Each indicator has a health status of: green, unknown, yellow or red. The indicator will provide an explanation and metadata describing the reason for its current health status.

The cluster’s status is controlled by the worst indicator status.

In the event that an indicator’s status is non-green, a list of impacts may be present in the indicator result which detail the functionalities that are negatively affected by the health issue. Each impact carries with it a severity level, an area of the system that is affected, and a simple description of the impact on the system.

Some health indicators can determine the root cause of a health problem and prescribe a set of steps that can be performed in order to improve the health of the system. The root cause and remediation steps are encapsulated in a diagnosis. A diagnosis contains a cause detailing a root cause analysis, an action containing a brief description of the steps to take to fix the problem, the list of affected resources (if applicable), and a detailed step-by-step troubleshooting guide to fix the diagnosed problem.

NOTE: The health indicators perform root cause analysis of non-green health statuses. This can be computationally expensive when called frequently. When setting up automated polling of the API for health status, set verbose to false to disable the more expensive analysis logic.

Path parameters

  • feature string | array[string] Required

    A feature of the cluster, as returned by the top-level health report API.

Query parameters

  • timeout string

    Explicit operation timeout.

  • verbose boolean

    Opt-in for more information about the health of the system.

  • size number

    Limit the number of affected resources the health report API returns.

Responses

GET /_health_report/{feature}
curl \
 --request GET http://api.example.com/_health_report/{feature}
Response examples (200)
{
  "cluster_name": "string",
  "indicators": {
    "": {
      "status": "green",
      "symptom": "string",
      "impacts": [
        {
          "description": "string",
          "id": "string",
          "impact_areas": [
            "search"
          ],
          "severity": 42.0
        }
      ],
      "diagnosis": [
        {
          "id": "string",
          "action": "string",
          "affected_resources": {},
          "cause": "string",
          "help_url": "string"
        }
      ],
      "details": {
        "failure_streak": 42.0,
        "most_recent_failure": "string"
      }
    }
  },
  "status": "green"
}









Create or update a connector Beta

PUT /_connector/{connector_id}

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be created or updated. ID is auto-generated if not provided.

application/json

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

    • id string Required
PUT /_connector/{connector_id}
curl \
 --request PUT http://api.example.com/_connector/{connector_id} \
 --header "Content-Type: application/json" \
 --data '"{\n  \"index_name\": \"search-google-drive\",\n  \"name\": \"My Connector\",\n  \"service_type\": \"google_drive\"\n}"'
Request examples
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "service_type": "google_drive"
}
{
  "index_name": "search-google-drive",
  "name": "My Connector",
  "description": "My Connector to sync data to Elastic index from Google Drive",
  "service_type": "google_drive",
  "language": "english"
}
Response examples (200)
{
  "id": "my-connector",
  "result": "created"
}
























Check in a connector sync job Technical preview

PUT /_connector/_sync_job/{connector_sync_job_id}/_check_in

Check in a connector sync job and set the last_seen field to the current time before updating it in the internal index.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job to be checked in.

Responses

  • 200 application/json

    Additional properties are allowed.

PUT /_connector/_sync_job/{connector_sync_job_id}/_check_in
curl \
 --request PUT http://api.example.com/_connector/_sync_job/{connector_sync_job_id}/_check_in
Response examples (200)
{}
































Update the connector API key ID Beta

PUT /_connector/{connector_id}/_api_key_id

Update the api_key_id and api_key_secret_id fields of a connector. You can specify the ID of the API key used for authorization and the ID of the connector secret where the API key is stored. The connector secret ID is required only for Elastic managed (native) connectors. Self-managed connectors (connector clients) do not use this field.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_api_key_id
curl \
 --request PUT http://api.example.com/_connector/{connector_id}/_api_key_id \
 --header "Content-Type: application/json" \
 --data '"{\n    \"api_key_id\": \"my-api-key-id\",\n    \"api_key_secret_id\": \"my-connector-secret-id\"\n}"'
Request example
{
    "api_key_id": "my-api-key-id",
    "api_key_secret_id": "my-connector-secret-id"
}
Response examples (200)
{
  "result": "updated"
}








Update the connector features Technical preview

PUT /_connector/{connector_id}/_features

Update the connector features in the connector document. This API can be used to control the following aspects of a connector:

  • document-level security
  • incremental syncs
  • advanced sync rules
  • basic sync rules

Normally, the running connector service automatically manages these features. However, you can use this API to override the default behavior.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated.

application/json

Body Required

  • features object Required

    Additional properties are allowed.

    Hide features attributes Show features attributes object
    • document_level_security object

      Additional properties are allowed.

      Hide document_level_security attribute Show document_level_security attribute object
    • incremental_sync object

      Additional properties are allowed.

      Hide incremental_sync attribute Show incremental_sync attribute object
    • native_connector_api_keys object

      Additional properties are allowed.

      Hide native_connector_api_keys attribute Show native_connector_api_keys attribute object
    • sync_rules object

      Additional properties are allowed.

      Hide sync_rules attributes Show sync_rules attributes object
      • advanced object

        Additional properties are allowed.

        Hide advanced attribute Show advanced attribute object
      • basic object

        Additional properties are allowed.

        Hide basic attribute Show basic attribute object

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_features
curl \
 --request PUT http://api.example.com/_connector/{connector_id}/_features \
 --header "Content-Type: application/json" \
 --data '"{\n  \"features\": {\n    \"document_level_security\": {\n      \"enabled\": true\n    },\n    \"incremental_sync\": {\n      \"enabled\": true\n    },\n    \"sync_rules\": {\n      \"advanced\": {\n        \"enabled\": false\n      },\n      \"basic\": {\n        \"enabled\": true\n      }\n    }\n  }\n}"'
Request examples
{
  "features": {
    "document_level_security": {
      "enabled": true
    },
    "incremental_sync": {
      "enabled": true
    },
    "sync_rules": {
      "advanced": {
        "enabled": false
      },
      "basic": {
        "enabled": true
      }
    }
  }
}
{
  "features": {
    "document_level_security": {
      "enabled": true
    }
  }
}
Response examples (200)
{
  "result": "updated"
}

Update the connector filtering Beta

PUT /_connector/{connector_id}/_filtering

Update the draft filtering configuration of a connector and mark the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • filtering array[object]
    Hide filtering attributes Show filtering attributes object
    • active object Required

      Additional properties are allowed.

      Hide active attributes Show active attributes object
      • advanced_snippet object Required

        Additional properties are allowed.

        Hide advanced_snippet attributes Show advanced_snippet attributes object
        • created_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • updated_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • value object Required

          Additional properties are allowed.

      • rules array[object] Required
        Hide rules attributes Show rules attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • id string Required
        • order number Required
        • policy string Required

          Values are exclude or include.

        • rule string Required

          Values are contains, ends_with, equals, regex, starts_with, >, or <.

        • value string Required
      • validation object Required

        Additional properties are allowed.

        Hide validation attributes Show validation attributes object
        • errors array[object] Required
          Hide errors attributes Show errors attributes object
        • state string Required

          Values are edited, invalid, or valid.

    • domain string
    • draft object Required

      Additional properties are allowed.

      Hide draft attributes Show draft attributes object
      • advanced_snippet object Required

        Additional properties are allowed.

        Hide advanced_snippet attributes Show advanced_snippet attributes object
        • created_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • updated_at string | number

          A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

        • value object Required

          Additional properties are allowed.

      • rules array[object] Required
        Hide rules attributes Show rules attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • id string Required
        • order number Required
        • policy string Required

          Values are exclude or include.

        • rule string Required

          Values are contains, ends_with, equals, regex, starts_with, >, or <.

        • value string Required
      • validation object Required

        Additional properties are allowed.

        Hide validation attributes Show validation attributes object
        • errors array[object] Required
          Hide errors attributes Show errors attributes object
        • state string Required

          Values are edited, invalid, or valid.

  • rules array[object]
    Hide rules attributes Show rules attributes object
    • created_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • field string Required

      Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

    • id string Required
    • order number Required
    • policy string Required

      Values are exclude or include.

    • rule string Required

      Values are contains, ends_with, equals, regex, starts_with, >, or <.

    • updated_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • value string Required
  • advanced_snippet object

    Additional properties are allowed.

    Hide advanced_snippet attributes Show advanced_snippet attributes object
    • created_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • updated_at string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • value object Required

      Additional properties are allowed.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_filtering
curl \
 --request PUT http://api.example.com/_connector/{connector_id}/_filtering \
 --header "Content-Type: application/json" \
 --data '"{\n    \"rules\": [\n         {\n            \"field\": \"file_extension\",\n            \"id\": \"exclude-txt-files\",\n            \"order\": 0,\n            \"policy\": \"exclude\",\n            \"rule\": \"equals\",\n            \"value\": \"txt\"\n        },\n        {\n            \"field\": \"_\",\n            \"id\": \"DEFAULT\",\n            \"order\": 1,\n            \"policy\": \"include\",\n            \"rule\": \"regex\",\n            \"value\": \".*\"\n        }\n    ]\n}"'
Request examples
{
    "rules": [
         {
            "field": "file_extension",
            "id": "exclude-txt-files",
            "order": 0,
            "policy": "exclude",
            "rule": "equals",
            "value": "txt"
        },
        {
            "field": "_",
            "id": "DEFAULT",
            "order": 1,
            "policy": "include",
            "rule": "regex",
            "value": ".*"
        }
    ]
}
{
    "advanced_snippet": {
        "value": [{
            "tables": [
                "users",
                "orders"
            ],
            "query": "SELECT users.id AS id, orders.order_id AS order_id FROM users JOIN orders ON users.id = orders.user_id"
        }]
    }
}
Response examples (200)
{
  "result": "updated"
}








Update the connector name and description Beta

PUT /_connector/{connector_id}/_name

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • name string
  • description string

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_name
curl \
 --request PUT http://api.example.com/_connector/{connector_id}/_name \
 --header "Content-Type: application/json" \
 --data '"{\n    \"name\": \"Custom connector\",\n    \"description\": \"This is my customized connector\"\n}"'
Request example
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
Response examples (200)
{
  "result": "updated"
}













































Forget a follower Added in 6.7.0

POST /{index}/_ccr/forget_follower

Remove the cross-cluster replication follower retention leases from the leader.

A following index takes out retention leases on its leader index. These leases are used to increase the likelihood that the shards of the leader index retain the history of operations that the shards of the following index need to run replication. When a follower index is converted to a regular index by the unfollow API (either by directly calling the API or by index lifecycle management tasks), these leases are removed. However, removal of the leases can fail, for example when the remote cluster containing the leader index is unavailable. While the leases will eventually expire on their own, their extended existence can cause the leader index to hold more history than necessary and prevent index lifecycle management from performing some operations on the leader index. This API exists to enable manually removing the leases when the unfollow API is unable to do so.

NOTE: This API does not stop replication by a following index. If you use this API with a follower index that is still actively following, the following index will add back retention leases on the leader. The only purpose of this API is to handle the case of failure to remove the following retention leases after the unfollow API is invoked.

Path parameters

  • index string Required

    the name of the leader index for which specified follower retention leases should be removed

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

application/json

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • _shards object Required

      Additional properties are allowed.

      Hide _shards attributes Show _shards attributes object
      • failed number Required
      • successful number Required
      • total number Required
      • failures array[object]
        Hide failures attributes Show failures attributes object
        • index string
        • node string
        • reason object Required

          Additional properties are allowed.

          Hide reason attributes Show reason attributes object
          • type string Required

            The type of error

          • reason string

            A human-readable explanation of the error, in English.

          • stack_trace string

            The server stack trace. Present only if the error_trace=true parameter was sent with the request.

          • caused_by object

            Additional properties are allowed.

          • root_cause array[object]

            Additional properties are allowed.

          • suppressed array[object]

            Additional properties are allowed.

        • shard number Required
        • status string
      • skipped number
POST /{index}/_ccr/forget_follower
curl \
 --request POST http://api.example.com/{index}/_ccr/forget_follower \
 --header "Content-Type: application/json" \
 --data '"{\n  \"follower_cluster\" : \"\u003cfollower_cluster\u003e\",\n  \"follower_index\" : \"\u003cfollower_index\u003e\",\n  \"follower_index_uuid\" : \"\u003cfollower_index_uuid\u003e\",\n  \"leader_remote_cluster\" : \"\u003cleader_remote_cluster\u003e\"\n}"'
Request example
Run `POST /<leader_index>/_ccr/forget_follower`.
{
  "follower_cluster" : "<follower_cluster>",
  "follower_index" : "<follower_index>",
  "follower_index_uuid" : "<follower_index_uuid>",
  "leader_remote_cluster" : "<leader_remote_cluster>"
}
Response examples (200)
A successful response for removing the follower retention leases from the leader index.
{
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0,
    "failures" : [ ]
  }
}












Resume an auto-follow pattern Added in 7.5.0

POST /_ccr/auto_follow/{name}/resume

Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring following indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.

Path parameters

  • name string Required

    The name of the auto-follow pattern to resume.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_ccr/auto_follow/{name}/resume
curl \
 --request POST http://api.example.com/_ccr/auto_follow/{name}/resume
Response examples (200)
A successful response from `POST /_ccr/auto_follow/my_auto_follow_pattern/resume`, which resumes an auto-follow pattern.
{
  "acknowledged" : true
}


Check for a document source Added in 5.4.0

HEAD /{index}/_source/{id}

Check whether a document source exists in an index. For example:

HEAD my-index-000001/_source/1

A document's source is not available if it is disabled in the mapping.
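
Because a HEAD request returns no body, you typically inspect only the HTTP status code: 200 indicates the document source is available and 404 indicates it is not. A minimal curl sketch that prints just the status code (the index name and document ID are placeholders):

curl \
 --head --silent --output /dev/null --write-out "%{http_code}\n" \
 "http://api.example.com/my-index-000001/_source/1"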

Path parameters

  • index string Required

    A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).

  • id string Required

    A unique identifier for the document.

Query parameters

  • preference string

    The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).

  • routing string

    A custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    Indicates whether to return the _source field (true or false) or lists the fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude in the response.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response.

  • version number

    The version number for concurrency control. It must match the current version of the document for the request to succeed.

  • version_type string

    The version type.

    Values are internal, external, external_gte, or force.

Responses

HEAD /{index}/_source/{id}
curl \
 --request HEAD http://api.example.com/{index}/_source/{id}
















Get multiple documents Added in 1.3.0

POST /{index}/_mget

Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.

Filter source fields

By default, the _source field is returned for every document (if stored). Use the _source and _source_include or _source_exclude attributes to filter what fields are returned for a particular document. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.

Get stored fields

Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.
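
As a sketch of how the query-parameter defaults interact with the request body (the index and field names are illustrative), the following request returns only `field1` for every listed document unless an individual doc entry overrides it:

curl \
 --request POST "http://api.example.com/my-index-000001/_mget?_source_includes=field1" \
 --header "Content-Type: application/json" \
 --data '{"docs":[{"_id":"1"},{"_id":"2"}]}'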

Path parameters

  • index string Required

    Name of the index to retrieve documents from when ids are specified, or when a document in the docs array does not specify an index.

Query parameters

  • force_synthetic_source boolean

    Should this request force synthetic _source? Use this to test if the mapping supports synthetic _source and to get a sense of the worst-case performance. Fetches with this enabled will be slower than enabling synthetic source natively in the index.

  • preference string

    Specifies the node or shard the operation should be performed on. Random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes relevant shards before retrieving documents.

  • routing string

    Custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    If true, retrieves the document fields stored in the index rather than the document _source.

application/json

Body Required

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • docs array[object] Required

      The response includes a docs array that contains the documents in the order specified in the request. The structure of the returned documents is similar to that returned by the get API. If there is a failure getting a particular document, the error is included in place of the document.

      One of:
      Hide attributes Show attributes
      • _index string Required
      • fields object

        If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

        Hide fields attribute Show fields attribute object
        • * object Additional properties

          Additional properties are allowed.

      • _ignored array[string]
      • found boolean Required

        Indicates whether the document exists.

      • _id string Required
      • _primary_term number

        The primary term assigned to the document for the indexing operation.

      • _routing string

        The explicit routing, if set.

      • _seq_no number
      • _source object

        If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

        Additional properties are allowed.

      • _version number
POST /{index}/_mget
curl \
 --request POST http://api.example.com/{index}/_mget \
 --header "Content-Type: application/json" \
 --data '"{\n  \"docs\": [\n    {\n      \"_id\": \"1\"\n    },\n    {\n      \"_id\": \"2\"\n    }\n  ]\n}"'
Request examples
Run `GET /my-index-000001/_mget`. When you specify an index in the request URI, only the document IDs are required in the request body.
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
Run `GET /_mget`. This request sets `_source` to `false` for document 1 to exclude the source entirely. It retrieves `field3` and `field4` from document 2. It retrieves the `user` field from document 3 but filters out the `user.location` field.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "_source": false
    },
    {
      "_index": "test",
      "_id": "2",
      "_source": [ "field3", "field4" ]
    },
    {
      "_index": "test",
      "_id": "3",
      "_source": {
        "include": [ "user" ],
        "exclude": [ "user.location" ]
      }
    }
  ]
}
Run `GET /_mget`. This request retrieves `field1` and `field2` from document 1 and `field3` and `field4` from document 2.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "stored_fields": [ "field1", "field2" ]
    },
    {
      "_index": "test",
      "_id": "2",
      "stored_fields": [ "field3", "field4" ]
    }
  ]
}
Run `GET /_mget?routing=key1`. If routing is used during indexing, you need to specify the routing value to retrieve documents. This request fetches `test/_doc/2` from the shard corresponding to routing key `key1`. It fetches `test/_doc/1` from the shard corresponding to routing key `key2`.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "routing": "key2"
    },
    {
      "_index": "test",
      "_id": "2"
    }
  ]
}
Response examples (200)
{
  "docs": [
    {
      "_index": "string",
      "fields": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      },
      "_ignored": [
        "string"
      ],
      "found": true,
      "_id": "string",
      "_primary_term": 42.0,
      "_routing": "string",
      "_seq_no": 42.0,
      "_source": {},
      "_version": 42.0
    }
  ]
}




























Get term vector information

POST /{index}/_termvectors/{id}

Get information and statistics about terms in the fields of a particular document.

You can retrieve term vectors for documents stored in the index or for artificial documents passed in the body of the request. You can specify the fields you are interested in through the fields parameter or by adding the fields to the request body. For example:

GET /my-index-000001/_termvectors/1?fields=message

Fields can be specified using wildcards, similar to the multi match query.

Term vectors are real-time by default, not near real-time. This can be changed by setting the realtime parameter to false.

You can request three types of values: term information, term statistics, and field statistics. By default, all term information and field statistics are returned for all fields but term statistics are excluded.

Term information

  • term frequency in the field (always returned)
  • term positions (positions: true)
  • start and end offsets (offsets: true)
  • term payloads (payloads: true), as base64 encoded bytes

If the requested information wasn't stored in the index, it will be computed on the fly if possible. Additionally, term vectors could be computed for documents not even existing in the index, but instead provided by the user.


Start and end offsets assume UTF-16 encoding is being used. If you want to use these offsets in order to get the original text that produced this token, you should make sure that the string you are taking a sub-string of is also encoded using UTF-16.

Behaviour

The term and field statistics are not accurate. Deleted documents are not taken into account. The information is only retrieved for the shard the requested document resides in. The term and field statistics are therefore only useful as relative measures whereas the absolute numbers have no meaning in this context. By default, when requesting term vectors of artificial documents, a shard to get the statistics from is randomly selected. Use routing only to hit a particular shard.
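
For example, to pin the statistics for an artificial document to the shard that holds a particular routing key instead of a random shard, you could pass the routing query parameter (the routing value and document content are illustrative):

curl \
 --request POST "http://api.example.com/my-index-000001/_termvectors?routing=key1" \
 --header "Content-Type: application/json" \
 --data '{"doc": {"text": "test test test"}, "term_statistics": true}'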

Path parameters

  • index string Required

    The name of the index that contains the document.

  • id string Required

    A unique identifier for the document.

Query parameters

  • fields string | array[string]

    A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.

  • field_statistics boolean

    If true, the response includes:

    • The document count (how many documents contain this field).
    • The sum of document frequencies (the sum of document frequencies for all terms in this field).
    • The sum of total term frequencies (the sum of total term frequencies of each term in this field).
  • offsets boolean

    If true, the response includes term offsets.

  • payloads boolean

    If true, the response includes term payloads.

  • positions boolean

    If true, the response includes term positions.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • term_statistics boolean

    If true, the response includes:

    • The total term frequency (how often a term occurs in all documents).
    • The document frequency (the number of documents containing the current term).

    By default these values are not returned since term statistics can have a serious performance impact.

  • version number

    If true, returns the document version as part of a hit.

  • version_type string

    The version type.

    Values are internal, external, external_gte, or force.

application/json

Body

  • doc object

    An artificial document (a document not present in the index) for which you want to retrieve term vectors.

    Additional properties are allowed.

  • filter object

    Additional properties are allowed.

    Hide filter attributes Show filter attributes object
    • max_doc_freq number

      Ignore words which occur in more than this many docs. Defaults to unbounded.

    • max_num_terms number

      The maximum number of terms that must be returned per field.

    • max_term_freq number

      Ignore words with more than this frequency in the source doc. It defaults to unbounded.

    • max_word_length number

      The maximum word length above which words will be ignored. Defaults to unbounded.

    • min_doc_freq number

      Ignore terms which do not occur in at least this many docs.

    • min_term_freq number

      Ignore words with less than this frequency in the source doc.

    • min_word_length number

      The minimum word length below which words will be ignored.

  • per_field_analyzer object

    Override the default per-field analyzer. This is useful in order to generate term vectors in any fashion, especially when using artificial documents. When providing an analyzer for a field that already stores term vectors, the term vectors will be regenerated.

    Hide per_field_analyzer attribute Show per_field_analyzer attribute object
    • * string Additional properties

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • found boolean Required
    • _id string
    • _index string Required
    • term_vectors object
      Hide term_vectors attribute Show term_vectors attribute object
      • * object Additional properties

        Additional properties are allowed.

        Hide * attributes Show * attributes object
    • took number Required
    • _version number Required
POST /{index}/_termvectors/{id}
curl \
 --request POST http://api.example.com/{index}/_termvectors/{id} \
 --header "Content-Type: application/json" \
 --data '"{\n  \"fields\" : [\"text\"],\n  \"offsets\" : true,\n  \"payloads\" : true,\n  \"positions\" : true,\n  \"term_statistics\" : true,\n  \"field_statistics\" : true\n}"'
Request examples
Run `GET /my-index-000001/_termvectors/1` to return all information and statistics for field `text` in document 1.
{
  "fields" : ["text"],
  "offsets" : true,
  "payloads" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors/1` to set per-field analyzers. A different analyzer than the one at the field may be provided by using the `per_field_analyzer` parameter.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  },
  "fields": ["fullname"],
  "per_field_analyzer" : {
    "fullname": "keyword"
  }
}
Run `GET /imdb/_termvectors` to filter the terms returned based on their tf-idf scores. It returns the three most "interesting" keywords from the artificial document having the given "plot" field value. Notice that the keyword "Tony" or any stop words are not part of the response, as their tf-idf must be too low.
{
  "doc": {
    "plot": "When wealthy industrialist Tony Stark is forced to build an armored suit after a life-threatening incident, he ultimately decides to use its technology to fight against evil."
  },
  "term_statistics": true,
  "field_statistics": true,
  "positions": false,
  "offsets": false,
  "filter": {
    "max_num_terms": 3,
    "min_term_freq": 1,
    "min_doc_freq": 1
  }
}
Run `GET /my-index-000001/_termvectors/1`. Term vectors which are not explicitly stored in the index are automatically computed on the fly. This request returns all information and statistics for the fields in document 1, even though the terms haven't been explicitly stored in the index. Note that for the field text, the terms are not regenerated.
{
  "fields" : ["text", "some_field_without_term_vectors"],
  "offsets" : true,
  "positions" : true,
  "term_statistics" : true,
  "field_statistics" : true
}
Run `GET /my-index-000001/_termvectors`. Term vectors can be generated for artificial documents, that is for documents not present in the index. If dynamic mapping is turned on (default), the document fields not in the original mapping will be dynamically created.
{
  "doc" : {
    "fullname" : "John Doe",
    "text" : "test test test"
  }
}
Response examples (200)
A successful response from `GET /my-index-000001/_termvectors/1`.
{
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "found": true,
  "took": 6,
  "term_vectors": {
    "text": {
      "field_statistics": {
        "sum_doc_freq": 4,
        "doc_count": 2,
        "sum_ttf": 6
      },
      "terms": {
        "test": {
          "doc_freq": 2,
          "ttf": 4,
          "term_freq": 3,
          "tokens": [
            {
              "position": 0,
              "start_offset": 0,
              "end_offset": 4,
              "payload": "d29yZA=="
            },
            {
              "position": 1,
              "start_offset": 5,
              "end_offset": 9,
              "payload": "d29yZA=="
            },
            {
              "position": 2,
              "start_offset": 10,
              "end_offset": 14,
              "payload": "d29yZA=="
            }
          ]
        }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with `per_field_analyzer` in the request body.
{
  "_index": "my-index-000001",
  "_version": 0,
  "found": true,
  "took": 6,
  "term_vectors": {
    "fullname": {
      "field_statistics": {
          "sum_doc_freq": 2,
          "doc_count": 4,
          "sum_ttf": 4
      },
      "terms": {
          "John Doe": {
            "term_freq": 1,
            "tokens": [
                {
                  "position": 0,
                  "start_offset": 0,
                  "end_offset": 8
                }
            ]
          }
      }
    }
  }
}
A successful response from `GET /my-index-000001/_termvectors` with a `filter` in the request body.
{
  "_index": "imdb",
  "_version": 0,
  "found": true,
  "term_vectors": {
      "plot": {
        "field_statistics": {
            "sum_doc_freq": 3384269,
            "doc_count": 176214,
            "sum_ttf": 3753460
        },
        "terms": {
            "armored": {
              "doc_freq": 27,
              "ttf": 27,
              "term_freq": 1,
              "score": 9.74725
            },
            "industrialist": {
              "doc_freq": 88,
              "ttf": 88,
              "term_freq": 1,
              "score": 8.590818
            },
            "stark": {
              "doc_freq": 44,
              "ttf": 47,
              "term_freq": 1,
              "score": 9.272792
            }
        }
      }
  }
}








Update a document

POST /{index}/_update/{id}

Update a document by running a script or passing a partial document.

If the Elasticsearch security features are enabled, you must have the index or write index privilege for the target index or index alias.

The script can update, delete, or skip modifying the document. The API also supports passing a partial document, which is merged into the existing document. To fully replace an existing document, use the index API. This operation:

  • Gets the document (collocated with the shard) from the index.
  • Runs the specified script.
  • Indexes the result.

The document must still be reindexed, but using this API removes some network roundtrips and reduces chances of version conflicts between the GET and the index operation.

The _source field must be enabled to use this API. In addition to _source, you can access the following variables through the ctx map: _index, _type, _id, _version, _routing, and _now (the current timestamp).
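
As a small sketch of the ctx map (the target field name `last_updated` is illustrative), a script can read `_now` and write it back into the document source:

curl \
 --request POST "http://api.example.com/my-index-000001/_update/1" \
 --header "Content-Type: application/json" \
 --data '{"script": {"lang": "painless", "source": "ctx._source.last_updated = ctx._now"}}'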

Path parameters

  • index string Required

    The name of the target index. By default, the index is created automatically if it doesn't exist.

  • id string Required

    A unique identifier for the document to be updated.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • include_source_on_error boolean

    Indicates whether to include the document source in the error message in case of parsing errors.

  • lang string

    The script language.

  • refresh string

    If 'true', Elasticsearch refreshes the affected shards to make this operation visible to search. If 'wait_for', it waits for a refresh to make this operation visible to search. If 'false', it does nothing with refreshes.

    Values are true, false, or wait_for.

  • require_alias boolean

    If true, the destination must be an index alias.

  • retry_on_conflict number

    The number of times the operation should be retried when a conflict occurs.

  • routing string

    A custom value used to route operations to a specific shard.

  • timeout string

    The period to wait for the following operations: dynamic mapping updates and waiting for active shards. Elasticsearch waits for at least the timeout period before failing. The actual wait time could be longer, particularly when multiple waits occur.

  • wait_for_active_shards number | string

    The number of copies of each shard that must be active before proceeding with the operation. Set to 'all' or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

  • _source boolean | string | array[string]

    If false, source retrieval is turned off. You can also specify a comma-separated list of the fields you want to retrieve.

  • _source_excludes string | array[string]

    The source fields you want to exclude.

  • _source_includes string | array[string]

    The source fields you want to retrieve.

application/json

Body Required

  • detect_noop boolean

    If true, the result in the response is set to noop (no operation) when there are no changes to the document.

  • doc object

    A partial update to an existing document. If both doc and script are specified, doc is ignored.

    Additional properties are allowed.

  • doc_as_upsert boolean

    If true, use the contents of 'doc' as the value of 'upsert'. NOTE: Using ingest pipelines with doc_as_upsert is not supported.

  • script object

    Additional properties are allowed.

    Hide script attributes Show script attributes object
    • source string

      The script source.

    • id string
    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      Hide params attribute Show params attribute object
      • * object Additional properties

        Additional properties are allowed.

    • lang string

      Any of:

      Values are painless, expression, mustache, or java.

    • options object
      Hide options attribute Show options attribute object
      • * string Additional properties
  • scripted_upsert boolean

    If true, run the script whether or not the document exists.

  • _source boolean | object

    Defines how to fetch a source. Fetching can be disabled entirely, or the source can be filtered.

    One of:
  • upsert object

    If the document does not already exist, the contents of 'upsert' are inserted as a new document. If the document exists, the 'script' is run.

    Additional properties are allowed.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
POST /{index}/_update/{id}
curl \
 --request POST http://api.example.com/{index}/_update/{id} \
 --header "Content-Type: application/json" \
 --data '"{\n  \"script\" : {\n    \"source\": \"ctx._source.counter += params.count\",\n    \"lang\": \"painless\",\n    \"params\" : {\n      \"count\" : 4\n    }\n  }\n}"'
Request examples
Run `POST test/_update/1` to increment a counter by using a script.
{
  "script" : {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params" : {
      "count" : 4
    }
  }
}
Run `POST test/_update/1` to use a script to add a tag to a list of tags. In this example, it is just a list, so the tag is added even if it already exists.
{
  "script": {
    "source": "ctx._source.tags.add(params.tag)",
    "lang": "painless",
    "params": {
      "tag": "blue"
    }
  }
}
Run `POST test/_update/1` to use a script to remove a tag from a list of tags. The Painless function to remove a tag takes the array index of the element you want to remove. To avoid a possible runtime error, you first need to make sure the tag exists. If the list contains duplicates of the tag, this script just removes one occurrence.
{
  "script": {
    "source": "if (ctx._source.tags.contains(params.tag)) { ctx._source.tags.remove(ctx._source.tags.indexOf(params.tag)) }",
    "lang": "painless",
    "params": {
      "tag": "blue"
    }
  }
}
Run `POST test/_update/1` to use a script to add a field `new_field` to the document.
{
  "script" : "ctx._source.new_field = 'value_of_new_field'"
}
Run `POST test/_update/1` to use a script to remove a field `new_field` from the document.
{
  "script" : "ctx._source.remove('new_field')"
}
Run `POST test/_update/1` to use a script to remove a subfield from an object field.
{
  "script": "ctx._source['my-object'].remove('my-subfield')"
}
Run `POST test/_update/1` to change the operation that runs from within the script. For example, this request deletes the document if the `tags` field contains `green`, otherwise it does nothing (`noop`).
{
  "script": {
    "source": "if (ctx._source.tags.contains(params.tag)) { ctx.op = 'delete' } else { ctx.op = 'noop' }",
    "lang": "painless",
    "params": {
      "tag": "green"
    }
  }
}
Run `POST test/_update/1` to do a partial update that adds a new field to the existing document.
{
  "doc": {
    "name": "new_name"
  }
}
Run `POST test/_update/1` to perform an upsert. If the document does not already exist, the contents of the upsert element are inserted as a new document. If the document exists, the script is run.
{
  "script": {
    "source": "ctx._source.counter += params.count",
    "lang": "painless",
    "params": {
      "count": 4
    }
  },
  "upsert": {
    "counter": 1
  }
}
Run `POST test/_update/1` to perform a scripted upsert. When `scripted_upsert` is `true`, the script runs whether or not the document exists.
{
  "scripted_upsert": true,
  "script": {
    "source": """
      if ( ctx.op == 'create' ) {
        ctx._source.counter = params.count
      } else {
        ctx._source.counter += params.count
      }
    """,
    "params": {
      "count": 4
    }
  },
  "upsert": {}
}
Run `POST test/_update/1` to perform a doc as upsert. Instead of sending a partial `doc` plus an `upsert` doc, you can set `doc_as_upsert` to `true` to use the contents of `doc` as the `upsert` value.
{
  "doc": {
    "name": "new_name"
  },
  "doc_as_upsert": true
}
Response examples (200)
By default, updates that don't change anything are detected as no-ops and return `"result": "noop"`.
{
   "_shards": {
        "total": 0,
        "successful": 0,
        "failed": 0
   },
   "_index": "test",
   "_id": "1",
   "_version": 2,
   "_primary_term": 1,
   "_seq_no": 1,
   "result": "noop"
}









Get an enrich policy Added in 7.5.0

GET /_enrich/policy/{name}

Returns information about an enrich policy.

Path parameters

  • name string | array[string] Required

    Comma-separated list of enrich policy names used to limit the request. To return information for all enrich policies, omit this parameter.

Query parameters

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • policies array[object] Required
      Hide policies attribute Show policies attribute object
      • config object Required
        Hide config attribute Show config attribute object
        • * object Additional properties

          Additional properties are allowed.

          Hide * attributes Show * attributes object
GET /_enrich/policy/{name}
curl \
 --request GET http://api.example.com/_enrich/policy/{name}
Response examples (200)
{
  "policies": [
    {
      "config": {
        "additionalProperty1": {
          "enrich_fields": "string",
          "indices": "string",
          "match_field": "string",
          "query": {},
          "name": "string",
          "elasticsearch_version": "string"
        },
        "additionalProperty2": {
          "enrich_fields": "string",
          "indices": "string",
          "match_field": "string",
          "query": {},
          "name": "string",
          "elasticsearch_version": "string"
        }
      }
    }
  ]
}





















Get async EQL search results Added in 7.9.0

GET /_eql/search/{id}

Get the current status and available results for an async EQL search or a stored synchronous EQL search.

Path parameters

  • id string Required

    Identifier for the search.

Query parameters

  • keep_alive string

    Period for which the search and its results are stored on the cluster. Defaults to the keep_alive value set by the search’s EQL search API request.

  • wait_for_completion_timeout string

    Timeout duration to wait for the request to finish. Defaults to no timeout, meaning the request waits for complete search results.

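For example, to keep the stored results around longer while also waiting briefly for the search to finish, you could combine both query parameters described above (the durations are illustrative):

curl \
 --request GET "http://api.example.com/_eql/search/{id}?keep_alive=5d&wait_for_completion_timeout=2s"
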
Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • id string
    • is_partial boolean

      If true, the response does not contain complete search results.

    • is_running boolean

      If true, the search request is still executing.

    • took number

      Time unit for milliseconds

    • timed_out boolean

      If true, the request timed out before completion.

    • hits object Required

      Additional properties are allowed.

      Hide hits attributes Show hits attributes object
      • total object

        Additional properties are allowed.

        Hide total attributes Show total attributes object
      • events array[object]

        Contains events matching the query. Each object represents a matching event.

        Hide events attributes Show events attributes object
        • _index string Required
        • _id string Required
        • _source object Required

          Original JSON body passed for the event at index time.

          Additional properties are allowed.

        • missing boolean

          Set to true for events in a timespan-constrained sequence that do not meet a given condition.

        • fields object
          Hide fields attribute Show fields attribute object
          • * array[object] Additional properties

            Additional properties are allowed.

      • sequences array[object]

        Contains event sequences matching the query. Each object represents a matching sequence. This parameter is only returned for EQL queries containing a sequence.

        Hide sequences attributes Show sequences attributes object
        • events array[object] Required

          Contains events matching the query. Each object represents a matching event.

          Hide events attributes Show events attributes object
          • _index string Required
          • _id string Required
          • _source object Required

            Original JSON body passed for the event at index time.

            Additional properties are allowed.

          • missing boolean

            Set to true for events in a timespan-constrained sequence that do not meet a given condition.

          • fields object
        • join_keys array[object]

          Shared field values used to constrain matches in the sequence. These are defined using the by keyword in the EQL query syntax.

          Additional properties are allowed.

    • shard_failures array[object]

      Contains information about shard failures, if any. Returned only when allow_partial_search_results=true.

      Hide shard_failures attributes Show shard_failures attributes object
      • index string
      • node string
      • reason object Required

        Additional properties are allowed.

        Hide reason attributes Show reason attributes object
        • type string Required

          The type of error

        • reason string

          A human-readable explanation of the error, in English.

        • The server stack trace. Present only if the error_trace=true parameter was sent with the request.

        • Additional properties are allowed.

        • root_cause array[object]

          Additional properties are allowed.

        • suppressed array[object]

          Additional properties are allowed.

      • shard number Required
      • status string
GET /_eql/search/{id}
curl \
 --request GET http://api.example.com/_eql/search/{id}
Response examples (200)
{
  "id": "string",
  "is_partial": true,
  "is_running": true,
  "": 42.0,
  "timed_out": true,
  "hits": {
    "total": {
      "relation": "eq",
      "value": 42.0
    },
    "events": [
      {
        "_index": "string",
        "_id": "string",
        "_source": {},
        "missing": true,
        "fields": {
          "additionalProperty1": [
            {}
          ],
          "additionalProperty2": [
            {}
          ]
        }
      }
    ],
    "sequences": [
      {
        "events": [
          {
            "_index": "string",
            "_id": "string",
            "_source": {},
            "missing": true,
            "fields": {}
          }
        ],
        "join_keys": [
          {}
        ]
      }
    ]
  },
  "shard_failures": [
    {
      "index": "string",
      "node": "string",
      "reason": {
        "type": "string",
        "reason": "string",
        "stack_trace": "string",
        "caused_by": {},
        "root_cause": [
          {}
        ],
        "suppressed": [
          {}
        ]
      },
      "shard": 42.0,
      "status": "string"
    }
  ]
}
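
To control how long the results are retained and how long to wait for the search to finish, the two query parameters described above can be passed on the request. This sketch assumes the standard keep_alive and wait_for_completion_timeout parameter names used by the EQL search API; the durations are illustrative only:

curl \
 --request GET "http://api.example.com/_eql/search/{id}?wait_for_completion_timeout=2s&keep_alive=5d"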
















ES|QL

The Elasticsearch Query Language (ES|QL) provides a powerful way to filter, transform, and analyze data stored in Elasticsearch, and in the future in other runtimes.













Stop async ES|QL query Added in 8.18.0

POST /_query/async/{id}/stop

This API interrupts the query execution and returns the results so far. If the Elasticsearch security features are enabled, only the user who first submitted the ES|QL query can stop it.

Path parameters

  • id string Required

    The unique identifier of the query. A query ID is provided in the ES|QL async query API response for a query that does not complete in the designated time. A query ID is also provided when the request was submitted with the keep_on_completion parameter set to true.

Query parameters

  • Indicates whether columns that are entirely null will be removed from the columns and values portions of the results. If true, the response will include an extra section under the name all_columns, which lists the names of all the columns.

Responses

  • 200 application/json

    Additional properties are allowed.

POST /_query/async/{id}/stop
curl \
 --request POST http://api.example.com/_query/async/{id}/stop
Response examples (200)
{}
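
To strip entirely null columns from the returned results, the query parameter described above can be set on the stop request. This sketch assumes the drop_null_columns parameter name used by the ES|QL query APIs; the value is illustrative:

curl \
 --request POST "http://api.example.com/_query/async/{id}/stop?drop_null_columns=true"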




Features

The feature APIs enable you to introspect and manage features provided by Elasticsearch and Elasticsearch plugins.





Reset the features Technical preview

POST /_features/_reset

Clear all of the state information stored in system indices by Elasticsearch features, including the security and machine learning indices.

WARNING: Intended for development and testing use only. Do not reset features on a production cluster.

Return a cluster to the same state as a new installation by resetting the feature state for all Elasticsearch features. This deletes all state information stored in system indices.

The response code is HTTP 200 if the state is successfully reset for all features. It is HTTP 500 if the reset operation failed for any feature.

Note that select features might provide a way to reset particular system indices. Using this API resets all features, both built-in features and those implemented as plugins.

To list the features that will be affected, use the get features API.

IMPORTANT: The features installed on the node you submit this request to are the features that will be reset. Run on the master node if you have any doubts about which plugins are installed on individual nodes.

Query parameters

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
POST /_features/_reset
curl \
 --request POST http://api.example.com/_features/_reset
Response examples (200)
A successful response for clearing state information stored in system indices by Elasticsearch features.
{
  "features" : [
    {
      "feature_name" : "security",
      "status" : "SUCCESS"
    },
    {
      "feature_name" : "tasks",
      "status" : "SUCCESS"
    }
  ]
}
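
Because resetting feature state is destructive, you may first want to check which features would be affected. A minimal sketch using the get features API mentioned above, with the same placeholder host as the other examples:

curl \
 --request GET http://api.example.com/_features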