Get task information Technical preview

GET /_cat/tasks

Get information about tasks currently running in the cluster.

IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the task management API.

Query parameters

  • actions array[string]

    The task action names, which are used to limit the response.

  • detailed boolean

    If true, the response includes detailed information about the running tasks. This information is useful to distinguish tasks from each other but is more costly to run.

  • nodes array[string]

    Unique node identifiers, which are used to limit the response.

  • parent_task_id string

    The parent task identifier, which is used to limit the response.

  • h string | array[string]

    List of columns to appear in the response. Supports simple wildcards.

  • s string | array[string]

    List of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.

  • time string

    Unit used to display time values.

    Values are nanos, micros, ms, s, m, h, or d.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

  • wait_for_completion boolean

    If true, the request blocks until the task has completed.

Responses

GET /_cat/tasks
curl \
 --request GET 'http://api.example.com/_cat/tasks' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET _cat/tasks?v=true&format=json`.
[
  {
    "action": "cluster:monitor/tasks/lists[n]",
    "task_id": "oTUltX4IQMOUUVeiohTt8A:124",
    "parent_task_id": "oTUltX4IQMOUUVeiohTt8A:123",
    "type": "direct",
    "start_time": "1458585884904",
    "timestamp": "01:48:24",
    "running_time": "44.1micros",
    "ip": "127.0.0.1:9300",
    "node": "oTUltX4IQMOUUVeiohTt8A"
  },
  {
    "action": "cluster:monitor/tasks/lists",
    "task_id": "oTUltX4IQMOUUVeiohTt8A:123",
    "parent_task_id": "-",
    "type": "transport",
    "start_time": "1458585884904",
    "timestamp": "01:48:24",
    "running_time": "186.2micros",
    "ip": "127.0.0.1:9300",
    "node": "oTUltX4IQMOUUVeiohTt8A"
  }
]

Get feature usage information

GET /_nodes/{node_id}/usage

Path parameters

  • node_id string | array[string] Required

    A comma-separated list of node IDs or names to limit the returned information; use _local to return information from the node you're connecting to, leave empty to get information from all nodes

Query parameters

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    • _nodes object
      • failures array[object]
      • total number Required

        Total number of nodes selected by the request.

      • successful number Required

        Number of nodes that responded successfully to the request.

      • failed number Required

        Number of nodes that rejected the request or failed to respond. If this value is not 0, a reason for the rejection or failure is included in the response.

    • cluster_name string Required
    • nodes object Required
      • * object Additional properties
        • rest_actions object Required
          • * number Additional properties
        • since number

          Time unit for milliseconds

        • timestamp number

          Time unit for milliseconds

        • aggregations object Required
          • * object Additional properties
GET /_nodes/{node_id}/usage
curl \
 --request GET 'http://api.example.com/_nodes/{node_id}/usage' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "_nodes": {
    "failures": [
      {
        "type": "string",
        "reason": "string",
        "stack_trace": "string",
        "caused_by": {},
        "root_cause": [
          {}
        ],
        "suppressed": [
          {}
        ]
      }
    ],
    "total": 42.0,
    "successful": 42.0,
    "failed": 42.0
  },
  "cluster_name": "string",
  "nodes": {
    "additionalProperty1": {
      "rest_actions": {
        "additionalProperty1": 42.0,
        "additionalProperty2": 42.0
      },
      "": 42.0,
      "aggregations": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      }
    },
    "additionalProperty2": {
      "rest_actions": {
        "additionalProperty1": 42.0,
        "additionalProperty2": 42.0
      },
      "": 42.0,
      "aggregations": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      }
    }
  }
}

Claim a connector sync job Technical preview

PUT /_connector/_sync_job/{connector_sync_job_id}/_claim

This action updates the job status to in_progress and sets the last_seen and started_at timestamps to the current time. Additionally, it can set the sync_cursor property for the sync job.

This API is not intended for direct connector management by users. It supports the implementation of services that utilize the connector protocol to communicate with Elasticsearch.

To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.

Path parameters

  • connector_sync_job_id string Required

    The unique identifier of the connector sync job.

application/json

Body Required

  • sync_cursor object

    The cursor object from the last incremental sync job. This should reference the sync_cursor field in the connector state for which the job runs.

  • worker_hostname string Required

    The host name of the current system that will run the job.

Responses

PUT /_connector/_sync_job/{connector_sync_job_id}/_claim
curl \
 --request PUT 'http://api.example.com/_connector/_sync_job/{connector_sync_job_id}/_claim' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"sync_cursor":{},"worker_hostname":"string"}'
Request examples
{
  "sync_cursor": {},
  "worker_hostname": "string"
}
Response examples (200)
{}

Update the connector error field Technical preview

PUT /_connector/{connector_id}/_error

Set the error field for the connector. If the error provided in the request body is non-null, the connector’s status is updated to error. Otherwise, if the error is reset to null, the connector status is updated to connected.

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • error string | null

    One of: a string containing the error message, or null to clear the error.

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_error
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_error' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"error":"Houston, we have a problem!"}'
Request example
{
    "error": "Houston, we have a problem!"
}
Response examples (200)
{
  "result": "updated"
}

Update the connector name and description

PUT /_connector/{connector_id}/_name

Path parameters

  • connector_id string Required

    The unique identifier of the connector to be updated

application/json

Body Required

  • name string

  • description string

Responses

  • 200 application/json
    • result string Required

      Values are created, updated, deleted, not_found, or noop.

PUT /_connector/{connector_id}/_name
curl \
 --request PUT 'http://api.example.com/_connector/{connector_id}/_name' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"name":"Custom connector","description":"This is my customized connector"}'
Request example
{
    "name": "Custom connector",
    "description": "This is my customized connector"
}
Response examples (200)
{
  "result": "updated"
}

Convert an index alias to a data stream Added in 7.9.0

POST /_data_stream/_migrate/{name}

Converts an index alias to a data stream. You must have a matching index template that is data stream enabled. The alias must meet the following criteria:

  • The alias must have a write index.
  • All indices for the alias must have a @timestamp field mapping of a date or date_nanos field type.
  • The alias must not have any filters.
  • The alias must not use custom routing.

If successful, the request removes the alias and creates a data stream with the same name. The indices for the alias become hidden backing indices for the stream. The write index for the alias becomes the write index for the stream.

Path parameters

  • name string Required

    Name of the index alias to convert to a data stream.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

  • 200 application/json
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

POST /_data_stream/_migrate/{name}
curl \
 --request POST 'http://api.example.com/_data_stream/_migrate/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "acknowledged": true
}

Create or update a document in an index

POST /{index}/_doc

Add a JSON document to the specified data stream or index and make it searchable. If the target is an index and the document already exists, the request updates the document and increments its version.

NOTE: You cannot use this API to send update requests for existing documents in a data stream.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:

  • To add or overwrite a document using the PUT /<target>/_doc/<_id> request format, you must have the create, index, or write index privilege.
  • To add a document using the POST /<target>/_doc/ request format, you must have the create_doc, create, index, or write index privilege.
  • To automatically create a data stream or index with this API request, you must have the auto_configure, create_index, or manage index privilege.

Automatic data stream creation requires a matching index template with data stream enabled.

NOTE: Replica shards might not all be started when an indexing operation returns successfully. By default, only the primary is required. Set wait_for_active_shards to change this default behavior.

Automatically create data streams and indices

If the request's target doesn't exist and matches an index template with a data_stream definition, the index operation automatically creates the data stream.

If the target doesn't exist and doesn't match a data stream template, the operation automatically creates the index and applies any matching index templates.

NOTE: Elasticsearch includes several built-in index templates. To avoid naming collisions with these templates, refer to index pattern documentation.

If no mapping exists, the index operation creates a dynamic mapping. By default, new fields and objects are automatically added to the mapping if needed.

Automatic index creation is controlled by the action.auto_create_index setting. If it is true, any index can be created automatically. You can modify this setting to explicitly allow or block automatic creation of indices that match specified patterns or set it to false to turn off automatic index creation entirely. Specify a comma-separated list of patterns you want to allow or prefix each pattern with + or - to indicate whether it should be allowed or blocked. When a list is specified, the default behaviour is to disallow.

NOTE: The action.auto_create_index setting affects the automatic creation of indices only. It does not affect the creation of data streams.
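
For example, a cluster settings update along these lines (the index patterns are illustrative) allows automatic creation of indices matching my-index-* and other-index-*, blocks names matching index*, and disallows everything else:

PUT _cluster/settings
{
  "persistent": {
    "action.auto_create_index": "my-index-*,-index*,+other-index*"
  }
}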

Optimistic concurrency control

Index operations can be made conditional and only be performed if the last modification to the document was assigned the sequence number and primary term specified by the if_seq_no and if_primary_term parameters. If a mismatch is detected, the operation will result in a VersionConflictException and a status code of 409.
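
For example, a conditional index request of the following form (the sequence number and primary term values are illustrative) succeeds only if the document has not been modified since it was last read with that sequence number and primary term:

PUT my-index-000001/_doc/1?if_seq_no=362&if_primary_term=2
{
  "user": {
    "id": "elkbee"
  }
}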

Routing

By default, shard placement — or routing — is controlled by using a hash of the document's ID value. For more explicit control, the value fed into the hash function used by the router can be directly specified on a per-operation basis using the routing parameter.

When setting up explicit mapping, you can also use the _routing field to direct the index operation to extract the routing value from the document itself. This does come at the (very minimal) cost of an additional document parsing pass. If the _routing mapping is defined and set to be required, the index operation will fail if no routing value is provided or extracted.

NOTE: Data streams do not support custom routing unless they were created with the allow_custom_routing setting enabled in the template.
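
As an illustrative sketch, the following request routes the document by hashing the value kimchy instead of the document ID, so all documents indexed with the same routing value land on the same shard:

POST my-index-000001/_doc?routing=kimchy
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000"
}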

Distributed

The index operation is directed to the primary shard based on its route and performed on the actual node containing this shard. After the primary shard completes the operation, if needed, the update is distributed to applicable replicas.

Active shards

To improve the resiliency of writes to the system, indexing operations can be configured to wait for a certain number of active shard copies before proceeding with the operation. If the requisite number of active shard copies are not available, then the write operation must wait and retry, until either the requisite shard copies have started or a timeout occurs. By default, write operations only wait for the primary shards to be active before proceeding (that is to say wait_for_active_shards is 1). This default can be overridden in the index settings dynamically by setting index.write.wait_for_active_shards. To alter this behavior per operation, use the wait_for_active_shards request parameter.

Valid values are all or any positive integer up to the total number of configured copies per shard in the index (which is number_of_replicas+1). Specifying a negative value or a number greater than the number of shard copies will throw an error.

For example, suppose you have a cluster of three nodes, A, B, and C, and you create an index called index with the number of replicas set to 3 (resulting in 4 shard copies, one more copy than there are nodes). If you attempt an indexing operation, by default the operation will only ensure the primary copy of each shard is available before proceeding. This means that even if B and C went down and A hosted the primary shard copies, the indexing operation would still proceed with only one copy of the data. If wait_for_active_shards is set on the request to 3 (and all three nodes are up), the indexing operation will require 3 active shard copies before proceeding. This requirement should be met because there are 3 active nodes in the cluster, each one holding a copy of the shard. However, if you set wait_for_active_shards to all (or to 4, which is the same in this situation), the indexing operation will not proceed as you do not have all 4 copies of each shard active in the index. The operation will time out unless a new node is brought up in the cluster to host the fourth copy of the shard.

It is important to note that this setting greatly reduces the chances of the write operation not writing to the requisite number of shard copies, but it does not completely eliminate the possibility, because this check occurs before the write operation starts. After the write operation is underway, it is still possible for replication to fail on any number of shard copies but still succeed on the primary. The _shards section of the API response reveals the number of shard copies on which replication succeeded and failed.
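
For example, an indexing request such as the following (the index name and document are illustrative) waits for at least two active shard copies, the primary plus one replica, before proceeding:

PUT my-index-000001/_doc/1?wait_for_active_shards=2
{
  "message": "example document"
}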

No operation (noop) updates

When updating a document by using this API, a new version of the document is always created even if the document hasn't changed. If this isn't acceptable, use the _update API with detect_noop set to true. The detect_noop option isn't available on this API because it doesn't fetch the old source and isn't able to compare it against the new source.

There isn't a definitive rule for when noop updates aren't acceptable. It's a combination of lots of factors like how frequently your data source sends updates that are actually noops and how many queries per second Elasticsearch runs on the shard receiving the updates.
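
As a minimal sketch, an update through the _update API such as the following relies on detect_noop (enabled by default) to skip the write entirely when the supplied values already match the stored document:

POST my-index-000001/_update/1
{
  "doc": {
    "message": "GET /search HTTP/1.1 200 1070000"
  },
  "detect_noop": true
}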

Versioning

Each indexed document is given a version number. By default, internal versioning is used that starts at 1 and increments with each update, deletes included. Optionally, the version number can be set to an external value (for example, if maintained in a database). To enable this functionality, version_type should be set to external. The value provided must be a numeric, long value greater than or equal to 0, and less than around 9.2e+18.

NOTE: Versioning is completely real time, and is not affected by the near real time aspects of search operations. If no version is provided, the operation runs without any version checks.

When using the external version type, the system checks to see if the version number passed to the index request is greater than the version of the currently stored document. If true, the document will be indexed and the new version number used. If the value provided is less than or equal to the stored document's version number, a version conflict will occur and the index operation will fail. For example:

PUT my-index-000001/_doc/1?version=2&version_type=external
{
  "user": {
    "id": "elkbee"
  }
}

In this example, the operation will succeed since the supplied version of 2 is higher than the current document version of 1.
If the document was already updated and its version was set to 2 or higher, the indexing command will fail and result in a conflict (409 HTTP status code).

A nice side effect is that there is no need to maintain strict ordering of async indexing operations run as a result of changes to a source database, as long as version numbers from the source database are used.
Even the simple case of updating the Elasticsearch index using data from a database is simplified if external versioning is used, as only the latest version will be used if the index operations arrive out of order.
External documentation

Path parameters

  • index string Required

    The name of the data stream or index to target. If the target doesn't exist and matches the name or wildcard (*) pattern of an index template with a data_stream definition, this request creates the data stream. If the target doesn't exist and doesn't match a data stream template, this request creates the index. You can check for existing targets with the resolve index API.

Query parameters

  • if_primary_term number

    Only perform the operation if the document has this primary term.

  • if_seq_no number

    Only perform the operation if the document has this sequence number.

  • include_source_on_error boolean

    If true, the document source is included in the error message if there was a parsing error.

  • op_type string

    Set to create to only index the document if it does not already exist (put if absent). If a document with the specified _id already exists, the indexing operation will fail. The behavior is the same as using the <index>/_create endpoint. If a document ID is specified, this parameter defaults to index. Otherwise, it defaults to create. If the request targets a data stream, an op_type of create is required.

    Values are index or create.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • refresh string

    If true, Elasticsearch refreshes the affected shards to make this operation visible to search. If wait_for, it waits for a refresh to make this operation visible to search. If false, it does nothing with refreshes.

    Values are true, false, or wait_for.

  • routing string

    A custom value that is used to route operations to a specific shard.

  • timeout string

    The period the request waits for the following operations: automatic index creation, dynamic mapping updates, waiting for active shards.

    This parameter is useful for situations where the primary shard assigned to perform the operation might not be available when the operation runs. Some reasons for this might be that the primary shard is currently recovering from a gateway or undergoing relocation. By default, the operation will wait on the primary shard to become available for at least 1 minute before failing and responding with an error. The actual wait time could be longer, particularly when multiple waits occur.

  • version number

    An explicit version number for concurrency control. It must be a non-negative long number.

  • version_type string

    The version type.

    Values are internal, external, external_gte, or force.

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. You can set it to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The default value of 1 means it waits for each primary shard to be active.

  • require_alias boolean

    If true, the destination must be an index alias.

application/json

Body Required

object object

Responses

POST /{index}/_doc
curl \
 --request POST 'http://api.example.com/{index}/_doc' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"@timestamp":"2099-11-15T13:12:00","message":"GET /search HTTP/1.1 200 1070000","user":{"id":"kimchy"}}'
Request examples
Run `POST my-index-000001/_doc/` to index a document. When you use the `POST /<target>/_doc/` request format, the `op_type` is automatically set to `create` and the index operation generates a unique ID for the document.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Run `PUT my-index-000001/_doc/1` to insert a JSON document into the `my-index-000001` index with an `_id` of 1.
{
  "@timestamp": "2099-11-15T13:12:00",
  "message": "GET /search HTTP/1.1 200 1070000",
  "user": {
    "id": "kimchy"
  }
}
Response examples (200)
A successful response from `POST my-index-000001/_doc/`, which contains an automated document ID.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "W0tpsmIBdwcYyG50zbta",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}
A successful response from `PUT my-index-000001/_doc/1`.
{
  "_shards": {
    "total": 2,
    "failed": 0,
    "successful": 2
  },
  "_index": "my-index-000001",
  "_id": "1",
  "_version": 1,
  "_seq_no": 0,
  "_primary_term": 1,
  "result": "created"
}

Get multiple documents Added in 1.3.0

GET /_mget

Get multiple JSON documents by ID from one or more indices. If you specify an index in the request URI, you only need to specify the document IDs in the request body. To ensure fast responses, this multi get (mget) API responds with partial results if one or more shards fail.

Filter source fields

By default, the _source field is returned for every document (if stored). Use the _source and _source_include or _source_exclude attributes to filter what fields are returned for a particular document. You can include the _source, _source_includes, and _source_excludes query parameters in the request URI to specify the defaults to use when there are no per-document instructions.

Get stored fields

Use the stored_fields attribute to specify the set of stored fields you want to retrieve. Any requested fields that are not stored are ignored. You can include the stored_fields query parameter in the request URI to specify the defaults to use when there are no per-document instructions.

Query parameters

  • force_synthetic_source boolean

    Should this request force synthetic _source? Use this to test if the mapping supports synthetic _source and to get a sense of the worst case performance. Fetches with this enabled will be slower than enabling synthetic source natively in the index.

  • preference string

    Specifies the node or shard the operation should be performed on. Random by default.

  • realtime boolean

    If true, the request is real-time as opposed to near-real-time.

  • refresh boolean

    If true, the request refreshes relevant shards before retrieving documents.

  • routing string

    Custom value used to route operations to a specific shard.

  • _source boolean | string | array[string]

    True or false to return the _source field or not, or a list of fields to return.

  • _source_excludes string | array[string]

    A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in _source_includes query parameter.

  • _source_includes string | array[string]

    A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.

  • stored_fields string | array[string]

    If true, retrieves the document fields stored in the index rather than the document _source.

application/json

Body Required

Responses

  • 200 application/json
    • docs array[object] Required

      The response includes a docs array that contains the documents in the order specified in the request. The structure of the returned documents is similar to that returned by the get API. If there is a failure getting a particular document, the error is included in place of the document.

      One of:
      • _index string Required
      • fields object

        If the stored_fields parameter is set to true and found is true, it contains the document fields stored in the index.

        • * object Additional properties
      • _ignored array[string]
      • found boolean Required

        Indicates whether the document exists.

      • _id string Required
      • _primary_term number

        The primary term assigned to the document for the indexing operation.

      • _routing string

        The explicit routing, if set.

      • _seq_no number
      • _source object

        If found is true, it contains the document data formatted in JSON. If the _source parameter is set to false or the stored_fields parameter is set to true, it is excluded.

      • _version number
GET /_mget
curl \
 --request GET 'http://api.example.com/_mget' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"docs":[{"_id":"1"},{"_id":"2"}]}'
Run `GET /my-index-000001/_mget`. When you specify an index in the request URI, only the document IDs are required in the request body.
{
  "docs": [
    {
      "_id": "1"
    },
    {
      "_id": "2"
    }
  ]
}
Run `GET /_mget`. This request sets `_source` to `false` for document 1 to exclude the source entirely. It retrieves `field3` and `field4` from document 2. It retrieves the `user` field from document 3 but filters out the `user.location` field.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "_source": false
    },
    {
      "_index": "test",
      "_id": "2",
      "_source": [ "field3", "field4" ]
    },
    {
      "_index": "test",
      "_id": "3",
      "_source": {
        "include": [ "user" ],
        "exclude": [ "user.location" ]
      }
    }
  ]
}
Run `GET /_mget`. This request retrieves `field1` and `field2` from document 1 and `field3` and `field4` from document 2.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "stored_fields": [ "field1", "field2" ]
    },
    {
      "_index": "test",
      "_id": "2",
      "stored_fields": [ "field3", "field4" ]
    }
  ]
}
Run `GET /_mget?routing=key1`. If routing is used during indexing, you need to specify the routing value to retrieve documents. This request fetches `test/_doc/2` from the shard corresponding to routing key `key1`. It fetches `test/_doc/1` from the shard corresponding to routing key `key2`.
{
  "docs": [
    {
      "_index": "test",
      "_id": "1",
      "routing": "key2"
    },
    {
      "_index": "test",
      "_id": "2"
    }
  ]
}
Response examples (200)
{
  "docs": [
    {
      "_index": "string",
      "fields": {
        "additionalProperty1": {},
        "additionalProperty2": {}
      },
      "_ignored": [
        "string"
      ],
      "found": true,
      "_id": "string",
      "_primary_term": 42.0,
      "_routing": "string",
      "_seq_no": 42.0,
      "_source": {},
      "_version": 42.0
    }
  ]
}

Throttle a reindex operation Added in 2.4.0

POST /_reindex/{task_id}/_rethrottle

Change the number of requests per second for a particular reindex operation. For example:

POST _reindex/r1A2WoRbTwKZ516z6NEs5A:36619/_rethrottle?requests_per_second=-1

Rethrottling that speeds up the query takes effect immediately. Rethrottling that slows down the query will take effect after completing the current batch. This behavior prevents scroll timeouts.

Path parameters

  • task_id string Required

    The task identifier, which can be found by using the tasks API.

Query parameters

  • requests_per_second number

    The throttle for this request in sub-requests per second. It can be either -1 to turn off throttling or any decimal number like 1.7 or 12 to throttle to that level.

Responses

  • 200 application/json
POST /_reindex/{task_id}/_rethrottle
curl \
 --request POST 'http://api.example.com/_reindex/{task_id}/_rethrottle' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "nodes": {}
}

Update documents Added in 2.4.0

POST /{index}/_update_by_query

Updates documents that match the specified query. If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.

If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:

  • read
  • index or write

You can specify the query criteria in the request URI or the request body using the same syntax as the search API.

When you submit an update by query request, Elasticsearch gets a snapshot of the data stream or index when it begins processing the request and updates matching documents using internal versioning. When the versions match, the document is updated and the version number is incremented. If a document changes between the time that the snapshot is taken and the update operation is processed, it results in a version conflict and the operation fails. You can opt to count version conflicts instead of halting and returning by setting conflicts to proceed. Note that if you opt to count version conflicts, the operation could attempt to update more documents from the source than max_docs until it has successfully updated max_docs documents or it has gone through every document in the source query.

NOTE: Documents with a version equal to 0 cannot be updated using update by query because internal versioning does not support 0 as a valid version number.

While processing an update by query request, Elasticsearch performs multiple search requests sequentially to find all of the matching documents. A bulk update request is performed for each batch of matching documents. Any query or update failures cause the update by query request to fail and the failures are shown in the response. Any update requests that completed successfully still stick; they are not rolled back.

Throttling update requests

To control the rate at which update by query issues batches of update operations, you can set requests_per_second to any positive decimal number. This pads each batch with a wait time to throttle the rate. Set requests_per_second to -1 to turn off throttling.

Throttling uses a wait time between batches so that the internal scroll requests can be given a timeout that takes the request padding into account. The padding time is the difference between the batch size divided by the requests_per_second and the time spent writing. By default the batch size is 1000, so if requests_per_second is set to 500:

target_time = 1000 / 500 per second = 2 seconds
wait_time = target_time - write_time = 2 seconds - .5 seconds = 1.5 seconds

Since the batch is issued as a single _bulk request, large batch sizes cause Elasticsearch to create many requests and wait before starting the next set. This is "bursty" instead of "smooth".
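
For example, a request along these lines (the index name and query are illustrative) throttles the update to 500 sub-requests per second; the limit can later be changed or removed with the rethrottle API:

POST my-index-000001/_update_by_query?requests_per_second=500&conflicts=proceed
{
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}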

Slicing

Update by query supports sliced scroll to parallelize the update process. This can improve efficiency and provide a convenient way to break the request down into smaller parts.

Setting slices to auto chooses a reasonable number for most data streams and indices. This setting will use one slice per shard, up to a certain limit. If there are multiple source data streams or indices, it will choose the number of slices based on the index or backing index with the smallest number of shards.

Adding slices to _update_by_query just automates the manual process of creating sub-requests, which means it has some quirks:

  • You can see these requests in the tasks APIs. These sub-requests are "child" tasks of the task for the request with slices.
  • Fetching the status of the task for the request with slices only contains the status of completed slices.
  • These sub-requests are individually addressable for things like cancellation and rethrottling.
  • Rethrottling the request with slices will rethrottle the unfinished sub-request proportionally.
  • Canceling the request with slices will cancel each sub-request.
  • Due to the nature of slices each sub-request won't get a perfectly even portion of the documents. All documents will be addressed, but some slices may be larger than others. Expect larger slices to have a more even distribution.
  • Parameters like requests_per_second and max_docs on a request with slices are distributed proportionally to each sub-request. Combine that with the point above about distribution being uneven and you should conclude that using max_docs with slices might not result in exactly max_docs documents being updated.
  • Each sub-request gets a slightly different snapshot of the source data stream or index though these are all taken at approximately the same time.

If you're slicing manually or otherwise tuning automatic slicing, keep in mind that:

  • Query performance is most efficient when the number of slices is equal to the number of shards in the index or backing index. If that number is large (for example, 500), choose a lower number as too many slices hurts performance. Setting slices higher than the number of shards generally does not improve efficiency and adds overhead.
  • Update performance scales linearly across available resources with the number of slices.

Whether query or update performance dominates the runtime depends on the documents being reindexed and cluster resources.

Update the document source

Update by query supports scripts to update the document source. As with the update API, you can set ctx.op to change the operation that is performed.

Set ctx.op = "noop" if your script decides that it doesn't have to make any changes. The update by query operation skips updating the document and increments the noop counter.

Set ctx.op = "delete" if your script decides that the document should be deleted. The update by query operation deletes the document and increments the deleted counter.

Update by query supports only index, noop, and delete. Setting ctx.op to anything else is an error. Setting any other field in ctx is an error. This API enables you to only modify the source of matching documents; you cannot move them.
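
As an illustrative sketch that assumes a hypothetical level field, the following script deletes matching documents whose level is debug and leaves every other document untouched via a noop:

POST my-index-000001/_update_by_query
{
  "script": {
    "source": "if (ctx._source.level == 'debug') { ctx.op = 'delete'; } else { ctx.op = 'noop'; }",
    "lang": "painless"
  },
  "query": {
    "exists": {
      "field": "level"
    }
  }
}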

Path parameters

  • index string | array[string] Required

    A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (*). To search all data streams or indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • analyzer string

    The analyzer to use for the query string. This parameter can be used only when the q query string parameter is specified.

  • analyze_wildcard boolean

    If true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified.

  • conflicts string

    The preferred behavior when update by query hits version conflicts: abort or proceed.

    Values are abort or proceed.

  • default_operator string

    The default operator for query string query: AND or OR. This parameter can be used only when the q query string parameter is specified.

    Values are and, AND, or, or OR.

  • df string

    The field to use as default where no field prefix is given in the query string. This parameter can be used only when the q query string parameter is specified.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

  • from number

    Skips the specified number of documents.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • lenient boolean

    If true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified.

  • max_docs number

    The maximum number of documents to process. It defaults to all documents. When set to a value less than or equal to scroll_size, a scroll will not be used to retrieve the results for the operation.

  • pipeline string

    The ID of the pipeline to use to preprocess incoming documents. If the index has a default ingest pipeline specified, then setting the value to _none disables the default ingest pipeline for this request. If a final pipeline is configured it will always run, regardless of the value of this parameter.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • q string

    A query in the Lucene query string syntax.

  • refresh boolean

    If true, Elasticsearch refreshes affected shards to make the operation visible to search after the request completes. This is different than the update API's refresh parameter, which causes just the shard that received the request to be refreshed.

  • request_cache boolean

    If true, the request cache is used for this request. It defaults to the index-level setting.

  • requests_per_second number

    The throttle for this request in sub-requests per second.

  • routing string

    A custom value used to route operations to a specific shard.

  • scroll string

    The period to retain the search context for scrolling.

  • scroll_size number

    The size of the scroll request that powers the operation.

  • search_timeout string

    An explicit timeout for each search request. By default, there is no timeout.

  • search_type string

    The type of the search operation. Available options include query_then_fetch and dfs_query_then_fetch.

    Values are query_then_fetch or dfs_query_then_fetch.

  • slices number | string

    The number of slices this task should be divided into.

  • sort array[string]

    A comma-separated list of <field>:<direction> pairs.

  • stats array[string]

    The specific tag of the request for logging and statistical purposes.

  • terminate_after number

    The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting.

    IMPORTANT: Use with caution. Elasticsearch applies this parameter to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this parameter for requests that target data streams with backing indices across multiple data tiers.

  • timeout string

    The period each update request waits for the following operations: dynamic mapping updates, waiting for active shards. By default, it is one minute. This guarantees Elasticsearch waits for at least the timeout before failing. The actual wait time could be longer, particularly when multiple waits occur.

  • version boolean

    If true, returns the document version as part of a hit.

  • version_type boolean

    Should the document increment the version number (internal) on hit or not (reindex).

  • wait_for_active_shards number | string

    The number of shard copies that must be active before proceeding with the operation. Set to all or any positive integer up to the total number of shards in the index (number_of_replicas+1). The timeout parameter controls how long each write request waits for unavailable shards to become available. Both work exactly the way they work in the bulk API.

  • wait_for_completion boolean

    If true, the request blocks until the operation is complete. If false, Elasticsearch performs some preflight checks, launches the request, and returns a task ID that you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at .tasks/task/${taskId}.

application/json

Body

  • max_docs number

    The maximum number of documents to update.

  • query object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

    External documentation
  • script object
    • source string

      The script source.

    • id string
    • params object

      Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

      • * object Additional properties
    • lang string

      Any of:

      Values are painless, expression, mustache, or java.

    • options object
      • * string Additional properties
  • slice object
    • field string

      Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

    • id string Required
    • max number Required
  • conflicts string

    Values are abort or proceed.

Responses

  • 200 application/json
    • batches number

      The number of scroll responses pulled back by the update by query.

    • failures array[object]

      Array of failures if there were any unrecoverable errors during the process. If this is non-empty then the request ended because of those failures. Update by query is implemented using batches. Any failure causes the entire process to end, but all failures in the current batch are collected into the array. You can use the conflicts option to prevent reindex from ending when version conflicts occur.

    • noops number

      The number of documents that were ignored because the script used for the update by query returned a noop value for ctx.op.

    • deleted number

      The number of documents that were successfully deleted.

    • requests_per_second number

      The number of requests per second effectively run during the update by query.

    • retries object
      • bulk number Required

        The number of bulk actions retried.

      • search number Required

        The number of search actions retried.

    • timed_out boolean

      If true, some requests timed out during the update by query.

    • took number

      Time unit for milliseconds

    • total number

      The number of documents that were successfully processed.

    • updated number

      The number of documents that were successfully updated.

    • version_conflicts number

      The number of version conflicts that the update by query hit.

    • throttled string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_millis number

      Time unit for milliseconds

    • throttled_until string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • throttled_until_millis number

      Time unit for milliseconds

POST /{index}/_update_by_query
curl \
 --request POST 'http://api.example.com/{index}/_update_by_query' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"query":{"term":{"user.id":"kimchy"}}}'
Run `POST my-index-000001/_update_by_query?conflicts=proceed` to update documents that match a query.
{
  "query": { 
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` with a script to update the document source. It increments the `count` field for all documents with a `user.id` of `kimchy` in `my-index-000001`.
{
  "script": {
    "source": "ctx._source.count++",
    "lang": "painless"
  },
  "query": {
    "term": {
      "user.id": "kimchy"
    }
  }
}
Run `POST my-index-000001/_update_by_query` to slice an update by query manually. Provide a slice ID and total number of slices to each request.
{
  "slice": {
    "id": 0,
    "max": 2
  },
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
Run `POST my-index-000001/_update_by_query?refresh&slices=5` to use automatic slicing. It automatically parallelizes using sliced scroll to slice on `_id`.
{
  "script": {
    "source": "ctx._source['extra'] = 'test'"
  }
}
Response examples (200)
{
  "batches": 42.0,
  "failures": [
    {
      "cause": {
        "type": "string",
        "reason": "string",
        "stack_trace": "string",
        "caused_by": {},
        "root_cause": [
          {}
        ],
        "suppressed": [
          {}
        ]
      },
      "id": "string",
      "index": "string",
      "status": 42.0
    }
  ],
  "noops": 42.0,
  "deleted": 42.0,
  "requests_per_second": 42.0,
  "retries": {
    "bulk": 42.0,
    "search": 42.0
  },
  "": 42.0,
  "timed_out": true,
  "total": 42.0,
  "updated": 42.0,
  "version_conflicts": 42.0,
  "throttled": "string",
  "throttled_until": "string"
}

Throttle an update by query operation Added in 6.5.0

POST /_update_by_query/{task_id}/_rethrottle

Change the number of requests per second for a particular update by query operation. Rethrottling that speeds up the query takes effect immediately but rethrottling that slows down the query takes effect after completing the current batch to prevent scroll timeouts.

Path parameters

  • task_id string Required

    The ID for the task.

Query parameters

  • requests_per_second number

    The throttle for this request in sub-requests per second. To turn off throttling, set it to -1.

Responses

  • 200 application/json
POST /_update_by_query/{task_id}/_rethrottle
curl \
 --request POST 'http://api.example.com/_update_by_query/{task_id}/_rethrottle' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "nodes": {}
}

Add an index block Added in 7.9.0

PUT /{index}/_block/{block}

Add an index block to an index. Index blocks limit the operations allowed on an index by blocking specific operation types.
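
For example, the following request (the index name is illustrative) blocks write operations on an index, matching the response example shown below:

PUT /my-index-000001/_block/write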

Path parameters

  • index string Required

    A comma-separated list or wildcard expression of index names used to limit the request. By default, you must explicitly name the indices you are adding blocks to. To allow the adding of blocks to indices with _all, *, or other wildcard expressions, change the action.destructive_requires_name setting to false. You can update this setting in the elasticsearch.yml file or by using the cluster update settings API.

  • block string Required

    The block type to add to the index.

    Values are metadata, read, read_only, or write.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • master_timeout string

    The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never timeout.

  • timeout string

    The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. It can also be set to -1 to indicate that the request should never timeout.

Responses

PUT /{index}/_block/{block}
curl \
 --request PUT 'http://api.example.com/{index}/_block/{block}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `PUT /my-index-000001/_block/write`, which adds an index block to an index.
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "indices" : [ {
    "name" : "my-index-000001",
    "blocked" : true
  } ]
}

Get index recovery information

GET /_recovery

Get information about ongoing and completed shard recoveries for one or more indices. For data streams, the API returns information for the stream's backing indices.

All recoveries, whether ongoing or complete, are kept in the cluster state and may be reported on at any time.

Shard recovery is the process of initializing a shard copy, such as restoring a primary shard from a snapshot or creating a replica shard from a primary shard. When a shard recovery completes, the recovered shard is available for search and indexing.

Recovery automatically occurs during the following processes:

  • When creating an index for the first time.
  • When a node rejoins the cluster and starts up any missing primary shard copies using the data that it holds in its data path.
  • Creation of new replica shard copies from the primary.
  • Relocation of a shard copy to a different node in the same cluster.
  • A snapshot restore operation.
  • A clone, shrink, or split operation.

You can determine the cause of a shard recovery using the recovery or cat recovery APIs.

The index recovery API reports information about completed recoveries only for shard copies that currently exist in the cluster. It only reports the last recovery for each shard copy and does not report historical information about earlier recoveries, nor does it report information about the recoveries of shard copies that no longer exist. This means that if a shard copy completes a recovery and then Elasticsearch relocates it onto a different node then the information about the original recovery will not be shown in the recovery API.

Query parameters

  • active_only boolean

    If true, the response only includes ongoing shard recoveries.

  • detailed boolean

    If true, the response includes detailed information about shard recoveries.

Responses

GET /_recovery
curl \
 --request GET 'http://api.example.com/_recovery' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_recovery?human`, which gets information about ongoing and completed shard recoveries for all data streams and indices in a cluster. This example includes information about a single index recovering a single shard. The source of the recovery is a snapshot repository and the target of the recovery is the `my_es_node` node. The response also includes the number and percentage of files and bytes recovered.
{
  "index1" : {
    "shards" : [ {
      "id" : 0,
      "type" : "SNAPSHOT",
      "stage" : "INDEX",
      "primary" : true,
      "start_time" : "2014-02-24T12:15:59.716",
      "start_time_in_millis": 1393244159716,
      "stop_time" : "0s",
      "stop_time_in_millis" : 0,
      "total_time" : "2.9m",
      "total_time_in_millis" : 175576,
      "source" : {
        "repository" : "my_repository",
        "snapshot" : "my_snapshot",
        "index" : "index1",
        "version" : "{version}",
        "restoreUUID": "PDh1ZAOaRbiGIVtCvZOMww"
      },
      "target" : {
        "id" : "ryqJ5lO5S4-lSFbGntkEkg",
        "host" : "my.fqdn",
        "transport_address" : "my.fqdn",
        "ip" : "10.0.1.7",
        "name" : "my_es_node"
      },
      "index" : {
        "size" : {
          "total" : "75.4mb",
          "total_in_bytes" : 79063092,
          "reused" : "0b",
          "reused_in_bytes" : 0,
          "recovered" : "65.7mb",
          "recovered_in_bytes" : 68891939,
          "recovered_from_snapshot" : "0b",
          "recovered_from_snapshot_in_bytes" : 0,
          "percent" : "87.1%"
        },
        "files" : {
          "total" : 73,
          "reused" : 0,
          "recovered" : 69,
          "percent" : "94.5%"
        },
        "total_time" : "0s",
        "total_time_in_millis" : 0,
        "source_throttle_time" : "0s",
        "source_throttle_time_in_millis" : 0,
        "target_throttle_time" : "0s",
        "target_throttle_time_in_millis" : 0
      },
      "translog" : {
        "recovered" : 0,
        "total" : 0,
        "percent" : "100.0%",
        "total_on_start" : 0,
        "total_time" : "0s",
        "total_time_in_millis" : 0
      },
      "verify_index" : {
        "check_index_time" : "0s",
        "check_index_time_in_millis" : 0,
        "total_time" : "0s",
        "total_time_in_millis" : 0
      }
    } ]
  }
}
A successful response from `GET _recovery?human&detailed=true`. The response includes a listing of any physical files recovered and their sizes. The response also includes timings in milliseconds of the various stages of recovery: index retrieval, translog replay, and index start time. This response indicates the recovery is done.
{
  "index1" : {
    "shards" : [ {
      "id" : 0,
      "type" : "EXISTING_STORE",
      "stage" : "DONE",
      "primary" : true,
      "start_time" : "2014-02-24T12:38:06.349",
      "start_time_in_millis" : "1393245486349",
      "stop_time" : "2014-02-24T12:38:08.464",
      "stop_time_in_millis" : "1393245488464",
      "total_time" : "2.1s",
      "total_time_in_millis" : 2115,
      "source" : {
        "id" : "RGMdRc-yQWWKIBM4DGvwqQ",
        "host" : "my.fqdn",
        "transport_address" : "my.fqdn",
        "ip" : "10.0.1.7",
        "name" : "my_es_node"
      },
      "target" : {
        "id" : "RGMdRc-yQWWKIBM4DGvwqQ",
        "host" : "my.fqdn",
        "transport_address" : "my.fqdn",
        "ip" : "10.0.1.7",
        "name" : "my_es_node"
      },
      "index" : {
        "size" : {
          "total" : "24.7mb",
          "total_in_bytes" : 26001617,
          "reused" : "24.7mb",
          "reused_in_bytes" : 26001617,
          "recovered" : "0b",
          "recovered_in_bytes" : 0,
          "recovered_from_snapshot" : "0b",
          "recovered_from_snapshot_in_bytes" : 0,
          "percent" : "100.0%"
        },
        "files" : {
          "total" : 26,
          "reused" : 26,
          "recovered" : 0,
          "percent" : "100.0%",
          "details" : [ {
            "name" : "segments.gen",
            "length" : 20,
            "recovered" : 20
          }, {
            "name" : "_0.cfs",
            "length" : 135306,
            "recovered" : 135306,
            "recovered_from_snapshot": 0
          }, {
            "name" : "segments_2",
            "length" : 251,
            "recovered" : 251,
            "recovered_from_snapshot": 0
          }
          ]
        },
        "total_time" : "2ms",
        "total_time_in_millis" : 2,
        "source_throttle_time" : "0s",
        "source_throttle_time_in_millis" : 0,
        "target_throttle_time" : "0s",
        "target_throttle_time_in_millis" : 0
      },
      "translog" : {
        "recovered" : 71,
        "total" : 0,
        "percent" : "100.0%",
        "total_on_start" : 0,
        "total_time" : "2.0s",
        "total_time_in_millis" : 2025
      },
      "verify_index" : {
        "check_index_time" : 0,
        "check_index_time_in_millis" : 0,
        "total_time" : "88ms",
        "total_time_in_millis" : 88
      }
    } ]
  }
}
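
To watch only recoveries that are still in progress, the `active_only` and `detailed` query parameters can be combined. The request below is a sketch that assumes the example host used throughout this reference; completed recoveries are omitted from its response.
GET /_recovery?active_only=true&detailed=true
curl \
 --request GET 'http://api.example.com/_recovery?active_only=true&detailed=true' \
 --header "Authorization: $API_KEY"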

Refresh an index

POST /{index}/_refresh

A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.

By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds. You can change this default interval with the index.refresh_interval setting.

Refresh requests are synchronous and do not return a response until the refresh operation completes.

Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.

If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option. This option ensures the indexing operation waits for a periodic refresh before running the search.
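
As a sketch of that pattern, the following request indexes a document into a hypothetical `my-index-000001` index with `refresh=wait_for`, so the call does not return until a refresh has made the document searchable.
POST /my-index-000001/_doc?refresh=wait_for
curl \
 --request POST 'http://api.example.com/my-index-000001/_doc?refresh=wait_for' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"message":"hello world","@timestamp":"2099-01-01T00:00:00Z"}'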

Path parameters

  • index string | array[string] Required

    Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.

Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

Responses

POST /{index}/_refresh
curl \
 --request POST 'http://api.example.com/{index}/_refresh' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "_shards": {
    "failed": 42.0,
    "successful": 42.0,
    "total": 42.0,
    "failures": [
      {
        "index": "string",
        "node": "string",
        "reason": {
          "type": "string",
          "reason": "string",
          "stack_trace": "string",
          "caused_by": {},
          "root_cause": [
            {}
          ],
          "suppressed": [
            {}
          ]
        },
        "shard": 42.0,
        "status": "string"
      }
    ],
    "skipped": 42.0
  }
}
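
When an explicit refresh is unavoidable, it can be scoped to only the data streams or indices that need it rather than the entire cluster. The request below is a sketch that refreshes two hypothetical indices in one call.
POST /my-index-000001,my-index-000002/_refresh
curl \
 --request POST 'http://api.example.com/my-index-000001,my-index-000002/_refresh' \
 --header "Authorization: $API_KEY"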

Inference

Inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Get an inference endpoint Added in 8.11.0

GET /_inference/{task_type}/{inference_id}

Path parameters

  • task_type string Required

    The task type

    Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

  • inference_id string Required

    The inference Id

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • endpoints array[object] Required
      Hide endpoints attributes Show endpoints attributes object
      • chunking_settings object
        Hide chunking_settings attributes Show chunking_settings attributes object
        • max_chunk_size number

          The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        • overlap number

          The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        • sentence_overlap number

          The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        • strategy string

          The chunking strategy: sentence or word.

      • service string Required

        The service type

      • service_settings object Required
      • inference_id string Required

        The inference Id

      • task_type string Required

        Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

GET /_inference/{task_type}/{inference_id}
curl \
 --request GET 'http://api.example.com/_inference/{task_type}/{inference_id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "endpoints": [
    {
      "chunking_settings": {
        "max_chunk_size": 42.0,
        "overlap": 42.0,
        "sentence_overlap": 42.0,
        "strategy": "string"
      },
      "service": "string",
      "service_settings": {},
      "task_settings": {},
      "inference_id": "string",
      "task_type": "sparse_embedding"
    }
  ]
}
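
For example, an endpoint created for a `sparse_embedding` task could be retrieved by name; the endpoint identifier `my-elser-endpoint` below is hypothetical.
GET /_inference/sparse_embedding/my-elser-endpoint
curl \
 --request GET 'http://api.example.com/_inference/sparse_embedding/my-elser-endpoint' \
 --header "Authorization: $API_KEY"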

Create a Google Vertex AI inference endpoint Added in 8.15.0

PUT /_inference/{task_type}/{googlevertexai_inference_id}

Create an inference endpoint to perform an inference task with the googlevertexai service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
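
As a sketch, the deployment status can be checked with the get trained model statistics API; the model identifier below is hypothetical, and the fields to inspect are the state and allocation counts described above.
GET /_ml/trained_models/my-model-id/_stats
curl \
 --request GET 'http://api.example.com/_ml/trained_models/my-model-id/_stats' \
 --header "Authorization: $API_KEY"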

Path parameters

  • task_type string Required

    The type of the inference task that the model will perform.

    Values are rerank or text_embedding.

  • googlevertexai_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • chunking_settings object
    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

    • strategy string

      The chunking strategy: sentence or word.

  • service string Required

    Value is googlevertexai.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • location string Required

      The name of the location to use for the inference task. Refer to the Google documentation for the list of supported locations.

      External documentation
    • model_id string Required

      The name of the model to use for the inference task. Refer to the Google documentation for the list of supported models.

      External documentation
    • project_id string Required

      The name of the project to use for the inference task.

    • rate_limit object
      Hide rate_limit attribute Show rate_limit attribute object
    • service_account_json string Required

      A valid service account in JSON format for the Google Vertex AI API.

  • task_settings object
    Hide task_settings attributes Show task_settings attributes object
    • auto_truncate boolean

      For a text_embedding task, truncate inputs longer than the maximum token length automatically.

    • top_n number

      For a rerank task, the number of the top N documents that should be returned.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object
      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      • strategy string

        The chunking strategy: sentence or word.

    • service string Required

      The service type

    • service_settings object Required
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{googlevertexai_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{googlevertexai_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"service":"googlevertexai","service_settings":{"service_account_json":"service-account-json","model_id":"model-id","location":"location","project_id":"project-id"}}'
Request examples
Run `PUT _inference/text_embedding/google_vertex_ai_embeddings` to create an inference endpoint to perform a `text_embedding` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "model_id": "model-id",
        "location": "location",
        "project_id": "project-id"
    }
}
Run `PUT _inference/rerank/google_vertex_ai_rerank` to create an inference endpoint to perform a `rerank` task type.
{
    "service": "googlevertexai",
    "service_settings": {
        "service_account_json": "service-account-json",
        "project_id": "project-id"
    }
}
Response examples (200)
{
  "chunking_settings": {
    "max_chunk_size": 42.0,
    "overlap": 42.0,
    "sentence_overlap": 42.0,
    "strategy": "string"
  },
  "service": "string",
  "service_settings": {},
  "task_settings": {},
  "inference_id": "string",
  "task_type": "sparse_embedding"
}

Create a Mistral inference endpoint Added in 8.15.0

PUT /_inference/{task_type}/{mistral_inference_id}

Creates an inference endpoint to perform an inference task with the mistral service.

When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for "state": "fully_allocated" in the response and ensure that the "allocation_count" matches the "target_allocation_count". Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

Path parameters

  • task_type string Required

    The task type. The only valid task type for the model to perform is text_embedding.

    Value is text_embedding.

  • mistral_inference_id string Required

    The unique identifier of the inference endpoint.

application/json

Body

  • chunking_settings object
    Hide chunking_settings attributes Show chunking_settings attributes object
    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

    • strategy string

      The chunking strategy: sentence or word.

  • service string Required

    Value is mistral.

  • service_settings object Required
    Hide service_settings attributes Show service_settings attributes object
    • api_key string Required

      A valid API key of your Mistral account. You can find your Mistral API keys or you can create a new one on the API Keys page.

      IMPORTANT: You need to provide the API key only once, during the inference model creation. The get inference endpoint API does not retrieve your API key. After creating the inference model, you cannot change the associated API key. If you want to use a different API key, delete the inference model and recreate it with the same name and the updated API key.

      External documentation
    • max_input_tokens number

      The maximum number of tokens per input before chunking occurs.

    • model string Required

      The name of the model to use for the inference task. Refer to the Mistral models documentation for the list of available text embedding models.

      External documentation
    • rate_limit object
      Hide rate_limit attribute Show rate_limit attribute object

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • chunking_settings object
      Hide chunking_settings attributes Show chunking_settings attributes object
      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      • strategy string

        The chunking strategy: sentence or word.

    • service string Required

      The service type

    • service_settings object Required
    • inference_id string Required

      The inference Id

    • task_type string Required

      Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.

PUT /_inference/{task_type}/{mistral_inference_id}
curl \
 --request PUT 'http://api.example.com/_inference/{task_type}/{mistral_inference_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"service":"mistral","service_settings":{"api_key":"Mistral-API-Key","model":"mistral-embed"}}'
Request example
Run `PUT _inference/text_embedding/mistral-embeddings-test` to create a Mistral inference endpoint that performs a text embedding task.
{
  "service": "mistral",
  "service_settings": {
    "api_key": "Mistral-API-Key",
    "model": "mistral-embed" 
  }
}
Response examples (200)
{
  "chunking_settings": {
    "max_chunk_size": 42.0,
    "overlap": 42.0,
    "sentence_overlap": 42.0,
    "strategy": "string"
  },
  "service": "string",
  "service_settings": {},
  "task_settings": {},
  "inference_id": "string",
  "task_type": "sparse_embedding"
}
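
Once the model deployment has completed, the endpoint can be called through the perform inference API. The request below is a sketch that sends a single input string to the `mistral-embeddings-test` endpoint created in the example above.
POST /_inference/text_embedding/mistral-embeddings-test
curl \
 --request POST 'http://api.example.com/_inference/text_embedding/mistral-embeddings-test' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"input":"The quick brown fox"}'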

Delete IP geolocation database configurations

DELETE /_ingest/ip_location/database/{id}

Path parameters

  • id string | array[string] Required

    A comma-separated list of IP location database configurations.

Query parameters

  • master_timeout string

    The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of -1 indicates that the request should never time out.

  • timeout string

    The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. A value of -1 indicates that the request should never time out.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

DELETE /_ingest/ip_location/database/{id}
curl \
 --request DELETE 'http://api.example.com/_ingest/ip_location/database/{id}' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "acknowledged": true
}
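
For example, a configuration stored under the hypothetical identifier `my-ip-database` could be removed with the following request.
DELETE /_ingest/ip_location/database/my-ip-database
curl \
 --request DELETE 'http://api.example.com/_ingest/ip_location/database/my-ip-database' \
 --header "Authorization: $API_KEY"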

Get calendar configuration info

GET /_ml/calendars/{calendar_id}

Path parameters

  • calendar_id string Required

    A string that uniquely identifies a calendar. You can get information for multiple calendars by using a comma-separated list of ids or a wildcard expression. You can get information for all calendars by using _all or * or by omitting the calendar identifier.

Query parameters

  • from number

    Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier.

  • size number

    Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.

application/json

Body

  • page object
    Hide page attributes Show page attributes object
    • from number

      Skips the specified number of items.

    • size number

      Specifies the maximum number of items to obtain.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • calendars array[object] Required
      Hide calendars attributes Show calendars attributes object
    • count number Required
GET /_ml/calendars/{calendar_id}
curl \
 --request GET 'http://api.example.com/_ml/calendars/{calendar_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"page":{"from":42.0,"size":42.0}}'
Request examples
{
  "page": {
    "from": 42.0,
    "size": 42.0
  }
}
Response examples (200)
{
  "calendars": [
    {
      "calendar_id": "string",
      "description": "string",
      "job_ids": [
        "string"
      ]
    }
  ],
  "count": 42.0
}
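
As a sketch, a single calendar can be retrieved by its identifier without a request body; the calendar name `planned-outages` below is hypothetical.
GET /_ml/calendars/planned-outages
curl \
 --request GET 'http://api.example.com/_ml/calendars/planned-outages' \
 --header "Authorization: $API_KEY"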

Delete expired ML data Added in 5.4.0

DELETE /_ml/_delete_expired_data

Delete all job results, model snapshots, and forecast data that have exceeded their retention period. Machine learning state documents that are not associated with any job are also deleted. You can limit the request to a single anomaly detection job or a set of jobs by using a job identifier, a group name, a comma-separated list of job identifiers, or a wildcard expression. You can delete expired data for all anomaly detection jobs by using _all, by specifying * as the <job_id>, or by omitting the <job_id>.

Query parameters

  • requests_per_second number

    The desired requests per second for the deletion processes. The default behavior is no throttling.

  • timeout string

    How long can the underlying delete processes run until they are canceled.

application/json

Body

  • requests_per_second number

    The desired requests per second for the deletion processes. The default behavior is no throttling.

  • timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
DELETE /_ml/_delete_expired_data
curl \
 --request DELETE 'http://api.example.com/_ml/_delete_expired_data' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"requests_per_second":42.0,"timeout":"string"}'
Request examples
{
  "requests_per_second": 42.0,
  "timeout": "string"
}
Response examples (200)
A successful response when deleting expired and unused anomaly detection data.
{
  "deleted": true
}
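
To limit the cleanup to one job and throttle the underlying delete operations, the job identifier can be appended to the path as described above. The following sketch uses a hypothetical job named `low_request_rate`.
DELETE /_ml/_delete_expired_data/low_request_rate?requests_per_second=100&timeout=1h
curl \
 --request DELETE 'http://api.example.com/_ml/_delete_expired_data/low_request_rate?requests_per_second=100&timeout=1h' \
 --header "Authorization: $API_KEY"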

Create an anomaly detection job Added in 5.4.0

PUT /_ml/anomaly_detectors/{job_id}

If you include a datafeed_config, you must have read index privileges on the source index. If you include a datafeed_config but do not provide a query, the datafeed uses {"match_all": {"boost": 1}}.

Path parameters

  • job_id string Required

    The identifier for the anomaly detection job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.

Query parameters

  • allow_no_indices boolean

    If true, wildcard indices expressions that resolve into no concrete indices are ignored. This includes the _all string or when no indices are specified.

  • expand_wildcards string | array[string]

    Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values. Valid values are:

    • all: Match any data stream or index, including hidden ones.
    • closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
    • hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
    • none: Wildcard patterns are not accepted.
    • open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  • ignore_throttled boolean Deprecated

    If true, concrete, expanded or aliased indices are ignored when frozen.

  • ignore_unavailable boolean

    If true, unavailable indices (missing or closed) are ignored.

application/json

Body Required

  • allow_lazy_open boolean

    Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node. By default, if a machine learning node with capacity to run the job cannot immediately be found, the open anomaly detection jobs API returns an error. However, this is also subject to the cluster-wide xpack.ml.max_lazy_ml_nodes setting. If this option is set to true, the open anomaly detection jobs API does not return an error and the job waits in the opening state until sufficient machine learning node capacity is available.

  • analysis_config object Required
    Hide analysis_config attributes Show analysis_config attributes object
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • categorization_analyzer string | object

      One of:
    • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

    • If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters. The effect is exactly the same.

    • detectors array[object] Required

      Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job. If the detectors array does not contain at least one detector, no analysis can occur and an error is returned.

      Hide detectors attributes Show detectors attributes object
      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • custom_rules array[object]

        Custom rules enable you to customize the way detectors operate. For example, a rule may dictate conditions under which results should be skipped. Kibana refers to custom rules as job rules.

        Hide custom_rules attributes Show custom_rules attributes object
        • actions array[string]

          The set of actions to be triggered when the rule applies. If more than one action is specified the effects of all actions are combined.

          Values are skip_result or skip_model_update.

        • conditions array[object]

          An array of numeric conditions when the rule applies. A rule must either have a non-empty scope or at least one condition. Multiple conditions are combined together with a logical AND.

        • scope object

          A scope of series where the rule applies. A rule must either have a non-empty scope or at least one condition. By default, the scope includes all series. Scoping is allowed for any of the fields that are also specified in by_field_name, over_field_name, or partition_field_name.

          Hide scope attribute Show scope attribute object
          • * object Additional properties
      • A description of the detector.

      • A unique identifier for the detector. This identifier is based on the order of the detectors in the analysis_config, starting at zero. If you specify a value for this property, it is ignored.

      • Values are all, none, by, or over.

      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • function string

        The analysis function that is used. For example, count, rare, mean, min, max, or sum.

      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • use_null boolean

        Defines whether a new series is used as the null series when there is no value for the by or partition fields.

    • influencers array[string]

      A comma separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.

    • latency string

      A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold. For example, suppose CPU and memory usage on host A is usually highly correlated with the same metrics on host B. Perhaps this correlation occurs because they are running a load-balanced application. If you enable this property, anomalies will be reported when, for example, CPU usage on host A is high and the value of CPU usage on host B is low. That is to say, you’ll see an anomaly when the CPU of host A is unusual given the CPU of host B. To use the multivariate_by_fields property, you must also specify by_field_name in your detector.

    • Hide per_partition_categorization attributes Show per_partition_categorization attributes object
      • enabled boolean

        To enable this setting, you must also set the partition_field_name property to the same value in every detector that uses the keyword mlcategory. Otherwise, job creation fails.

      • This setting can be set to true only if per-partition categorization is enabled. If true, both categorization and subsequent anomaly detection stops for partitions where the categorization status changes to warn. This setting makes it viable to have a job where it is expected that categorization works well for some partitions but not others; you do not pay the cost of bad categorization forever in the partitions where it works badly.

    • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

  • Hide analysis_limits attributes Show analysis_limits attributes object
  • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • custom_settings object

    Custom metadata about the job

  • daily_model_snapshot_retention_after_days number

    Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job. Valid values range from 0 to model_snapshot_retention_days.

  • data_description object Required
    Hide data_description attributes Show data_description attributes object
    • format string

      Only JSON format is supported at this time.

    • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

    • The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.

  • Hide datafeed_config attributes Show datafeed_config attributes object
    • If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.

    • Hide chunking_config attributes Show chunking_config attributes object
      • mode string Required

        Values are auto, manual, or off.

      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • enabled boolean Required

        Specifies whether the datafeed periodically checks for delayed data.

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indices string | array[string]
    • Hide indices_options attributes Show indices_options attributes object
      • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

      • expand_wildcards string | array[string]
      • If true, missing or closed indices are not included in the response.

      • If true, concrete, expanded or aliased indices are ignored when frozen.

    • job_id string
    • If a real-time datafeed has never seen any data (including during any initial training period) then it will automatically stop itself and close its associated job after this many real-time searches that return no documents. In other words, it will stop after frequency times max_empty_searches of real-time operation. If not set then a datafeed with no end time that sees no data will remain started until it is explicitly stopped.

    • query object

      An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

      External documentation
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • Hide runtime_mappings attribute Show runtime_mappings attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • fields object

          For type composite

          Hide fields attribute Show fields attribute object
          • * object Additional properties
            Hide * attribute Show * attribute object
            • type string Required

              Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

        • fetch_fields array[object]

          For type lookup

          Hide fetch_fields attributes Show fetch_fields attributes object
          • field string Required

            Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • format string
        • format string

          A custom format for date type runtime fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • script object
          Hide script attributes Show script attributes object
          • source string

            The script source.

          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
        • type string Required

          Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

    • Specifies scripts that evaluate custom expressions and returns script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.

      Hide script_fields attribute Show script_fields attribute object
      • * object Additional properties
        Hide * attributes Show * attributes object
        • script object Required
          Hide script attributes Show script attributes object
          • source string

            The script source.

          • id string
          • params object

            Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

            Hide params attribute Show params attribute object
            • * object Additional properties
          • lang string

            Any of:

            Values are painless, expression, mustache, or java.

          • options object
            Hide options attribute Show options attribute object
            • * string Additional properties
    • The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.

  • A description of the job.

  • job_id string
  • groups array[string]

    A list of job groups. A job can belong to no groups or many.

  • Hide model_plot_config attributes Show model_plot_config attributes object
    • If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.

    • enabled boolean

      If true, enables calculation and storage of the model bounds for each entity that is being analyzed.

    • terms string

      Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

  • model_snapshot_retention_days number

    Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. By default, snapshots ten days older than the newest snapshot are deleted.

  • renormalization_window_days number

    Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket spans.

  • results_retention_days number

    Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • allow_lazy_open boolean Required
    • analysis_config object Required
      Hide analysis_config attributes Show analysis_config attributes object
      • bucket_span string Required

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • categorization_analyzer string | object

        One of:
      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values.

      • detectors array[object] Required

        An array of detector configuration objects. Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job.

        Hide detectors attributes Show detectors attributes object
        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • custom_rules array[object]

          An array of custom rule objects, which enable you to customize the way detectors operate. For example, a rule may dictate to the detector conditions under which results should be skipped. Kibana refers to custom rules as job rules.

          Hide custom_rules attributes Show custom_rules attributes object
          • actions array[string]

            The set of actions to be triggered when the rule applies. If more than one action is specified the effects of all actions are combined.

            Values are skip_result or skip_model_update.

          • conditions array[object]

            An array of numeric conditions when the rule applies. A rule must either have a non-empty scope or at least one condition. Multiple conditions are combined together with a logical AND.

          • scope object

            A scope of series where the rule applies. A rule must either have a non-empty scope or at least one condition. By default, the scope includes all series. Scoping is allowed for any of the fields that are also specified in by_field_name, over_field_name, or partition_field_name.

        • A description of the detector.

        • A unique identifier for the detector. This identifier is based on the order of the detectors in the analysis_config, starting at zero.

        • Values are all, none, by, or over.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • function string Required

          The analysis function that is used. For example, count, rare, mean, min, max, and sum.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

        • use_null boolean

          Defines whether a new series is used as the null series when there is no value for the by or partition fields.

      • influencers array[string] Required

        A comma separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.

      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • latency string

        A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold.

      • Hide per_partition_categorization attributes Show per_partition_categorization attributes object
        • enabled boolean

          To enable this setting, you must also set the partition_field_name property to the same value in every detector that uses the keyword mlcategory. Otherwise, job creation fails.

        • This setting can be set to true only if per-partition categorization is enabled. If true, both categorization and subsequent anomaly detection stops for partitions where the categorization status changes to warn. This setting makes it viable to have a job where it is expected that categorization works well for some partitions but not others; you do not pay the cost of bad categorization forever in the partitions where it works badly.

      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

    • analysis_limits object Required
      Hide analysis_limits attributes Show analysis_limits attributes object
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • create_time string | number Required

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • Custom metadata about the job

    • data_description object Required
      Hide data_description attributes Show data_description attributes object
      • format string

        Only JSON format is supported at this time.

      • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

      • The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.

    • Hide datafeed_config attributes Show datafeed_config attributes object
      • Hide authorization attributes Show authorization attributes object
        • api_key object
          Hide api_key attributes Show api_key attributes object
          • id string Required

            The identifier for the API key.

          • name string Required

            The name of the API key.

        • roles array[string]

          If a user ID was used for the most recent update to the datafeed, its roles at the time of the update are listed in the response.

        • If a service account was used for the most recent update to the datafeed, the account name is listed in the response.

      • Hide chunking_config attributes Show chunking_config attributes object
        • mode string Required

          Values are auto, manual, or off.

        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • datafeed_id string Required
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • indices array[string] Required
      • indexes array[string]
      • job_id string Required
      • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • Hide script_fields attribute Show script_fields attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • script object Required
            Hide script attributes Show script attributes object
            • source string

              The script source.

            • id string
            • params object

              Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              Hide params attribute Show params attribute object
              • * object Additional properties
            • lang string

              Any of:

              Values are painless, expression, mustache, or java.

            • options object
              Hide options attribute Show options attribute object
              • * string Additional properties
      • Hide delayed_data_check_config attributes Show delayed_data_check_config attributes object
        • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

        • enabled boolean Required

          Specifies whether the datafeed periodically checks for delayed data.

      • Hide runtime_mappings attribute Show runtime_mappings attribute object
        • * object Additional properties
          Hide * attributes Show * attributes object
          • fields object

            For type composite

            Hide fields attribute Show fields attribute object
            • * object Additional properties
              Hide * attribute Show * attribute object
              • type string Required

                Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

          • fetch_fields array[object]

            For type lookup

            Hide fetch_fields attributes Show fetch_fields attributes object
            • field string Required

              Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

            • format string
          • format string

            A custom format for date type runtime fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

          • script object
            Hide script attributes Show script attributes object
            • source string

              The script source.

            • id string
            • params object

              Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

              Hide params attribute Show params attribute object
              • * object Additional properties
            • lang string

              Any of:

              Values are painless, expression, mustache, or java.

            • options object
              Hide options attribute Show options attribute object
              • * string Additional properties
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • Hide indices_options attributes Show indices_options attributes object
        • If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

        • expand_wildcards string | array[string]
        • If true, missing or closed indices are not included in the response.

        • If true, concrete, expanded or aliased indices are ignored when frozen.

      • query object Required

        The Elasticsearch query domain-specific language (DSL). This value corresponds to the query object in an Elasticsearch search POST body. All the options that are supported by Elasticsearch can be used, as this object is passed verbatim to Elasticsearch. By default, this property has the following value: {"match_all": {"boost": 1}}.

        Query DSL
    • groups array[string]
    • job_id string Required
    • job_type string Required
    • job_version string Required
    • Hide model_plot_config attributes Show model_plot_config attributes object
      • If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.

      • enabled boolean

        If true, enables calculation and storage of the model bounds for each entity that is being analyzed.

      • terms string

        Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

    • results_index_name string Required
PUT /_ml/anomaly_detectors/{job_id}
curl \
 --request PUT 'http://api.example.com/_ml/anomaly_detectors/{job_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"analysis_config":{"bucket_span":"15m","detectors":[{"detector_description":"Sum of bytes","function":"sum","field_name":"bytes"}]},"data_description":{"time_field":"timestamp","time_format":"epoch_ms"},"analysis_limits":{"model_memory_limit":"11MB"},"model_plot_config":{"enabled":true,"annotations_enabled":true},"results_index_name":"test-job1","datafeed_config":{"indices":["kibana_sample_data_logs"],"query":{"bool":{"must":[{"match_all":{}}]}},"runtime_mappings":{"hour_of_day":{"type":"long","script":{"source":"emit(doc['\''timestamp'\''].value.getHour());"}}},"datafeed_id":"datafeed-test-job1"}}'
Request example
A request to create an anomaly detection job and datafeed.
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "detector_description": "Sum of bytes",
        "function": "sum",
        "field_name": "bytes"
      }
    ]
  },
  "data_description": {
    "time_field": "timestamp",
    "time_format": "epoch_ms"
  },
  "analysis_limits": {
    "model_memory_limit": "11MB"
  },
  "model_plot_config": {
    "enabled": true,
    "annotations_enabled": true
  },
  "results_index_name": "test-job1",
  "datafeed_config": {
    "indices": [
      "kibana_sample_data_logs"
    ],
    "query": {
      "bool": {
        "must": [
          {
            "match_all": {}
          }
        ]
      }
    },
    "runtime_mappings": {
      "hour_of_day": {
        "type": "long",
        "script": {
          "source": "emit(doc['timestamp'].value.getHour());"
        }
      }
    },
    "datafeed_id": "datafeed-test-job1"
  }
}
Response examples (200)
A successful response when creating an anomaly detection job and datafeed.
{
  "job_id": "test-job1",
  "job_type": "anomaly_detector",
  "job_version": "8.4.0",
  "create_time": 1656087283340,
  "datafeed_config": {
    "datafeed_id": "datafeed-test-job1",
    "job_id": "test-job1",
    "authorization": {
      "roles": [
        "superuser"
      ]
    },
    "query_delay": "61499ms",
    "chunking_config": {
      "mode": "auto"
    },
    "indices_options": {
      "expand_wildcards": [
        "open"
      ],
      "ignore_unavailable": false,
      "allow_no_indices": true,
      "ignore_throttled": true
    },
    "query": {
      "bool": {
        "must": [
          {
            "match_all": {}
          }
        ]
      }
    },
    "indices": [
      "kibana_sample_data_logs"
    ],
    "scroll_size": 1000,
    "delayed_data_check_config": {
      "enabled": true
    },
    "runtime_mappings": {
      "hour_of_day": {
        "type": "long",
        "script": {
          "source": "emit(doc['timestamp'].value.getHour());"
        }
      }
    }
  },
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "detector_description": "Sum of bytes",
        "function": "sum",
        "field_name": "bytes",
        "detector_index": 0
      }
    ],
    "influencers": [],
    "model_prune_window": "30d"
  },
  "analysis_limits": {
    "model_memory_limit": "11mb",
    "categorization_examples_limit": 4
  },
  "data_description": {
    "time_field": "timestamp",
    "time_format": "epoch_ms"
  },
  "model_plot_config": {
    "enabled": true,
    "annotations_enabled": true
  },
  "model_snapshot_retention_days": 10,
  "daily_model_snapshot_retention_after_days": 1,
  "results_index_name": "custom-test-job1",
  "allow_lazy_open": false
}




Get model snapshots info Added in 5.4.0

GET /_ml/anomaly_detectors/{job_id}/model_snapshots/{snapshot_id}

Path parameters

  • job_id string Required

    Identifier for the anomaly detection job.

  • snapshot_id string Required

    A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a comma-separated list or a wildcard expression. You can get all snapshots by using _all, by specifying * as the snapshot ID, or by omitting the snapshot ID.

Query parameters

  • desc boolean

    If true, the results are sorted in descending order.

  • end string | number

    Returns snapshots with timestamps earlier than this time.

  • from number

    Skips the specified number of snapshots.

  • size number

    Specifies the maximum number of snapshots to obtain.

  • sort string

    Specifies the sort field for the requested snapshots. By default, the snapshots are sorted by their timestamp.

  • start string | number

    Returns snapshots with timestamps after this time.

application/json

Body

  • desc boolean

    Refer to the description for the desc query parameter.

  • end string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

  • page object
    Hide page attributes Show page attributes object
    • from number

      Skips the specified number of items.

    • size number

      Specifies the maximum number of items to obtain.

  • sort string

    Path to field or array of paths. Some API's support wildcards in the path to select multiple fields.

  • start string | number

    A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

Responses

GET /_ml/anomaly_detectors/{job_id}/model_snapshots/{snapshot_id}
curl \
 --request GET 'http://api.example.com/_ml/anomaly_detectors/{job_id}/model_snapshots/{snapshot_id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"desc":true,"":"string","page":{"from":42.0,"size":42.0},"sort":"string"}'
Request examples
{
  "desc": true,
  "": "string",
  "page": {
    "from": 42.0,
    "size": 42.0
  },
  "sort": "string"
}
Response examples (200)
{
  "count": 42.0,
  "model_snapshots": [
    {
      "description": "string",
      "job_id": "string",
      "latest_record_time_stamp": 42.0,
      "latest_result_time_stamp": 42.0,
      "min_version": "string",
      "model_size_stats": {
        "bucket_allocation_failures_count": 42.0,
        "job_id": "string",
        "": 42.0,
        "memory_status": "ok",
        "assignment_memory_basis": "string",
        "result_type": "string",
        "total_by_field_count": 42.0,
        "total_over_field_count": 42.0,
        "total_partition_field_count": 42.0,
        "categorization_status": "ok",
        "categorized_doc_count": 42.0,
        "dead_category_count": 42.0,
        "failed_category_count": 42.0,
        "frequent_category_count": 42.0,
        "rare_category_count": 42.0,
        "total_category_count": 42.0,
        "timestamp": 42.0
      },
      "retain": true,
      "snapshot_doc_count": 42.0,
      "snapshot_id": "string",
      "timestamp": 42.0
    }
  ]
}
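For illustration only, a request like the following would page through the most recent snapshots for a job, omitting the snapshot ID to match all snapshots; the job identifier high_sum_total_sales is a hypothetical example, not part of this reference.

GET /_ml/anomaly_detectors/high_sum_total_sales/model_snapshots
curl \
 --request GET 'http://api.example.com/_ml/anomaly_detectors/high_sum_total_sales/model_snapshots' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"desc":true,"page":{"from":0,"size":5}}'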


Query parameters

  • from number

    Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier.

  • size number

    Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.

application/json

Body

  • page object
    Hide page attributes Show page attributes object
    • from number

      Skips the specified number of items.

    • size number

      Specifies the maximum number of items to obtain.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • calendars array[object] Required
      Hide calendars attributes Show calendars attributes object
    • count number Required
GET /_ml/calendars
curl \
 --request GET 'http://api.example.com/_ml/calendars' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"page":{"from":42.0,"size":42.0}}'
Request examples
{
  "page": {
    "from": 42.0,
    "size": 42.0
  }
}
Response examples (200)
{
  "calendars": [
    {
      "calendar_id": "string",
      "description": "string",
      "job_ids": [
        "string"
      ]
    }
  ],
  "count": 42.0
}
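As a hedged usage sketch, the following request would return the first ten calendars; the pagination values are illustrative only.

GET /_ml/calendars
curl \
 --request GET 'http://api.example.com/_ml/calendars' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"page":{"from":0,"size":10}}'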

Query parameters

  • from number

    Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier.

  • size number

    Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.

application/json

Body

  • page object
    Hide page attributes Show page attributes object
    • from number

      Skips the specified number of items.

    • size number

      Specifies the maximum number of items to obtain.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • calendars array[object] Required
      Hide calendars attributes Show calendars attributes object
    • count number Required
POST /_ml/calendars
curl \
 --request POST 'http://api.example.com/_ml/calendars' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"page":{"from":42.0,"size":42.0}}'
Request examples
{
  "page": {
    "from": 42.0,
    "size": 42.0
  }
}
Response examples (200)
{
  "calendars": [
    {
      "calendar_id": "string",
      "description": "string",
      "job_ids": [
        "string"
      ]
    }
  ],
  "count": 42.0
}


Get filters Added in 5.5.0

GET /_ml/filters

You can get a single filter or all filters.

Query parameters

  • from number

    Skips the specified number of filters.

  • size number

    Specifies the maximum number of filters to obtain.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • count number Required
    • filters array[object] Required
      Hide filters attributes Show filters attributes object
      • description string

        A description of the filter.

      • filter_id string Required
      • items array[string] Required

        An array of strings which is the filter item list.

GET /_ml/filters
curl \
 --request GET 'http://api.example.com/_ml/filters' \
 --header "Authorization: $API_KEY"
Response examples (200)
{
  "count": 42.0,
  "filters": [
    {
      "description": "string",
      "filter_id": "string",
      "items": [
        "string"
      ]
    }
  ]
}
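For example, filters can be paged through with the from and size query parameters; the values below are illustrative only.

GET /_ml/filters?from=0&size=20
curl \
 --request GET 'http://api.example.com/_ml/filters?from=0&size=20' \
 --header "Authorization: $API_KEY"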


Create a trained model vocabulary Added in 8.0.0

PUT /_ml/trained_models/{model_id}/vocabulary

This API is supported only for natural language processing (NLP) models. The vocabulary is stored in the index as described in inference_config.*.vocabulary of the trained model definition.

Path parameters

  • model_id string Required

    The unique identifier of the trained model.

application/json

Body Required

  • vocabulary array[string] Required

    The model vocabulary, which must not be empty.

  • merges array[string]

    The optional model merges if required by the tokenizer.

  • scores array[number]

    The optional vocabulary value scores if required by the tokenizer.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • acknowledged boolean Required

      For a successful response, this value is always true. On failure, an exception is returned instead.

PUT /_ml/trained_models/{model_id}/vocabulary
curl \
 --request PUT 'http://api.example.com/_ml/trained_models/{model_id}/vocabulary' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"vocabulary":["string"],"merges":["string"],"scores":[42.0]}'
Request examples
{
  "vocabulary": [
    "string"
  ],
  "merges": [
    "string"
  ],
  "scores": [
    42.0
  ]
}
Response examples (200)
{
  "acknowledged": true
}
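As an illustrative sketch, a minimal vocabulary upload for a hypothetical NLP model could look like the following; the model ID my-nlp-model and the token values are assumptions, not part of this reference.

PUT /_ml/trained_models/my-nlp-model/vocabulary
curl \
 --request PUT 'http://api.example.com/_ml/trained_models/my-nlp-model/vocabulary' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"vocabulary":["[PAD]","[UNK]","the","quick","brown","fox"]}'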


Create an index from a source index Technical preview

POST /_create_from/{source}/{dest}

Copy the mappings and settings from the source index to a destination index while allowing request settings and mappings to override the source values.

Path parameters

  • source string Required

    The source index or data stream name

  • dest string Required

    The destination index or data stream name

application/json

Body Required

  • mappings_override object
    Hide mappings_override attributes Show mappings_override attributes object
  • settings_override object
    Hide settings_override attributes Show settings_override attributes object
    • index object
    • mode string
    • soft_deletes object
      Hide soft_deletes attributes Show soft_deletes attributes object
      • enabled boolean

        Indicates whether soft deletes are enabled on the index.

      • retention_lease object
        Hide retention_lease attribute Show retention_lease attribute object
        • period string Required

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • sort object
      Hide sort attributes Show sort attributes object
    • check_on_startup string

      Values are true, false, or checksum.

    • codec string
    • routing_partition_size number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • auto_expand_replicas string | null

      One of:
    • merge object
      Hide merge attribute Show merge attribute object
      • scheduler object
        Hide scheduler attributes Show scheduler attributes object
        • max_thread_count number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

        • max_merge_count number | string

          Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

          Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • blocks object
      Hide blocks attributes Show blocks attributes object
      • read_only boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • read_only_allow_delete boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • read boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • write boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • metadata boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • analyze object
      Hide analyze attribute Show analyze attribute object
      • max_token_count number | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • highlight object
      Hide highlight attribute Show highlight attribute object
    • routing object
      Hide routing attributes Show routing attributes object
    • A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • lifecycle object
      Hide lifecycle attributes Show lifecycle attributes object
      • name string
      • indexing_complete boolean | string

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

      • origination_date number

        If specified, this is the timestamp used to calculate the index age for its phase transitions. Use this setting if you create a new index that contains old data and want to use the original creation date to calculate the index age. Specified as a Unix epoch value in milliseconds.

      • parse_origination_date boolean

        Set to true to parse the origination date from the index name. This origination date is used to calculate the index age for its phase transitions. The index name must match the pattern .*-{date_format}-\d+, where the date_format is yyyy.MM.dd and the trailing digits are optional. An index that was rolled over would normally match the full format (for example, logs-2016.10.31-000002). If the index name doesn't match the pattern, index creation fails.

      • step object
        Hide step attribute Show step attribute object
        • wait_time_threshold string

          A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

      • rollover_alias string

        The index alias to update when the index rolls over. Specify when using a policy that contains a rollover action. When the index rolls over, the alias is updated to reflect that the index is no longer the write index. For more information about rolling indices, see Rollover.

      • prefer_ilm boolean | string

        Preference for the system that manages a data stream backing index (preferring ILM when both ILM and DLM are applicable for an index).

    • creation_date number | string

      Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

      Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • creation_date_string string | number

      A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • uuid string
    • version object
      Hide version attributes Show version attributes object
    • translog object
      Hide translog attributes Show translog attributes object
    • query_string object
      Hide query_string attribute Show query_string attribute object
      • lenient boolean | string Required

        Some APIs will return values such as numbers also as a string (notably epoch timestamps). This behavior is used to capture this behavior while keeping the semantics of the field type.

        Depending on the target language, code generators can keep the union or remove it and leniently parse strings to the target type.

    • analysis object
      Hide analysis attributes Show analysis attributes object
    • settings object
    • time_series object
      Hide time_series attributes Show time_series attributes object
      • end_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

      • start_time string | number

        A date and time, either as a string whose format can depend on the context (defaulting to ISO 8601), or a number of milliseconds since the Epoch. Elasticsearch accepts both as input, but will generally output a string representation.

    • queries object
      Hide queries attribute Show queries attribute object
      • cache object
        Hide cache attribute Show cache attribute object
    • similarity object

      Configure custom similarity settings to customize how search results are scored.

    • mapping object
      Hide mapping attributes Show mapping attributes object
      • coerce boolean
      • total_fields object
        Hide total_fields attributes Show total_fields attributes object
        • limit number | string

          The maximum number of fields in an index. Field and object mappings, as well as field aliases count towards this limit. The limit is in place to prevent mappings and searches from becoming too large. Higher values can lead to performance degradations and memory issues, especially in clusters with a high load or few resources.

        • ignore_dynamic_beyond_limit boolean | string

          This setting determines what happens when a dynamically mapped field would exceed the total fields limit. When set to false (the default), the index request of the document that tries to add a dynamic field to the mapping will fail with the message Limit of total fields [X] has been exceeded. When set to true, the index request will not fail. Instead, fields that would exceed the limit are not added to the mapping, similar to dynamic: false. The fields that were not added to the mapping will be added to the _ignored field.

      • depth object
        Hide depth attribute Show depth attribute object
        • limit number

          The maximum depth for a field, which is measured as the number of inner objects. For instance, if all fields are defined at the root object level, then the depth is 1. If there is one object mapping, then the depth is 2, etc.

      • nested_fields object
        Hide nested_fields attribute Show nested_fields attribute object
        • limit number

          The maximum number of distinct nested mappings in an index. The nested type should only be used in special cases, when arrays of objects need to be queried independently of each other. To safeguard against poorly designed mappings, this setting limits the number of unique nested types per index.

      • nested_objects object
        Hide nested_objects attribute Show nested_objects attribute object
        • limit number

          The maximum number of nested JSON objects that a single document can contain across all nested types. This limit helps to prevent out of memory errors when a document contains too many nested objects.

      • field_name_length object
        Hide field_name_length attribute Show field_name_length attribute object
        • limit number

          Setting for the maximum length of a field name. This setting isn’t really something that addresses mappings explosion but might still be useful if you want to limit the field length. It usually shouldn’t be necessary to set this setting. The default is okay unless a user starts to add a huge number of fields with really long names. Default is Long.MAX_VALUE (no limit).

      • dimension_fields object
        Hide dimension_fields attribute Show dimension_fields attribute object
        • limit number

          [preview] This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

      • source object
        Hide source attribute Show source attribute object
        • mode string Required

          Values are disabled, stored, or synthetic.

    • indexing.slowlog object
      Hide indexing.slowlog attributes Show indexing.slowlog attributes object
      • level string
      • source number
      • reformat boolean
      • threshold object
        Hide threshold attribute Show threshold attribute object
        • index object
          Hide index attributes Show index attributes object
          • warn string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • info string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • debug string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

          • trace string

            A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

    • indexing_pressure object
      Hide indexing_pressure attribute Show indexing_pressure attribute object
      • memory object Required
        Hide memory attribute Show memory attribute object
        • limit number

          Number of outstanding bytes that may be consumed by indexing requests. When this limit is reached or exceeded, the node will reject new coordinating and primary operations. When replica operations consume 1.5x this limit, the node will reject new replica operations. Defaults to 10% of the heap.

    • store object
      Hide store attributes Show store attributes object
      • type string Required

        Any of:

        Values are fs, niofs, mmapfs, or hybridfs.

      • allow_mmap boolean

        You can restrict the use of the mmapfs and the related hybridfs store type via the setting node.store.allow_mmap. This is a boolean setting indicating whether or not memory-mapping is allowed. The default is to allow it. This setting is useful, for example, if you are in an environment where you cannot control the ability to create a lot of memory maps and therefore need to disable the ability to use memory-mapping.

  • remove_index_blocks boolean

    If index blocks should be removed when creating the destination index (optional).

Responses

POST /_create_from/{source}/{dest}
curl \
 --request POST 'http://api.example.com/_create_from/{source}/{dest}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"mappings_override":{"all_field":{"analyzer":"string","enabled":true,"omit_norms":true,"search_analyzer":"string","similarity":"string","store":true,"store_term_vector_offsets":true,"store_term_vector_payloads":true,"store_term_vector_positions":true,"store_term_vectors":true},"date_detection":true,"dynamic":"strict","dynamic_date_formats":["string"],"dynamic_templates":[{}],"_field_names":{"enabled":true},"index_field":{"enabled":true},"_meta":{"additionalProperty1":{},"additionalProperty2":{}},"numeric_detection":true,"properties":{},"_routing":{"required":true},"_size":{"enabled":true},"_source":{"compress":true,"compress_threshold":"string","enabled":true,"excludes":["string"],"includes":["string"],"mode":"disabled"},"runtime":{"additionalProperty1":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"":"painless","options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"},"additionalProperty2":{"fields":{"additionalProperty1":{"type":"boolean"},"additionalProperty2":{"type":"boolean"}},"fetch_fields":[{"field":"string","format":"string"}],"format":"string","input_field":"string","target_field":"string","target_index":"string","script":{"source":"string","id":"string","params":{"additionalProperty1":{},"additionalProperty2":{}},"":"painless","options":{"additionalProperty1":"string","additionalProperty2":"string"}},"type":"boolean"}},"enabled":true,"subobjects":"true","_data_stream_timestamp":{"enabled":true}},"settings_override":{"index":{},"mode":"string","routing_path":"string","soft_deletes":{"enabled":true,"retention_lease":{"period":"string"}},"sort":{"field":"string","order":"asc","mode":"min","missing":"_last"},"number_of_shards":42.0,"number_of_replicas":42.0,"number_of_routing_shards":42.0,"check_on_startup":"true","codec":"string","":"string","load_fixed_bitset_filters_eagerly":true,"hidden":true,"auto_expand_replicas":"string","merge":{"scheduler":{"":42.0}},"search":{"idle":{"after":"string"},"slowlog":{"level":"string","source":42.0,"reformat":true,"threshold":{"query":{"warn":"string","info":"string","debug":"string","trace":"string"},"fetch":{"warn":"string","info":"string","debug":"string","trace":"string"}}}},"refresh_interval":"string","max_result_window":42.0,"max_inner_result_window":42.0,"max_rescore_window":42.0,"max_docvalue_fields_search":42.0,"max_script_fields":42.0,"max_ngram_diff":42.0,"max_shingle_diff":42.0,"blocks":{"":true},"max_refresh_listeners":42.0,"analyze":{"":42.0},"highlight":{"max_analyzed_offset":42.0},"max_terms_count":42.0,"max_regex_length":42.0,"routing":{"allocation":{"enable":"all","include":{"_tier_preference":"string","_id":"string"},"initial_recovery":{"_id":"string"},"disk":{"threshold_enabled":true}},"rebalance":{"enable":"all"}},"gc_deletes":"string","default_pipeline":"string","final_pipeline":"string","lifecycle":{"name":"string","":true,"origination_date":42.0,"parse_origination_date":true,"step":{"wait_time_threshold":"string"},"rollover_alias":"string","prefer_ilm":true},"provided_name":"string","uuid":"string","version":{"created":"string","created_string":"string"},"verified_before_close":true,"format":"string","max_slices_per_scroll":42.0,"translog":{"sync_interval":"string","durability":"request",""
:42.0,"retention":{"":42.0,"age":"string"}},"query_string":{"":true},"priority":42.0,"top_metrics_max_size":42.0,"analysis":{"analyzer":{},"char_filter":{},"filter":{},"normalizer":{},"tokenizer":{}},"settings":{},"time_series":{"":"string"},"queries":{"cache":{"enabled":true}},"similarity":{},"mapping":{"coerce":true,"total_fields":{"limit":42.0,"ignore_dynamic_beyond_limit":true},"depth":{"limit":42.0},"nested_fields":{"limit":42.0},"nested_objects":{"limit":42.0},"field_name_length":{"limit":42.0},"dimension_fields":{"limit":42.0},"source":{"mode":"disabled"},"ignore_malformed":true},"indexing.slowlog":{"level":"string","source":42.0,"reformat":true,"threshold":{"index":{"warn":"string","info":"string","debug":"string","trace":"string"}}},"indexing_pressure":{"memory":{"limit":42.0}},"store":{"":"fs","allow_mmap":true}},"remove_index_blocks":true}'
Request examples
{
  "mappings_override": {
    "all_field": {
      "analyzer": "string",
      "enabled": true,
      "omit_norms": true,
      "search_analyzer": "string",
      "similarity": "string",
      "store": true,
      "store_term_vector_offsets": true,
      "store_term_vector_payloads": true,
      "store_term_vector_positions": true,
      "store_term_vectors": true
    },
    "date_detection": true,
    "dynamic": "strict",
    "dynamic_date_formats": [
      "string"
    ],
    "dynamic_templates": [
      {}
    ],
    "_field_names": {
      "enabled": true
    },
    "index_field": {
      "enabled": true
    },
    "_meta": {
      "additionalProperty1": {},
      "additionalProperty2": {}
    },
    "numeric_detection": true,
    "properties": {},
    "_routing": {
      "required": true
    },
    "_size": {
      "enabled": true
    },
    "_source": {
      "compress": true,
      "compress_threshold": "string",
      "enabled": true,
      "excludes": [
        "string"
      ],
      "includes": [
        "string"
      ],
      "mode": "disabled"
    },
    "runtime": {
      "additionalProperty1": {
        "fields": {
          "additionalProperty1": {
            "type": "boolean"
          },
          "additionalProperty2": {
            "type": "boolean"
          }
        },
        "fetch_fields": [
          {
            "field": "string",
            "format": "string"
          }
        ],
        "format": "string",
        "input_field": "string",
        "target_field": "string",
        "target_index": "string",
        "script": {
          "source": "string",
          "id": "string",
          "params": {
            "additionalProperty1": {},
            "additionalProperty2": {}
          },
          "": "painless",
          "options": {
            "additionalProperty1": "string",
            "additionalProperty2": "string"
          }
        },
        "type": "boolean"
      },
      "additionalProperty2": {
        "fields": {
          "additionalProperty1": {
            "type": "boolean"
          },
          "additionalProperty2": {
            "type": "boolean"
          }
        },
        "fetch_fields": [
          {
            "field": "string",
            "format": "string"
          }
        ],
        "format": "string",
        "input_field": "string",
        "target_field": "string",
        "target_index": "string",
        "script": {
          "source": "string",
          "id": "string",
          "params": {
            "additionalProperty1": {},
            "additionalProperty2": {}
          },
          "": "painless",
          "options": {
            "additionalProperty1": "string",
            "additionalProperty2": "string"
          }
        },
        "type": "boolean"
      }
    },
    "enabled": true,
    "subobjects": "true",
    "_data_stream_timestamp": {
      "enabled": true
    }
  },
  "settings_override": {
    "index": {},
    "mode": "string",
    "routing_path": "string",
    "soft_deletes": {
      "enabled": true,
      "retention_lease": {
        "period": "string"
      }
    },
    "sort": {
      "field": "string",
      "order": "asc",
      "mode": "min",
      "missing": "_last"
    },
    "number_of_shards": 42.0,
    "number_of_replicas": 42.0,
    "number_of_routing_shards": 42.0,
    "check_on_startup": "true",
    "codec": "string",
    "": "string",
    "load_fixed_bitset_filters_eagerly": true,
    "hidden": true,
    "auto_expand_replicas": "string",
    "merge": {
      "scheduler": {
        "": 42.0
      }
    },
    "search": {
      "idle": {
        "after": "string"
      },
      "slowlog": {
        "level": "string",
        "source": 42.0,
        "reformat": true,
        "threshold": {
          "query": {
            "warn": "string",
            "info": "string",
            "debug": "string",
            "trace": "string"
          },
          "fetch": {
            "warn": "string",
            "info": "string",
            "debug": "string",
            "trace": "string"
          }
        }
      }
    },
    "refresh_interval": "string",
    "max_result_window": 42.0,
    "max_inner_result_window": 42.0,
    "max_rescore_window": 42.0,
    "max_docvalue_fields_search": 42.0,
    "max_script_fields": 42.0,
    "max_ngram_diff": 42.0,
    "max_shingle_diff": 42.0,
    "blocks": {
      "": true
    },
    "max_refresh_listeners": 42.0,
    "analyze": {
      "": 42.0
    },
    "highlight": {
      "max_analyzed_offset": 42.0
    },
    "max_terms_count": 42.0,
    "max_regex_length": 42.0,
    "routing": {
      "allocation": {
        "enable": "all",
        "include": {
          "_tier_preference": "string",
          "_id": "string"
        },
        "initial_recovery": {
          "_id": "string"
        },
        "disk": {
          "threshold_enabled": true
        }
      },
      "rebalance": {
        "enable": "all"
      }
    },
    "gc_deletes": "string",
    "default_pipeline": "string",
    "final_pipeline": "string",
    "lifecycle": {
      "name": "string",
      "": true,
      "origination_date": 42.0,
      "parse_origination_date": true,
      "step": {
        "wait_time_threshold": "string"
      },
      "rollover_alias": "string",
      "prefer_ilm": true
    },
    "provided_name": "string",
    "uuid": "string",
    "version": {
      "created": "string",
      "created_string": "string"
    },
    "verified_before_close": true,
    "format": "string",
    "max_slices_per_scroll": 42.0,
    "translog": {
      "sync_interval": "string",
      "durability": "request",
      "": 42.0,
      "retention": {
        "": 42.0,
        "age": "string"
      }
    },
    "query_string": {
      "": true
    },
    "priority": 42.0,
    "top_metrics_max_size": 42.0,
    "analysis": {
      "analyzer": {},
      "char_filter": {},
      "filter": {},
      "normalizer": {},
      "tokenizer": {}
    },
    "settings": {},
    "time_series": {
      "": "string"
    },
    "queries": {
      "cache": {
        "enabled": true
      }
    },
    "similarity": {},
    "mapping": {
      "coerce": true,
      "total_fields": {
        "limit": 42.0,
        "ignore_dynamic_beyond_limit": true
      },
      "depth": {
        "limit": 42.0
      },
      "nested_fields": {
        "limit": 42.0
      },
      "nested_objects": {
        "limit": 42.0
      },
      "field_name_length": {
        "limit": 42.0
      },
      "dimension_fields": {
        "limit": 42.0
      },
      "source": {
        "mode": "disabled"
      },
      "ignore_malformed": true
    },
    "indexing.slowlog": {
      "level": "string",
      "source": 42.0,
      "reformat": true,
      "threshold": {
        "index": {
          "warn": "string",
          "info": "string",
          "debug": "string",
          "trace": "string"
        }
      }
    },
    "indexing_pressure": {
      "memory": {
        "limit": 42.0
      }
    },
    "store": {
      "": "fs",
      "allow_mmap": true
    }
  },
  "remove_index_blocks": true
}
Response examples (200)
{
  "acknowledged": true,
  "index": "string",
  "shards_acknowledged": true
}
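As a hedged example, the request below copies the mappings and settings from a hypothetical source index my-index into my-new-index while overriding the replica count and removing index blocks; all names and values are illustrative.

POST /_create_from/my-index/my-new-index
curl \
 --request POST 'http://api.example.com/_create_from/my-index/my-new-index' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"settings_override":{"number_of_replicas":0},"remove_index_blocks":true}'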


Query parameters

  • allow_no_indices boolean

    If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.

  • ccs_minimize_roundtrips boolean

    If true, network round-trips are minimized for cross-cluster search requests.

  • expand_wildcards string | array[string]

    The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. Valid values are: all, open, closed, hidden, none.

  • explain boolean

    If true, the response includes additional details about score computation as part of a hit.

  • ignore_throttled boolean Deprecated

    If true, specified concrete, expanded, or aliased indices are not included in the response when throttled.

  • ignore_unavailable boolean

    If false, the request returns an error if it targets a missing or closed index.

  • preference string

    The node or shard the operation should be performed on. It is random by default.

  • profile boolean

    If true, the query execution is profiled.

  • routing string

    A custom value used to route operations to a specific shard.

  • scroll string

    Specifies how long a consistent view of the index should be maintained for scrolled search.

  • search_type string

    The type of the search operation.

    Values are query_then_fetch or dfs_query_then_fetch.

  • rest_total_hits_as_int boolean

    If true, hits.total is rendered as an integer in the response. If false, it is rendered as an object.

  • typed_keys boolean

    If true, the response prefixes aggregation and suggester names with their respective types.

application/json

Body Required

  • explain boolean

    If true, returns detailed information about score calculation as part of each hit. If you specify both this and the explain query parameter, the API uses only the query parameter.

  • id string
  • params object

    Key-value pairs used to replace Mustache variables in the template. The key is the variable name. The value is the variable value.

    Hide params attribute Show params attribute object
    • * object Additional properties
  • profile boolean

    If true, the query execution is profiled.

  • source string

    An inline search template. Supports the same parameters as the search API's request body. It also supports Mustache variables. If no id is specified, this parameter is required.

Responses

GET /_search/template
curl \
 --request GET 'http://api.example.com/_search/template' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"id\": \"my-search-template\",\n  \"params\": {\n    \"query_string\": \"hello world\",\n    \"from\": 0,\n    \"size\": 10\n  }\n}"'
Request example
Run `GET my-index/_search/template` to run a search with a search template.
{
  "id": "my-search-template",
  "params": {
    "query_string": "hello world",
    "from": 0,
    "size": 10
  }
}
Response examples (200)
{
  "took": 42.0,
  "timed_out": true,
  "_shards": {
    "failed": 42.0,
    "successful": 42.0,
    "total": 42.0,
    "failures": [
      {
        "index": "string",
        "node": "string",
        "reason": {
          "type": "string",
          "reason": "string",
          "stack_trace": "string",
          "caused_by": {},
          "root_cause": [
            {}
          ],
          "suppressed": [
            {}
          ]
        },
        "shard": 42.0,
        "status": "string"
      }
    ],
    "skipped": 42.0
  },
  "hits": {
    "total": {
      "relation": "eq",
      "value": 42.0
    },
    "hits": [
      {
        "_index": "string",
        "_id": "string",
        "_score": 42.0,
        "_explanation": {
          "description": "string",
          "details": [
            {}
          ],
          "value": 42.0
        },
        "fields": {
          "additionalProperty1": {},
          "additionalProperty2": {}
        },
        "highlight": {
          "additionalProperty1": [
            "string"
          ],
          "additionalProperty2": [
            "string"
          ]
        },
        "inner_hits": {
          "additionalProperty1": {
            "hits": {}
          },
          "additionalProperty2": {
            "hits": {}
          }
        },
        "matched_queries": [
          "string"
        ],
        "_nested": {
          "field": "string",
          "offset": 42.0,
          "_nested": {}
        },
        "_ignored": [
          "string"
        ],
        "ignored_field_values": {
          "additionalProperty1": [
            {}
          ],
          "additionalProperty2": [
            {}
          ]
        },
        "_shard": "string",
        "_node": "string",
        "_routing": "string",
        "_source": {},
        "_rank": 42.0,
        "_seq_no": 42.0,
        "_primary_term": 42.0,
        "_version": 42.0,
        "sort": [
          42.0
        ]
      }
    ],
    "max_score": 42.0
  },
  "aggregations": {},
  "_clusters": {
    "skipped": 42.0,
    "successful": 42.0,
    "total": 42.0,
    "running": 42.0,
    "partial": 42.0,
    "failed": 42.0,
    "details": {
      "additionalProperty1": {
        "status": "running",
        "indices": "string",
        "": 42.0,
        "timed_out": true,
        "_shards": {
          "failed": 42.0,
          "successful": 42.0,
          "total": 42.0,
          "failures": [
            {}
          ],
          "skipped": 42.0
        },
        "failures": [
          {
            "index": "string",
            "node": "string",
            "reason": {},
            "shard": 42.0,
            "status": "string"
          }
        ]
      },
      "additionalProperty2": {
        "status": "running",
        "indices": "string",
        "": 42.0,
        "timed_out": true,
        "_shards": {
          "failed": 42.0,
          "successful": 42.0,
          "total": 42.0,
          "failures": [
            {}
          ],
          "skipped": 42.0
        },
        "failures": [
          {
            "index": "string",
            "node": "string",
            "reason": {},
            "shard": 42.0,
            "status": "string"
          }
        ]
      }
    }
  },
  "fields": {
    "additionalProperty1": {},
    "additionalProperty2": {}
  },
  "max_score": 42.0,
  "num_reduce_phases": 42.0,
  "profile": {
    "shards": [
      {
        "aggregations": [
          {
            "breakdown": {},
            "description": "string",
            "type": "string",
            "debug": {},
            "children": [
              {}
            ]
          }
        ],
        "cluster": "string",
        "dfs": {
          "statistics": {
            "type": "string",
            "description": "string",
            "time": "string",
            "breakdown": {},
            "debug": {},
            "children": [
              {}
            ]
          },
          "knn": [
            {}
          ]
        },
        "fetch": {
          "type": "string",
          "description": "string",
          "": 42.0,
          "breakdown": {
            "load_source": 42.0,
            "load_source_count": 42.0,
            "load_stored_fields": 42.0,
            "load_stored_fields_count": 42.0,
            "next_reader": 42.0,
            "next_reader_count": 42.0,
            "process_count": 42.0,
            "process": 42.0
          },
          "debug": {
            "stored_fields": [
              "string"
            ],
            "fast_path": 42.0
          },
          "children": [
            {}
          ]
        },
        "id": "string",
        "index": "string",
        "node_id": "string",
        "searches": [
          {
            "collector": [
              {}
            ],
            "query": [
              {}
            ],
            "rewrite_time": 42.0
          }
        ],
        "shard_id": 42.0
      }
    ]
  },
  "pit_id": "string",
  "_scroll_id": "string",
  "suggest": {
    "additionalProperty1": [
      {
        "length": 42.0,
        "offset": 42.0,
        "text": "string"
      }
    ],
    "additionalProperty2": [
      {
        "length": 42.0,
        "offset": 42.0,
        "text": "string"
      }
    ]
  },
  "terminated_early": true
}
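As a further illustrative sketch, a template can also be supplied inline through the source parameter instead of a stored template ID; the index name my-index and the field and parameter names here are assumptions, not part of this reference.

GET /my-index/_search/template
curl \
 --request GET 'http://api.example.com/my-index/_search/template' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"source":"{\"query\":{\"match\":{\"message\":\"{{query_string}}\"}}}","params":{"query_string":"hello world"}}'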


Delete application privileges Added in 6.4.0

DELETE /_security/privilege/{application}/{name}

To use this API, you must have one of the following privileges:

  • The manage_security cluster privilege (or a greater privilege such as all).
  • The "Manage Application Privileges" global privilege for the application being referenced in the request.
External documentation

Path parameters

  • application string Required

    The name of the application. Application privileges are always associated with exactly one application.

  • name string | array[string] Required

    The name of the privilege.

Query parameters

  • refresh string

    If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.

    Values are true, false, or wait_for.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • * object Additional properties
      Hide * attribute Show * attribute object
      • * object Additional properties
        Hide * attribute Show * attribute object
DELETE /_security/privilege/{application}/{name}
curl \
 --request DELETE 'http://api.example.com/_security/privilege/{application}/{name}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `DELETE /_security/privilege/myapp/read`. If the privilege is successfully deleted, `found` is set to `true`.
{
  "myapp": {
    "read": {
      "found" : true
    }
  }
}
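For illustration, multiple privileges can be removed in one call by passing a comma-separated list of names and, if desired, waiting for a refresh; the application and privilege names below are hypothetical.

DELETE /_security/privilege/myapp/read,write?refresh=wait_for
curl \
 --request DELETE 'http://api.example.com/_security/privilege/myapp/read,write?refresh=wait_for' \
 --header "Authorization: $API_KEY"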


Create or update users

PUT /_security/user/{username}

Add and update users in the native realm. A password is required for adding a new user but is optional when updating an existing user. To change a user's password without updating any other fields, use the change password API.

Path parameters

  • username string Required

    An identifier for the user.

    NOTE: Usernames must be at least 1 and no more than 507 characters. They can contain alphanumeric characters (a-z, A-Z, 0-9), spaces, punctuation, and printable symbols in the Basic Latin (ASCII) block. Leading or trailing whitespace is not allowed.

Query parameters

  • refresh string

    Valid values are true, false, and wait_for. These values have the same meaning as in the index API, but the default value for this API is true.

    Values are true, false, or wait_for.

application/json

Body Required

  • username string
  • metadata object
    Hide metadata attribute Show metadata attribute object
    • * object Additional properties
  • password string
  • password_hash string

    A hash of the user's password. This must be produced using the same hashing algorithm as has been configured for password storage. For more details, see the explanation of the xpack.security.authc.password_hashing.algorithm setting in the user cache and password hash algorithm documentation. Using this parameter allows the client to pre-hash the password for performance and/or confidentiality reasons. The password parameter and the password_hash parameter cannot be used in the same request.

    External documentation
  • roles array[string]

    A set of roles the user has. The roles determine the user's access permissions. To create a user without any roles, specify an empty list ([]).

  • enabled boolean

    Specifies whether the user is enabled.

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • created boolean Required

      A successful call returns a JSON structure that shows whether the user has been created or updated. When an existing user is updated, created is set to false.

PUT /_security/user/{username}
curl \
 --request PUT 'http://api.example.com/_security/user/{username}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"password\" : \"l0ng-r4nd0m-p@ssw0rd\",\n  \"roles\" : [ \"admin\", \"other_role1\" ],\n  \"full_name\" : \"Jack Nicholson\",\n  \"email\" : \"jacknich@example.com\",\n  \"metadata\" : {\n    \"intelligence\" : 7\n  }\n}"'
Request example
Run `POST /_security/user/jacknich` to add the user `jacknich` to the native realm.
{
  "password" : "l0ng-r4nd0m-p@ssw0rd",
  "roles" : [ "admin", "other_role1" ],
  "full_name" : "Jack Nicholson",
  "email" : "jacknich@example.com",
  "metadata" : {
    "intelligence" : 7
  }
}
Response examples (200)
A successful response from `POST /_security/user/jacknich`. When an existing user is updated, `created` is set to `false`.
{
  "created": true 
}
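As a hedged sketch, an existing user can be updated without supplying the password again; the username, roles, and other values below are illustrative only.

PUT /_security/user/jacknich
curl \
 --request PUT 'http://api.example.com/_security/user/jacknich' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"roles":["admin"],"full_name":"Jack Nicholson","email":"jacknich@example.com","enabled":true}'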












Disable users

POST /_security/user/{username}/_disable

Disable users in the native realm. By default, when you create users, they are enabled. You can use this API to revoke a user's access to Elasticsearch.

Path parameters

  • username string Required

    An identifier for the user.

Query parameters

  • refresh string

    If true (the default) then refresh the affected shards to make this operation visible to search, if wait_for then wait for a refresh to make this operation visible to search, if false then do nothing with refreshes.

    Values are true, false, or wait_for.

Responses

POST /_security/user/{username}/_disable
curl \
 --request POST 'http://api.example.com/_security/user/{username}/_disable' \
 --header "Authorization: $API_KEY"
Response examples (200)
{}
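For example, the hypothetical user jacknich could be disabled with a request such as the following; the refresh parameter is optional and shown only for illustration.

POST /_security/user/jacknich/_disable?refresh=true
curl \
 --request POST 'http://api.example.com/_security/user/jacknich/_disable?refresh=true' \
 --header "Authorization: $API_KEY"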


Enroll Kibana Added in 8.0.0

GET /_security/enroll/kibana

Enable a Kibana instance to configure itself for communication with a secured Elasticsearch cluster.

NOTE: This API is currently intended for internal use only by Kibana. Kibana uses this API internally to configure itself for communications with an Elasticsearch cluster that already has security features enabled.

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • token object Required
      Hide token attributes Show token attributes object
      • name string Required

        The name of the bearer token for the elastic/kibana service account.

      • value string Required

        The value of the bearer token for the elastic/kibana service account. Use this value to authenticate the service account with Elasticsearch.

    • http_ca string Required

      The CA certificate used to sign the node certificates that Elasticsearch uses for TLS on the HTTP layer. The certificate is returned as a Base64 encoded string of the ASN.1 DER encoding of the certificate.

GET /_security/enroll/kibana
curl \
 --request GET 'http://api.example.com/_security/enroll/kibana' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_security/enroll/kibana`.
{
  "token" : {
    "name" : "enroll-process-token-1629123923000", 
    "value": "AAEAAWVsYXN0aWM...vZmxlZXQtc2VydmVyL3Rva2VuMTo3TFdaSDZ" 
  },
  "http_ca" : "MIIJlAIBAzVoGCSqGSIb3...vsDfsA3UZBAjEPfhubpQysAICAA=", 
}


Update a cross-cluster API key

PUT /_security/cross_cluster/api_key/{id}

Update the attributes of an existing cross-cluster API key, which is used for API key based remote cluster access.

To use this API, you must have at least the manage_security cluster privilege. Users can only update API keys that they created. To update another user's API key, use the run_as feature to submit a request on behalf of another user.

IMPORTANT: It's not possible to use an API key as the authentication credential for this API. To update an API key, the owner user's credentials are required.

It's not possible to update expired API keys, or API keys that have been invalidated by the invalidate API key API.

This API supports updates to an API key's access scope, metadata, and expiration. The owner user's information, such as the username and realm, is also updated automatically on every call.

NOTE: This API cannot update REST API keys, which should be updated by either the update API key or bulk update API keys API.

External documentation

Path parameters

  • id string Required

    The ID of the cross-cluster API key to update.

application/json

Body Required

  • access object Required
    Hide access attributes Show access attributes object
    • replication array[object]

      A list of indices permission entries for cross-cluster replication.

      Hide replication attributes Show replication attributes object
  • expiration string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • metadata object
    Hide metadata attribute Show metadata attribute object
    • * object Additional properties

Responses

  • 200 application/json
    Hide response attribute Show response attribute object
    • updated boolean Required

      If true, the API key was updated. If false, the API key didn’t change because no change was detected.

PUT /_security/cross_cluster/api_key/{id}
curl \
 --request PUT 'http://api.example.com/_security/cross_cluster/api_key/{id}' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"access\": {\n    \"replication\": [\n      {\n        \"names\": [\"archive\"]\n      }\n    ]\n  },\n  \"metadata\": {\n    \"application\": \"replication\"\n  }\n}"'
Request example
Run `PUT /_security/cross_cluster/api_key/VuaCfGcBCdbkQm-e5aOx` to update a cross-cluster API key, assigning it new access scope and metadata.
{
  "access": {
    "replication": [
      {
        "names": ["archive"]
      }
    ]
  },
  "metadata": {
    "application": "replication"
  }
}
Response examples (200)
A successful response from `PUT /_security/cross_cluster/api_key/VuaCfGcBCdbkQm-e5aOx` that indicates that the API key was updated.
{
  "updated": true
}
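As an illustrative sketch, an update can also set an expiration alongside the access scope and metadata in the same call; the key ID, index name, and duration below are assumptions, not part of this reference.

PUT /_security/cross_cluster/api_key/VuaCfGcBCdbkQm-e5aOx
curl \
 --request PUT 'http://api.example.com/_security/cross_cluster/api_key/VuaCfGcBCdbkQm-e5aOx' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"access":{"replication":[{"names":["archive"]}]},"expiration":"30d","metadata":{"application":"replication"}}'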


Path parameters

  • repository string | array[string] Required

    A comma-separated list of repository names

Query parameters

  • local boolean

    Return local information, do not retrieve the state from master node (default: false)

  • master_timeout string

    Explicit operation timeout for connection to master node.

Responses

GET /_snapshot/{repository}
curl \
 --request GET 'http://api.example.com/_snapshot/{repository}' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_snapshot/my_repository`.
{
  "my_repository" : {
    "type" : "fs",
    "uuid" : "0JLknrXbSUiVPuLakHjBrQ",
    "settings" : {
      "location" : "my_backup_location"
    }
  }
}
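For illustration, several repositories can be requested at once with a comma-separated list of repository names; the names below are hypothetical.

GET /_snapshot/my_repository,my_other_repository
curl \
 --request GET 'http://api.example.com/_snapshot/my_repository,my_other_repository' \
 --header "Authorization: $API_KEY"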


Get snapshot lifecycle management statistics Added in 7.5.0

GET /_slm/stats

Get global and policy-level statistics about actions taken by snapshot lifecycle management.

Query parameters

  • master_timeout string

    Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.

  • timeout string

    Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.

Responses

GET /_slm/stats
curl \
 --request GET 'http://api.example.com/_slm/stats' \
 --header "Authorization: $API_KEY"
Response examples (200)
A successful response from `GET /_slm/stats`.
{
  "retention_runs": 13,
  "retention_failed": 0,
  "retention_timed_out": 0,
  "retention_deletion_time": "1.4s",
  "retention_deletion_time_millis": 1404,
  "policy_stats": [ ],
  "total_snapshots_taken": 1,
  "total_snapshots_failed": 1,
  "total_snapshots_deleted": 0,
  "total_snapshot_deletion_failures": 0
}

Get SQL search results Added in 6.3.0

GET /_sql

Run an SQL request.

Query parameters

  • format string

    The format for the response. You can also specify a format using the Accept HTTP header. If you specify both this parameter and the Accept HTTP header, this parameter takes precedence.

    Values are csv, json, tsv, txt, yaml, cbor, or smile.
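
As an illustrative sketch (not part of the original examples), the format can be selected either with this query parameter or with an Accept header; the text/csv media type below is an assumption matching the csv value listed above. If both are specified, the format parameter takes precedence.

GET /_sql?format=csv
curl \
 --request GET 'http://api.example.com/_sql?format=csv' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"}'

The equivalent request using content negotiation instead of the query parameter:

curl \
 --request GET 'http://api.example.com/_sql' \
 --header "Authorization: $API_KEY" \
 --header "Accept: text/csv" \
 --header "Content-Type: application/json" \
 --data '{"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"}'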

application/json

Body Required

  • allow_partial_search_results boolean

    If true, the response has partial results when there are shard request timeouts or shard failures. If false, the API returns an error with no partial results.

  • catalog string

    The default catalog (cluster) for queries. If unspecified, the queries execute on the data in the local cluster only.

  • columnar boolean

    If true, the results are returned in a columnar fashion: one row represents all the values of a certain column from the current page of results. The API supports this parameter only for CBOR, JSON, SMILE, and YAML responses. A columnar sketch appears at the end of this parameter list.
  • cursor string

    The cursor used to retrieve a set of paginated results. If you specify a cursor, the API uses only the columnar and time_zone request body parameters; it ignores other request body parameters. A pagination sketch appears at the end of this section.

  • fetch_size number

    The maximum number of rows (or entries) to return in one response.

  • field_multi_value_leniency boolean

    If false, the API returns an exception when encountering multiple values for a field. If true, the API is lenient and returns the first value from the array with no guarantee of consistent results.

  • filter object

    An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.

  • index_include_frozen boolean

    If true, the search can run on frozen indices.

  • keep_alive string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • keep_on_completion boolean

    If true, Elasticsearch stores synchronous searches if you also specify the wait_for_completion_timeout parameter. If false, Elasticsearch only stores async searches that don't finish before the wait_for_completion_timeout.

  • page_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • params object

    The values for parameters in the query.

    Hide params attribute Show params attribute object
    • * object Additional properties
  • query string

    The SQL query to run.

  • request_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.

  • runtime_mappings object
    Hide runtime_mappings attribute Show runtime_mappings attribute object
    • * object Additional properties
      Hide * attributes Show * attributes object
      • fields object

        For type composite

        Hide fields attribute Show fields attribute object
        • * object Additional properties
          Hide * attribute Show * attribute object
          • type string Required

            Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

      • fetch_fields array[object]

        For type lookup

        Hide fetch_fields attributes Show fetch_fields attributes object
        • field string Required

          Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

        • format string
      • format string

        A custom format for date type runtime fields.

      • input_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • target_field string

        Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.

      • script object
        Hide script attributes Show script attributes object
        • source string

          The script source.

        • id string
        • params object

          Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.

          Hide params attribute Show params attribute object
          • * object Additional properties
        • lang string

          Any of:

          Values are painless, expression, mustache, or java.

        • options object
          Hide options attribute Show options attribute object
          • * string Additional properties
      • type string Required

        Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.

  • wait_for_completion_timeout string

    A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours) and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
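
As referenced in the columnar parameter above, here is a sketch (not from the original page) of a columnar request and the rough shape of its response. The column names, types, and values are placeholders, and the values field layout is an assumption about how a columnar page is returned.

GET /_sql?format=json
curl \
 --request GET 'http://api.example.com/_sql?format=json' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{
  "query": "SELECT author, page_count FROM library ORDER BY page_count DESC LIMIT 2",
  "columnar": true
}'

{
  "columns": [
    { "name": "author", "type": "text" },
    { "name": "page_count", "type": "short" }
  ],
  "values": [
    [ "string", "string" ],
    [ 768, 613 ]
  ]
}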

Responses

  • 200 application/json
    Hide response attributes Show response attributes object
    • columns array[object]

      Column headings for the search results. Each object is a column.

      Hide columns attributes Show columns attributes object
    • cursor string

      The cursor for the next set of paginated results. For CSV, TSV, and TXT responses, this value is returned in the Cursor HTTP header.

    • id string
    • is_running boolean

      If true, the search is still running. If false, the search has finished. This value is returned only for async and saved synchronous searches. For CSV, TSV, and TXT responses, this value is returned in the Async-partial HTTP header.

    • is_partial boolean

      If true, the response does not contain complete search results. If is_partial is true and is_running is true, the search is still running. If is_partial is true but is_running is false, the results are partial due to a failure or timeout. This value is returned only for async and saved synchronous searches. For CSV, TSV, and TXT responses, this value is returned in the Async-partial HTTP header.

    • rows array[array] Required

      The values for the search results.

GET /_sql
curl \
 --request GET 'http://api.example.com/_sql' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '"{\n  \"query\": \"SELECT * FROM library ORDER BY page_count DESC LIMIT 5\"\n}"'
Request example
Run `POST _sql?format=txt` to get results for an SQL search.
{
  "query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"
}
Response examples (200)
{
  "columns": [
    {
      "name": "string",
      "type": "string"
    }
  ],
  "cursor": "string",
  "id": "string",
  "is_running": true,
  "is_partial": true,
  "rows": [
    [
      {}
    ]
  ]
}
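
A pagination sketch, not part of the original examples, combining the fetch_size and cursor body parameters described above. The cursor value is a placeholder standing in for the string returned by the previous response, and the SQL close endpoint (POST /_sql/close) is used to release the cursor early.

curl \
 --request GET 'http://api.example.com/_sql?format=json' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}'

The response holds at most five rows plus a cursor string. Sending that cursor back retrieves the next page; other body parameters are ignored on cursor requests.

curl \
 --request GET 'http://api.example.com/_sql?format=json' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{
  "cursor": "<cursor from the previous response>"
}'

When no further pages are needed, release the cursor:

curl \
 --request POST 'http://api.example.com/_sql/close' \
 --header "Authorization: $API_KEY" \
 --header "Content-Type: application/json" \
 --data '{
  "cursor": "<cursor from the previous response>"
}'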