Using ES|QL across clusters
Cross-cluster search for ES|QL is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
For cross-cluster search with ES|QL on version 8.16 or later, remote clusters must also be on version 8.16 or later.
With ES|QL, you can execute a single query across multiple clusters.
Prerequisites
- Cross-cluster search requires remote clusters. To set up remote clusters on Elasticsearch Service, see configure remote clusters on Elasticsearch Service. If you run Elasticsearch on your own hardware, see Remote clusters.
  To ensure your remote cluster configuration supports cross-cluster search, see Supported cross-cluster search configurations.
- For full cross-cluster search capabilities, the local and remote cluster must be on the same subscription level.
- The local coordinating node must have the remote_cluster_client node role.
- If you use sniff mode, the local coordinating node must be able to connect to seed and gateway nodes on the remote cluster.
  We recommend using gateway nodes capable of serving as coordinating nodes. The seed nodes can be a subset of these gateway nodes.
- If you use proxy mode, the local coordinating node must be able to connect to the configured proxy_address. The proxy at this address must be able to route connections to gateway and coordinating nodes on the remote cluster.
- Cross-cluster search requires different security privileges on the local cluster and remote cluster. See Configure privileges for cross-cluster search and Remote clusters.
Security model
Elasticsearch supports two security models for cross-cluster search (CCS):
- TLS certificate authentication
- API key authentication
To check which security model is being used to connect your clusters, run GET _remote/info. If you're using the API key authentication method, you'll see the "cluster_credentials" key in the response.
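If you make this check programmatically, the distinction can be derived from the response body. A minimal illustrative sketch in Python; the sample response below is hypothetical and abbreviated, not captured from a real cluster:

```python
# Classify the CCS security model of each remote cluster from a
# GET _remote/info style response body. API key authentication is
# indicated by the presence of the "cluster_credentials" key.

def security_model(remote_info: dict) -> dict:
    models = {}
    for alias, info in remote_info.items():
        if "cluster_credentials" in info:
            models[alias] = "api_key"
        else:
            models[alias] = "certificate"
    return models

# Hypothetical sample response (abbreviated):
sample = {
    "cluster_one": {"connected": True, "cluster_credentials": "::es_redacted::"},
    "cluster_two": {"connected": True},
}
print(security_model(sample))
# → {'cluster_one': 'api_key', 'cluster_two': 'certificate'}
```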
TLS certificate authentication
TLS certificate authentication secures remote clusters with mutual TLS. This can be the preferred model when a single administrator has full control over both clusters. We generally recommend that roles and their privileges be identical in both clusters.
Refer to TLS certificate authentication for prerequisites and detailed setup instructions.
API key authentication
The following information pertains to using ES|QL across clusters with the API key based security model. You'll need to follow the steps on that page for the full setup instructions. This page only contains additional information specific to ES|QL.
API key based cross-cluster search (CCS) enables more granular control over allowed actions between clusters. This may be the preferred model when you have different administrators for different clusters and want more control over who can access what data. In this model, cluster administrators must explicitly define the access given to clusters and users.
You will need to:
- Create an API key on the remote cluster using the Create cross-cluster API key API or using the Kibana API keys UI.
- Add the API key to the keystore on the local cluster, as part of the steps in configuring the local cluster. All cross-cluster requests from the local cluster are bound by the API key’s privileges.
Using ES|QL with the API key based security model requires some additional permissions that may not be needed when using the traditional query DSL based search.
The following example API call creates a role that can query remote indices using ES|QL when using the API key based security model.
The final privilege, remote_cluster, is required to allow remote enrich operations.
resp = client.security.put_role(
    name="remote1",
    cluster=["cross_cluster_search"],
    indices=[
        {
            "names": [""],
            "privileges": ["read"]
        }
    ],
    remote_indices=[
        {
            "names": ["logs-*"],
            "privileges": ["read", "read_cross_cluster"],
            "clusters": ["my_remote_cluster"]
        }
    ],
    remote_cluster=[
        {
            "privileges": ["monitor_enrich"],
            "clusters": ["my_remote_cluster"]
        }
    ],
)
print(resp)
const response = await client.security.putRole({
  name: "remote1",
  cluster: ["cross_cluster_search"],
  indices: [
    {
      names: [""],
      privileges: ["read"],
    },
  ],
  remote_indices: [
    {
      names: ["logs-*"],
      privileges: ["read", "read_cross_cluster"],
      clusters: ["my_remote_cluster"],
    },
  ],
  remote_cluster: [
    {
      privileges: ["monitor_enrich"],
      clusters: ["my_remote_cluster"],
    },
  ],
});
console.log(response);
POST /_security/role/remote1
{
  "cluster": ["cross_cluster_search"],
  "indices": [
    {
      "names": [""],
      "privileges": ["read"]
    }
  ],
  "remote_indices": [
    {
      "names": ["logs-*"],
      "privileges": ["read", "read_cross_cluster"],
      "clusters": ["my_remote_cluster"]
    }
  ],
  "remote_cluster": [
    {
      "privileges": ["monitor_enrich"],
      "clusters": ["my_remote_cluster"]
    }
  ]
}
1. The cross_cluster_search cluster privilege is required for the local cluster.
2. Typically, users will have permissions to read both local and remote indices. However, for cases where the role is intended to ONLY search the remote cluster, the read privilege is still required for the local cluster. To provide read access to the local cluster while disallowing reading any indices in the local cluster, the names field may be an empty string.
3. The indices allowed read access to the remote cluster. The configured cross-cluster API key must also allow this index to be read.
4. The read_cross_cluster privilege is always required when using ES|QL across clusters with the API key based security model.
5. The remote clusters to which these privileges apply. This remote cluster must be configured with a cross-cluster API key and connected to the remote cluster before the remote index can be queried. Verify connection using the Remote cluster info API.
6. Required to allow remote enrichment. Without this, the user cannot read from the enrich indices on the remote cluster. The remote_cluster security privilege was introduced in version 8.15.0.
You will then need a user or API key with the permissions you created above. The following example API call creates a user with the remote1 role.
resp = client.security.put_user(
    username="remote_user",
    password="<PASSWORD>",
    roles=["remote1"],
)
print(resp)
const response = await client.security.putUser({
  username: "remote_user",
  password: "<PASSWORD>",
  roles: ["remote1"],
});
console.log(response);
POST /_security/user/remote_user
{
  "password": "<PASSWORD>",
  "roles": ["remote1"]
}
Remember that all cross-cluster requests from the local cluster are bound by the cross-cluster API key's privileges, which are controlled by the remote cluster's administrator.
Cross-cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to add the new permissions required for ES|QL with ENRICH.
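A hedged sketch of what such an update might look like: the access block below mirrors the logs-* pattern used in the role example above, and "<key-id>" is a placeholder for the ID of the existing key; adapt both to your own deployment rather than treating this as a definitive recipe.

```python
# Sketch: an "access" definition for a cross-cluster API key, mirroring
# the remote_indices pattern from the role example above. "<key-id>" is
# a placeholder, not a real key ID.
access = {
    "search": [
        {"names": ["logs-*"]}  # indices the remote cluster allows to be searched
    ]
}

# With the elasticsearch-py client, the update call would look roughly like:
# resp = client.security.update_cross_cluster_api_key(id="<key-id>", access=access)
# print(resp)

print(access)
```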
Remote cluster setup
Once the security model is configured, you can add remote clusters.
The following cluster update settings API request adds three remote clusters: cluster_one, cluster_two, and cluster_three.
resp = client.cluster.put_settings(
    persistent={
        "cluster": {
            "remote": {
                "cluster_one": {
                    "seeds": ["35.238.149.1:9300"],
                    "skip_unavailable": True
                },
                "cluster_two": {
                    "seeds": ["35.238.149.2:9300"],
                    "skip_unavailable": False
                },
                "cluster_three": {
                    "seeds": ["35.238.149.3:9300"]
                }
            }
        }
    },
)
print(resp)
response = client.cluster.put_settings(
  body: {
    persistent: {
      cluster: {
        remote: {
          cluster_one: {
            seeds: ['35.238.149.1:9300'],
            skip_unavailable: true
          },
          cluster_two: {
            seeds: ['35.238.149.2:9300'],
            skip_unavailable: false
          },
          cluster_three: {
            seeds: ['35.238.149.3:9300']
          }
        }
      }
    }
  }
)
puts response
const response = await client.cluster.putSettings({
  persistent: {
    cluster: {
      remote: {
        cluster_one: {
          seeds: ["35.238.149.1:9300"],
          skip_unavailable: true,
        },
        cluster_two: {
          seeds: ["35.238.149.2:9300"],
          skip_unavailable: false,
        },
        cluster_three: {
          seeds: ["35.238.149.3:9300"],
        },
      },
    },
  },
});
console.log(response);
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster_one": {
          "seeds": ["35.238.149.1:9300"],
          "skip_unavailable": true
        },
        "cluster_two": {
          "seeds": ["35.238.149.2:9300"],
          "skip_unavailable": false
        },
        "cluster_three": {
          "seeds": ["35.238.149.3:9300"]
        }
      }
    }
  }
}
Since skip_unavailable was not set on cluster_three, it uses the default of true. See the Optional remote clusters section for information about this setting.
Query across multiple clusters
In the FROM command, specify data streams and indices on remote clusters using the format <remote_cluster_name>:<target>. For instance, the following ES|QL request queries the my-index-000001 index on a single remote cluster named cluster_one:
FROM cluster_one:my-index-000001
| LIMIT 10
Similarly, this ES|QL request queries the my-index-000001 index from three clusters:
- The local ("querying") cluster
- Two remote clusters, cluster_one and cluster_two
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| LIMIT 10
Likewise, this ES|QL request queries the my-index-000001 index from all remote clusters (cluster_one, cluster_two, and cluster_three):
FROM *:my-index-000001
| LIMIT 10
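When such queries are assembled in application code, the <remote_cluster_name>:<target> format is straightforward to generate. A small illustrative helper (not part of any Elasticsearch client; the cluster aliases and index names are examples only):

```python
# Build the FROM clause of a cross-cluster ES|QL query. An empty cluster
# alias stands for the local cluster; remote targets are rendered as
# <remote_cluster_name>:<target>.

def build_from(targets: dict) -> str:
    parts = []
    for cluster, indices in targets.items():
        for index in indices:
            parts.append(f"{cluster}:{index}" if cluster else index)
    return "FROM " + ",".join(parts)

query = build_from({
    "": ["my-index-000001"],             # local cluster
    "cluster_one": ["my-index-000001"],  # remote clusters
    "cluster_two": ["my-index-000001"],
})
print(query)
# → FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
```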
Cross-cluster metadata
Using the "include_ccs_metadata": true option, users can request that ES|QL cross-cluster search responses include metadata about the search on each cluster (when the response format is JSON). Here we show an example using the async search endpoint. Cross-cluster search metadata is also present in the synchronous search endpoint response when requested.
resp = client.esql.async_query(
    format="json",
    body={
        "query": "\n    FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*\n    | STATS COUNT(http.response.status_code) BY user.id\n    | LIMIT 2\n  ",
        "include_ccs_metadata": True,
    },
)
print(resp)
const response = await client.transport.request({
  method: "POST",
  path: "/_query/async",
  querystring: {
    format: "json",
  },
  body: {
    query:
      "\n    FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*\n    | STATS COUNT(http.response.status_code) BY user.id\n    | LIMIT 2\n  ",
    include_ccs_metadata: true,
  },
});
console.log(response);
POST /_query/async?format=json
{
  "query": """
    FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*
    | STATS COUNT(http.response.status_code) BY user.id
    | LIMIT 2
  """,
  "include_ccs_metadata": true
}
Which returns:
{
  "is_running": false,
  "took": 42,
  "columns": [
    {
      "name": "COUNT(http.response.status_code)",
      "type": "long"
    },
    {
      "name": "user.id",
      "type": "keyword"
    }
  ],
  "values": [
    [4, "elkbee"],
    [1, "kimchy"]
  ],
  "_clusters": {
    "total": 3,
    "successful": 3,
    "running": 0,
    "skipped": 0,
    "partial": 0,
    "failed": 0,
    "details": {
      "(local)": {
        "status": "successful",
        "indices": "blogs",
        "took": 41,
        "_shards": {
          "total": 13,
          "successful": 13,
          "skipped": 0,
          "failed": 0
        }
      },
      "cluster_one": {
        "status": "successful",
        "indices": "cluster_one:my-index-000001",
        "took": 38,
        "_shards": {
          "total": 4,
          "successful": 4,
          "skipped": 0,
          "failed": 0
        }
      },
      "cluster_two": {
        "status": "successful",
        "indices": "cluster_two:my-index*",
        "took": 40,
        "_shards": {
          "total": 18,
          "successful": 18,
          "skipped": 1,
          "failed": 0
        }
      }
    }
  }
}
1. How long the entire search (across all clusters) took, in milliseconds.
2. This section of counters shows all possible cluster search states and how many cluster searches are currently in that state. The clusters can have one of the following statuses: running, successful (searches on all shards were successful), skipped (the search failed on a cluster marked with skip_unavailable=true), or failed (the search failed on a cluster marked with skip_unavailable=false).
3. The _clusters/details section shows the metadata for the search on each cluster.
4. If you included indices from the local cluster you sent the request to in your cross-cluster search, it is identified as "(local)".
5. How long (in milliseconds) the search took on each cluster. This can be useful to determine which clusters have slower response times than others.
6. The shard details for the search on that cluster, including a count of shards that were skipped due to the can-match phase results. Shards are skipped when they cannot have any matching data and therefore are not included in the full ES|QL query.
The cross-cluster metadata can be used to determine whether any data came back from a cluster. For instance, in the query below, the wildcard expression for cluster_two did not resolve to a concrete index (or indices). The cluster is, therefore, marked as skipped, and the total number of shards searched is set to zero.
resp = client.esql.async_query(
    format="json",
    body={
        "query": "\n    FROM cluster_one:my-index*,cluster_two:logs*\n    | STATS COUNT(http.response.status_code) BY user.id\n    | LIMIT 2\n  ",
        "include_ccs_metadata": True,
    },
)
print(resp)
const response = await client.transport.request({
  method: "POST",
  path: "/_query/async",
  querystring: {
    format: "json",
  },
  body: {
    query:
      "\n    FROM cluster_one:my-index*,cluster_two:logs*\n    | STATS COUNT(http.response.status_code) BY user.id\n    | LIMIT 2\n  ",
    include_ccs_metadata: true,
  },
});
console.log(response);
POST /_query/async?format=json
{
  "query": """
    FROM cluster_one:my-index*,cluster_two:logs*
    | STATS COUNT(http.response.status_code) BY user.id
    | LIMIT 2
  """,
  "include_ccs_metadata": true
}
Which returns:
{
  "is_running": false,
  "took": 55,
  "columns": [
    ... // not shown
  ],
  "values": [
    ... // not shown
  ],
  "_clusters": {
    "total": 2,
    "successful": 2,
    "running": 0,
    "skipped": 0,
    "partial": 0,
    "failed": 0,
    "details": {
      "cluster_one": {
        "status": "successful",
        "indices": "cluster_one:my-index*",
        "took": 38,
        "_shards": {
          "total": 4,
          "successful": 4,
          "skipped": 0,
          "failed": 0
        }
      },
      "cluster_two": {
        "status": "skipped",
        "indices": "cluster_two:logs*",
        "took": 0,
        "_shards": {
          "total": 0,
          "successful": 0,
          "skipped": 0,
          "failed": 0
        }
      }
    }
  }
}
1. This cluster is marked as skipped, since there were no matching indices on that cluster.
2. Indicates that no shards were searched (due to not having any matching indices).
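Client-side, that check can be automated by walking the _clusters metadata. A minimal sketch; the sample response below is abbreviated from the example above:

```python
# Return the clusters that contributed no data: those marked "skipped"
# or whose shard total is zero in the "_clusters" metadata.

def empty_clusters(resp: dict) -> list:
    details = resp.get("_clusters", {}).get("details", {})
    return [
        name
        for name, info in details.items()
        if info.get("status") == "skipped"
        or info.get("_shards", {}).get("total") == 0
    ]

# Abbreviated sample response:
sample = {
    "_clusters": {
        "details": {
            "cluster_one": {"status": "successful", "_shards": {"total": 4}},
            "cluster_two": {"status": "skipped", "_shards": {"total": 0}},
        }
    }
}
print(empty_clusters(sample))
# → ['cluster_two']
```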
Enrich across clusters
Enrich in ES|QL across clusters operates similarly to local enrich. If the enrich policy and its enrich indices are consistent across all clusters, simply write the enrich command as you would without remote clusters. In this default mode, ES|QL can execute the enrich command on either the local cluster or the remote clusters, aiming to minimize computation or inter-cluster data transfer. Ensuring that the policy exists with consistent data on both the local cluster and the remote clusters is critical for ES|QL to produce a consistent query result.
Enrich in ES|QL across clusters using the API key based security model was introduced in version 8.15.0. Cross-cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to use the new required permissions. Refer to the example in the API key authentication section.
In the following example, the enrich with hosts policy can be executed on either the local cluster or the remote cluster cluster_one.
FROM my-index-000001,cluster_one:my-index-000001
| ENRICH hosts ON ip
| LIMIT 10
Even when an ES|QL query targets only remote clusters, the enrich operation can still be executed on the local cluster. This means the query below also requires the hosts enrich policy to exist on the local cluster.
FROM cluster_one:my-index-000001,cluster_two:my-index-000001
| LIMIT 10
| ENRICH hosts ON ip
Enrich with coordinator mode
ES|QL provides the enrich _coordinator mode to force ES|QL to execute the enrich command on the local cluster. This mode should be used when the enrich policy is not available on the remote clusters, or when maintaining consistency of enrich indices across clusters is challenging.
FROM my-index-000001,cluster_one:my-index-000001
| ENRICH _coordinator:hosts ON ip
| SORT host_name
| LIMIT 10
Enrich with the _coordinator mode usually increases inter-cluster data transfer and workload on the local cluster.
Enrich with remote mode
ES|QL also provides the enrich _remote mode to force ES|QL to execute the enrich command independently on each remote cluster where the target indices reside. This mode is useful for managing different enrich data on each cluster, such as detailed information about the hosts in each region, where the target (main) indices contain log events from these hosts.
In the below example, the hosts enrich policy is required to exist on all clusters involved: the local ("querying") cluster (as local indices are included), and the remote clusters cluster_one and cluster_two.
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH _remote:hosts ON ip
| SORT host_name
| LIMIT 10
A _remote enrich cannot be executed after a STATS command. The following example would result in an error:
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| STATS COUNT(*) BY ip
| ENRICH _remote:hosts ON ip
| SORT host_name
| LIMIT 10
Multiple enrich commands
You can include multiple enrich commands in the same query with different modes, and ES|QL will attempt to execute them accordingly. For example, this query performs two enriches, first with the hosts policy on any cluster and then with the vendors policy on the local cluster.
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH hosts ON ip
| ENRICH _coordinator:vendors ON os
| LIMIT 10
A _remote enrich command can't be executed after a _coordinator enrich command. The following example would result in an error.
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH _coordinator:hosts ON ip
| ENRICH _remote:vendors ON os
| LIMIT 10
Excluding clusters or indices from ES|QL query
To exclude an entire cluster, prefix the cluster alias with a minus sign in the FROM command, for example: -my_cluster:*:
FROM my-index-000001,cluster*:my-index-000001,-cluster_three:*
| LIMIT 10
To exclude a specific remote index, prefix the index with a minus sign in the FROM command, such as my_cluster:-my_index:
FROM my-index-000001,cluster*:my-index-*,cluster_three:-my-index-000001
| LIMIT 10
Optional remote clusters
Cross-cluster search for ES|QL currently does not respect the skip_unavailable setting. As a result, if a remote cluster specified in the request is unavailable or fails, cross-cluster search for ES|QL queries will fail regardless of the setting.
We are actively working to align the behavior of cross-cluster search for ES|QL with other cross-cluster search APIs.
Query across clusters during an upgrade
You can still search a remote cluster while performing a rolling upgrade on the local cluster. However, the local coordinating node's "upgrade from" and "upgrade to" version must be compatible with the remote cluster's gateway node.
Running multiple versions of Elasticsearch in the same cluster beyond the duration of an upgrade is not supported.
For more information about upgrades, see Upgrading Elasticsearch.