Disable an Elasticsearch data tier

Attempting to scale nodes down in size, reducing availability zones, or reverting an autoscaling change can all result in cluster instability, cluster inaccessibility, and even data corruption or loss in extreme cases.
To avoid these issues, especially in production environments, take the following precautions in addition to making the configuration changes to your indices and ILM described on this page:
- Review the disk size, CPU, JVM memory pressure, and other performance metrics of your deployment before scaling down.
- Make sure that you have enough resources and availability zones to handle your workloads after scaling down.
- Check that your deployment’s hardware profile is correct for your business use case. For example, if you need to scale due to CPU pressure increases and are using a Storage Optimized hardware profile, consider switching to a CPU Optimized configuration instead.
Read https://www.elastic.co/cloud/shared-responsibility for additional details.
If in doubt, reach out to Support.
The process of disabling a data tier depends on whether it stores searchable snapshots or regular indices. The hot and warm tiers store regular indices and the frozen tier stores searchable snapshots, while the cold tier can store either. To check whether a cold tier contains searchable snapshots, perform the following requests:
# cold data tier searchable snapshot indices
GET /_cat/indices/restored-*

# frozen data tier searchable snapshot indices
GET /_cat/indices/partial-*
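If the cold tier holds searchable snapshot indices, the first request returns one row per restored- index, while an empty response means the tier holds only regular indices. An illustrative row follows (the index name, UUID, and sizes are made up):

green open restored-my-index aBcDeFgHiJkLmNoPqRsTuw 1 0 1000 0 25mb 25mb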
Non-searchable snapshot data tier

Elasticsearch Service tries to move all data from the nodes that are removed during plan changes. To disable a non-searchable snapshot data tier (e.g., a hot, warm, or cold tier that stores regular indices), make sure that all data on that tier can be re-allocated by reconfiguring the relevant shard allocation filters. You’ll also need to temporarily stop your index lifecycle management (ILM) policies to prevent new indices from being moved to the data tier you want to disable.
To learn more about ILM for Elasticsearch Service and about shard allocation filtering, check the corresponding Elasticsearch documentation.
To make sure that all data can be migrated from the data tier you want to disable, follow these steps:
1. Determine which nodes will be removed from the cluster.
   - Log in to the Elasticsearch Service Console.
   - From the Deployments page, select your deployment.
     On the Deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
   - Filter the list of instances by the data tier you want to disable, and note the listed instance IDs (in this example, Instance 2 and Instance 3).
2. Stop ILM.

   POST /_ilm/stop
3. Determine which shards need to be moved.

   GET /_cat/shards

   Parse the output, looking for shards allocated to the nodes to be removed from the cluster. Note that Instance #2 is shown as instance-0000000002 in the output.
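   Each _cat/shards row ends with the node name, which is what you match against the instance IDs you noted. For reference, two illustrative rows (index name, counts, and IPs are made up):

   my-index 0 p STARTED 123456 250mb 10.0.0.12 instance-0000000002
   my-index 0 r STARTED 123456 250mb 10.0.0.10 instance-0000000000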
4. Move shards off the nodes to be removed from the cluster.

   You must remove any index-level shard allocation filters from the indices on those nodes. ILM uses different rules depending on the policy and version of Elasticsearch. Check the index settings to determine which rule to use:

   GET /my-index/_settings
   - Updating data tier based allocation inclusion rules.

     Data tier based ILM policies use index.routing.allocation.include to allocate shards to the appropriate tier. The indices that use this method have index routing settings similar to the following example:

     {
       ...
       "routing": {
         "allocation": {
           "include": {
             "_tier_preference": "data_warm,data_hot"
           }
         }
       }
       ...
     }

     You must remove the relevant tier from the inclusion rules. For example, to disable the warm tier, the data_warm tier preference should be removed:

     PUT /my-index/_settings
     {
       "routing": {
         "allocation": {
           "include": {
             "_tier_preference": "data_hot"
           }
         }
       }
     }

     Updating allocation inclusion rules will trigger a shard re-allocation, moving the shards from the nodes to be removed.
   - Updating node attribute allocation requirement rules.

     Node attribute based ILM policies use index.routing.allocation.require to allocate shards to the appropriate nodes. The indices that use this method have index routing settings similar to the following example:

     {
       ...
       "routing": {
         "allocation": {
           "require": {
             "data": "warm"
           }
         }
       }
       ...
     }

     You must either remove or redefine the routing requirements. To remove the attribute requirements, use the following request:

     PUT /my-index/_settings
     {
       "routing": {
         "allocation": {
           "require": {
             "data": null
           }
         }
       }
     }

     Removing required attributes does not trigger a shard re-allocation; these shards are moved when the plan to disable the data tier is applied. Alternatively, you can use the cluster reroute API to manually re-allocate the shards before removing the nodes (see the sketch after this request), or explicitly re-allocate shards to hot nodes with the following request:

     PUT /my-index/_settings
     {
       "routing": {
         "allocation": {
           "require": {
             "data": "hot"
           }
         }
       }
     }
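     For the manual reroute option, a move command looks like the following (a minimal sketch; my-index, the shard number, and both node names are placeholders to replace with your own values):

     # move shard 0 of my-index off a node being removed onto one that remains
     POST /_cluster/reroute
     {
       "commands": [
         {
           "move": {
             "index": "my-index",
             "shard": 0,
             "from_node": "instance-0000000002",
             "to_node": "instance-0000000000"
           }
         }
       ]
     }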
   - Removing custom allocation rules.

     If indices on the nodes to be removed have shard allocation rules of other forms, they must be removed as shown in the following example:

     PUT /my-index/_settings
     {
       "routing": {
         "allocation": {
           "require": null,
           "include": null,
           "exclude": null
         }
       }
     }
5. Edit the deployment, disabling the data tier.

   If autoscaling is enabled, set the maximum size of the data tier to 0 to ensure autoscaling does not re-enable it.

   Any remaining shards on the tier being disabled are re-allocated across the remaining cluster nodes while the plan to disable the data tier is applied. Monitor shard allocation during the data migration phase to ensure all allocation rules have been correctly updated. If the plan fails to migrate data away from the data tier, re-examine the allocation rules for the indices remaining on that tier.
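   Two standard requests can help with that monitoring (a sketch; adjust to your own indices):

   # list shards sorted by node to spot any still on the tier being disabled
   GET /_cat/shards?v=true&s=node

   # if a shard remains unassigned, ask the cluster to explain why
   GET /_cluster/allocation/explain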
6. Once the plan change completes, confirm that there are no remaining nodes associated with the disabled tier and that GET _cluster/health reports green. If this is the case, re-enable ILM:

   POST _ilm/start
Searchable snapshot data tier

When data reaches the cold or frozen phase, it is automatically converted to a searchable snapshot by ILM.
If you do not intend to delete this data, manually restore each of the searchable snapshot indices to a regular index before disabling the data tier by following these steps:
1. Stop ILM and check that the ILM status is STOPPED to prevent data from migrating to the phase you intend to disable while you are working through the next steps.

   # stop ILM
   POST _ilm/stop

   # check status
   GET _ilm/status
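   Stopping is asynchronous, so repeat the status check until the response reports the stopped state:

   {
     "operation_mode": "STOPPED"
   }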
2. Capture a comprehensive list of index and searchable snapshot names.

   The index names of the searchable snapshots may differ based on the data tier. If you intend to disable the cold tier, perform the following request with the restored-* prefix. If the frozen tier is the one to be disabled, use the partial-* prefix.

   GET <searchable-snapshot-index-prefix>/_settings?filter_path=**.index.store.snapshot.snapshot_name&expand_wildcards=all

   The response lists each matching index together with the name of the searchable snapshot that backs it; in the example there are 4 indices that need to be moved away from the frozen tier.
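   A frozen-tier response has the following shape (the index and snapshot names are hypothetical):

   {
     "partial-frozen-index-1": {
       "settings": {
         "index": {
           "store": {
             "snapshot": {
               "snapshot_name": "2024.01.01-frozen-index-1"
             }
           }
         }
       }
     }
   }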
3. (Optional) Save the list of index and snapshot names in a text file, so you can access it throughout the rest of the process.
4. Remove the aliases that were applied to the searchable snapshot indices. Use the index prefix from step 2.

   POST _aliases
   {
     "actions": [
       {
         "remove": {
           "index": "<searchable-snapshot-index-prefix>-<index_name>",
           "alias": "<index_name>"
         }
       }
     ]
   }

   If you use data streams, you can skip this step.

   In the example, we remove the alias for the frozen-index-1 index.
5. Restore indices from the searchable snapshots.

   - Follow the steps to specify the data tier based allocation inclusion rules.
   - Remove the associated ILM policy (set it to null). If you want to apply a different ILM policy, follow the steps to Switch lifecycle policies.
   - If needed, specify the alias for rollover; otherwise set it to null.
   - Optionally, specify the desired number of replica shards.

   POST _snapshot/found-snapshots/<searchable_snapshot_name>/_restore
   {
     "indices": "*",
     "index_settings": {
       "index.routing.allocation.include._tier_preference": "<data_tiers>",
       "number_of_replicas": X,
       "index.lifecycle.name": "<new-policy-name>",
       "index.lifecycle.rollover_alias": "<alias-for-rollover>"
     }
   }

   The <searchable_snapshot_name> comes from the list captured in step 2 ("Capture a comprehensive list of index and searchable snapshot names").

   In the example, we restore frozen-index-1 from a snapshot in found-snapshots (the default snapshot repository) and place it in the warm tier.
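   Filled in for that example, the request might look like the following (a sketch: the snapshot name still comes from your step 2 list, one replica is illustrative, and the ILM settings are nulled out as described above):

   POST _snapshot/found-snapshots/<searchable_snapshot_name>/_restore
   {
     "indices": "*",
     "index_settings": {
       "index.routing.allocation.include._tier_preference": "data_warm,data_hot",
       "number_of_replicas": 1,
       "index.lifecycle.name": null,
       "index.lifecycle.rollover_alias": null
     }
   }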
6. Repeat steps 4 and 5 until all snapshots are restored to regular indices.
7. Once all snapshots are restored, use GET _cat/indices/<index-pattern>?v=true to check that the restored indices are green and correctly reflect the expected docs.count and store.size values.

   If you are using data streams, you may need to use GET _data_stream/<data-stream-name> to get the list of the backing indices, and then specify them by using GET _cat/indices/<backing-index-name>?v=true to check.
8. Once your data has been restored from searchable snapshots to the target data tier, DELETE the searchable snapshot indices using the prefix from step 2.

   DELETE <searchable-snapshot-index-prefix>-<index_name>
9. Delete the searchable snapshots by following these steps (an API alternative is shown after this list):

   - Open Kibana and navigate to Management > Data > Snapshot and Restore > Snapshots (or go to <kibana-endpoint>/app/management/data/snapshot_restore/snapshots).
   - Search for *<ilm-policy-name>*.
   - Bulk select the snapshots and delete them.

   In the example, we delete the snapshots associated with the policy_with_frozen_phase policy.
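   Alternatively, each snapshot can be deleted through the snapshot API instead of the Kibana UI (a sketch, assuming the default found-snapshots repository; the snapshot name comes from your step 2 list):

   # delete one snapshot by name from the default repository
   DELETE _snapshot/found-snapshots/<snapshot_name>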
10. Confirm that no shards remain on the data nodes you wish to remove, using GET _cat/allocation?v=true&s=node.

11. Edit your cluster from the console to disable the data tier.
12. Once the plan change completes, confirm that there are no remaining nodes associated with the disabled tier and that GET _cluster/health reports green. If this is the case, re-enable ILM:

    POST _ilm/start