Manage your installation capacity

In ECE, every host is a runner. Depending on the size of your platform, runners can have one or more roles: coordinator, director, proxy, and allocator. When planning the capacity of your ECE installation, you must size the capacity for all roles properly. However, the allocator role deserves particular attention, because it hosts the Elasticsearch, Kibana, APM, and Enterprise Search nodes, along with their supporting services.

This section focuses on the allocator role and explains how to plan its capacity in terms of memory, CPU, the processors setting, and storage.
Memory

Plan your deployment size based on the amount of data you ingest. Memory is the main scaling unit for a deployment; other units, such as CPU and disk, are proportional to the memory size. The memory available for an allocator is called its capacity.

During installation, the allocator capacity defaults to 85% of the host physical memory; the rest is reserved for ECE system services.

ECE does not support hot-adding resources to a running node. When you increase the CPU or memory allocated to an ECE node, a restart is needed before the node can use the additional resources.
Prior to ECE 3.5.0, adjusting the allocator capacity required reinstalling ECE on the host with a new value for the --capacity parameter. From ECE 3.5.0 onwards, you can use the ECE API:

curl -X PUT \
  http(s)://<ece_admin_url:port>/api/v1/platform/infrastructure/allocators/<allocator_id>/settings \
  -H "Authorization: ApiKey $ECE_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"capacity":<Capacity_Value_in_MB>}'
For more information on how to use API keys for authentication, check the section Access the API from the Command Line.
Prior to ECE 3.5.0, regardless of any capacity set through this API, the CPU quota was based on the memory specified at installation time.
Examples

Here are some examples to make Elastic deployments and ECE system services run smoothly on your host:
- If the runner has more than one role (allocator, coordinator, director, or proxy), reserve 28GB of host memory. For example, on a host with 256GB of RAM, 228GB is suitable for deployment use.
- If the runner has only the allocator role, reserve 12GB of host memory. For example, on a host with 256GB of RAM, 244GB is suitable for deployment use.
CPU quotas

ECE uses CPU quotas to assign shares of the allocator host to the instances running on it. To calculate the CPU quota, use the following formula:

CPU quota = deployment RAM / host capacity
Examples

Consider a 32GB deployment hosted on a 128GB allocator.

If you use the default system service reservation (capacity is 85% of host memory), the CPU quota is about 29%:

32 / (128 * 0.85) = 32 / 108.8 ≈ 0.29

If you use a 12GB allocator system service reservation, the CPU quota is about 28%:

32 / (128 - 12) = 32 / 116 ≈ 0.28

These percentages represent the upper limit of the share of the total CPU resources that an instance can use in a given 100ms period.
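The calculation above can be sketched as a small helper. This is an illustrative sketch, not part of ECE: it assumes the default reservation is the complement of the 85% capacity default, and that an explicit reservation is simply subtracted from host memory.

```python
def cpu_quota(deployment_ram_gb, host_ram_gb, reserved_gb=None):
    """Return the CPU quota (a fraction) for a deployment on an allocator.

    With no explicit reservation, capacity defaults to 85% of host
    memory; otherwise capacity is host memory minus the reservation.
    """
    if reserved_gb is None:
        capacity = host_ram_gb * 0.85
    else:
        capacity = host_ram_gb - reserved_gb
    return deployment_ram_gb / capacity

# 32GB deployment on a 128GB allocator:
print(round(cpu_quota(32, 128) * 100))      # default reservation -> 29
print(round(cpu_quota(32, 128, 12) * 100))  # 12GB reservation -> 28
```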
Processors setting

In addition to CPU quotas, the processors setting also plays an important role.

The allocated processors setting originates from Elasticsearch and is responsible for sizing your thread pools. While the CPU quota defines the percentage of an allocator's total CPU resources that is assigned to an instance, the allocated processors define how the thread pools are calculated in Elasticsearch, and therefore how many concurrent search and indexing requests an instance can process. In other words, the CPU quota defines how fast a single task can be completed, while the processors setting defines how many different tasks can be completed at the same time.

Starting from Elasticsearch version 7.9.2 running on ECE 2.7.0 or newer, ECE relies on Elasticsearch and the -XX:ActiveProcessorCount JVM setting to automatically detect the allocated processors.

In earlier versions of ECE and Elasticsearch, the Elasticsearch processors setting was used to configure the allocated processors according to the following formula:

Math.min(16, Math.max(2, (16 * instanceCapacity * 1.0 / 1024 / 64).toInt))
The following table gives an overview of the allocated processors used to calculate the Elasticsearch thread pools, based on the preceding formula:

Instance size (MB) | vCPU
---|---
1024 | 2
2048 | 2
4096 | 2
8192 | 2
16384 | 4
32768 | 8
65536 | 16

This table also provides a rough indication of what the auto-detected value could be on newer versions of ECE and Elasticsearch.
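The legacy formula translates directly into a few lines of Python. This is a sketch for illustration only; the authoritative behavior is the formula above.

```python
def allocated_processors(instance_capacity_mb):
    """Legacy (pre-7.9.2) allocated processors: scale memory down by
    16 MB of capacity per 1/64 vCPU, truncate to an integer, then
    clamp the result between 2 and 16."""
    return min(16, max(2, int(16 * instance_capacity_mb / 1024 / 64)))

# Reproduce the table above:
for size in (1024, 2048, 4096, 8192, 16384, 32768, 65536):
    print(size, allocated_processors(size))
```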
Storage

ECE has specific hardware prerequisites for storage. Disk space is consumed by system logs, container overhead, and deployment data.

The main driver for sizing the disk quota is the deployment data, that is, the data from your Elasticsearch, Kibana, and APM nodes. The largest portion is consumed by the Elasticsearch nodes.

ECE uses XFS to enforce disk space quotas that control the disk consumption of the deployment nodes running on your allocator. You must use XFS with quotas enabled on all allocators, otherwise disk usage won't display correctly.
To calculate the disk quota, use the following formula:

Disk quota = ICmultiplier * deployment RAM

ICmultiplier is the disk multiplier of the instance configuration that you defined in your ECE environment:
- The default multiplier for data.default is 32, used for hot nodes.
- The default multiplier for data.highstorage is 64, used for warm and cold nodes.
- The default multiplier for data.frozen is 80, used for frozen nodes.
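As a quick sketch of the disk quota formula, using the default multipliers listed above (your actual values come from your instance configurations, which may differ):

```python
# Default disk multipliers per instance configuration (assumed defaults).
DISK_MULTIPLIERS = {
    "data.default": 32,      # hot nodes
    "data.highstorage": 64,  # warm and cold nodes
    "data.frozen": 80,       # frozen nodes
}

def disk_quota_gb(instance_config, deployment_ram_gb):
    """Disk quota = IC multiplier * deployment RAM."""
    return DISK_MULTIPLIERS[instance_config] * deployment_ram_gb

# An 8GB hot node gets a 256GB disk quota; the same RAM on
# data.highstorage gets 512GB:
print(disk_quota_gb("data.default", 8))      # -> 256
print(disk_quota_gb("data.highstorage", 8))  # -> 512
```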
You can change the value of the disk multiplier at different levels:
- At the ECE level, check Edit instance configurations.
- At the instance level, log into the Cloud UI and proceed as follows:
  - From your deployment overview page, find the instance you want and open the instance menu.
  - Select Override disk quota.
  - Adjust the disk quota to your needs.

The override only persists for the lifecycle of the instance container. If a new container is created, for example during a grow_and_shrink plan or a vacate operation, the quota is reset to its default. To increase the storage ratio in a persistent way, edit the instance configurations.