It is time to say goodbye: This version of Elastic Cloud Enterprise has reached end-of-life (EOL) and is no longer supported.
The documentation for this version is no longer being maintained. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Prerequisites
We want your experience with Elastic Cloud Enterprise to be a success, so we compiled a list of tried-and-tested prerequisites that will help get your installation off to a good start. To make these prerequisites easier to scan, we separated them into sections for hardware, software, users, networking, and JVM heap sizes.
Hardware
ECE has specific hardware requirements for memory and storage. The tables below list the minimums required to install ECE, the recommended minimums, and sizes for specific deployment scenarios.
| Memory | Coordinators | Directors | Proxies | Allocators |
|---|---|---|---|---|
| Minimum to install | 8 GB RAM | 8 GB RAM | 8 GB RAM | 8 GB RAM |
| Minimum recommended | 16 GB RAM | 8 GB RAM | 8 GB RAM | 128 GB to 256 GB RAM1 |
| Small deployments2 | 128 GB RAM | 128 GB RAM | 128 GB RAM | 128 GB RAM |
| Medium deployments3 | 32 GB RAM | 32 GB RAM | 32 GB RAM | 256 GB RAM |
| Large deployments3 | 32 GB RAM | 32 GB RAM | 16 GB RAM | 256 GB RAM |
1 Allocators must be sized to support your Elasticsearch clusters and Kibana instances. We recommend host machines that provide between 128 GB and 256 GB of memory. Smaller hosts might not pack larger Elasticsearch clusters and Kibana instances as efficiently. Larger hosts might provide fewer CPU resources per GB of RAM on average.
2 For high availability, requires three hosts each of the capacities indicated, spread across three availability zones.
3 For high availability, requires three hosts each of the capacities indicated (except for allocators), spread across three availability zones. For allocators, requires three or more hosts of the capacity indicated, spread across three availability zones.
There are some additional hardware requirements to make sure that Elastic Cloud Enterprise can work as intended, such as the requirement to use fast SSD storage for ECE management services. To learn more, see Choose the Right Host Machines.
The size of your ECE deployment has a bearing on the JVM heap sizes that you should specify during installation. To learn more, see JVM Heap Sizes. For examples, see the deployment scenarios in our Playbook for Production.
| Storage | Coordinators | Directors | Proxies | Allocators |
|---|---|---|---|---|
| Minimum to install | 10 GB | 10 GB | 10 GB | 10 GB |
| Minimum recommended | - | - | - | Enough storage to support the RAM-to-storage ratio1 |
1 For example, if you use a host with 256 GB of RAM and the default ratio of 1:32, your host must provide 8192 GB of disk space.
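The footnote's arithmetic can be sketched as follows; the RAM size and ratio are the example values from the text:

```shell
# Required disk for a host, given its RAM and a RAM-to-storage ratio of 1:32
# (the default ratio mentioned above).
ram_gb=256
ratio=32
echo "$(( ram_gb * ratio )) GB of disk required"   # 8192 GB
```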
Software
The following software has been tested to work with ECE:

- One of the following Linux distributions:
  - Ubuntu 14.04 LTS (Trusty Tahr; instructions)
  - Ubuntu 16.04 LTS (Xenial Xerus; instructions)
  - Red Hat Enterprise Linux 7 or later (RHEL 7; instructions, limitations)
  - CentOS 7 or later (instructions, limitations)

  Amazon Linux is not currently supported. If you attempt to install ECE on Amazon Linux, installation will likely fail with an error.

- Linux kernel 3.10 or higher
- Docker 1.11
- File system:
  - We recommend XFS, but you can use any file system that supports the OverlayFS storage driver used by Docker.
  - XFS is required if you want to use disk space quotas for Elasticsearch data directories.
  - On RHEL and CentOS, XFS file systems must be created with the `-n ftype=1` option to make sure they work with the OverlayFS storage driver used by Docker.
- If SELinux is enabled: Your SELinux configuration must allow mounting Docker sockets into containers (required for cluster management to work).
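As a sketch, the `ftype` setting is applied at file-system creation time and can be verified afterwards with `xfs_info`; the device and mount point below are placeholders for your environment:

```shell
# Create an XFS file system with ftype=1 (hypothetical device; destructive,
# so shown commented out):
#   mkfs.xfs -n ftype=1 /dev/sdb1
#
# To verify an existing mount, inspect the "naming" line of xfs_info output:
#   xfs_info /var/lib/docker
# A sample naming line, and the check for ftype=1:
naming='naming   =version 2              bsize=4096   ftype=1'
echo "$naming" | grep -q 'ftype=1' && echo "ftype OK"
```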
ECE is certified for Linux kernel 3.10 or higher and Docker 1.11, which is the only Docker version that Elastic recommends.
If you intend to use Docker 1.12 or higher, please note:
- You should avoid Linux kernel version 4.4 or lower, as there is a known issue with kernel memory (kmem) accounting.
- While ECE may work with these Docker and kernel versions, it has not been thoroughly tested with them and might have issues. Elastic will attempt to support ECE running Docker version 1.12 or higher and kernel version 4.5 or higher, but we might not be able to resolve issues related to these configurations. In such cases, you will be asked to move to a certified kernel and Docker version, so that Elastic can support you.
Users
The following users and permissions are required:

- To prepare your environment: A user with sudo permissions, such as the `elastic` user included with our AWS AMIs or the `ubuntu` user provided on Ubuntu.
- To install ECE: A user with a UID and GID greater than or equal to 1000 who is part of the `docker` group. You must not install ECE as the `root` user.
You can find out information about a user with the `id` command:

```sh
id
uid=1000(elastic) gid=1000(elastic) groups=1000(elastic),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),102(netdev),112(libvirtd),1001(docker)
```

In this example, the user `elastic` with a UID and GID of 1000 belongs to both the `sudo` and the `docker` groups.
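The same requirements can be checked directly from a shell; the threshold and group name below come from the requirements above:

```shell
# Verify that the current user is suitable for installing ECE:
# a UID and GID of 1000 or higher, and membership in the docker group.
uid=$(id -u)
gid=$(id -g)
if [ "$uid" -ge 1000 ] && [ "$gid" -ge 1000 ]; then
  echo "UID/GID OK: $uid/$gid"
else
  echo "UID/GID too low: $uid/$gid"
fi
if id -nG | grep -qw docker; then
  echo "docker group: OK"
else
  echo "docker group: missing"
fi
```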
Networking
The first host you install ECE on initially requires the ports for all roles to be open, which includes the ports for the coordinator, allocator, director, and proxy roles. After you have brought up your initial ECE installation, only the ports for the roles that the initial host continues to hold need to remain open.
The following networking or internet access is required for ECE:

- Internet access for a typical installation (offline installation is supported)
- Outbound traffic open on the following ports:

  | Host role | Outbound ports | Purpose |
  |---|---|---|
  | All | 80 | Installation script and docker.elastic.co Docker registry access (HTTP) |
  | All | 443 | Installation script and docker.elastic.co Docker registry access (HTTPS) |

- Inbound traffic open from any source on the following ports:

  | Host role | Inbound ports | Purpose |
  |---|---|---|
  | All | 22 | Installation and troubleshooting access only (SSH) |
  | Coordinator | 12400 | Cloud UI and API access to the administration console (HTTP) |
  | Coordinator | 12443 | Cloud UI and API access to the administration console (HTTPS) |
  | Proxy | 9200/9243 | Kibana and Elasticsearch (HTTP/HTTPS), also required by load balancers |
  | Proxy | 9300/9343 | Elasticsearch (transport client/transport client with TLS/SSL), also required by load balancers |

- Internal components of ECE require inbound traffic open on the following ports:

  | Host role | Inbound ports | Purpose |
  |---|---|---|
  | Coordinator | 22191-22195 | Connections to initial coordinator from allocators and proxies (for up to five coordinators) |
  | Director | 12191-12201, 12898-12908, 13898-13908 | ZooKeeper stunnels (up to five are typically used) |
  | Director | 2112 | ZooKeeper ensemble discovery/joining |
  | Allocator | 18000-20000 | Elasticsearch (HTTP and transport) |
A typical ECE installation should be contained within a single data center. We recommend that ECE installations not span different data centers, due to variations in networking latency and bandwidth that cannot be controlled.
Installation of ECE across multiple data centers might be feasible with sufficiently low latency and high bandwidth, with some restrictions around what we can support. Based on our experience with our hosted Elastic Cloud service, the following is required:
- A typical network latency between the data centers of less than 10ms round-trip time during pings
- A network bandwidth of at least 10 gigabit per second
If you choose to deploy a single ECE installation across multiple data centers, you might need to contend with additional disruptions due to bandwidth or latency issues. Both ECE and Elasticsearch are designed to be resilient to networking issues, but this resiliency is intended to handle exceptions and should not be depended on as part of normal operations. If Elastic determines during a support case that an issue is related to an installation across multiple data centers, the recommended resolution will be to consolidate your installation into a single data center, with further support limited until consolidation is complete.
JVM Heap Sizes
ECE ships with default JVM heap sizes for its services that are suitable for testing. For production systems, we recommend that you install ECE with the JVM heap sizes recommended in this section. Our recommendations are based on our longstanding experience with the Elastic Cloud hosted offering and our growing experience with ECE in customer settings. Other JVM heap sizes can be left at their defaults.
For small deployments, we recommend:
| Service | JVM Heap Size (Xms and Xmx) |
|---|---|
|  | 1 GB |
|  | 4 GB |
|  | 8 GB |
|  | 4 GB |
|  | 1 GB |
|  | 4 GB |
|  | 4 GB |
For medium deployments, we recommend:
| Service | JVM Heap Size (Xms and Xmx) |
|---|---|
|  | 1 GB |
|  | 4 GB |
|  | 8 GB |
|  | 4 GB |
|  | 1 GB |
|  | 4 GB |
|  | 4 GB |
For large deployments, we recommend:
| Service | JVM Heap Size (Xms and Xmx) |
|---|---|
|  | 1 GB |
|  | 4 GB |
|  | 8 GB |
|  | 4 GB |
|  | 1 GB |
|  | 4 GB |
|  | 4 GB |
You specify the recommended JVM heap sizes with the `--memory-settings JVM_SETTINGS` parameter when you install ECE. For examples, see the deployment scenarios in our Playbook for Production.
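As an illustration, heap settings are passed as a JSON document; the service name and sizes below are placeholders rather than a recommendation, and the install invocation is shown commented out:

```shell
# Hypothetical example of passing JVM heap settings at install time.
# Replace the sizes with the recommendations for your deployment size.
MEMORY_SETTINGS='{"runner":{"xms":"1G","xmx":"1G"}}'
# Sanity-check that the settings are valid JSON before installing:
echo "$MEMORY_SETTINGS" | python3 -m json.tool > /dev/null && echo "settings JSON valid"
#   bash elastic-cloud-enterprise.sh install --memory-settings "$MEMORY_SETTINGS"
```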
Elasticsearch clusters and JVM Heap Size
For Elasticsearch clusters, ECE gives 50% of the available memory to the JVM heap used by Elasticsearch, while leaving the other 50% for the operating system. This memory won’t go unused, as Lucene is designed to leverage the underlying OS for caching in-memory data structures, meaning that Lucene will happily gobble up whatever is left over. The ideal heap size is somewhere below 32 GB, as heap sizes above 32 GB become less efficient.
What these recommendations mean is that on a 64 GB cluster, we dedicate 32 GB to the Elasticsearch heap and 32 GB to the operating system in the container that hosts your cluster. If you provision a 128 GB cluster, we create two 64 GB nodes, each node with 32 GB reserved for the Elasticsearch heap and 32 GB reserved for the operating system.
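The arithmetic in these examples can be sketched as follows; the 64 GB maximum node size is inferred from the example above, not a documented constant:

```shell
# Split a cluster of a given size into nodes of at most 64 GB, then apply
# the 50% heap rule to each node.
cluster_gb=128
max_node_gb=64
nodes=$(( (cluster_gb + max_node_gb - 1) / max_node_gb ))   # ceiling division
node_gb=$(( cluster_gb / nodes ))
heap_gb=$(( node_gb / 2 ))                                  # 50% to the JVM heap
echo "$nodes nodes of $node_gb GB, each with a $heap_gb GB heap"
```

For the 128 GB cluster in the text, this yields two 64 GB nodes with 32 GB heaps each, matching the description above.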
For more information about why heap sizes, memory for the operating system, and the 32 GB maximum for JVMs matter, see Heap: Sizing and Swapping.