It is time to say goodbye: This version of Elastic Cloud Enterprise has reached end-of-life (EOL) and is no longer supported.
The documentation for this version is no longer being maintained. If you are running this version, we strongly advise you to upgrade. For the latest information, see the current release documentation.
Load balancers
Elastic Cloud Enterprise is designed to be used in conjunction with at least one load balancer. A load balancer is not included with Elastic Cloud Enterprise, so you need to provide one yourself and place it in front of the Elastic Cloud Enterprise proxies.
Use the following recommendations to configure your load balancer:
- High availability: The exact number of load balancers depends on the utilization rate of your clusters. For a highly available installation, use at least two load balancers per availability zone.
- Inbound ports: Load balancers require that inbound traffic is open on the ports used by Elasticsearch, Kibana, and the transport client. To learn more, see the networking prerequisites.
- Deployment traffic and control plane traffic: Create separate load balancers for deployment traffic (Elasticsearch and Kibana traffic) and control plane traffic (Cloud UI console and ECE API). This separation allows you to migrate to a large installation topology without reconfiguring or creating an additional load balancer.
- Traffic across proxies: Balance traffic evenly across all proxies. Proxies are constantly updated with the internal routing information on how to direct requests to clusters on allocators that are hosting their nodes across zones. Proxies prefer cluster nodes in their local zone and route requests primarily to nodes in their own zone.
- Network: Use a network that is fast enough from a latency and throughput perspective to be considered local for the Elasticsearch clustering requirement. There shouldn’t be a major advantage in "preferring local" from a load balancer perspective (rather than a proxy perspective). This might even lead to potential hot spotting on specific proxies, so it should be avoided.
- X-Forwarded-For: Configure load balancers to strip inbound X-Forwarded-For headers and replace them with the client source IP as seen by the load balancer. This is required to prevent clients from spoofing their IP addresses. Elastic Cloud Enterprise uses X-Forwarded-For to log client IP addresses and, if you have implemented IP filtering, to manage traffic.
- HTTP: Use HTTP mode for ports 9200/9243 (HTTP traffic to clusters) and for ports 12400/12443 (adminconsole traffic). Make sure that all load balancers or proxies sending HTTP traffic to deployments hosted on Elastic Cloud Enterprise send HTTP/1.1 traffic.
- TCP: Use TCP mode for ports 9300/9343 (transport client traffic to clusters), and enable proxy protocol support on the load balancer.
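The port-mode split above can be sketched in a load balancer configuration. The following is a minimal, hypothetical HAProxy fragment, not a supported configuration: the frontend/backend names, certificate path, and proxy addresses are placeholders, and only the deployment-traffic ports are shown.

```
# Hypothetical HAProxy sketch; all names and addresses are examples only.
frontend deployment_http
    bind :9243 ssl crt /etc/haproxy/cert.pem    # HTTP mode for cluster traffic
    mode http
    # Replace any client-supplied header with the source IP seen by the LB
    http-request set-header X-Forwarded-For %[src]
    default_backend ece_proxies_http

frontend transport_tcp
    bind :9343                                  # TCP mode for transport client traffic
    mode tcp
    default_backend ece_proxies_tcp

backend ece_proxies_http
    mode http
    balance roundrobin
    server proxy1 10.0.0.11:9243 check ssl verify none
    server proxy2 10.0.0.12:9243 check ssl verify none

backend ece_proxies_tcp
    mode tcp
    balance roundrobin
    # send-proxy enables the proxy protocol toward the ECE proxies
    server proxy1 10.0.0.11:9343 check send-proxy
    server proxy2 10.0.0.12:9343 check send-proxy
```

Balancing round-robin across all proxies matches the "traffic across proxies" recommendation; the proxies themselves handle zone-local routing.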
Proxy health check for ECE 2.0 and earlier
You can use /__elb_health__ on your proxy hosts and check for a 200 response, which indicates healthy.
http://<proxy-address>:9200/__elb_health__
or
https://<proxy-address>:9243/__elb_health__
This returns a healthy response as:
{"ok":true,"status":200}
Proxy health check for ECE 2.1 and later
For Elastic Cloud Enterprise 2.1 and later, the health check endpoint has changed.
You can use /_health on your proxy hosts; a 200 OK response indicates healthy, and a 502 Bad Gateway response indicates unhealthy. A healthy response also means that the internal routing tables in the proxy are valid and initialized, but not necessarily up-to-date.
http://PROXY_ADDRESS:9200/_health
or
https://PROXY_ADDRESS:9243/_health
This returns a healthy response as:
{"ok":true,"status":200}