Limitations and known problems

The following limitations and known problems apply to the 3.1.0 release of ECE, or to earlier releases as noted.

For troubleshooting help, you can also refer to the list of Common issues.

Installation and configuration

  • When you install Elastic Cloud Enterprise on a new host and assign it the allocator role from the command line with the --roles "allocator" parameter during installation, new deployments might not get created on the allocator. To resolve this issue, check Allocators Are Not Being Used.
  • Some change management tools that auto-reload firewall rules can cause networking issues. Specifically, Docker networking can fail on new containers after restarting the iptables service. To avoid networking failures, disable the automatic reloading of firewall rules.
  • On RHEL and CentOS, the firewalld service is not compatible with Docker and interferes with the installation of ECE. You must disable firewalld before installing or reinstalling ECE.
  • When you use OverlayFS with Kernel-LT 4.4.156 and later, there is a known regression that prevents Elastic Cloud Enterprise from completing the installation. This regression is fixed with Kernel-LT 4.9.
  • If you install ECE on AWS, you likely need to modify the cluster endpoint, as the public hostname resolves to a different IP address externally than it does internally on the cluster.
  • ECE does not support vMotion in VMware. To use ECE on VMware, you must disable vMotion.
  • When you use virtualization resources, make sure that you avoid resource overallocation.
  • Due to a known Elasticsearch bug, plans for Elasticsearch versions 7.7.0 and 7.7.1 can fail unexpectedly with an error that indicates that there was a bad request while performing the constructor’s step validate-enough-disk-space. To resolve this, first try manually restarting all nodes of the cluster. If restarting doesn’t resolve the problem, you can edit the cluster plan to set the override_failsafe option to true. We also recommend upgrading to version 7.8 or higher, which resolves this bug. For more details, check the version 2.2.2 release notes.
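To illustrate the firewalld and allocator-role points above, the following is a minimal sketch of preparing a RHEL/CentOS host and installing ECE with only the allocator role. The coordinator host, roles token, and password are placeholders; substitute the values from your own installation:

```shell
# On RHEL/CentOS, stop and disable firewalld before installing or
# reinstalling ECE, since it interferes with Docker networking.
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# Install ECE on an additional host and assign only the allocator role.
# COORDINATOR_HOST and TOKEN are placeholders for the first ECE host's
# address and the roles token generated during the initial installation.
bash elastic-cloud-enterprise.sh install \
  --coordinator-host COORDINATOR_HOST \
  --roles-token 'TOKEN' \
  --roles "allocator"
```

If new deployments still do not land on the host after installation, follow the Allocators Are Not Being Used troubleshooting steps mentioned above.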

Security

  • Changing the generated password for the admin user on the administration console deployment that backs the Cloud UI is not supported. This is the admin user on the admin-console-elasticsearch deployment that gets created during the ECE installation.

    Do not change the generated password for the admin user on the administration console deployment or you risk losing administrative access to your installation.

  • When configuring Elastic Cloud Enterprise role-based access control:

    • Trying to use an invalid SAML provider can cause the security deployment to bootloop. The deployment falls back to the previous configuration, but if the UI and the actual configuration become inconsistent, remove or update the SAML provider profile. If the problem persists, review the deployment activity and logs.
    • PEM- and PKCS#11-formatted certificates are not supported.
  • In versions 2.6 and later, some or all platform certificates get generated with a 398-day expiration. Installations running these versions must have their certificates rotated manually before expiry. For details, check our KB article.

Some additional limitations apply when securing your installation. To learn more, check Secure Elastic Cloud Enterprise.

Deployments

  • Pending plan changes for your deployment in the Cloud UI that exceed the available capacity fail as expected, but might then require you to recover manually from the failure. To recover, locate the details for the failed plan attempt and copy the diff, manually edit the diff to revert to the original plan, and then apply the modified plan through the Advanced cluster configuration panel.
  • ECE enforces a maximum request duration of 320 seconds, so we recommend optimizing long-running Elasticsearch queries to complete within that limit.
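Given the 320-second limit above, one way to keep searches well inside it is to set an explicit timeout on the search request itself, so Elasticsearch returns partial results rather than running until the proxy cuts the connection. A sketch using the standard `timeout` search body parameter (the endpoint and credentials are placeholders for your deployment):

```shell
# Ask Elasticsearch to stop collecting hits after 60 seconds, well under
# the 320-second ECE limit. Shards that hit the timeout return partial
# results, and the response sets "timed_out": true.
curl -u elastic:PASSWORD \
  -H 'Content-Type: application/json' \
  'https://DEPLOYMENT_ENDPOINT:9243/my-index/_search' \
  -d '{
    "timeout": "60s",
    "query": { "match_all": {} }
  }'
```

Note that the search `timeout` is best-effort per shard; truly expensive queries are still better restructured or paginated.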

Deployment templates

  • In ECE 2.13, a known issue can prevent custom deployment templates from being created through the UI when instance configurations other than the default ones are selected. The UI displays an autoscaling-related error when you try to create the template. If you are affected by this issue, you can contact support@elastic.co.

Upgrading

  • A permissions change in the Elastic Agent Docker container can prevent the Elastic Agent or Integrations Server component from booting up within an ECE deployment. The change affects ECE installations that are deployed with a Linux UID other than 1000.

    We recommend that ECE users with deployments that include APM or Integrations Server wait for the next patch release, which is planned to include a fix for this problem.

  • A known issue can prevent direct rolling upgrades from Elasticsearch version 5.6.10 to version 6.3.0. As a workaround, we have removed version 6.3.0 from the Cloud UI for new cluster deployments and for upgrading existing ones. If you are affected by this issue, check Rolling upgrades from 5.6.x to 6.3.0 fails with "java.lang.IllegalStateException: commit doesn’t contain history uuid" in our Elastic Support Portal. If these steps do not work or you do not have access to the Support Portal, you can contact support@elastic.co.
  • When upgrading to 2.4.0, make sure that nothing listens on port 9000 on your proxy node. If a Minio repository runs on the same ECE node, you’ll need to change its default listening port. If there is a port conflict, the proxy fails to boot.
  • Starting with ECE version 2.6.0, deployment upgrades initiated from the UI can fail if there is no healthy instance of Kibana available. As a workaround, you can perform an advanced edit on the cluster to upgrade the cluster version. In the cluster configuration, each occurrence of elasticsearch.version can be updated to the version that you choose. For details, check Advanced cluster configuration.
  • When upgrading to ECE version 2.7.0, the automatic upgrade of the admin-console-elasticsearch cluster may fail, particularly on older installations of ECE that started with a 5.x cluster. If this happens, it is safe to leave the cluster at the latest 6.x Elasticsearch version until the automatic upgrade is fixed in the next version of ECE. Note that the logging-and-metrics cluster should also remain on 6.x; only the security cluster should be upgraded to the latest 7.x Elasticsearch version using the dedicated API.
  • When upgrading from version 2.9.x (2.9.0, 2.9.1, 2.9.2) to a version between 2.10.0 and 2.11.0, you might get an error in the console that indicates deployment aliases cannot be set. As a workaround, you can fix it using the dedicated API by setting the value of config_option_id to enable-deployment-alias and the request body to { "value": "false" }.
  • When upgrading a deployment with APM & Fleet enabled to version 8.2, APM & Fleet no longer appears in the side menu or on the Edit page after the upgrade. The APM & Fleet component still works correctly and you can keep using it. This problem is fixed in Elastic Cloud Enterprise 3.2.0.
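The deployment-alias workaround above goes through the ECE platform configuration store API. A hedged sketch of the call, assuming the standard ECE administration API on port 12443 (the coordinator host and credentials are placeholders for your installation):

```shell
# Set the enable-deployment-alias platform configuration option to
# "false" via the ECE configuration store API. The path element after
# /store/ is the config_option_id described in the workaround.
curl -k -u admin:PASSWORD -X PUT \
  -H 'Content-Type: application/json' \
  'https://COORDINATOR_HOST:12443/api/v1/platform/configuration/store/enable-deployment-alias' \
  -d '{ "value": "false" }'
```

After the upgrade completes, you can delete the option or set it back to "true" through the same endpoint if you want deployment aliases re-enabled.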

Transport client

The transport client is not considered thread safe in a cloud environment. We recommend that you use the Java REST client instead. This restriction relates to the fact that your deployments hosted on Elastic Cloud Enterprise are behind proxies, which prevent the transport client from communicating directly with Elasticsearch clusters.

The transport client does not work with Elasticsearch clusters at version 7.6 and later that are managed by ECE. The transport client continues to work with Elasticsearch clusters at version 7.5 and earlier. Note that the transport client was deprecated in Elasticsearch version 7.0 and removed in 8.0.

Fleet

In ECE version 2.10 with the original Elastic Stack pack version 7.14, if downloaded from the Elastic website before August 10, 2021, Fleet does not work when enabled in a deployment. To support Fleet, download a fresh copy of the version 7.14 Elastic Stack pack and re-upload it to overwrite the original one. If you have existing version 7.14 deployments, restart Fleet/APM after re-uploading the Elastic Stack pack to enable Fleet. This issue will be addressed in later stack packs and ECE versions.

Integrations Server

A bug in Elastic Stack versions 8.1 and later may lead to a full disk for Elastic APM users with tail-based sampling enabled. A fix has been merged and will be released in a future version.