Restrictions and known problems

When using Elasticsearch Add-On for Heroku, there are some limitations you should be aware of:

For limitations related to logging and monitoring, check the Restrictions and limitations section of the logging and monitoring page.

Occasionally, we also publish information about Known problems with our Elasticsearch Add-On for Heroku or the Elastic Stack.

To learn more about the features that are supported by Elasticsearch Add-On for Heroku, check Elastic Cloud Subscriptions.

Elasticsearch Add-On for Heroku

Not all features of our Elasticsearch Service are available to Heroku users. Specifically, you cannot create additional deployments or use different deployment templates.

Generally, if a feature is shown as available in the Elasticsearch Add-On for Heroku console, you can use it.

Security

  • File and LDAP realms cannot be used. The Native realm is enabled, but the realm configuration itself is fixed in Elastic Cloud. Alternatively, authentication protocols such as SAML, OpenID Connect, or Kerberos can be used.
  • Client certificates, such as PKI certificates, are not supported.
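As an illustration of the alternative protocols, a SAML realm can be configured through the Elasticsearch user settings. The sketch below uses entirely hypothetical names and URLs (the realm name saml-example and the idp.example.com / kibana.example.com endpoints are placeholders for your own identity provider and Kibana URL):

```yaml
xpack:
  security:
    authc:
      realms:
        saml:
          saml-example:                # hypothetical realm name
            order: 2
            idp.metadata.path: "https://idp.example.com/metadata.xml"
            idp.entity_id: "https://idp.example.com"
            sp.entity_id: "https://kibana.example.com"
            sp.acs: "https://kibana.example.com/api/security/saml/callback"
            attributes.principal: "nameid"
```

The exact attribute mappings depend on what your identity provider returns; check the SAML authentication documentation for the full set of realm settings.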

Transport client

  • The transport client is not considered thread-safe in a cloud environment. We recommend that you use the Java REST client instead. This restriction relates to the fact that your deployments hosted on Elasticsearch Add-On for Heroku are behind proxies, which prevent the transport client from communicating directly with Elasticsearch clusters.
  • The transport client is not supported over private link connections. Use the Java REST client instead, or connect over the public internet.
  • The transport client does not work with Elasticsearch clusters at version 7.6 and later that are hosted on Cloud. It continues to work with clusters at version 7.5 and earlier. Note that the transport client was deprecated in version 7.0 and removed in 8.0.

Elasticsearch and Kibana plugins

  • Kibana plugins are not supported.
  • Elasticsearch plugins are not enabled by default for security purposes. Reach out to support if you would like to enable Elasticsearch plugin support on your account.
  • Some Elasticsearch plugins do not apply to Elasticsearch Add-On for Heroku. For example, you won’t ever need to change discovery, as Elasticsearch Add-On for Heroku handles how nodes discover one another.
  • In Elasticsearch 5.0 and later, site plugins are no longer supported. This change does not affect the site plugins Elasticsearch Add-On for Heroku might provide out of the box, such as Kopf or Head, since these site plugins are serviced by our proxies and not Elasticsearch itself.
  • In Elasticsearch 5.0 and later, site plugins such as Kopf and Paramedic are no longer provided. If you want more detailed information or need to perform index management actions, we recommend that you use our cluster performance metrics, X-Pack monitoring features, and Kibana’s (6.3+) Index Management UI.

Private Link and SSO to Kibana URLs

Currently you can’t use SSO to log in directly from Elastic Cloud into Kibana endpoints that are protected by Private Link traffic filters. However, you can still use SSO with Private Link protected Kibana endpoints individually through the SAML or OIDC protocol from your own identity provider, just not through the Elastic Cloud console. Stack-level authentication using the Elasticsearch username and password should also work with {kibana-id}.{vpce|privatelink|psc}.domain URLs.

PDF report generation using Alerts or Watcher webhooks

  • PDF report automatic generation via Alerts is not possible on Elastic Cloud.
  • PDF report generation isn’t possible for deployments running on Elastic Stack version 8.7.0 or earlier that are protected by traffic filters. This limitation doesn’t apply to public webhooks such as Slack, PagerDuty, and email. For deployments running on Elastic Stack version 8.7.1 and later, automatic PDF report generation via a Watcher webhook is possible using the xpack.notification.webhook.additional_token_enabled configuration setting to bypass traffic filters.
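For deployments on 8.7.1 or later, the setting referenced above is applied through the Elasticsearch user settings. A minimal fragment:

```yaml
xpack.notification.webhook.additional_token_enabled: true
```

With this enabled, Watcher webhook requests carry an additional token that allows them to bypass the deployment’s traffic filters.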

Kibana

  • The maximum size of a single Kibana instance is 8GB. This means that Kibana instances are scaled up to 8GB before they are scaled out. For example, when creating a deployment with a Kibana instance of size 16GB, 2x8GB instances are created instead. If you face performance issues with Kibana PNG or PDF reports, we recommend creating multiple, smaller dashboards to export the data, or using a third-party browser extension to export the dashboard in the format you need.
  • Running an external Kibana in parallel to Elasticsearch Add-On for Heroku’s Kibana instances may cause errors, for example Unable to decrypt attribute, due to a mismatched xpack.encryptedSavedObjects.encryptionKey, as Elasticsearch Add-On for Heroku neither exposes this value nor allows users to set it. While workarounds are possible, this is not officially supported nor generally recommended.
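For reference, a self-managed Kibana outside of Elastic Cloud would set this key in kibana.yml (the value below is a placeholder; the key must be at least 32 characters long):

```yaml
xpack.encryptedSavedObjects.encryptionKey: "a-placeholder-key-of-at-least-32-chars"
```

Because the Cloud-managed value is neither exposed nor settable, an external Kibana cannot share saved objects that were encrypted with it.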

APM Agent central configuration with PrivateLink or traffic filters

If you are using APM 7.9.0 or older:

Fleet with PrivateLink or traffic filters

  • You cannot use Fleet 7.13.x if your deployment is secured by traffic filters. Fleet 7.14.0 and later works with traffic filters (both Private Link and IP filters).
  • If you are using Fleet 8.12+, using a remote Elasticsearch output with a target cluster that has traffic filters enabled is not currently supported.

Enterprise Search in Kibana

Enterprise Search’s management interface in Kibana does not work with traffic filters in versions 8.3.1 and older; it returns an Insufficient permissions (403 Forbidden) error. In Kibana 8.3.2, 8.4.0, and higher, the Enterprise Search management interface works with traffic filters.

Restoring a snapshot across deployments

Kibana and Enterprise Search do not currently support restoring a snapshot of their indices across Elastic Cloud deployments.

  • Kibana uses encryption keys in various places, ranging from encrypting data in some areas of reporting, alerts, actions, connector tokens, and the ingest outputs used in Fleet and Synthetics monitoring, to user sessions.
  • Enterprise Search uses encryption keys when storing content source synchronization credentials, API tokens, and other sensitive information.
  • Currently, there is no way to retrieve the values of the Kibana and Enterprise Search encryption keys, or to set them in the target deployment before restoring a snapshot. As a result, once a snapshot is restored, Kibana and Enterprise Search will not be able to decrypt the data required for some of their features to function properly in the target deployment.
  • If you have already restored a snapshot across deployments and now have broken Kibana saved objects or Enterprise Search features in the target deployment, you will have to recreate all broken configurations and objects, or create a new setup in the target deployment instead of using snapshot restore.

A snapshot taken using the default found-snapshots repository can only be restored to deployments in the same region. If you need to restore snapshots across regions, create the destination deployment, connect it to the custom repository, and then restore from a snapshot in that repository.
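As a sketch, registering a custom repository on the destination deployment and restoring from it might look like the following Dev Tools console requests. The repository name, bucket, and snapshot name are placeholders, and an S3 repository additionally requires credentials to be configured for the deployment:

```console
PUT _snapshot/my-custom-repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket"
  }
}

POST _snapshot/my-custom-repo/my-snapshot/_restore
{
  "indices": "*"
}
```

Since both deployments can reach the same custom repository regardless of region, this works around the same-region restriction of found-snapshots.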

When restoring from a deployment that’s using searchable snapshots, you must not delete the snapshots in the source deployment even after they are successfully restored in the destination deployment.

Migrate Fleet-managed Elastic Agents across deployments by restoring a snapshot

There are situations where you may need or want to move your installed Elastic Agents from being managed in one deployment to being managed in another deployment.

In Elastic Cloud, you can migrate your Elastic Agents by taking a snapshot of your source deployment, and restoring it on a target deployment.

To make a seamless migration, after restoring from a snapshot there are some additional steps required, such as updating settings and resetting the agent policy. Check Migrate Elastic Agents for details.

Regions and Availability Zones

  • The AWS us-west-1 region is limited to two availability zones for Elasticsearch data nodes, plus one virtual zone (us-west-1z, indicated by the z suffix) that can hold only a tiebreaker. Creating a deployment with three availability zones for Elasticsearch data nodes in the hot, warm, or cold tier is not possible. This includes scaling an existing deployment with one or two availability zones out to three. The virtual zone us-west-1z can only hold an Elasticsearch tiebreaker node, not data nodes. The workaround is to use a different AWS US region that allows three availability zones, or to scale existing nodes up within the two availability zones.
  • The AWS eu-central-2 region is limited to two availability zones for CPU Optimized (ARM) hardware profile Elasticsearch data nodes and for the warm and cold tiers. Creating a deployment with three availability zones for Elasticsearch data nodes in the hot (CPU Optimized (ARM) profile), warm, or cold tier is not possible. This includes scaling an existing deployment with one or two availability zones out to three. The workaround is to use a different AWS region that allows three availability zones, or to scale existing nodes up within the two availability zones.

Known problems

Repository Analysis API is unavailable in Elastic Cloud

  • The Elasticsearch Repository analysis API is not available in Elastic Cloud because deployments default to having operator privileges enabled, which prevents users without operator privileges from using this API, along with a number of other APIs.
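For reference, this is the kind of request in question; on Elastic Cloud it fails for users without operator privileges (the repository name is a placeholder):

```console
POST /_snapshot/my-repo/_analyze?blob_count=10&max_blob_size=1mb
```

On a self-managed cluster, this API performs a series of read and write operations against the repository to verify that it behaves correctly as a snapshot store.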