Create deployment templates

Elastic Cloud Enterprise comes with some deployment templates already built in, but you can create new deployment templates to address particular use cases that you might have.

For example, you might create a new deployment template if you have a specific search use case that requires Elasticsearch data nodes in a specific configuration and also includes machine learning for anomaly detection. If you need to create these deployments fairly frequently, you can create a deployment template once and deploy it as many times as you like. Or, create a single template for both your test and production deployments to ensure they are exactly the same.
Before you begin

Before you start creating your own deployment templates, you should have tagged your allocators to tell ECE what kind of hardware you have available for Elastic Stack deployments. If the default instance configurations don't provide what you need, you might also need to create your own instance configurations first.
Create deployment templates in the UI

- Log into the Cloud UI.
- From the Platform menu, select Templates.
- Select Create template.
- Give your template a name and include a description that reflects its intended use.
- Select Create template. The Configure instances page opens.
- Choose whether or not autoscaling is enabled by default for deployments created using the template. Autoscaling automatically adjusts the resources available to a deployment as loads change over time.
- Configure the initial settings for all of the data tiers and components that will be available in the template. A default is provided for every setting, and you can adjust these as needed. For each data tier and component, you can:
  - Select which instance configuration to assign to the template. This allows you to optimize the performance of your deployments by matching a machine type to a use case. A hot data and content tier, for example, is best suited to an instance configuration with fast SSD storage, while warm and cold data tiers should be allocated to an instance configuration with larger storage on less performant, lower cost hardware.
  - Adjust the default, initial amount of memory and storage. Increasing memory also improves performance, because CPU resources are assigned proportionally to the size of the instance: a 32 GB instance gets twice as much CPU as a 16 GB one. These values are just template defaults that can be adjusted further before you create actual deployments.
  - Configure autoscaling settings for the deployment:
    - For data nodes, autoscaling up is supported based on the amount of available storage. You can set the default initial size of the node and the default maximum size that the node can be autoscaled up to.
    - For machine learning nodes, autoscaling is supported based on the expected memory requirements for machine learning jobs. You can set the default minimum size that the node can be scaled down to and the default maximum size that the node can be scaled up to. If autoscaling is not enabled for the deployment, the "minimum" value is instead used as the default initial size of the machine learning node.
    The default values provided by the deployment template can be adjusted at any time. Check our Autoscaling example for details about these settings. Nodes and components that currently support autoscaling are indicated by a "supports autoscaling" badge on the Configure instances page.
  - Add fault tolerance (high availability) by using more than one availability zone.
  - Add user settings to configure how Elasticsearch and other components run. Check Editing your user settings for details about what settings are available.
  If a data tier or component is not required for your particular use case, you can set its initial size per zone to `0`. You can enable a tier or component whenever you need it, just by scaling up its size. If autoscaling is enabled, data tiers and machine learning nodes are sized up automatically when they're needed. For example, when you configure your first machine learning job, ML nodes are enabled by the autoscaling process. Similarly, if you choose to create a cold data phase as part of your deployment's index lifecycle management (ILM) policy, a cold data node is enabled automatically without your needing to configure it.
- Select Manage indices. On this page you can configure index management by assigning attributes to each of the data nodes in the deployment template. In Kibana, you can configure an index lifecycle management (ILM) policy, based on the node attributes, to control how data moves across the nodes in your deployment.
- Select Stack features:
  - You can select a snapshot repository to be used by default for deployment backups.
  - You can choose to enable logging and monitoring by default, so that deployment logs and metrics are sent to a dedicated monitoring deployment, and so that additional log types, retention options, and Kibana visualizations are available on all deployments created using this template.
- Select Extensions, then select any Elasticsearch extensions that you would like to be available automatically to all deployments created using the template.
- Select Save and create template.
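To illustrate the Manage indices step: if the template assigns a node attribute such as `data: warm` to the warm tier (as the API example later on this page does), an ILM policy configured in Kibana can target that attribute with an allocate action. This is a minimal sketch in Kibana Dev Tools console syntax; the policy name and the 30-day timing are made up for illustration:

```json
PUT _ilm/policy/my-template-policy
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "30d",
        "actions": {
          "allocate": {
            "require": { "data": "warm" }
          }
        }
      }
    }
  }
}
```

Indices managed by this policy would move to nodes carrying the `data: warm` attribute 30 days after rollover.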
Create deployment templates through the RESTful API

- Obtain the existing deployment templates to get some examples of what the required JSON looks like. You can take the JSON for one of the existing templates and modify it to create a new template, similar to what is shown in the next step.
curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" "https://$COORDINATOR_HOST:12443/api/v1/deployments/templates?region=ece-region"
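A saved response can be summarized before you pick a template to modify. This is an illustrative sketch using only the Python standard library; it assumes the response body is a JSON array whose entries carry `id` and `name` fields (verify against the actual response from your ECE version):

```python
import json

def list_templates(raw: str) -> list[str]:
    """Summarize a deployments/templates response body as 'id: name' lines."""
    return [f'{t.get("id", "?")}: {t.get("name", "?")}' for t in json.loads(raw)]

# Hypothetical response body, trimmed to the two fields we use:
sample = '[{"id": "default", "name": "Default"}, {"id": "ccs", "name": "Cross-cluster search"}]'
print("\n".join(list_templates(sample)))
```

From there, fetch the full JSON of the template you want by its ID and use it as the starting point for your edits.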
- Post the JSON for your new deployment template.
The following example creates a deployment template that defaults to a highly available Elasticsearch cluster with 4 GB per hot node, a 16 GB machine learning node, 3 dedicated master nodes of 1 GB each, a 1 GB Kibana instance, and a 1 GB dedicated coordinating node that is tasked with handling and coordinating all incoming requests for the cluster. Elasticsearch and Kibana use the default instance configurations, but the machine learning node is based on the custom instance configuration in our previous example.
curl -k -X POST -H "Authorization: ApiKey $ECE_API_KEY" https://$COORDINATOR_HOST:12443/api/v1/deployments/templates?region=ece-region -H 'content-type: application/json' -d '{ "name" : "Default", "description" : "Default deployment template for clusters", "deployment_template": { "resources": { "elasticsearch": [ { "ref_id": "es-ref-id", "region": "ece-region", "plan": { "cluster_topology": [ { "node_type": { "master": true, "data": true, "ingest": true }, "zone_count": 1, "instance_configuration_id": "data.default", "size": { "value": 4096, "resource": "memory" }, "node_roles": [ "master", "ingest", "data_hot", "data_content", "remote_cluster_client", "transform" ], "id": "hot_content", "elasticsearch": { "node_attributes": { "data": "hot" } }, "topology_element_control": { "min": { "value": 1024, "resource": "memory" } }, "autoscaling_max": { "value": 2097152, "resource": "memory" } }, { "node_type": { "data": true, "ingest": false, "master": false }, "instance_configuration_id": "data.highstorage", "zone_count": 1, "size": { "resource": "memory", "value": 0 }, "node_roles": [ "data_warm", "remote_cluster_client" ], "id": "warm", "elasticsearch": { "node_attributes": { "data": "warm" } }, "topology_element_control": { "min": { "value": 0, "resource": "memory" } }, "autoscaling_max": { "value": 2097152, "resource": "memory" } }, { "node_type": { "data": true, "ingest": false, "master": false }, "instance_configuration_id": "data.highstorage", "zone_count": 1, "size": { "resource": "memory", "value": 0 }, "node_roles": [ "data_cold", "remote_cluster_client" ], "id": "cold", "elasticsearch": { "node_attributes": { "data": "cold" } }, "topology_element_control": { "min": { "value": 0, "resource": "memory" } }, "autoscaling_max": { "value": 2097152, "resource": "memory" } }, { "node_type": { "data": true, "ingest": false, "master": false }, "instance_configuration_id": "data.frozen", "zone_count": 1, "size": { "resource": "memory", "value": 0 }, "node_roles": [ 
"data_frozen" ], "id": "frozen", "elasticsearch": { "node_attributes": { "data": "frozen" } }, "topology_element_control": { "min": { "value": 0, "resource": "memory" } }, "autoscaling_max": { "value": 2097152, "resource": "memory" } }, { "node_type": { "master": false, "data": false, "ingest": true }, "zone_count": 1, "instance_configuration_id": "coordinating", "size": { "value": 1024, "resource": "memory" }, "node_roles": [ "ingest", "remote_cluster_client" ], "id": "coordinating", "topology_element_control": { "min": { "value": 0, "resource": "memory" } } }, { "node_type": { "master": true, "data": false, "ingest": false }, "zone_count": 3, "instance_configuration_id": "master", "size": { "value": 1024, "resource": "memory" }, "node_roles": [ "master", "remote_cluster_client" ], "id": "master", "topology_element_control": { "min": { "value": 0, "resource": "memory" } } }, { "node_type": { "master": false, "data": false, "ingest": false, "ml": true }, "zone_count": 1, "instance_configuration_id": "ml", "size": { "value": 0, "resource": "memory" }, "node_roles": [ "ml", "remote_cluster_client" ], "id": "ml", "topology_element_control": { "min": { "value": 16384, "resource": "memory" } }, "autoscaling_min": { "resource": "memory", "value": 16384 }, "autoscaling_max": { "value": 2097152, "resource": "memory" } } ], "elasticsearch": {}, "autoscaling_enabled": false }, "settings": { "dedicated_masters_threshold": 3 } } ], "kibana": [ { "ref_id": "kibana-ref-id", "elasticsearch_cluster_ref_id": "es-ref-id", "region": "ece-region", "plan": { "zone_count": 1, "cluster_topology": [ { "instance_configuration_id": "kibana", "size": { "value": 1024, "resource": "memory" } } ], "kibana": {} } } ], "apm": [ { "ref_id": "apm-ref-id", "elasticsearch_cluster_ref_id": "es-ref-id", "region": "ece-region", "plan": { "cluster_topology": [ { "instance_configuration_id": "apm", "size": { "value": 0, "resource": "memory" }, "zone_count": 1 } ], "apm": {} } } ], "enterprise_search": [ { 
"ref_id": "enterprise_search-ref-id", "elasticsearch_cluster_ref_id": "es-ref-id", "region": "ece-region", "plan": { "cluster_topology": [ { "node_type": { "appserver": true, "connector": true, "worker": true }, "instance_configuration_id": "enterprise.search", "size": { "value": 0, "resource": "memory" }, "zone_count": 2 } ], "enterprise_search": {} } } ] } } }'
When specifying `node_roles` in the Elasticsearch plan of the deployment template, the template must contain all resource types and all Elasticsearch tiers.

The deployment template must contain exactly one entry for each resource type: one Elasticsearch, one Kibana, one APM, and one Enterprise Search. On top of that, it must also include all supported Elasticsearch tiers in the Elasticsearch plan. The supported tiers are identified by the IDs `hot_content`, `warm`, `cold`, `frozen`, `master`, `coordinating`, and `ml`.
Deployment templates without `node_roles` or `id` should only contain hot and warm data tiers, with different `instance_configuration_id`s. Node roles are highly recommended when using the cold tier and are mandatory for the frozen tier.
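These constraints can be checked before posting a template. The following is an illustrative sketch (the helper name and checks are ours, not part of the ECE API), assuming the template JSON shape used in the example above:

```python
# Hypothetical pre-flight check for the node_roles constraints described above:
# exactly one entry per resource type, and every supported tier ID present.
REQUIRED_RESOURCES = {"elasticsearch", "kibana", "apm", "enterprise_search"}
REQUIRED_TIER_IDS = {"hot_content", "warm", "cold", "frozen", "master", "coordinating", "ml"}

def validate_template(template: dict) -> list[str]:
    """Return a list of problems found in a deployment template request body."""
    problems = []
    resources = template.get("deployment_template", {}).get("resources", {})
    for resource in REQUIRED_RESOURCES:
        if len(resources.get(resource, [])) != 1:
            problems.append(f"expected exactly one {resource} resource")
    es_plans = resources.get("elasticsearch") or [{}]
    topology = es_plans[0].get("plan", {}).get("cluster_topology", [])
    tier_ids = {element.get("id") for element in topology}
    for missing in sorted(REQUIRED_TIER_IDS - tier_ids):
        problems.append(f"missing Elasticsearch tier: {missing}")
    return problems
```

For the full example above, this returns an empty list; a template that omits the frozen tier would report `missing Elasticsearch tier: frozen`.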
After you have saved your new template, you can start creating new deployments with it.
To support deployment templates that are versioned because of an architecture constraint that only newer versions of ECE support, for example ARM instances, you must add additional configuration:

- The `template_category_id` for both template versions must be identical.
- The `min_version` attribute must be set.

These attributes are set at the same level as `name` and `description`. The UI selects the template with the highest matching `min_version` that is returned by the API.
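As a sketch, the top level of one versioned variant might look like the fragment below. Only the placement of `template_category_id` and `min_version` alongside `name` and `description` is the point; the values shown are made up, and the `deployment_template` body is elided:

```json
{
  "name": "My template (ARM)",
  "description": "ARM variant, served only where supported",
  "template_category_id": "my-template",
  "min_version": "8.0.0",
  "deployment_template": { ... }
}
```

The second variant would repeat the same `template_category_id` with a different (or absent) `min_version`.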