Automate all the things: Terraform + Ansible + Elastic Cloud Enterprise
A sequel to our first post, Automating the installation of Elastic Cloud Enterprise with Ansible, this blog shows how to extend automation to cloud provisioning with Terraform. In the first post, we detailed how to deploy and configure Elastic Cloud Enterprise (ECE) across three availability zones in AWS using Ansible. However, the provisioning of the underlying EC2 instances and the configuration of the security groups were all manual.
In this post, we will improve upon our methodology by using Terraform to automate the provisioning and configuration of those EC2 instances and security groups. And we’re also going to use it to automate the installation, configuration, and execution of the Ansible playbook we built last time. Automation is good. And good is not dumb.
BTW, what is Terraform? It has nothing to do with the Genesis device like I initially thought. According to HashiCorp (the developers), it is a tool for building, changing, and versioning infrastructure using high-level configuration: infrastructure as code. This is handy for many reasons, not the least of which is that everything necessary to get your infrastructure up, running, and configured lives in text files that can be version controlled and programmatically CRUDed (is that a word?).
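To give you a taste of what "infrastructure as code" looks like in practice, here's a minimal, hypothetical Terraform configuration that would stand up a single EC2 instance. This is not part of this post's setup — the AMI ID, region, and tag values are placeholders:

```hcl
# Minimal infrastructure-as-code example: one EC2 instance, described as text.
# All values here are placeholders, not the ones used later in this post.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
```

Because the whole thing is text, you can check it into git, diff it, review it, and apply the same configuration over and over.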
Although this post focuses on AWS as the cloud provider, you can use Terraform to provision and manage resources on any cloud provider. Check out their major cloud provider list and the not major (apparently) cloud provider list. An example using GCP is available in our examples repo on GitHub.
And lastly, as before, this blog is a basic demonstration that will create an ECE environment suitable for a small proof of concept or development environment. A full production deployment should make use of instance groups, load balancers, and other high-availability constructs which have been left out of this setup. See the Elastic Cloud Enterprise planning docs for additional details regarding production planning and deployment.
Tasks
We’re once again going to follow the small baseline installation example in the ECE docs, but our tasks are pretty different:
- Install Terraform
- Define the infrastructure
- Run it
- Jazz hands
Install Terraform
HashiCorp’s documentation is the place to go for this. You'll also need Ansible, which we installed in the first blog post; if you haven't done that yet, check out the Ansible docs. For me, with my brittle-keyboard MacBook Pro, it was:
> brew install terraform
> brew install ansible
It’s important to note that the configuration I used is compatible with Terraform version 0.12. I specifically used 0.12.19.
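If you want Terraform itself to enforce that compatibility, HCL lets you pin a version constraint in the configuration. A small sketch — the constraint shown is my assumption based on the 0.12 compatibility noted above:

```hcl
terraform {
  # Fail fast if someone runs this config with a Terraform outside the 0.12 series
  required_version = "~> 0.12"
}
```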
Define the infrastructure
The steps in this task are:
- Bootstrap your Terraform configuration with the ones I used.
- Browse through them to get a sense of what's going on.
- Set some variables that are hopefully unique to you.
Bootstrap your Terraform configuration
As stated earlier, Terraform is a way of provisioning infrastructure via configuration files. And so there are a lot of config files. To get going more quickly, clone the repo or download the files from our examples repository on GitHub. Everything we’re going to do is done to these files.
Like before, my steps are tailored for the CLI-inclined folks like me since it's much harder to write up point-and-click instructions!
Steps
- Clone or download the examples repository.
> git clone https://github.com/elastic/examples.git
- File-browse your way over to where you saved it.
> cd /workspace/github/elastic/examples
- Navigate to the ECE AWS Terraform examples.
> cd Cloud\ Enterprise/Getting\ Started\ Examples/aws/terraform
- Prepare to do some text editing.
Browse through the files
You should have the following files in front of you. Browse through them if you like!
| File | Purpose |
|------|---------|
| `terraform.tfvars.example` | After renaming to `terraform.tfvars`, it’s the main place where we set our secret stuff and override any variable in `variables.tf` that we so desire. |
| `variables.tf` | Bunches of variables; you should probably look through them. |
| `provider.tf` | Terraform config that tells Terraform how to connect to AWS. |
| `servers.tf` | Terraform config that finds our desired AMI, deploys our instances, and captures the instance metadata for use by Ansible. |
| `networking.tf` | Terraform config that does a bunch of AWS network setup: a VPC, internet gateway, routing table, subnet, and security groups. |
| `main.tf` | Terraform config that launches the Ansible script and collects some of its more important output. |
| `ansible-install.sh` | The script Terraform will modify and call to configure and run Ansible with the ECE role. |
Set variables
We need to set a few different variables to get this working for you (vs. for me, because I already got it working for me and this is all about you).
- A project name, which will be tagged on all the AWS resources that are created
- Your IP, so that only you have ssh access to the underlying instances; setting it to `0.0.0.0/0` would work too if you want to open it up to the world
- AWS access credentials to allow Terraform to provision stuff
If you don’t have an AWS access/secret key pair, follow the AWS docs to create one.
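If you'd rather keep credentials out of a file on disk, the AWS provider can also fall back to the standard AWS environment variables when no keys are passed to it explicitly — whether that kicks in here depends on how `provider.tf` wires up its credentials, so treat this as an alternative pattern rather than part of this post's setup. The values below are AWS's documented example placeholders:

```shell
# Placeholders only -- substitute your real credentials.
# The AWS provider reads these standard variables when no access/secret key
# is supplied explicitly in the provider configuration.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```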
Note: `variables.tf` defines a public and private key file location to use for ssh-ing into the EC2 instances. If you don’t have an ssh key, I suggest you google around. DigitalOcean has a good, basic description for Linux/macOS and Windows.
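For the impatient, generating a key pair on Linux/macOS is a one-liner. The file path below is just an example — point the `public_key` and `private_key` variables in `variables.tf` at whichever paths you actually use:

```shell
# Generate a 4096-bit RSA key pair with no passphrase (-N "").
# Produces ./ece_demo_key (private) and ./ece_demo_key.pub (public);
# the file name is an example, not one assumed elsewhere in this post.
ssh-keygen -t rsa -b 4096 -N "" -q -f ./ece_demo_key
```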
Steps
- Rename `terraform.tfvars.example` to `terraform.tfvars`.
- In `terraform.tfvars`, make the following changes:
  - Set a `project_name` used to identify all your AWS resources.
  - Set `trusted_network` to your IP in CIDR notation, or whichever range you prefer.
  - Set `aws_access_key` and `aws_secret_key` with your info.
  - Optionally override `aws_region`, `public_key`, and/or `private_key`.
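Put together, a filled-in `terraform.tfvars` might look something like the sketch below. Every value is a placeholder you'd replace with your own:

```hcl
# Example terraform.tfvars -- all values are placeholders
project_name    = "my-ece-demo"
trusted_network = "198.51.100.7/32" # your IP in CIDR notation
aws_access_key  = "AKIAIOSFODNN7EXAMPLE"
aws_secret_key  = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Optional overrides
# aws_region  = "us-west-2"
# public_key  = "~/.ssh/id_rsa.pub"
# private_key = "~/.ssh/id_rsa"
```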
Two settings I'm not asking you to change are the AMI and EC2 instance type settings. There are a few reasons for this, but mainly to reduce complexity with getting started. Like last time, our plan is to deploy three i3.xlarge instances from a CentOS 7 AMI.
- To ensure Terraform can find the right AMI, we can provide it with various metadata. In this case, I used a name pattern, an owner ID (centos.org's), and a virtualization type. The name pattern and owner ID are configurable in `variables.tf`, but be aware that if you select a different AMI (e.g., one with Ubuntu), you may need a different value for `remote_user`.
- Changing the instance type also has ramifications. i3 instances use locally attached NVMe drives, which have a specific OS device name and configuration settings in Terraform. If you use a different instance type and attach EBS volumes, you'll need to change `servers.tf` to map it properly. See the Terraform docs for more information.
The settings I used for the AMI and EC2 instance type can be seen in `variables.tf`:
```hcl
# The name of the AMI in the AWS Marketplace
variable "aws_ami_name" {
  default = "CentOS Linux 7 x86_64 HVM*"
}

# The owner of the AMI
variable "aws_ami_owner" {
  default = "679593333241" # centos.org
}

# User to log in to instances and perform install
# This is dependent upon the AMI you use, so make sure these are in sync.
# For example, an Ubuntu AMI would use the ubuntu user
variable "remote_user" {
  default = "centos"
}

# ECE instance type
variable "aws_instance_type" {
  default = "i3.xlarge"
}

# The device name of the non-root volume that will be used by ECE
# For i3 instances, this is nvme0n1.
# If you use a different instance type, this value will change and might also
# require changes to the resource definition in servers.tf
variable "device_name" {
  default = "nvme0n1"
}
```
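For reference, an AMI lookup driven by variables like these typically uses Terraform's `aws_ami` data source along the following lines. This is a sketch of the pattern, not necessarily the exact contents of `servers.tf`:

```hcl
# Sketch: find the newest AMI matching the configured name pattern and owner.
data "aws_ami" "ece_ami" {
  most_recent = true
  owners      = [var.aws_ami_owner]

  filter {
    name   = "name"
    values = [var.aws_ami_name]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Instances would then reference data.aws_ami.ece_ami.id as their ami.
```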
Run it
We’re pretty much done. Crazy, right? If you ran ahead with the defaults and didn't spend too much time investigating all the config files, then at this point you've done nothing more than install Terraform, copy some files, and set a couple of variables. And then? Then you just run the thing!
Steps
- Initialize Terraform.
> terraform init
- Apply the configurations we worked so hard to build.
> terraform apply
Yup. That’s it.
After a bunch of minutes, you should get output like the example below, which tells you the URL and admin password for the ECE admin console:
```
null_resource.run-ansible (local-exec): TASK [ansible-elastic-cloud-enterprise : debug] ********************************
null_resource.run-ansible (local-exec): ok: [ec2-52-70-7-3.compute-1.amazonaws.com] => {
null_resource.run-ansible (local-exec):     "msg": "Adminconsole is reachable at: https://ec2-35-175-235-131.compute-1.amazonaws.com:12443"
null_resource.run-ansible (local-exec): }
null_resource.run-ansible (local-exec): TASK [ansible-elastic-cloud-enterprise : debug] ********************************
null_resource.run-ansible (local-exec): ok: [ec2-52-70-7-3.compute-1.amazonaws.com] => {
null_resource.run-ansible (local-exec):     "msg": "Adminconsole password is: yI6ClXYNQ5LGiZlBuOm94s8hGV5ispQS24WVfL5fE9q"
null_resource.run-ansible (local-exec): }
null_resource.run-ansible (local-exec): TASK [ansible-elastic-cloud-enterprise : include_tasks] ************************
null_resource.run-ansible (local-exec): skipping: [ec2-52-70-7-3.compute-1.amazonaws.com]
null_resource.run-ansible (local-exec): PLAY RECAP *********************************************************************
null_resource.run-ansible (local-exec): ec2-34-229-205-85.compute-1.amazonaws.com : ok=68 changed=37 unreachable=0 failed=0 skipped=7 rescued=0 ignored=1
null_resource.run-ansible (local-exec): ec2-35-175-235-131.compute-1.amazonaws.com : ok=68 changed=29 unreachable=0 failed=0 skipped=8 rescued=0 ignored=1
null_resource.run-ansible (local-exec): ec2-52-70-7-3.compute-1.amazonaws.com : ok=68 changed=37 unreachable=0 failed=0 skipped=7 rescued=0 ignored=1
null_resource.run-ansible: Creation complete after 16m50s [id=7701810896203834102]

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

Outputs:

ece-instances = [
  [
    "ec2-35-175-235-131.compute-1.amazonaws.com",
    "ec2-34-229-205-85.compute-1.amazonaws.com",
    "ec2-52-70-7-3.compute-1.amazonaws.com",
  ],
]
installed-ece-url = https://ec2-35-175-235-131.compute-1.amazonaws.com:12443
```
I encourage you to open it up and take it for a spin. Don’t be afraid! If you screw it up, you can just destroy it all and recreate it:
> terraform destroy
> terraform apply
Note: The certificates used for TLS are self-signed so you’ll get a warning in your browser when you try to access the admin console. You can always configure ECE to use your own certificates. You can read more about it in the docs.
Jazz hands
I hope you'll agree that the level of effort this time around was substantially less than last time. We managed to wholly avoid the AWS console and the approximately 38 steps we used to get all the resources deployed, configured, and secured (this new method is 10 steps by my count).
Now go forth and conquer, for the cloud is small and you are a giant! Play with ECE, spin up clusters, upgrade them, and resize them. Enjoy the ease, enjoy the breeze.