Lately I have been on a bit of an Ansible + Terraform kick, so I thought I would throw together a code example for deploying a Consul cluster into an IBM Cloud VPC using these tools.
Consul is a service mesh control plane with baked-in service discovery, configuration, and segmentation functionality. As more and more of our deployed applications and services are spread across clouds, Consul gives us a secure communication layer regardless of where our infrastructure is hosted.
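Once a cluster is up, that functionality is exposed through a single CLI and HTTP API. As a quick illustration (these commands assume a reachable agent and are not part of the deployment steps below):
$ consul catalog services                  # service discovery: list registered services
$ consul kv put config/app/max_conns 50    # configuration: write a key/value pair
$ consul kv get config/app/max_conns       # configuration: read it back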
You can get $500 (USD) in credit towards VPC resources on IBM Cloud by adding the code VPC500 to your account.
Prerequisites
- Tfswitch installed
- Ansible installed
- An IBM Cloud API Key
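One common way to supply the API key is through an environment variable that the IBM Cloud Terraform provider recognizes; depending on how this repo's provider block is written, you may instead need to set the key in terraform.tfvars:
$ export IC_API_KEY="<your-ibm-cloud-api-key>"   # placeholder, substitute your own key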
Use Terraform to Create Infrastructure
Terraform is an infrastructure as code tool that allows you to provision and manage a wide range of clouds, infrastructure, and services. Using Terraform allows us to create consistent, repeatable deployments.
Steps
- Clone the repository:
$ git clone https://github.com/cloud-design-dev/ibm-vpc-consul-terraform-ansible.git
$ cd ibm-vpc-consul-terraform-ansible
- Copy terraform.tfvars.template to terraform.tfvars:
$ cp terraform.tfvars.template terraform.tfvars
- Edit terraform.tfvars to match your environment.
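The exact variable names come from terraform.tfvars.template, so treat the following as a rough sketch only (the names and values below are illustrative, not necessarily what the repo defines):
$ cat terraform.tfvars
# illustrative contents -- check terraform.tfvars.template for the real variable names
ibmcloud_api_key = "<your-ibm-cloud-api-key>"
region           = "eu-gb"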
- Run tfswitch to point to the right Terraform version for this solution:
$ tfswitch
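To sanity-check that the switch took effect, you can print the active version (the exact version reported will depend on what tfswitch selected):
$ terraform version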
- Deploy all resources:
$ terraform init
$ terraform plan -out default.tfplan
$ terraform apply default.tfplan
If the apply completes successfully, you should see something like the following output:
Apply complete! Resources: 27 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
Outputs:
bastion_instance_ip = 10.242.0.36
bastion_public_ip = x.y.x.y
consul_instance_ip = [
"10.242.0.4",
"10.242.0.6",
"10.242.0.5",
]
consul_names = [
"default-041430-eu-gb-1-consul1",
"default-041430-eu-gb-1-consul2",
"default-041430-eu-gb-1-consul3",
]
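These values live in the Terraform state, so if you need them again later (for example the bastion public IP when connecting to the environment), you can re-print them at any time:
$ terraform output
$ terraform output bastion_public_ip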
Our Terraform deployment has also generated:
- An Ansible inventory file
- A variables file that will be used by the Ansible playbook
- A temporary ansible.cfg file for use with our playbook
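If you are curious how the instances were wired up, you can peek at these files before running anything (the paths below assume they are written into the ansible directory, matching the commands in the next section):
$ cat ansible/inventory
$ cat ansible/ansible.cfg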
After the apply completes, we can move on to deploying Consul using Ansible.
Run Ansible Playbook to Create the Consul Cluster
Whereas Terraform is best suited for the deployment of infrastructure, when it comes to configuration management I prefer Ansible. In this example Ansible will be used to:
- Update the base operating system
- Add the Consul public key to the server
- Install the Consul binary
- Bootstrap a three-node cluster using Ansible templates
Run the playbook from the ansible directory:
$ cd ansible
$ ansible-playbook -i inventory playbooks/consul-cluster.yml
If you would like a little more insight into what Ansible is doing behind the scenes, add -vv to your ansible-playbook command:
$ ansible-playbook -vv -i inventory playbooks/consul-cluster.yml
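If the playbook cannot reach the hosts at all, an ad hoc ping through the same inventory is a quick way to isolate connectivity problems (this assumes the generated ansible.cfg handles SSH access, for example via the bastion host):
$ ansible all -i inventory -m ping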
Verify That the Cluster Is Running
Since we bound the Consul agent to the main private IP of the VPC instances, we first need to set the CONSUL_HTTP_ADDR environment variable. Take one of the Consul instance IPs and run the following command:
$ ansible -m shell -b -a "CONSUL_HTTP_ADDR=\"http://CONSUL_INSTANCE_IP:8500\" consul members" CONSUL_INSTANCE_NAME -i inventory
Example output
ansible -m shell -b -a "CONSUL_HTTP_ADDR=\"http://10.241.0.36:8500\" consul members" dev-011534-us-east-1-consul1 -i inventory
dev-011534-us-east-1-consul1 | CHANGED | rc=0 >>
Node Address Status Type Build Protocol DC Segment
dev-011534-us-east-1-consul1 10.241.0.36:8301 alive server 1.9.0 2 us-east <all>
dev-011534-us-east-1-consul2 10.241.0.38:8301 alive server 1.9.0 2 us-east <all>
dev-011534-us-east-1-consul3 10.241.0.37:8301 alive server 1.9.0 2 us-east <all>
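As an extra check, you can hit the Consul HTTP API directly and confirm that a leader has been elected (same CONSUL_INSTANCE_IP and CONSUL_INSTANCE_NAME placeholders as above, and assuming curl is available on the instance):
$ ansible -m shell -b -a "curl -s http://CONSUL_INSTANCE_IP:8500/v1/status/leader" CONSUL_INSTANCE_NAME -i inventory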