
Building HashiCorp Nomad Cluster in Vultr Cloud using Terraform

Nomad is really awesome!

In this blog post, let us see how to build and automate a Nomad cluster using Terraform and the Vultr cloud computing platform. The previous blog post discusses how to create a machine image using Packer. Here we will use that machine image to deploy a Nomad cluster with both servers and clients, attached to Vultr load balancers and protected by firewalls.
The source code is available here

HashiCorp Nomad is a workload orchestrator for deploying applications in the cloud or on-premise. It enables deployment and management of various workloads including containers, Java applications (JARs), and exec jobs. It is highly scalable and can run millions of containers in a single cluster.

Terraform is an infrastructure-as-code tool with which cloud infrastructure deployment can be codified and automated.

This Nomad cluster will have a single server and 3 clients. The server manages the state of the cluster and job deployments. The clients are the machines on which actual applications run.
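The server/client split maps directly to the Nomad agent configuration. As a rough sketch (the datacenter name, data directory, and server IP placeholder are assumptions, not taken from the repo), the two roles might be configured like this:

```hcl
# server.hcl - runs on the single Nomad server
datacenter = "dc1"
data_dir   = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 1 # single-server cluster; use 3 or 5 for HA
}

# client.hcl - runs on each client machine
# datacenter = "dc1"
# data_dir   = "/opt/nomad/data"
#
# client {
#   enabled = true
#   servers = ["<nomad-server-ip>:4647"] # Nomad server RPC port
# }
```

With `bootstrap_expect = 1`, the server elects itself leader immediately; clients simply register against it.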

Nomad Cluster

To deploy the infrastructure, we will require Terraform, Vultr cloud, and some shell scripts. All the scripts required to spawn up infra are available here.

Firstly, let us install Terraform using HashiCorp's documentation. Once Terraform is set up, we need to create a Vultr account. Vultr offers $250 in free credits for first-time signups. Generate an API key from the dashboard and keep it safe, as it is required for creating the infrastructure.

Steps to build the cluster

The terraform.tfvars file looks like this:

region = "bom"
plan = "vc2-1c-2gb"
snapshot_id = "c31a3a09-8b8b-4b96-a56f-a020606d4cd4"
private_network_label = "nomad-network"
nomad_server_hostname_prefix = "nomad-server"
nomad_client_hostname_prefix = "nomad-client"
lb_server_name = "nomad-servers-lb"
lb_client_name = "nomad-clients-lb"
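Each value in terraform.tfvars must be declared as an input variable, and the Vultr provider itself has to be configured. A minimal sketch of the corresponding variables.tf and provider block (descriptions are assumptions; the full list lives in the repo):

```hcl
terraform {
  required_providers {
    vultr = {
      source = "vultr/vultr"
    }
  }
}

provider "vultr" {
  api_key = var.vultr_api_key
}

variable "vultr_api_key" {
  type      = string
  sensitive = true # keeps the key out of plan output
}

variable "region" {
  type        = string
  description = "Vultr region slug, e.g. bom (Mumbai)"
}

variable "plan" {
  type        = string
  description = "Instance size, e.g. vc2-1c-2gb"
}

variable "snapshot_id" {
  type        = string
  description = "ID of the Packer-built machine image"
}
```

Marking `vultr_api_key` as `sensitive` is why passing it via an environment variable (next step) works cleanly without leaking it into logs.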
  • Run command export VULTR_API_KEY="your-vultr-api-key" - this stores the API key in an environment variable, so we do not expose it in the code.

  • Run command terraform init

  • Run command terraform plan -var="vultr_api_key=${VULTR_API_KEY}" to preview the changes Terraform will make to the infrastructure. Here the Vultr API key is read from the environment variable.

terraform plan

  • Run command terraform apply -var="vultr_api_key=${VULTR_API_KEY}" to build the infra.

terraform apply

In this process, Terraform connects to the Vultr cloud and creates the various services: virtual machines, load balancers, and firewalls. The Terraform output shows the load balancer IPs.
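The load balancer IPs appear in the apply output because they are declared as Terraform outputs. A sketch of what those declarations might look like (the resource names here are assumptions; `ipv4` is the exported address attribute of the Vultr load balancer resource):

```hcl
output "nomad_server_lb_ip" {
  description = "Public IPv4 of the Nomad servers load balancer"
  value       = vultr_load_balancer.nomad_servers.ipv4
}

output "nomad_client_lb_ip" {
  description = "Public IPv4 of the Nomad clients load balancer"
  value       = vultr_load_balancer.nomad_clients.ipv4
}
```

After an apply, the same values can be fetched any time with `terraform output nomad_server_lb_ip`.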

The Vultr cloud shows both Nomad servers and clients:

vultr cloud

We can verify the cluster is active by pasting the IP of the Nomad server into the browser. The UI looks like this:

Nomad server

Clients and servers:

clients and servers

Using Terraform, we can easily add more clients to the cluster by increasing the client count in the configuration. On updating the number of clients to 3 and running terraform apply, I was able to add 2 more clients to the system. More clients, more compute!
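This scaling pattern relies on Terraform's `count` meta-argument. A hypothetical client resource (variable and resource names are assumptions, modeled on the tfvars above) shows the idea:

```hcl
variable "nomad_client_count" {
  type    = number
  default = 3 # bumping this from 1 to 3 adds two instances
}

resource "vultr_instance" "nomad_client" {
  count       = var.nomad_client_count
  region      = var.region
  plan        = var.plan
  snapshot_id = var.snapshot_id
  hostname    = "${var.nomad_client_hostname_prefix}-${count.index}"
}
```

Because each instance boots from the same Packer snapshot, new clients join the cluster automatically with no manual provisioning.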

three clients

Destroying the Cluster

The entire infrastructure can be dismantled by running the command terraform destroy -var="vultr_api_key=${VULTR_API_KEY}".

Terraform provides us a quick and convenient way to build and tear down infrastructure on demand. This enables the creation of multiple identical environments for test, QA, and prod. In the current setup, the Terraform state file is stored on the local machine, but it can also be stored in a remote backend such as S3 or Terraform Cloud.
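Switching to a remote backend is a small configuration change. A sketch using the S3 backend (bucket name and AWS region are assumptions):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"            # assumed bucket name
    key    = "nomad-cluster/terraform.tfstate"
    region = "ap-south-1"                    # assumed AWS region
  }
}
```

After adding this block, running `terraform init` migrates the existing local state to the bucket, so teammates and CI can share the same state safely.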

Source code: GitHub repo

In the upcoming post, we will discuss how to run various workloads in Nomad including stateless and stateful applications.
