
Terraform Provisioners

Okay, enough talking about creating cloud resources! We have created a couple of instances using our Terraform code. But now what? We usually spin up cloud resources for a purpose, so successful creation is really just the start of something.

What are provisioners?

Cloud resources by themselves are not of much use. Compute resources are created so that applications can be deployed on them. The part where applications or software are installed and managed is called provisioning.

Traditionally, sysadmins would SSH into these machines and run commands to download and install the software, and they would do the same for upgrades. However, this is not a very safe or scalable way to operate, especially once the number of servers grows into the hundreds or thousands.

Terraform helps us run some initial provisioning steps, performing certain command-line activities as soon as the new system boots. Having said that, it should be mentioned that Terraform is not a provisioning tool; there are other tools dedicated to that kind of configuration management.

Cloud compute resources are usually created by selecting machine image templates which come pre-configured on the cloud platform. For example, AWS EC2 instances make use of AMIs to spin up virtual machines. These AMIs mainly pre-define the OS and some software applications to be installed on the newly created machine.

In such cases it is desirable to pass in some user data, which helps customize the machine's behaviour after the boot is complete. For example, it is much easier to execute certain operational steps as soon as the machine boots without having to SSH into it.

AWS provides exactly this functionality in the form of "user data". When launching a new EC2 instance in AWS, you can provide a shell script snippet to be run while the machine is being created. This is a nice feature, as it doesn't require anybody to log in to the system and perform those steps. It should be noted, though, that these scripts are run automatically as the root user.

Of course, this cannot cover everything involved in software provisioning, but it makes a lot of sense for triggering certain actions like running a package repository update, installing a database client, or downloading a compressed file and extracting it to a particular location.

Nowadays Linux distributions come bundled with cloud-init, which assists in running these "user_data" scripts when a VM is spun up on a cloud platform.
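
For context, this is roughly what passing user data to an EC2 instance looks like in Terraform. This is a minimal sketch; the AMI ID placeholder and the packages installed here are illustrative, not part of the code referenced later in this post.

resource "aws_instance" "web" {
  ami           = "<ami id>"      # placeholder AMI ID
  instance_type = "t2.micro"

  # cloud-init picks this script up and runs it as root on first boot
  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y mysql-client
  EOF
}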

Terraform provides similar support for executing certain operational commands when a resource is created or destroyed. This is achieved with provisioners. Provisioners are used as nested blocks within resource blocks. Primarily they accept shell commands or scripts and execute them once creation is successful or before destruction is triggered.

There are 3 types of provisioners, which we will discuss in this post below. We shall refer to this commit for the examples.

file

The file provisioner helps transport files from the local machine to the target virtual machine once it is up and running. The local machine here means the machine where Terraform is installed and the Terraform configuration is executed. The file provisioner lets us copy files or directories into the target (newly created) system.

The example below mainly contains 2 attributes: source and destination. source is the local path of the file or directory to be copied, and destination is the path on the target system where the files should be saved.

resource "aws_instance" "myvm" {
  . . .

  provisioner "file" {
    source      = "<path to local file/directory>"
    destination = "<target machine path>"
  }
}

Optionally, you can also make use of the content attribute if you want to save text content into a file on the remote system. When the above code is executed and a VM is created, Terraform copies the files from the source path on the local system to the destination path on the target system.
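
A minimal sketch of the content variant might look like the following. Note that the connection details here are assumptions for illustration; in practice the file provisioner needs a connection block (typically SSH) so Terraform can reach the target machine.

resource "aws_instance" "myvm" {
  . . .

  # SSH connection the file provisioner uses to reach the new VM
  connection {
    type        = "ssh"
    user        = "ubuntu"      # assumed login user for the AMI
    private_key = file("<path to private key file>")
    host        = self.public_ip
  }

  provisioner "file" {
    # write inline text straight into a file on the target machine
    content     = "environment = production"      # hypothetical file contents
    destination = "/tmp/app.conf"                 # hypothetical destination path
  }
}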

local-exec

Running terraform apply on a configuration which creates virtual machines means more than one system is involved. The first is the machine on which Terraform is installed, and the others are all the machines created by Terraform.

local-exec is a type of provisioner which runs commands or scripts on the local machine, i.e. the machine on which Terraform is installed. This is useful when, for example, we want to save customized output of a Terraform run to local disk.

In the example here, we are creating 2 virtual machines. At the end of creation, we want to save certain attributes like the public IP, DNS name, or OS type into separate files on our local system for reference. We can use a local-exec provisioner as below.

resource "aws_instance" "demo_vm_1" {
 provider = aws.aws_west
 ami = data.aws_ami.myAmi.id
 instance_type = var.type

 provisioner "local-exec" {
   command = "echo \"VM 1 Public IP: \" ${self.public_ip} >> /mymachines/message1.txt"
 }

 tags = {
   name = "Demo VM 1"
 }
}

resource "aws_instance" "demo_vm_2" {
 provider = aws.aws_west
 ami = data.aws_ami.myAmi.id
 instance_type = var.type

 provisioner "local-exec" {
   command = "echo \"VM 2 Public IP: \" ${self.public_ip} >> /mymachines/message2.txt"
 }

 tags = {
   name = "Demo VM 2"
 }
}

We have included a local-exec provisioner block which has a command attribute. The given command saves the public IP address of each machine in a text file at the given path.

Before you run the above code, make sure to comment out the remote backend configuration in the providers.tf file and reinitialize the Terraform directory. If you don't, the local-exec provisioner would run on Terraform Cloud, where you would not have permission to perform such operations. Reinitializing migrates the remote state locally and switches to the local backend.
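
As a rough sketch (the actual block in the commit may differ, and the organization and workspace names below are made up), the commented-out backend in providers.tf might look like this:

# providers.tf -- remote backend commented out so provisioners run locally
# terraform {
#   backend "remote" {
#     organization = "my-org"          # hypothetical organization name
#
#     workspaces {
#       name = "my-workspace"          # hypothetical workspace name
#     }
#   }
# }

After commenting it out, run terraform init again and confirm the prompt to copy the existing state down to the local backend.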

Run terraform apply, and when it succeeds, check for the files and their contents at the given path.

remote-exec

This is similar to local-exec in the sense that it enables you to run shell commands or scripts, but on the target machines. As the name suggests, you can execute a set of instructions on the target machines either after successful creation or before destruction of the cloud resource is triggered.

In the case of remote-exec, instead of a command attribute it uses an inline attribute. inline is a list of commands which are executed in the given sequence on the remote machine. There is also the option to specify a script attribute, which is of type string and takes the path to a script file. If there are multiple script files, you can make use of the scripts attribute, which accepts a list of script file paths.
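
For illustration, the script and scripts variants might look like the sketch below. The file paths are hypothetical, and a connection block like the one shown further down is still required.

provisioner "remote-exec" {
  # copy a single local script to the target machine and run it there
  script = "./scripts/setup.sh"
}

provisioner "remote-exec" {
  # copy and run several local scripts on the target machine, in order
  scripts = [
    "./scripts/install-deps.sh",
    "./scripts/configure-app.sh",
  ]
}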

The code below shows a remote-exec provisioner block along with a new connection block. The connection block passes the connection details Terraform uses to log in to the target machine and execute the inline commands of the remote-exec provisioner. It specifies the communication protocol, username, private key and the host's public IP address.

connection {
  type        = "ssh"
  user        = "ubuntu"
  private_key = file("<path to private key file>")
  host        = aws_instance.myvm.public_ip
}

provisioner "remote-exec" {
  inline = [
    "chmod +x /tmp/setup.sh",
    "/tmp/setup.sh args",
  ]
}
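Note that the inline commands above assume /tmp/setup.sh already exists on the target machine. One plausible way to get it there (a sketch; the referenced commit may do this differently) is a file provisioner declared before the remote-exec block:

provisioner "file" {
  # copy the setup script to the target machine before remote-exec runs it
  source      = "setup.sh"        # hypothetical local path to the script
  destination = "/tmp/setup.sh"
}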

This brings us to the end of Terraform provisioners, for now.

Originally published at http://letsdotech.dev on January 26, 2021.
