Welcome to the second part in the “Terraform for Dummies” series 😃 In the last post, we learned about infrastructure as code and got a high-level overview of how Terraform works in a DevOps environment. If you haven’t already, please learn the basics in the previous post before reading further. In this post, we will get our environment set up so we can start writing Terraform code!
Before we go any further, we will need to do three things:
- Have an active AWS account
- Install an IDE of your choice (I use VSCode)
- Install Terraform
Once you have done these three things, we will need to open up our IDE. I recommend using VSCode because of its wide array of extensions. One of the extensions I will be using is the HashiCorp Terraform extension, which provides features like syntax highlighting and autocompletion.
Open up a terminal in VSCode and type
$ terraform version
If you get an error message, you may have missed a step in the Terraform installation.
Create a project folder and a new file called main.tf. Inside of this file, we are going to add our AWS provider, found on HashiCorp’s website. If you have programmed before, you are probably familiar with ‘main’ as a special keyword that compilers look for as the starting point for code execution. This is actually not the case with Terraform; Terraform treats all files with the .tf extension in a directory as one single configuration, so any file names that you specify are just logical groupings. It’s also a best practice to keep your naming conventions and file structure consistent across your projects.
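A minimal main.tf along these lines might look like the sketch below. The region is just an example (pick whichever one you work in), and the version constraint pins the 4.16 provider release used in this post:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
}

provider "aws" {
  # example region; choose the one you work in
  region = "us-east-1"
}
```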
You can see that we have a terraform block, which holds our Terraform settings. We need at least one provider in this block; in our case that is aws. Notice that the provider’s source is actually HashiCorp and not AWS themselves, although it is the official AWS provider. At the time I am creating this post, the current version of the AWS provider is 4.16.0.
We can also see that we have a provider block named “aws”. In this provider block, we can specify our region, access key, and secret key. It is important to know that you should NEVER share these keys with ANYONE!
While it is possible to configure these credentials directly in the Terraform code, you should not do this. A better way is to store them in an AWS profile, which is what we will do for this tutorial. The best practice would be to store these keys in a secrets manager like HashiCorp’s Vault, but for the sake of simplicity, and so as not to deviate from focusing on Terraform itself, we will store them in an AWS profile instead.
Generating AWS Keys
To generate our keys, we will first need to navigate to Identity and Access Management (IAM) in the AWS console.
Once in IAM, create a User with any name that you want. In this case I will simply call it “Terraform.” Since we will only be using it from the terminal, only give it programmatic access and attach the PowerUserAccess policy. This policy essentially gives the same permissions as the AdministratorAccess policy minus management of users and groups.
Once you have created your user, you should see your Access key ID and Secret access key.
Again… DO NOT SHOW YOUR access_key OR secret_key TO ANYONE OR PUSH THEM TO ANY REPOSITORY. Doing so could cause your AWS Account to be compromised!! That being said, I am not responsible for any wrongdoing that may come as a result of exposing secret keys.
Configuring an AWS Profile
Depending on how experienced you are with AWS CLI, you may have already configured your default aws profile to have access to AWS. We can configure AWS CLI to have multiple profiles which will allow us to have an IAM user dedicated to this terraform tutorial.
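If you have the AWS CLI installed, the quickest way to create such a profile is with aws configure, which prompts you for the keys and writes the files for you (terraform_tutorial is just the profile name I am using for this tutorial):

```
$ aws configure --profile terraform_tutorial
```

If you would rather edit the credentials file by hand, read on.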
To configure these credentials, edit the credentials file using your favorite text editor:
~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows)
In this file we can see the default credentials, and we can add our own set of credentials to use. Note that, unlike the config file, AWS specifies that you cannot use the word “profile” in section headers when creating an entry in the credentials file.
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[terraform_tutorial]
aws_access_key_id=ACCESS_KEY
aws_secret_access_key=SECRET_ACCESS_KEY
Once you have configured your credentials, you can now reference that credentials profile in your terraform code.
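Referencing the profile is a one-line addition to the provider block. A sketch, assuming the terraform_tutorial profile from above (the region is again just an example):

```hcl
provider "aws" {
  region  = "us-east-1"          # example region
  profile = "terraform_tutorial" # the AWS profile we just configured
}
```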
Now that we have taken care of the credentials configuration, we can write a small piece of Terraform code called a resource block. A resource block describes one or more infrastructure objects, such as virtual networks, compute instances, or higher-level components such as DNS records. In our case, we will start by creating a simple S3 bucket so we don’t run up charges on our AWS account.
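The resource block we will be working with looks like this (the bucket name and tag values are only examples; remember that the bucket name must be globally unique):

```hcl
resource "aws_s3_bucket" "test-terraform-bucket" {
  # must be globally unique across all of AWS
  bucket = "my-test-terraform-bucket-12345"

  tags = {
    Name        = "My test bucket"
    Environment = "Dev"
  }
}
```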
We can see that this resource block declares the resource type "aws_s3_bucket" with a given local name "test-terraform-bucket". The local name is used to refer to the S3 bucket resource from elsewhere in the same Terraform module, but has no significance outside that module's scope.
Within the block body (between the { and }) are the configuration arguments. For bucket, we must give a string that will create an S3 bucket with a globally unique name. tags are optional, but I have added this argument to demonstrate that many configuration arguments can contain nested arguments of their own.
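To illustrate what the local name is for, here is a hypothetical output block that refers to the bucket from elsewhere in the same module via that local name:

```hcl
output "bucket_arn" {
  value = aws_s3_bucket.test-terraform-bucket.arn
}
```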
Recall from the previous post that “init” is the second stage of the terraform lifecycle, right after code. To recap, terraform init is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control.
We can initialize terraform using
$ terraform init
The next step in the terraform lifecycle is plan. This step creates an execution plan, like a blueprint for a house. Unless explicitly disabled, terraform does a refresh and then declaratively determines what needs to be done in order to reach the desired configuration state.
Run the plan stage with
$ terraform plan
If successful, the following output (and more) will print out:
You can see that the bucket and tags are listed in the output, and that everything else will be known after the apply. If you are receiving an error, you may have given invalid values or there might be a syntax error. At this point, terraform has not applied any changes to the AWS environment.
You may have also noticed some new files generated in your project folder after running terraform init: a .terraform directory and a file called “.terraform.lock.hcl”.
Inside the .terraform directory is “terraform-provider-aws_v4.16.0_x5”, which is nothing but the AWS provider binary, version 4.16.0. “.terraform.lock.hcl” is known as the dependency lock file, which allows Terraform to “remember” exactly which version of each provider you used before.
Once your plan is successful, you can run
$ terraform apply. This will apply all of the configuration changes which were made in the plan stage.
When you run this command, you will be greeted with a message asking you to confirm the changes by typing “yes”.
Now your changes will be visible in the AWS console. Alternatively, you can run the following command to list your bucket. Make sure to use the profile that we configured earlier.
$ aws s3 ls --profile <profile-name>
You will also see a new file called “terraform.tfstate”. This file, unsurprisingly, stores the state of your Terraform configuration. While the format of the state file is just JSON, direct editing of the state is discouraged. Terraform provides the terraform state command to perform basic modifications of the state using the CLI.
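For example, these two subcommands (run from the project directory) list the resources tracked in state and inspect our bucket’s recorded attributes:

```
$ terraform state list
$ terraform state show aws_s3_bucket.test-terraform-bucket
```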
At this point you have everything you need to deploy an s3 bucket solely using terraform. You might be seeing how powerful terraform can be now, since you can create and manage cloud infrastructure with some code and a few simple commands.
Every lifecycle has an end, and Terraform’s is no exception. If you wish to tear down all of the infrastructure that you created, it’s an easy task for Terraform. Please note that you should only do this if you truly wish to delete the infrastructure that you provisioned.
Once again, we are greeted with a similar confirmation message upon running
$ terraform destroy
Once we allow Terraform to destroy our resources by typing “yes”, we will no longer be able to see our S3 bucket in the AWS console. And of course, you can again list your buckets by using the command:
$ aws s3 ls --profile <profile-name>
This concludes part 2 of this series! Thank you for reading, and stay tuned for part 3 of this series. In the next post, we will be taking an even deeper dive into terraform.
Please comment below if you’re enjoying these posts, and let me know if there’s anything I can improve!😊