
rderik

Originally published at rderik.com

How to set up a new Terraform project using S3 backend and DynamoDB locking

Sometimes it feels easier to work on complex and challenging tasks than on the simple initial steps of a project. We forget how to do those first steps because we rarely practise starting projects. If you work for a company, you'll probably set up your Infrastructure as Code (IaC) once and then iterate on it. Unless you work at a consultancy or start projects just for fun, you might forget the initial steps of setting up a project.

This post is about setting up a Terraform project that stores its state (the tfstate file) in an S3 bucket and uses a DynamoDB table as the lock mechanism, to help prevent conflicts when two people work on the same infrastructure.

Note: If you are interested in learning more about how to set up the directory structure for your Terraform project, you might find my guide, Meditations on Directory Structure for Terraform Projects, useful.

The chicken-and-egg problem of setting up resources before the state

Starting a Terraform project should be a straightforward process: set the state to be stored in an S3 bucket and use DynamoDB to hold the lock. The problem begins when we realise that those resources, the S3 bucket and the DynamoDB table, won't themselves be tracked by Terraform. We would also like to keep track of those resources in our IaC; we don't want them to be an exception. So what do you do? Do you create them manually and then import them later (as sketched below)? While you could do that, there might be a better way.
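
For reference, importing manually created resources would look roughly like this, using the resource and bucket/table names we'll define in a moment:

terraform import aws_s3_bucket.my-terraform-state my-terraform-state
terraform import aws_dynamodb_table.my-terraform-lock my-terraform-lock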

The better way: first, create the resources using the local state, then migrate the state to S3 and DynamoDB. That is a simple idea, so let's do it.

Creating the resources while storing the state locally

Using the following code, let's define the initial template for creating the S3 bucket and the DynamoDB table. We will create a bucket called my-terraform-state and a table called my-terraform-lock. You can name yours however works for you; just keep in mind that S3 bucket names are globally unique, so you will need a bucket name nobody else has taken. Create a file named main.tf with the following content:

provider "aws" {
  region = "us-west-1"
}

#terraform {
# backend "s3" {
# region = "us-west-1"
# bucket = "my-terraform-state"
# key = "global/tfstate/terraform.tfstate"
# dynamodb_table = "my-terraform-lock"
# encrypt = true
# }
#}

resource "aws_s3_bucket" "my-terraform-state" {
  bucket = "my-terraform-state"
  acl = "private"
}

resource "aws_dynamodb_table" "my-terraform-lock" {
  name = "my-terraform-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}

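A quick note on the acl argument: in version 4 and later of the AWS provider it is deprecated (new buckets are private by default), and bucket settings such as versioning and encryption live in their own resources. If you are on a newer provider and want to harden the state bucket, a minimal, optional sketch would look something like the following (assuming AWS provider >= 4; the resource names are only illustrative):

# Keep old versions of the state so you can recover from a bad apply
resource "aws_s3_bucket_versioning" "my-terraform-state" {
  bucket = aws_s3_bucket.my-terraform-state.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Encrypt the state at rest on the bucket itself
resource "aws_s3_bucket_server_side_encryption_configuration" "my-terraform-state" {
  bucket = aws_s3_bucket.my-terraform-state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}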

You'll notice that there is some commented-out code. That block tells Terraform to keep the state in S3 and to use DynamoDB for locking. We will use it, but not just yet: the bucket and table it refers to have to exist first.

With our main.tf file created, we can run terraform init. Because the backend block is still commented out, Terraform uses its default local backend, so the state will be kept in a local terraform.tfstate file.

terraform init


Now that the project is initialised, we can apply the template and create the bucket and table:

terraform apply


Perfect, we now have the bucket and table created.
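
If you want to double-check that Terraform is tracking both resources in the (still local) state, terraform state list should print their two resource addresses:

terraform state list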

Set up our backend to use S3 and DynamoDB

Now we uncomment the block that defines our backend:

terraform {
  backend "s3" {
    region         = "us-west-1"
    bucket         = "my-terraform-state"
    key            = "global/tfstate/terraform.tfstate"
    dynamodb_table = "my-terraform-lock"
    encrypt        = true
  }
}


Save main.tf and run terraform init again. Terraform will detect that the backend configuration changed and ask whether you want to copy the existing local state to the new S3 backend; answer yes so the state gets migrated.

terraform init

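If you run this step from a script or CI and want to avoid the interactive prompt, recent Terraform 1.x versions also have flags for it: -migrate-state declares the intent to move the state to the new backend, and -force-copy answers the copy prompt with yes (a sketch, assuming a recent Terraform release):

terraform init -migrate-state -force-copy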

And that's it, you are ready!

Final Thoughts

This was a short and simple post, but I hope it is useful. I've had this question before, and it is nice to have a quick article to point to instead of trying to remember how I did it the first time.
