Arseny Zinchenko

Terraform: planning a new project with Dev/Prod environments

I need to plan the use of Terraform in a new project, and this includes planning the file structure for the project, how to create a backend (i.e. bootstrap) and other resources needed to get started, and thinking about working with multiple environments and AWS accounts.

This post originally started purely as a write-up about creating AWS SES, but I ended up adding so many details on how exactly to create and plan a new project with Terraform that I decided to make it a separate post. I will write about SES later, because it is quite interesting on its own, specifically in terms of SES and email/SMTP in general.

At the end of this post, as always, there will be many interesting links, but I especially want to highlight the Terraform Best Practices by Anton Babenko.

Planning Terraform for a new project

What will we need to think about?

  • the project file structure
  • the backend — AWS S3; how do we create a bucket for the first run?
  • DynamoDB for State Locking would be nice, but we'll do that another time
  • using Dev/Prod environments and AWS multi-account — how do we do it in Terraform?

Terraform files structure

In the Terraform project for AWS SES, I initially did everything in one file, but let's do it "as it should be"; see, for example, How to Structure Your Terraform Projects.

So, how do we organize our files:

  • main.tf – module calls
  • terraform.tf – the backend parameters, providers, and versions
  • providers.tf – the AWS provider itself, its authentication, and the region
  • variables.tf – variable declarations
  • terraform.tfvars – values for the variables

The SES project will also have dedicated ses.tf and route53.tf files for everything related to SES and Route53.

Multiple environments with Terraform

Okay, so what about working with several Dev/Prod environments, or even different AWS accounts?

We can do it through Terraform Workspaces, but I don't remember hearing much about them being used for Dev/Prod and with CI/CD pipelines, although it might be worth trying as an option. See How to manage multiple environments with Terraform using workspaces.
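
Just for reference, a minimal sketch of how the workspace approach could look (not used further in this post; the dev.tfvars file below is hypothetical):

$ terraform workspace new dev
$ terraform workspace new prod
$ terraform workspace select dev
# the current workspace name is available in the code as terraform.workspace,
# for example to pick a CIDR block or tags per environment
$ terraform plan -var-file=dev.tfvars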

In general, there is no "silver bullet" here, and we have a lot of options. Some use Git branches or Terragrunt (How to manage multiple environments with Terraform), some use different directories (How to Create Terraform Multiple Environments), and some use a solution like Spacelift.

Dev and Production by directories

In my opinion, a better option for a small project is to use several directories for environments, and Terraform modules for resources.

Let’s try it to see how it all looks and works.

In the project directory, create a directory structure:

$ mkdir -p environments/{dev,prod}
$ mkdir -p modules/vpc

As a result, we have the following:

$ tree
.
├── environments
│   ├── dev
│   └── prod
└── modules
    └── vpc

Here we have the environments/dev/ and prod/ directories with independent projects, each with its own parameters, and they will use common modules from the modules directory. This way, a new infrastructure feature can first be tested in a separate file in the environments/dev directory, then moved to modules, added to dev as a module, and, after repeated testing there, added to Production.

In addition, since they will have their own parameter files for AWS, we will be able to use separate AWS accounts.

For now, let's create a bucket for the state files by hand — we'll get back to this later when we talk about Bootstrap:

$ aws s3 mb s3://tfvars-envs
make_bucket: tfvars-envs

Creating a shared module

Go to the modules/vpc/ directory and add a main.tf file describing the VPC (although, if we really follow the best practices, it would be better to use the ready-made VPC module, also from Anton Babenko):

resource "aws_vpc" "env_vpc" {
  cidr_block = var.vpc_cidr

  tags = {
    environment = var.environment
  }
}
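
For comparison, with the community VPC module the same thing could look roughly like this — just a sketch with illustrative inputs; in this post we stay with the plain aws_vpc resource:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "${var.environment}-vpc"
  cidr = var.vpc_cidr

  tags = {
    environment = var.environment
  }
}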

In the same directory, create a variables.tf file with the variables, but without values - we just declare them:

variable "vpc_cidr" {
  type = string   
}

variable "environment" {
  type = string
}

Creating Dev/Prod environments

Go to the environments/dev directory and prepare the files. Let's start with the parameters - terraform.tf and provider.tf.

In terraform.tf, describe the required providers, versions, and the backend.

In the backend, the key sets the path to the state file in the dev/ directory - it will be created during deployment. For Prod, set prod/ (although we could also use different buckets):

terraform {
  required_providers {
    aws = { 
      source = "hashicorp/aws"
      version = ">= 4.6.0"
    }
  }

  required_version = ">= 1.4"

  backend "s3" {
    bucket = "tfvars-envs"
    region = "eu-central-1"
    key = "dev/terraform.tfstate"
  }   
}

In the provider.tf file, specify the AWS provider parameters - the region and an AWS profile from ~/.aws/config:

provider "aws" {
  region = var.region
  profile = "default"
}

We could combine them into a single terraform.tf, but let's keep them separate for the future.
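
And since each environment is a separate root module, provider.tf in prod/ can point at a different AWS account, for example via its own profile from ~/.aws/config or an assumed role. A sketch — the profile name and role ARN below are made up:

provider "aws" {
  region  = var.region
  profile = "prod-account" # a dedicated profile for the Prod AWS account

  # or, instead of a profile, assume a role in the Prod account:
  # assume_role {
  #   role_arn = "arn:aws:iam::222222222222:role/terraform"
  # }
}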

Create a main.tf, where we use our module from the modules directory with a set of variables:

module "vpc" {
  source = "../../modules/vpc"

  vpc_cidr = var.vpc_cidr
  environment = var.environment
}

Add a variables.tf file in which we again only declare the variables; here we add a new variable called region, to be used in terraform.tf and provider.tf:

variable "vpc_cidr" {
  type = string 
}

variable "environment" {
  type = string
}

variable "region" {
  type = string
}

And finally, the values of the variables themselves are set in the terraform.tfvars file:

vpc_cidr = "10.0.0.0/24"
environment = "dev"
region = "eu-central-1"

Do the same for environments/prod/, just with the prod/ directory in the backend config and different values in terraform.tfvars:

vpc_cidr = "10.0.1.0/24"
environment = "prod"
region = "eu-central-1"
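
That is, the only real difference in environments/prod/terraform.tf is the state key (assuming both environments share the same bucket):

  backend "s3" {
    bucket = "tfvars-envs"
    region = "eu-central-1"
    key = "prod/terraform.tfstate"
  }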

Now we have the following structure:

$ tree
.
├── environments
│   ├── dev
│   │   ├── main.tf
│   │   ├── provider.tf
│   │   ├── terraform.tf
│   │   ├── terraform.tfvars
│   │   └── variables.tf
│   └── prod
│       ├── main.tf
│       ├── provider.tf
│       ├── terraform.tf
│       ├── terraform.tfvars
│       └── variables.tf
└── modules
    └── vpc
        ├── main.tf
        └── variables.tf

Let’s check our Dev — run init:

$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/aws v4.67.0...
- Installed hashicorp/aws v4.67.0 (signed by HashiCorp)
Terraform has been successfully initialized!

And plan:

$ terraform plan
...
Terraform will perform the following actions:

  # module.vpc.aws_vpc.env_vpc will be created
  + resource "aws_vpc" "env_vpc" {
      + arn        = (known after apply)
      + cidr_block = "10.0.0.0/24"
      ...
      + tags       = {
          + "environment" = "dev"
        }
      + tags_all   = {
          + "environment" = "dev"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Okay, we can create resources, but what about the tfvars-envs bucket that we specified in the backend? If we try to run apply without it, the deployment will fail because there is no bucket for the backend.

So, how do we prepare an AWS account for using Terraform, i.e. bootstrap it?

Terraform Backend Bootstrap

So, we have a new project and a new account, and we need to store the state files somewhere. We will use an AWS S3 bucket, and later add a DynamoDB table for state locking, but both the bucket and the DynamoDB table must exist before the new project is deployed.
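
The locking part is left for another time, but for the record, it boils down to a DynamoDB table with a LockID hash key plus a dynamodb_table parameter in the backend. A rough sketch (the table name here is illustrative):

resource "aws_dynamodb_table" "tfstate_lock" {
  name         = "tfstate-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# and in the backend configuration:
#   backend "s3" {
#     ...
#     dynamodb_table = "tfstate-lock"
#   }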

So far I see four main solutions:

  • the "clickops" way: create everything manually through the AWS Console
  • a script, or creating the resources manually through the AWS CLI
  • solutions like Terragrunt or Terraform Cloud, but this is over-engineering for such a small project
  • a dedicated Terraform project, let's call it bootstrap, where the necessary resources and a state file will be created, and then we'll import that state into the new backend

Another option I found is to use AWS CloudFormation for this, see How to bootstrap an AWS account with Terraform state backend, but I don't like the idea of mixing several orchestration tools.

Another solution I found by googling is to use a Makefile - see Terrastrap — Bootstrap a S3 and DynamoDB backend for Terraform, an interesting implementation.

By the way, if you have GitLab, it has its own backend for the Terraform state, see GitLab-managed Terraform state; in that case nothing needs to be created (well, except DynamoDB and IAM, but the state question is closed).

Actually, if the task is just to create a bucket, then using the AWS CLI is also possible, but what if both S3 and DynamoDB are planned, as well as a separate IAM user for Terraform with its own IAM Policy? Do everything through the AWS CLI? And repeat this for all new projects manually? No, thanks.

The first solution that came to my mind is to have a single bootstrap project in which we create resources for all other projects, i.e. all the buckets/DynamoDB tables/IAM resources, just by using different tfvars for each environment - something like the Dev/Prod solution described above. That is, in the bootstrap project's repository we could have separate directories with their own terraform.tf, provider.tf and terraform.tfvars files for each new project.

In this case, you can manually create the first bucket for the bootstrap project using the AWS CLI, and then in this project, we’ll describe the creation of DynamoDB, S3 buckets, and IAM resources for other projects.

For the Bootstrap project itself, you can take some existing ACCESS/SECRET keys for authentication, and the other projects will then be able to use an IAM user or an IAM role created in the Bootstrap.
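
For example, the IAM Policy for such a Terraform user could be limited to the state bucket only. A rough sketch just to illustrate the idea - the policy name and the exact actions list are not part of this project:

resource "aws_iam_policy" "terraform_state" {
  name = "terraform-state-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::tfvars-envs"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::tfvars-envs/*"
      }
    ]
  })
}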

Seems like a working idea, but another option is to use the Bootstrap directory/repository as a module in each project and build the resources before starting the project.

That is:

  • the bootstrap module — we store it in a repository for access from other projects
  • then, when creating a new project — include this module in the code and use it to create an S3 bucket, IAM, and DynamoDB resources
  • after creation — import the state file produced by the bootstrap into the new bucket
  • and then we can start working with the environment

Let’s try it — in my opinion, this solution looks good.

Delete the bucket that was created at the beginning; it should be empty, because we did not create anything with terraform apply:

$ aws s3 rb s3://tfvars-envs
remove_bucket: tfvars-envs

Creating a Bootstrap module

Create a repository, and in it add an s3.tf file with the aws_s3_bucket resource - for now, without IAM/DynamoDB, just as an example to check the idea in general:

resource "aws_s3_bucket" "project_tfstates_bucket" {
  bucket = var.tfstates_s3_bucket_name

  tags = {
    environment = "ops"
  }
}

resource "aws_s3_bucket_versioning" "project_tfstates_bucket_versioning" {
  bucket = aws_s3_bucket.project_tfstates_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}
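
For a real state bucket, it is also common to block public access and enable server-side encryption. These are not required for this demo, but a sketch could look like this:

resource "aws_s3_bucket_public_access_block" "project_tfstates_bucket_public_access" {
  bucket = aws_s3_bucket.project_tfstates_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "project_tfstates_bucket_sse" {
  bucket = aws_s3_bucket.project_tfstates_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}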

Add a variables.tf with a variable declaration to set the name of the bucket:

variable "tfstates_s3_bucket_name" {
  type = string
}

Now go back to our project, and in its root create a main.tf file in which we use the Bootstrap module from GitHub:

module "bootstrap" {
  source = "git@github.com:setevoy2/terraform-bootsrap.git"

  tfstates_s3_bucket_name = var.tfstates_s3_bucket_name
}

By the way, in the source we can specify a branch or a version (tag), for example:

source = "git@github.com:setevoy2/terraform-bootsrap.git?ref=main"

Next, add the variables.tf file:

variable "tfstates_s3_bucket_name" {
  type = string 
}

variable "region" {
  type = string
}

And provider.tf:

provider "aws" {
  region = var.region
  profile = "default"
}

And terraform.tf:

terraform {
  required_providers {
    aws = { 
      source = "hashicorp/aws"
      version = ">= 4.6.0"
    }
  }

  required_version = ">= 1.4"

# backend "s3" {
# bucket = "tfvars-envs"
# region = "eu-central-1"
# key = "bootstrap/terraform.tfstate"
# }   
}

Here, the backend block is commented out - we will return to it after we create the bucket; for now, the state file will be generated locally. The key specifies the bootstrap/terraform.tfstate path - that is where our state will be imported later.

Add the file terraform.tfvars:

tfstates_s3_bucket_name = "tfvars-envs"
region = "eu-central-1"

Now the structure is as follows:

$ tree
.
├── environments
│   ├── dev
│   │   ├── main.tf
│   │   ├── provider.tf
│   │   ├── terraform.tf
│   │   ├── terraform.tfvars
│   │   └── variables.tf
│   └── prod
│       ├── main.tf
│       ├── provider.tf
│       ├── terraform.tf
│       ├── terraform.tfvars
│       └── variables.tf
├── main.tf
├── modules
│   └── vpc
│       ├── main.tf
│       └── variables.tf
├── provider.tf
├── terraform.tf
├── terraform.tfvars
└── variables.tf

That is, in the main.tf at the root of the project we only perform the bootstrap to create the bucket, and then from environments/{dev,prod} we create the infrastructure resources.

Creating a Bootstrap S3-bucket

From the root run terraform init:

$ terraform init
Initializing the backend...
Initializing modules...
Downloading git::ssh://git@github.com/setevoy2/terraform-bootsrap.git for bootstrap...
- bootstrap in .terraform/modules/bootstrap
Initializing provider plugins...
- Finding hashicorp/aws versions matching ">= 4.6.0"...
- Installing hashicorp/aws v4.67.0...
- Installed hashicorp/aws v4.67.0 (signed by HashiCorp)
...

Check if the configs are correct with terraform plan, and if everything is fine, then start creating the bucket:

$ terraform apply
...
  # module.bootstrap.aws_s3_bucket.project_tfstates_bucket will be created
  + resource "aws_s3_bucket" "project_tfstates_bucket" {
...
module.bootstrap.aws_s3_bucket_versioning.project_tfstates_bucket_versioning: Creation complete after 2s [id=tfvars-envs]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The next step is to import the local state file, which for now lives in the project root:

$ head -5 terraform.tfstate
{
  "version": 4,
  "terraform_version": "1.4.6",
  "serial": 4,
  "lineage": "d34da6b7-08f4-6444-1941-2336f5988447",

Uncomment the backend block in the terraform.tf file of the root module:

terraform {
  required_providers {
    aws = { 
      source = "hashicorp/aws"
      version = ">= 4.6.0"
    }
  }

  required_version = ">= 1.4"

  backend "s3" {
    bucket = "tfvars-envs"
    region = "eu-central-1"
    key = "bootstrap/terraform.tfstate"
  }   
}

And run terraform init again - now it will detect that the local backend has been replaced with an s3 backend and will offer to migrate the terraform.tfstate there - reply yes:

$ terraform init

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing modules...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.67.0
Terraform has been successfully initialized!

Now we have a configured backend that we can use for the project.
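
By the way, if this needs to be done non-interactively (for example, in a CI job), terraform init has flags for it: -migrate-state tells Terraform to migrate the existing state to the newly configured backend, and -force-copy answers the copy prompt with "yes" automatically:

$ terraform init -migrate-state -force-copy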

Let’s go back to the environments/dev/, check again with the plan, and finally, create our Dev environment:

$ terraform apply
...
module.vpc.aws_vpc.env_vpc: Creating...
module.vpc.aws_vpc.env_vpc: Creation complete after 2s [id=vpc-0e9bb9408db6a2968]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Check the bucket:

$ aws s3 ls s3://tfvars-envs
PRE bootstrap/
PRE dev/

And the state file in it:

$ aws s3 ls s3://tfvars-envs/dev/
2023-05-14 13:37:27       1859 terraform.tfstate

Everything is there.

So, the process of creating a new project will be the following:

  1. at the root of the project, create a main.tf in which we describe the use of the Bootstrap module with source = "git@github.com:setevoy2/terraform-bootsrap.git"
  2. in the terraform.tf file, describe the backend, but keep it commented out
  3. create the bucket from the bootstrap module
  4. uncomment the backend, and import the local state file with terraform init

After that, the project is ready to create Dev/Prod/etc environments with a backend for the state files in the new bucket.
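
Roughly, the same flow in commands (steps 2 and 4 from the list above are manual edits of terraform.tf):

# 1. bootstrap: create the state bucket, the state file is local for now
$ terraform init
$ terraform apply

# 2. uncomment the backend "s3" block in terraform.tf, then migrate the local state
$ terraform init    # answer "yes" to copy the existing state into the bucket

# 3. create the environments
$ cd environments/dev
$ terraform init
$ terraform apply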

Useful links


Did you like the article? Buy me a coffee!


Originally published at RTFM: Linux, DevOps, and system administration.

