As promised in my last article, Terraform AWS - Dynamic Subnets, today you're going to learn how to manage Workspaces in Terraform, which are simply used for segregating your development environments (dev, qa, stage, prod) while sharing the same infrastructure code between them. We will also take advantage of the free Terraform Cloud service to store the state file (tfstate) remotely.
Objectives
- Share the same infrastructure as code (IaC) in multiple environments (Workspaces)
- Store the tfstate file remotely to allow colleagues to manage the infrastructure you're working on
Knowledge and assumptions
- You read my Terraform AWS - Dynamic Subnets tutorial which covers most of the Terraform functions that I'll use in this tutorial
- You know how to use AWS named profiles
- Your infrastructure's repository is in one of the following: GitHub, GitLab, or Bitbucket. The repository I'll be using is on GitHub
- I'll create a single S3 bucket resource. We will be able to create this resource in each environment just by switching Workspaces; we'll get to that
Setup
Create Workspaces
- Create a free account in Terraform Cloud
- Create a new Organization; I've created tutorials organization
- Create a new Workspace and connect it to the VCS provider. Choose Only select repositories and pick the relevant repository; in my case, it's tf-tutorial-workspaces
- Click the repository's name to continue
- Workspace name should be the name of the environment you're working on, in my case tf-tutorial-workspaces-development
- (Optional) Configure Workspace Advanced Options - insert VCS branch if relevant. I'll be using the master branch
- Click Create workspace!
- Repeat steps 3 to 7, in my case: tf-tutorial-workspaces-qa, tf-tutorial-workspaces-staging, tf-tutorial-workspaces-production
- For each Workspace that you've created, click Workspaces --> click Workspace name --> Settings --> General --> Execution Mode: Local
Important: The default Execution Mode for each Workspace is Remote, which is the best practice for executing a Terraform plan. Unfortunately, there's a bug, and Remote Execution Mode fails to work with the terraform.workspace variable. Make sure you set Execution Mode to Local until this bug is fixed; you can track it here, issue 22802.
Your Workspaces page should now list all four Workspaces.
Now it's time to generate a token that will allow us to use the Workspaces we've created.
Terraform Cloud Tokens
There are two types of tokens that we're going to use. The first is the Team Token, which we'll use in our automation processes and CI/CD pipeline.
The second is the User Token, which we'll use for planning/applying infrastructure manually, so it's usually good for planning and testing.
In this tutorial, we'll create both tokens.
Note: In case you're concerned about two-factor authentication, it is bypassed when you use tokens as an authentication method, so keep that in mind.
Team Token
- Create the file .terraformrc-team in your project's directory, with Bash it's:
touch ./.terraformrc-team
- Generate a token in Terraform Cloud; click on Settings --> Teams --> Create an authentication token
- Edit the file ./.terraformrc-team, with Bash it's:
vim ./.terraformrc-team
then hit i for INSERT mode. The file should look like this:
credentials "app.terraform.io" {
token = "Your.generated.team.token.xctOeIoLtmjydg.jCCpJ6GKjCCpJe"
}
Vim: To save the file in vim, hit ESC, type :x and hit ENTER. To make sure the file was saved correctly, execute cat ./.terraformrc-team; the output should look like the example above. Here's an excellent vim-notebook. And if you're asking yourself why to use vim, read Vim: from foe to friend in 9 minutes.
User Token
- Create the file .terraformrc-user in your project's directory, with Bash it's:
touch ./.terraformrc-user
- Generate a token in Terraform Cloud; click on your profile picture --> User Settings --> Tokens --> Generate token
- Edit the file ./.terraformrc-user, with Bash it's:
vim ./.terraformrc-user
then hit i for INSERT mode. The file should look like this:
credentials "app.terraform.io" {
token = "Your.generated.user.token.xctOeIoLtmjydg.jCCpJ6GKjCCpJe"
}
Setting up AWS named profiles
To be able to apply the changes, we'll create a named profile for each environment (Workspace).
Note: Terraform's developers decided to use the word Workspace instead of Environment due to the overuse of the latter, see here. Right choice (not kidding).
The credentials and config file should look like this:
~/.aws/credentials
[tf-tutorial-workspaces-development]
aws_access_key_id = ACCESS_KEY_FOR_DEVELOPMENT
aws_secret_access_key = ZKBZ6_rsRFJx+bU2#=jY]w%u_e!Xrau?9fc!}:}c
[tf-tutorial-workspaces-qa]
aws_access_key_id = ACCESS_KEY_FOR_QA
aws_secret_access_key = K9N?Bjb:w>Uyw9w.k^,Ap2BK-7CbsZ^fY*J3t}vp
[tf-tutorial-workspaces-staging]
aws_access_key_id = ACCESS_KEY_FOR_STAGING
aws_secret_access_key = VpKHs2*Urp3BhE3j~MVC9@W&TpR.aQu?s.n.PrBP
[tf-tutorial-workspaces-production]
aws_access_key_id = ACCESS_KEY_FOR_PRODUCTION
aws_secret_access_key = Kk~Xo&Z>3QKi-M%Vq6]LRLNAwy>7R-q4=C2rGJ8x
~/.aws/config
[profile tf-tutorial-workspaces-development]
output = json
[profile tf-tutorial-workspaces-qa]
output = json
[profile tf-tutorial-workspaces-staging]
output = json
[profile tf-tutorial-workspaces-production]
output = json
Configuring the backend
Now we need to configure our infrastructure to use Terraform's Remote backend.
Project structure:
.
├── LICENSE
├── README.md
├── main.tf
└── variables.tf
main.tf
First, let's set up main.tf:
main.tf
provider "aws" {
version = "~> 2.28"
profile = lookup(local.profile, local.environment)
region = lookup(local.region, local.environment)
}
terraform {
required_version = "~> 0.12"
backend "remote" {
hostname = "app.terraform.io"
organization = "tutorials"
workspaces { prefix = "tf-tutorial-workspaces-" }
}
}
Some explanations for the code above:
- profile - Will be selected according to the environment (Workspace)
- region - Will be selected according to the environment (Workspace)
- backend "remote" {} - Configures our backend
- hostname - Tells Terraform to host the tfstate remotely at app.terraform.io (Terraform Cloud)
- organization - The organization that we've created in Terraform Cloud
- workspaces - Since we're using multiple Workspaces (environments), we need to use the keyword prefix. Take the prefix of your workspaces and add a dash (-) to the end of it, just like in the code above
Important: The backend's configuration currently does not support using variables/local values. We have to hardcode our prefix; otherwise, I would've used "${local.profile_prefix}-"
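If you want to double-check that each Workspace really resolves to the expected AWS account and region, here's a minimal, optional sanity check you could drop next to main.tf (my own addition, not part of the original repository):
# Optional sanity check (not part of the original tutorial): prints which AWS
# account and region the currently selected Workspace resolves to.
data "aws_caller_identity" "current" {}

data "aws_region" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}

output "region_in_use" {
  value = data.aws_region.current.name
}
After terraform apply (or terraform refresh), these outputs make it obvious whether the profile/region lookups picked the right values for the selected Workspace.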
variables.tf
Moving on to the variables.tf file; this is where the magic happens. Before I share 30 lines of code with you, let's break it down for our needs and how we answer those needs.
- app_name - We need to give a name to our application; this will serve as a prefix for all of our resources
- profile_prefix - For convenience, will be used in the local value profile
- profile - We need a map of profile per environment (Workspace), used in main.tf
- region - We need a map of region per environment (Workspace), also used in main.tf
- environment - We need to initialize this variable with the name of the Workspace we're currently using; luckily, terraform.workspace does exactly that (with the prefix backend and Local execution it holds the short name, e.g. development, not the prefixed one)
- common_tags - For convenience, we will use it in all resources; it helps us mark the resources that are managed by Terraform
- name_prefix - For convenience, we will use it in all resources
variables.tf
locals {
  app_name       = "workspaces-app"
  profile_prefix = "tf-tutorial-workspaces"
}

locals {
  profile = {
    "development" = "${local.profile_prefix}-development"
    "qa"          = "${local.profile_prefix}-qa"
    "staging"     = "${local.profile_prefix}-staging"
    "production"  = "${local.profile_prefix}-production"
  }
  region = {
    "development" = "us-west-2"
    "qa"          = "us-east-2"
    "staging"     = "us-east-1"
    "production"  = "ca-central-1"
  }
}

locals {
  environment = terraform.workspace
}

locals {
  common_tags = {
    Terraform   = "true"
    Environment = local.environment
  }
  name_prefix = "${local.app_name}-${local.environment}"
}
Note: I split the local values into four groups to make it more readable and organized.
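If you want to see these locals resolve for a given Workspace, you could temporarily add a couple of outputs; a quick sketch (my own addition, purely for verification, not part of the original repository):
# Temporary verification outputs (my own addition): show what the locals
# resolve to for the currently selected Workspace.
output "environment" {
  value = local.environment
}

output "name_prefix" {
  value = local.name_prefix
}
In the development Workspace, for example, name_prefix should resolve to workspaces-app-development.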
Everything is ready! Now we need to initialize Terraform to make it work with Workspaces and a remote backend that stores tfstate.
Select relevant terraformrc configuration file
We've created the files .terraformrc-team and .terraformrc-user. Since we're doing manual work and not executing a CI/CD pipeline, we'll use .terraformrc-user.
The environment variable TF_CLI_CONFIG_FILE defines the location of the configuration file that will be used for the current run.
Assuming your current working directory is your project's directory:
export TF_CLI_CONFIG_FILE="${PWD}/.terraformrc-user"
You can set this environment variable automatically on system startup, follow these instructions: Windows Git Bash, Ubuntu, MacOS.
Make sure you replace ${PWD} with the absolute path to the directory that contains .terraformrc-user.
Initialize Terraform
Make sure you're in the repository's working directory; in my case it's ./tf-tutorial-workspaces.
Execute:
terraform init
When prompted to select a Workspace, enter a number, for example 1, and hit ENTER; which Workspace you pick doesn't matter at this point.
Expected output:
Initializing the backend...
The currently selected workspace (default) does not exist.
This is expected behavior when the selected workspace did not have an
existing non-empty state. Please enter a number to select a workspace:
1. development
2. production
3. qa
4. staging
Enter a value: 1
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.29.0...
Terraform has been successfully initialized!
# omitted the rest of the text for brevity
Troubleshooting - terraform init
"local.environment is default"
This error happens when Execution Mode is set to Remote in the Workspace's settings. Remote Execution Mode doesn't work with the terraform.workspace variable, so make sure you set it to Local in the Workspace's settings, at least until issue 22802 is fixed.
Share tfstate and Workspaces with colleagues
Everything we did so far was the initial setup. Now tell your colleagues to:
- Create an account in Terraform Cloud - then you need to add them to your organization: click the organization name --> Settings --> Teams --> owners --> Add a New Member
- If you can't see owners, that's ok; click Add Users and it will add the user to the owners team. The owners team is the default team
- Configure Terraform credentials with a user token in .terraformrc-user - just like you did earlier. Don't forget to set TF_CLI_CONFIG_FILE
- Create AWS named profiles (config, credentials), just like you did earlier
- Execute terraform init
Your colleagues will now be using the same tfstate file you're using, and they can access the Workspaces that you've already created.
Note: Instead of adding users to the owners team, you should create a team per department/actual team, for example, developers-team, operations-team, etc.
Working with Workspaces
Select Workspace (environment)
You'll be amazed by how simple it is to select the environment; here are the available commands:
- terraform workspace list - List available Workspaces
- terraform workspace select workspace_name - Select the relevant Workspace
- terraform workspace show - Show the currently selected Workspace
- terraform apply - Apply changes to the infrastructure
- terraform destroy - Destroy the infrastructure
Usage example
Adding resources to your git repository
Let's add an S3 bucket:
s3.tf
resource "aws_s3_bucket" "bucket" {
count = "${lookup(local.create_s3, local.environment)}"
bucket = "${local.name_prefix}-s3"
acl = "private"
region = "${lookup(local.region, local.environment)}"
versioning {
enabled = true
}
tags = local.common_tags
}
s3.variables.tf - A good example of how we can control the creation of resources per environment.
locals {
  create_s3 = {
    "development" = 0
    "qa"          = 1
    "staging"     = 1
    "production"  = 1
  }
}
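One thing to keep in mind with this pattern: since the bucket is created with count, it won't exist in every Workspace, so any reference to it elsewhere has to tolerate an empty resource list. A minimal sketch (my own addition, not part of the original repository):
# Hypothetical output (my own addition): reference the count-based bucket
# safely, since it isn't created in the development Workspace.
output "bucket_name" {
  value = join("", aws_s3_bucket.bucket[*].id)
}
In development the output is simply empty, while in qa it resolves to workspaces-app-qa-s3.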
The current project structure:
.
├── LICENSE
├── README.md
├── main.tf
├── s3.tf
├── s3.variables.tf
└── variables.tf
This project is available on GitHub: unfor19/tf-tutorial-workspaces - Learn how to use Terraform Cloud and Workspaces.
Example
$ terraform workspace list
* development
production
qa
staging
$ terraform workspace select qa
$ terraform apply
Acquiring state lock. This may take a few moments...
# omitted text for brevity
# aws_s3_bucket.bucket[0] will be created
+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
+ acl = "private"
+ arn = (known after apply)
+ bucket = "workspaces-app-qa-s3"
+ bucket_domain_name = (known after apply)
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = false
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ region = "us-east-2"
+ request_payer = (known after apply)
+ tags = {
+ "Environment" = "qa"
+ "Terraform" = "true"
}
# omitted arguments for brevity
}
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions in workspace "qa"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Look at the S3 bucket name and region! Everything indicates that we're working on the qa Workspace, woohoo!
Troubleshooting - terraform plan
Error: error starting operation: The configured "remote" backend encountered an unexpected error:
invalid value for workspace
You executed terraform plan or terraform apply without selecting a Workspace. Make sure you select a Workspace first.
Final words
The Terraform Cloud service is still new, but it's fantastic. If you have any questions or comments, fire at will!
My next article will be about how to use 3rd-party binaries, such as aws-vault and Terraform, in Windows Git Bash.
Did you like this tutorial? Clap/heart/unicorn and share it with your friends and colleagues.
Originally published at https://prodops.io on October 6, 2019.