At the heart of any Cloud Platform is Infrastructure, and how you manage infrastructure in large-scale systems directly affects how effective your development teams are at deploying, securing, and auditing it.
In this article, I will go through the simple steps of setting up Terraform and connecting its state to an Azure Blob Storage Account.
A huge disclaimer here: I am possibly suffering from a severe case of imposter syndrome, treading ground that so many developers have walked before me. These are my notes on how to get started, and I hope that you find them useful.
Prerequisites
We are of course using some command line magic to get us started, so before you go any further, make sure you have the following command line tools installed on your dev box:
- Azure CLI
- Terraform
- An Azure Subscription with Contributor rights, and access to Azure DevOps
The Challenge
Infrastructure as Code is an addiction. You discover that all the complicated stuff that you used to set up manually, whether on Amazon, Azure, or even Cloudflare, can be coded and managed via Infrastructure as Code. There is a huge ecosystem of Terraform providers, and you discover that it supports everything that you are trying to build. That is freaking awesome!
However, you need to start somewhere, and you cannot use Terraform without a state file, stored either locally or somewhere else. That means there is a small amount of infrastructure that you still have to create manually in order to get started.
Since this is an Azure tutorial, it will cover setting up the foundational stuff that you need in order to execute Terraform from a CI/CD pipeline:
- An Azure Subscription that will contain all the configured resources that we need to get started
- A Service Principal with Contributor rights to the Subscription
- A manually created Storage Account to store the Terraform state
- An Azure DevOps variable group to store the service principal credentials and the Storage Account access key
Creating a Service Principal
In order to execute our Terraform, you need to be able to access your Azure Subscription with Contributor access rights.
If you want to work with Terraform locally, it is enough to log in via the Azure CLI
az login
And switch to the Subscription that you are currently working with
az account set --subscription <SubscriptionId>
However, we would like our Terraform executed via an automated build pipeline, and for that we need to create a Service Principal. Long story short, a service principal is an identity that has been assigned a role at a particular scope (here, the subscription), and that identity can be used to interact with Azure.
Spin up your favorite command line and let's create the service principal
az ad sp create-for-rbac --display-name="root-terraform-sp" --role="Contributor" --scopes="/subscriptions/42e5d700***"
{
"appId": "519ef410****",
"displayName": "root-terraform-sp",
"password": "JKX8Q~*****",
"tenant": "e4794ea3-****"
}
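Once you have this output, Terraform can authenticate as the service principal via environment variables that the azurerm provider reads. A minimal sketch, with placeholders standing in for the (truncated) values above:

```shell
# Map the service principal output to the environment variables the
# azurerm provider picks up automatically (placeholders -- use your own values)
export ARM_CLIENT_ID="<appId>"
export ARM_CLIENT_SECRET="<password>"
export ARM_TENANT_ID="<tenant>"
export ARM_SUBSCRIPTION_ID="<SubscriptionId>"
```

With these set, `terraform plan` and `terraform apply` no longer depend on your interactive `az login` session, which is exactly what a build agent needs.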
Setting up the Storage Account for the Terraform Backend
By default, Terraform stores your state locally on your development box, which isn't ideal if you want to manage your infrastructure via the build pipeline (because the pipeline cannot access your local store, DUH!). Additionally, you will never be able to recover the state files should your laptop go missing. So let's put the state somewhere in the Cloud.
Setting up an Azure Storage Account to manage our state puts us in a chicken-and-egg problem. We want to use Terraform to manage our infrastructure, but we need some infrastructure to store our state. My recommendation is to just create the resource group and the storage account manually, and not think too much about it.
This snippet is shamelessly copied from the documentation linked below.
#!/bin/bash
RESOURCE_GROUP_NAME=tfstate
STORAGE_ACCOUNT_NAME=tfstate$RANDOM
CONTAINER_NAME=tfstate
# Create resource group
az group create --name $RESOURCE_GROUP_NAME --location eastus
# Create storage account
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob
# Create blob container
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME
Once that is done, you can get the storage account access key and store it somewhere secure
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)
echo $ACCOUNT_KEY
Setting up a basic Terraform project
The initial structure for a Terraform project is simple.
.
├── backend
│ ├── backend.ci.tfvars
│ └── backend.local.tfvars
├── environments
│ ├── dev
│ │ └── variables.tfvars
│ └── live
│ └── variables.tfvars
├── main.tf
├── providers.tf
└── variables.tf
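If you want to scaffold this layout quickly, a few commands will do it (file names taken straight from the tree above):

```shell
# Create the directory skeleton and empty files shown in the tree above
mkdir -p backend environments/dev environments/live
touch backend/backend.ci.tfvars backend/backend.local.tfvars
touch environments/dev/variables.tfvars environments/live/variables.tfvars
touch main.tf providers.tf variables.tf
```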
On the top level, we have the expected main.tf and providers.tf for configuring our Terraform modules, and secondly, we have a list of environments that we will be targeting. Each environment has its own set of variables, and the backend folder holds the backend configuration for local runs (backend.local.tfvars) and CI runs (backend.ci.tfvars).
Assuming that you would like to use Azure as a provider, I've configured providers.tf to look like this
terraform {
required_providers {
azurerm = {
source = "hashicorp/azurerm"
version = "~> 3.69.0"
}
}
required_version = ">= 1.5.5"
}
provider "azurerm" {
features {}
}
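One thing worth noting: the tfvars-based backend configuration used in the next section only works if an (empty) azurerm backend block is declared somewhere in the Terraform configuration, for example next to the terraform block above. A minimal sketch:

```hcl
terraform {
  # The actual settings (resource group, storage account, access key, ...)
  # are supplied at init time via -backend-config=<file>
  backend "azurerm" {}
}
```

You then initialize locally with `terraform init -backend-config=backend/backend.local.tfvars`.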
Backend File
From here it's time to set up Terraform's backend.local.tfvars file with the following. The idea behind this file is to have a local copy that you can work with, so make sure that it does not leave your machine, and preferably is not part of your git repository!
resource_group_name = "tfstate"
storage_account_name = "<storage_account_name>"
container_name = "tfstate"
key = "terraform.tfstate"
access_key = "<your account key>"
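To keep that file from being committed by accident, you can create it with restricted permissions and add it to .gitignore straight away. A sketch with placeholder values:

```shell
# Create backend/backend.local.tfvars with placeholder values (fill in your
# own), lock down its permissions, and make sure git ignores it
mkdir -p backend
cat > backend/backend.local.tfvars <<'EOF'
resource_group_name  = "tfstate"
storage_account_name = "<storage_account_name>"
container_name       = "tfstate"
key                  = "terraform.tfstate"
access_key           = "<your account key>"
EOF
chmod 600 backend/backend.local.tfvars
echo "backend/backend.local.tfvars" >> .gitignore
```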
Next, we are going to create a backend.ci.tfvars file that contains replaceable tokens to be substituted from CI/CD
resource_group_name = "tfstate"
storage_account_name = "<storage_account_name>"
container_name = "tfstate"
key = "terraform.tfstate"
access_key = "{tf_backend_storage_account_key}"
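Before `terraform init` runs in CI, the {tf_backend_storage_account_key} token has to be replaced with the real key from the variable group. A self-contained sketch of that substitution step (the token and variable names are the ones assumed above; the tfvars file is created inline so the example runs anywhere, whereas in the pipeline it comes from the repository):

```shell
# Stand-in for backend/backend.ci.tfvars as it lives in the repository
mkdir -p backend
cat > backend/backend.ci.tfvars <<'EOF'
access_key = "{tf_backend_storage_account_key}"
EOF

# In the pipeline, this value comes from the variable group
TF_BACKEND_STORAGE_ACCOUNT_KEY="example-key"

# Substitute the token and write a rendered file for terraform init to use
sed "s/{tf_backend_storage_account_key}/${TF_BACKEND_STORAGE_ACCOUNT_KEY}/" \
  backend/backend.ci.tfvars > backend/backend.rendered.tfvars

# terraform init -backend-config=backend/backend.rendered.tfvars
cat backend/backend.rendered.tfvars
```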
Creating an Azure Pipelines Variable group
In order to access these values from Azure Pipelines, we need to create a new Azure Pipelines variable group where we can store our service principal credentials and our storage account access key.
Spin up your favorite Azure DevOps project, go to Pipelines, then Library, and click the plus sign.
From there, fill in the information for our service principal and our storage account access key.
Deploying your Terraform using Azure Pipelines
The whole point of creating all this infrastructure as code is that our infrastructure is audited and not changed by humans directly, so of course we would like our infrastructure changes to be applied automatically whenever we push them to our repository.
More on that in an upcoming article.
Conclusion
So, for the challenges: we set up an initial Terraform repository and configured a backend that uses an Azure Storage Account as its state store. Additionally, we configured a Service Principal that we can use to execute Terraform from Azure Pipelines, and set up an Azure Pipelines variable group to store the credentials.
I hope that this was enough to get you started!
References
The information in this article is based on documentation that is already out there:
- Sample Repo: https://github.com/mclausen/Cloud.Platform.Foundation
- Setting up Azure Storage Account for Terraform: https://learn.microsoft.com/en-us/azure/developer/terraform/store-state-in-azure-storage?tabs=azure-cli
- Creating Azure Directory Service Principles: https://learn.microsoft.com/en-us/cli/azure/ad/sp?view=azure-cli-latest#az-ad-sp-create-for-rbac
- Setting up Terraform: https://developer.hashicorp.com/terraform/tutorials/azure-get-started/infrastructure-as-code