
Linked templates in ARM templates

Olivier Miossec ・ 8 min read

In a previous blog post, I talked about the need to deploy a whole resource group from a template. Using only one template for all resources can seem simple, but most of the time your resource groups are complex and include a large set of resources. A single file is then impractical: it is too big to be easily modified, and reusing it across different environments is difficult.
For example, imagine a resource group composed of n VMs in a scale set (depending on the environment), x other VMs, an Azure SQL database, an Azure Cache for Redis, a service bus, and several storage accounts. The list grows with the developers' needs.
To solve this challenge and avoid creating a 15,000-line template, there is a technique: template linking. In this post, I will show you how it works and how you can use it. You will learn how to modularize your templates and make them easy to review.
The first thing you need when using template linking is the main template. This template will run and orchestrate all deployments defined in child template files.
By convention, the main template is named azuredeploy.json.
Child templates are referenced in the main template by using the deployment resource (Microsoft.Resources/deployments). This resource lets you perform a nested deployment in the same or a different scope; it creates a new deployment. The code for these deployments can be embedded within the main template (we then talk about nested templates) or kept in separate files, and this is what we call linked templates.
Child templates need to be accessible online. These templates are not read from the local machine when you perform a New-AzResourceGroupDeployment. Instead, they are downloaded and executed for you by Azure Resource Manager. In template linking mode, each deployment resource must contain a templateLink object whose uri property specifies the globally accessible URI of the child template.
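For example, once the child templates are online, only the main template is still read from your machine; a sketch of the deployment command (the resource group name, file path, and parameter values are illustrative; artifactLocation and artifactSASKey are assumed parameters of the main template):

```powershell
# Only azuredeploy.json is read locally; the linked child templates
# are downloaded by Azure Resource Manager from the URIs they declare
New-AzResourceGroupDeployment -ResourceGroupName "01-test-arm" `
    -TemplateFile ".\azuredeploy.json" `
    -artifactLocation "https://demoomclinked001.blob.core.windows.net/template/" `
    -artifactSASKey (Read-Host -AsSecureString "SAS token")
```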

How does it work?

Azure Resource Manager downloads the template and creates a new deployment in the target resource group with the downloaded template and an optional parameter file (you can see the deployment log in the Deployments section of your resource group).

Template location

The location can be a simple website, a GitHub repository, or any web service with public access. You can use a security token if you need one, as long as the token is present in the URI.
In most examples you will see GitHub used, with a URL like this:
https://raw.githubusercontent.com/omiossec/arm-ttk-demo/master/armfiles/azuredeploy.json

       {
            "apiVersion": "2020-06-01",
            "name": "demogithub",
            "type": "Microsoft.Resources/deployments",
            "properties": {
                "mode": "Incremental",
                "templateLink": {
                    "uri": "https://raw.githubusercontent.com/omiossec/arm-ttk-demo/master/armfiles/child.json",
                    "contentVersion": "1.0.0.0"
                }
            }
        }

But most enterprises are reluctant to put their code, even if it's only infrastructure code, on a public website without any security mechanism; they will prefer a more secure solution with a token.
One solution is to use a blob container in an Azure storage account. By default, blob containers are not public, and you need a shared access signature (SAS) token or a connection to Azure to access them.
Azure Resource Manager needs to access the URI, so with a storage account without public access you will need to provide the associated SAS token.

        {
            "apiVersion": "2020-06-01",
            "name": "demoAzureStorage",
            "type": "Microsoft.Resources/deployments",
            "properties": {
                "mode": "Incremental",
                "templateLink": {
                    "uri": "[concat(parameters('artifactLocation'),'azuredeploy.json',parameters('artifactSASKey'))]",
                    "contentVersion": "1.0.0.0"
                }
            }
        }

The URI in the templateLink property is the concatenation of the blob container URI (ex. https://demoomclinked001.blob.core.windows.net/template/), your template file name, and the shared access signature (SAS) token.

To create the container and the SAS token with the Azure PowerShell module:

$storageAccount = Get-AzStorageAccount -ResourceGroupName "01-test-arm" -StorageAccountName "demoomclinked001"

New-AzStorageContainer -Name "template" -Context $storageAccount.Context -Permission Off

New-AzStorageContainerSASToken -Container "template" -Permission r -ExpiryTime (Get-Date).AddMonths(6) -Context $storageAccount.Context

The token created by the command is valid for six months ((Get-Date).AddMonths(6)). The only permission required is read (-Permission r), as Azure Resource Manager only needs to download the templates.

The SAS token value can be a string parameter, or a secure string if you want to store it in an Azure Key Vault (a best practice, as it prevents the token from appearing in deployment logs).

        "artifactSASKey": {
            "type": "securestring",
            "metadata": {
                "description": "Blob container SAS key for linked template deployment"
            }
        }
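The artifactLocation parameter used in the URI concatenation can be declared next to it; a sketch (the description text is illustrative):

```json
"artifactLocation": {
    "type": "string",
    "metadata": {
        "description": "Base URI of the blob container hosting the linked templates, with a trailing slash"
    }
}
```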

Content version

In the previous example, you probably noticed the contentVersion property in the templateLink object. This property is not required, but it ensures that the linked template's contentVersion matches the value in the main template. If the versions don't match, the deployment fails with an error.
If you want to use it in your deployments, it is better to put the template file name and the contentVersion in parameters:

        {
            "apiVersion": "2020-06-01",
            "name": "demoAzureStorageVersion",
            "type": "Microsoft.Resources/deployments",
            "properties": {
                "mode": "Incremental",
                "templateLink": {
                    "uri": "[concat(parameters('artifactLocation'),parameters('templateFile'),parameters('artifactSASKey'))]",
                    "contentVersion": "[parameters('templateVersion')]"
                }
            }
        }

Parameters

Templates mostly work with parameters; there are only a few cases where you do not need at least one. With the templateLink object, you have two options to pass parameters to a deployment resource. You can use the inline form, where parameters are defined in the main template within the deployment resource, or the external form, where parameters live in an external file, like the child template itself.

To use the inline form, you must add a "parameters" object in the deployment resource's properties object. It contains a collection of key/value pairs corresponding to the child template's parameters.

        {
            "apiVersion": "2020-06-01",
            "name": "demoAzureStorageVersion",
            "type": "Microsoft.Resources/deployments",
            "properties": {
                "mode": "Incremental",
                "templateLink": {
                    "uri": "[concat(parameters('artifactLocation'),parameters('templateFile'),parameters('artifactSASKey'))]",
                    "contentVersion": "[parameters('templateVersion')]"
                },
                "parameters": {
                    "serverSize": {
                        "value": "[parameters('vmSKU')]"
                    },
                    "adminUserName": {
                        "value": "[parameters('localAdminUserName')]"
                    },
                    "localAdminPassword": {
                        "value": "[parameters('localAdminPassword')]"
                    },
                    "location": {
                        "value": "[resourceGroup().location]"
                    }
                }
            }
        }

Using an external parameters file is similar to using the templateLink object. You need to put the parameters file in an online location, globally accessible with or without an access token. But as parameters may contain more sensitive information than the template itself, you may prefer to host parameter files privately in a storage account.
You need to use the parametersLink object to reference an external file.

       {
            "apiVersion": "2020-06-01",
            "name": "demoAzureStorageVersion",
            "type": "Microsoft.Resources/deployments",
            "properties": {
                "mode": "Incremental",
                "templateLink": {
                    "uri": "[concat(parameters('artifactLocation'),parameters('templateFile'),parameters('artifactSASKey'))]",
                    "contentVersion": "[parameters('templateVersion')]"
                },
               "parametersLink": {
                    "uri": "[concat(parameters('artifactLocation'),parameters('parametersFile'),parameters('artifactSASKey'))]",
                    "contentVersion": "[parameters('parametersFileVersion')]"
               }
            }
        }

As you see, you can also use a contentVersion property to ensure the parameter file version matches.

I prefer to use the inline form. It offers more flexibility, as you can change parameters easily using ARM template functions. The main problem is the number of parameters, which can make the template hard to digest. For that, I prefer to use a single JSON object instead of a multitude of scalar parameters.
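As a sketch of that approach (the vmSettings name and its properties are illustrative, not taken from the original templates), the deployment resource passes one object:

```json
"parameters": {
    "vmSettings": {
        "value": {
            "size": "[parameters('vmSKU')]",
            "adminUserName": "[parameters('localAdminUserName')]",
            "location": "[resourceGroup().location]"
        }
    }
}
```

The child template then declares a single parameter of type "object" and reads each value with dot notation, for example "[parameters('vmSettings').size]".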

Managing template and parameter files

One problem occurs with template linking: each time you modify a linked template or a linked parameter file, you need to upload it to Azure again. You can do it manually, but it is a waste of time and error-prone.
You need a solution to automate the process.
The solution: use Git to store the templates and a pipeline to upload them to a blob container in an Azure storage account.

For the demo, I will use an Azure DevOps project for the git repository and the pipeline.

At the root of the repository, I create an azuredeploy.json file (the main template) and a linked-templates folder to store the linked templates and parameter files.

To upload files to the blob container in the Azure storage account, the pipeline needs write access. If you remember the command used to create the SAS token for template linking, we can use it again here to create a token that lets the pipeline upload files.

New-AzStorageContainerSASToken -container "template" -Permission rwd -ExpiryTime (get-date).AddMonths(6) -Context $storageAccount.Context

The only difference is the permission, rwd, for Read, Write, and Delete.

The next step is to create a PowerShell script that uploads the files in the linked-templates folder to the blob container in Azure.
The script lists all JSON files in the linked-templates folder and pushes each one to the container using the SAS token created before.

We need some parameters: the SAS token, the local folder name, the storage account name, and the blob container name.
Instead of using script parameters, you can use environment variables; they are easier to set up in an Azure DevOps pipeline.

try {
    # Force TLS 1.2 for the PowerShell Gallery and Azure endpoints
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    Install-Module -Name Az.Storage -Force -Scope CurrentUser -AllowClobber

    # Read the configuration from the pipeline environment variables
    $BuildDirectory = [Environment]::GetEnvironmentVariable('BUILD_SOURCESDIRECTORY')
    $StorageAccountName = [Environment]::GetEnvironmentVariable('STORAGEACCOUNTNAME')
    $LinkedTemplateFolder = [Environment]::GetEnvironmentVariable('TEMPLATEFOLDER')
    $BlobContainerName = [Environment]::GetEnvironmentVariable('CONTAINERNAME')
    $SasToken = [Environment]::GetEnvironmentVariable('SASKEY')
    $TemplateFolderPath = Join-Path -Path $BuildDirectory -ChildPath $LinkedTemplateFolder

    if ($null -eq $BuildDirectory) {
        Throw "No build directory found; make sure to run this script on an Azure DevOps build agent"
    }
    if (!(Test-Path -Path $TemplateFolderPath -ErrorAction SilentlyContinue)) {
        Throw "No template folder"
    }

    # The SAS token is enough to build the storage context; no Azure login is needed
    $AzureStorageContext = New-AzStorageContext -StorageAccountName $StorageAccountName -SasToken $SasToken
    if ($null -eq $AzureStorageContext) {
        Throw "No storage context"
    }

    # List every JSON file in the folder and upload it to the blob container
    $TemplateFilesList = Get-ChildItem -Path $TemplateFolderPath -Attributes !Directory+!System+!Encrypted+!Temporary -Recurse -Filter "*.json" -ErrorAction SilentlyContinue | Select-Object FullName, Name
    foreach ($jsonFile in $TemplateFilesList) {
        Set-AzStorageBlobContent -File $jsonFile.FullName -Container $BlobContainerName -Context $AzureStorageContext -Blob $jsonFile.Name -Force
    }
}
catch {
    Write-Error -Message "Exception Type: $($_.Exception.GetType().FullName) $($_.Exception.Message)"
    # Fail the pipeline step; a thrown error would otherwise only be logged
    Exit 1
}

As you see, the New-AzStorageContext cmdlet uses the SAS token to create the context used to upload files to Azure, without requiring any Azure connection.

After the script, you need to add an Azure Pipelines YAML file to configure the pipeline. This file describes how to run the PowerShell script, the variables needed to run it, and the trigger that starts it.

trigger:
- master
resources:
- repo: self

variables:
  storageAccountName: 'demoomclinked001'
  containerName: 'template'
  templateFolder: 'linked-templates'
  vmImageName: 'windows-latest'

stages:
- stage: UploadTemplateToAzure
  displayName: Upload template to Azure Storage
  jobs:  
  - job: PowerShellScript
    displayName: PowerShell Script to upload file
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: PowerShell@2
      displayName: push linked templates to Azure
      env:
        SASKEY: $(SasKey)
      inputs:
        targetType: 'filePath'
        filePath: '$(System.DefaultWorkingDirectory)/pipeline.ps1'

The trigger runs the pipeline on every push to the master branch of the Azure DevOps repository:

trigger:
- master
resources:
- repo: self

Then you need to define the variables used in the script: the storage account name, the blob container name, and the folder where the linked templates live.
But something is missing: there is no variable for the SAS token. This is for security reasons; you do not want to share this secret, because with this token anyone can modify or delete your files. So you need another way to pass the value to the script while keeping it secret from anyone with access to the repository.
Azure DevOps lets you create secret variables. Go to Pipelines, select your pipeline, and enter edit mode. You will find a button named "Variables" on the upper right side of the page. You can add a secret variable by checking the "Keep this value secret" box.
Unlike normal variables, secret variables are not exposed as environment variables by default. You need to map the variable in your tasks by using:

    - task: PowerShell@2
      env:
        SASKEY: $(SasKey)

Now every modification pushed to the repository uploads the JSON files in the linked-templates folder to the blob container in Azure, without the need to do it manually.

This is a simplified version of a pipeline, to show you how you can handle linked templates and automatically send them to Azure without human intervention. But you can go further and integrate a testing solution (ARM-TTK, -WhatIf, …) and deployments from the pipeline. The deployment resource can use conditions, dependsOn, and copy (loops), which makes linked templates able to deploy complex solutions and lets you define an entire resource group from one main template. It helps team members focus on a small part of a big project.
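As a sketch, a deployment resource combining these features could look like this (the deployVMs and vmCount parameters, the vm.json file name, and the networkDeployment dependency are illustrative):

```json
{
    "condition": "[parameters('deployVMs')]",
    "apiVersion": "2020-06-01",
    "name": "[concat('vmDeployment-', copyIndex())]",
    "type": "Microsoft.Resources/deployments",
    "dependsOn": [
        "networkDeployment"
    ],
    "copy": {
        "name": "vmLoop",
        "count": "[parameters('vmCount')]"
    },
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(parameters('artifactLocation'),'vm.json',parameters('artifactSASKey'))]",
            "contentVersion": "1.0.0.0"
        }
    }
}
```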

My last piece of advice when using linked templates: make each deployment independent if possible. It will be easier to test and modify.
