
Infra as Code, working with ARM Templates

Olivier Miossec ・ 10 min read

Some of my friends hate using ARM Templates. Dealing with them gives them headaches. Sometimes they are intimidated, or they simply don't know how to deal with JSON files. I guess it's the same for any other code-based cloud provisioning engine.

Why use ARM templates?

  • It's repeatable and data-driven; once you have a template file, you can reuse it with other data (you can deploy a dev, a UAT, and a production infrastructure with the same template)
  • It's testable; you can run unit and integration tests against templates
  • It's a way to create Infra as Code; it creates a single source of truth, and it's idempotent

How do you use it in a team? How do you create and manage templates in a team? ARM Templates allow a team to deploy an entire project from a single deployment. They can also help to manage the project life cycle.

What are ARM Templates? An ARM Template is a JSON file (sort of) with five sections:

  • Parameters
  • Variables
  • Resources
  • Functions
  • Outputs


The parameters section specifies the values to be used for a specific deployment. These values are used as input in the resources section.

There is a best practice here: limit the number of parameters. There are several motives for that. The first one is a hard limit; you can't have more than 256 parameters per template. But more important: the template represents a deployment, the creation of resources in a resource group. It must be a single source of truth. The template, like documentation, tells you what is deployed for a specific project. It must be understandable by anyone on the team. During the resource group life cycle, the project can change, adding or removing resources or scaling others. To keep the source of truth accurate, changes must be made via the template.

If the template is too complex to read or to update, there is a chance that people will try to update the project in other ways, and you lose the link between the template and what is deployed on Azure. In other terms, you lose control of your resources.
Limiting the complexity of a template is a key priority for every project.

First thing to know: if there is a default value defined in the template, the parameter is optional during the deployment.

        "ManagedIdentity": { 
            "type": "string", 
            "defaultValue": "yes"         

If the value of a parameter doesn't change often across deployments, this is a good option. For example, if for a project the VM size is the same for DEV, QAL, and LAB and only changes for PROD and UAT, the VM size parameter should look like this:

        "VmSize": { 
            "type": "string", 
            "defaultValue": "Standard_B2MS"         

Parameter names should be chosen carefully. They must be explicit; avoid using a single word like Size and prefer something like BackendServerVmSize. A parameter name is like a variable, and in any programming language a naming convention is used to identify variables. I use Pascal case for parameter names; this makes them more readable (i.e. FrontServersVmSize). But be careful not to name parameters after any PowerShell parameters of New-AzResourceGroupDeployment.

Along with a good naming convention, descriptions are a great way to document a parameter and how it can be set. Every parameter should have a description. It enables team collaboration as it limits confusion.

        "ManagedIdentity": { 
            "type": "string", 
            "defaultValue": "yes", 
            "metadata": { 
                "description": "Specify if the function needs a Managed System Identity or not" 

It’s just a plain text sentence describing how the parameter will be used during the deployment.

It's possible to create a template with only one type of data, string, but ARM offers a little more. There are a few data types: string, secureString, int, bool, object, array, and secureObject.

About secureString and secureObject: they are used for sensitive data like passwords, API keys, or anything that should not be seen after the deployment. The value of these two data types can't be read in Azure.

But if you put sensitive values in the template parameter files, anyone can extract sensitive information from your repositories or from wherever the parameter file is stored.

To avoid that, use a Key Vault reference in the template parameter file instead of storing sensitive data.
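In the parameter file, a Key Vault reference looks like the following sketch (the parameter name, vault path placeholders, and secret name are illustrative, not from the original post):

```json
"AdminPassword": {
    "reference": {
        "keyVault": {
            "id": "/subscriptions/{subscriptionId}/resourceGroups/{rgName}/providers/Microsoft.KeyVault/vaults/{vaultName}"
        },
        "secretName": "vmAdminPassword"
    }
}
```

At deployment time, Azure Resource Manager fetches the secret from the vault, so the sensitive value never sits in the repository.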


One of the best practices is to limit the number of parameters, even if you need to deploy several resources. I use object values for that purpose. It helps to group values related to one resource, and it makes the template more readable for everyone.

JSON objects are just key/value constructions where you can use other data types like strings and integers, but also arrays and, of course, other JSON objects.

If I want to deploy an Azure Postgres server, I need a location, version, server name, tags, … as parameters. These parameters can be grouped into only one parameter.

In the template

                "description":"Azure PostGreSql Parameters in a Json object" 

In the parameter file

                    "displayname":"Prod database server" 

In the resources section

 "location": "[parameters('PostGreSqlSetup').location]",

But there is a drawback to using an object as a parameter: the ability to control inputs. Parameters can include basic controls. With these controls, it's possible to limit an integer value between two bounds (minValue and maxValue) or the length of a string or an array (minLength and maxLength).
The most used control is allowedValues. It prevents a user from deploying unwanted resources (like an oversized VM) or unrealistic values (like a 10-petabyte hard drive), and it helps contributors and users to better understand what is needed for the parameter. But there is more: it enables autocompletion in PowerShell when using New-AzResourceGroupDeployment without a template parameter file.
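For example, the VmSize parameter from earlier can be restricted to a short list of approved sizes (the extra SKUs in the list are illustrative):

```json
"VmSize": {
    "type": "string",
    "defaultValue": "Standard_B2MS",
    "allowedValues": [
        "Standard_B2MS",
        "Standard_D2s_v3",
        "Standard_D4s_v3"
    ]
}
```

Any deployment that passes a value outside this list fails validation before anything is created.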

One last thing about parameters: if the value doesn't change at all, there is no need to use a parameter. It should be placed in the variables section. That can be the case for the resource location if there is a constraint to always use the same one.


I use the variables section to store static values but also to make any computation needed in the resources section. It helps to remove complexity from the last section, and it makes the template easier to read and update.

For example, the location: I often use the same location as the resource group. There is no need for a parameter here and no need to repeat the calculation for each resource.
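A minimal sketch of that variables section (the name DefaultLocation matches the variable discussed below):

```json
"variables": {
    "DefaultLocation": "[resourceGroup().location]"
}
```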


This is where I use template functions (and, again, I avoid complex template function expressions inside a resource). There are two types of functions: template functions and user-defined functions (not covered here).

Template functions are used to manipulate data. They are grouped by type: string, array, numeric, comparison, logical, deployment, and resource. You can find the complete list in the Microsoft documentation.

For the DefaultLocation variable, I used the resourceGroup() function. It returns a JSON object that can be used in the template:

  "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}", 
  "name": "{resourceGroupName}", 
  "location": "{resourceGroupLocation}", 
  "managedBy": "{identifier-of-managing-resource}", 
  "tags": { 
  "properties": { 
    "provisioningState": "{status}" 

You can access any of these properties by using dot notation, like in PowerShell when you need a property from an object. If you need the name of the resource group, use resourceGroup().name.
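For instance, a variable holding the resource group name could look like this (the variable name rgName is illustrative):

```json
"rgName": "[resourceGroup().name]"
```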


The two most used template functions in any template are… parameters and variables. Yes, these two heavily used keywords are in fact template functions; they simply return the value of a parameter or a variable.

In the resources section, they are used like this
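A typical usage, assuming the VmSize parameter and DefaultLocation variable shown earlier:

```json
"location": "[variables('DefaultLocation')]",
"vmSize": "[parameters('VmSize')]"
```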


It's possible to make complex computations in a template, but the most used template functions (after parameters and variables) are concat, if, uniqueString, resourceId, and resourceGroup.

concat() returns the result of the concatenation of several strings.

"applicationGatewayName":"[concat(‘appGateway’ , variables('deploymentID'))]", 

The if() function returns a value based on a test.

"ManagedSystemIdentity":"[if(equals(parameters('ManagedIdentity'), 'yes'), 'SystemAssigned', 'None')]" 

resourceId() returns the unique identifier of a resource you need to use in the resources section. The resource does not need to reside inside the deployment resource group; it can be in another resource group or another subscription.

"vnetID":"[resourceId(parameters('NetRg'), 'Microsoft.Network/virtualNetworks', parameters('VnetName'))]", 

I use the variables section to keep object naming consistent. Take a VM: you can input the VM name and, from it, create the name of the virtual network interface, the public IP, the VHD, and any other related components. In this case, I often use the toLower() function.
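A sketch of that pattern (the variable names and suffixes are illustrative, not from the original):

```json
"variables": {
    "vmName": "[toLower(parameters('VmName'))]",
    "nicName": "[concat(toLower(parameters('VmName')), '-nic')]",
    "publicIpName": "[concat(toLower(parameters('VmName')), '-pip')]"
}
```

One input parameter then drives every related resource name, so renaming the VM automatically renames its components.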


For me, the goal of the variables section is to take the complexity of calculations away from the resources section. If you need to perform a computation in the resources section to set a property value, it makes the section more difficult to read, and if this value is used more than once, it increases the chance of errors.

The resources section is the place where the Azure resources to be deployed are defined. The syntax can be "simple": you need to provide a resource name, a resource type, an API version, and some other parameters. Fortunately, Microsoft provides really good documentation for all resources.

The name of the resource is a unique name within the template that you can refer to in other parts of the template (the dependsOn clause can use it, for example). It can be static, or it can be created dynamically.

The type refers to the provider used in the deployment. It's always in the form provider namespace/resource type. Once you know these two elements, it's easy to build the template. Sometimes it's a little more complicated because the provider name and the resource type are not obvious. Look at AKS: in the Azure portal you will find Kubernetes Service or AKS, but the provider namespace is Microsoft.ContainerService and the resource type is managedClusters.

So the Microsoft ARM Template documentation will be your guide.

As for parameters, you can add a comment to each resource. It's recommended to use it. It helps to understand the original intention and can give clues about what the template will deploy and how it was built.

            "comments":"This storage account used by the function App", 

One of the goals of a template is also to be reusable. The difference between environments like PROD, QUAL, and DEV isn't only about names or sizes; it's also about the number of items to deploy. Take a production environment: a VM can have multiple data disks where the dev VM only uses the OS disk. There can also be differences in the number of VMs or networks to deploy.

Arrays and copy are the perfect answer to situations where zero or more resources or properties must be deployed.

For example, multiple data disks can be created based on the NbrDisk parameter:

 "copy": [ 
              "name": "dataDisksDeploy", 
              "count": "[parameters('NbrDisk')]", 
              "input": {             
                "diskSizeGB": "[parameters('DataDiskSizeGB')]", 
                "lun": "[copyIndex('dataDisksDeploy')]", 
                "name": "[concat(variables('vmName'), '-vhd',copyIndex(), copyIndex('dataDisks'))]", 
                "createOption": "Empty" 

It can also be used with an array of objects: pass the items as an array parameter in the parameter file, and use length() on that parameter to drive the copy count in the resources section.

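A sketch of that array-of-objects pattern (the DataDisks parameter name and disk sizes are assumptions, not from the original):

```json
// In the parameter file
"DataDisks": {
    "value": [
        { "diskSizeGB": 128 },
        { "diskSizeGB": 512 }
    ]
}

// In the resources section
"copy": [
    {
        "name": "dataDisksDeploy",
        "count": "[length(parameters('DataDisks'))]",
        "input": {
            "diskSizeGB": "[parameters('DataDisks')[copyIndex('dataDisksDeploy')].diskSizeGB]",
            "lun": "[copyIndex('dataDisksDeploy')]",
            "createOption": "Empty"
        }
    }
]
```

Each object in the array produces one disk, so DEV can pass an empty array while PROD passes several disks, all with the same template.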
Adding fewer or more disks or VMs is not the only thing that may change between environments. Depending on the deployment type, you can have an Application Gateway or a Load Balancer or other components that would be useless in the DEV deployment.

To avoid using two template files for the same project, it's possible to add a condition in a resource definition.
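A trimmed sketch of a conditional resource (the ENV parameter comes from the surrounding text; the apiVersion and variable names are assumptions):

```json
{
    "condition": "[equals(parameters('ENV'), 'PROD')]",
    "type": "Microsoft.Network/applicationGateways",
    "apiVersion": "2019-04-01",
    "name": "[variables('applicationGatewayName')]",
    "location": "[resourceGroup().location]",
    "properties": { }
}
```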


If the parameter ENV is equal to PROD, the Application Gateway is deployed; if not, the resource is skipped.

There are many other techniques to use with ARM templates. You can create your own user-defined functions to make custom calculations.

You can also use template linking. This technique lets you modularize a project into several template files linked from a master template. Each child file needs to be globally available, as it is read by Azure Resource Manager and not by your local workstation. The files are placed in Azure Blob storage, and the master template needs to pass the SAS token. A child file is just a standard ARM template with one or more resources to deploy. You also need to provide parameters for the deployment.

     "type": "Microsoft.Resources/deployments", 
     "apiVersion": "2019-05-01", 
     "name": "provisionRessource", 
     "properties": { 
       "mode": "Incremental", 
       "templateLink": { 
       "parameters": { 
          "ResourceName":{"value": "[parameters('RessourceName')]"} 

You can see, in this last example, the contentVersion element. It's a mandatory element in every template, not only linked ones. It's part of the documentation of an ARM template: each time a modification is made, you should increment the version. You can use the same notation as PowerShell modules: major version, minor version, build number.

Another useful thing for a template is the template metadata, where you can add comments and author information.

 "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 
  "contentVersion": "1.1.'", 
  "metadata": { 
   "comments": "Demo template for dev.to blog", 
    "author": "Olivier Miossec @omiossec_med" 

When we talk about Infrastructure as Code, it's not only about writing code; it is about adopting the same culture and the same tools developers use in their daily work. This includes using a source control management system like Git (you have it in Azure DevOps or GitHub). It also includes a strategy to deal with updates and modifications. Modifications should always be tested before going into the master branch and into the deployment phase.

There is, of course, the Test-AzResourceGroupDeployment cmdlet. But there are other ways to perform tests. You can use Pester with PowerShell and read the JSON sections to detect errors and unwanted behavior. There are also tools like ARMTemplateValidation from Chris Gardner or ARMHelper from Barbara Forbes.
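A quick validation with the built-in cmdlet might look like this (the resource group and file names are placeholders; this only validates the template against the ARM API, it deploys nothing):

```powershell
# Validate the template and parameter file without deploying
Test-AzResourceGroupDeployment -ResourceGroupName "my-demo-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"
```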

ARM Templates don't need to be complex. They're not simple, but with some effort you can build great projects with them. The key to success is collaboration and documentation. You need to think about ARM Templates as a software engineer and less as an Ops engineer.

Do not be afraid of resources; they are well documented by Microsoft. And always test your template before deployment: it will help you progress and will catch some of the most obvious errors.

Olivier Miossec


Microsoft Azure MVP, passionate about Cloud and DevOps. Co-organizer of the French PowerShell & DevOps UG. Find me on https://github.com/omiossec

