This post was created for the Festive Tech Calendar 2022 event.
Introduction
Festive greetings, eager learners,
I'm excited to contribute to this year's Festive Tech Calendar with this blog post. As you might have guessed, it will be about the "new CLI kid on the block", Azure Developer CLI.
Now, there is a plethora of great blog posts and videos about this very topic on the Internet, so why bother? Well, I was thinking about exploring the possibility of using azd not by Devs but by IT Engineers. Sounds like heresy, right? Well, considering how much the traditional IT Pro role has changed with 'infra as code' and 'config as code' practices, cloud adoption, and process automation, I don't think that's the case anymore.
So, what are we trying to accomplish?
The mission
If you have worked with Azure and tried to automate IT processes for some time, you have seen a gradual shift from Automation runbooks to PowerShell Functions and event-driven architecture. There are several patterns and use cases, but we will focus on trying to "azdify" (yep, there is a term for this now) one particular Azure PowerShell Function that automates the shutdown of Azure VMs.
Our mission, if you choose to accept it, is to create a reusable azd template that will:
- provision all required components (Azure Function App, storage account, etc.)
- deploy the app (in this case a PowerShell code)
- enable monitoring, so we could troubleshoot any errors
- create a workflow that will provision & deploy for us automagically
The recipe
Our main cookbook will be this official guide on Microsoft Learn.
It doesn't make sense to start completely from scratch, so we can find an existing template (Microsoft or community provided) and cherry-pick elements we can re-use and refactor for our needs. You could either use the azd-templates topic on GitHub or this new Azwesome AZD web gallery.
After a long search, I decided to use this template and modify it (quite heavily) for our scenario.
A small snag
Since this article is being written while I am playing with the tool, and I don't know the outcome, there is one potential snag: PowerShell is not among officially supported languages. I am still hoping I'll be able to deploy the function with azd, but if not, I could still do it using VS Code or a CLI.
The kitchen
We need a solid dev environment with an IDE and tools that will help us on the way. Here are the specs of my Windows 11 "kitchen":
- GitHub CLI v2.20.2
- Azure CLI v2.43.0
- Git v2.39.0
- Windows Terminal v1.15.2875.0
- Bicep CLI v0.12.40.16777
- PowerShell v7.3.0.0
- Visual Studio Code (stable) v1.74.0
- Azure Functions Core Tools v4.x. Read this article to understand what version you should install and from where.
If you like using winget, here is your "shopping list":
winget install --id GitHub.cli
winget install --id Microsoft.AzureCLI
winget install --id Git.Git
winget install --id Microsoft.WindowsTerminal
winget install --id Microsoft.Bicep
winget install --id Microsoft.PowerShell
winget install --id Microsoft.VisualStudioCode
winget install -e --id Microsoft.AzureFunctionsCoreTools
We cannot forget about our main hero, azd, right? Since it is in Preview, you can't use winget yet. Pick a method based on your OS; this is what I am doing:
powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression"
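Once the install script finishes, a quick sanity check confirms azd is available in your shell:
azd version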
We will also need several useful VS Code extensions on top:
code --install-extension ms-azuretools.vscode-azurefunctions
code --install-extension ms-azuretools.vscode-bicep
code --install-extension ms-vscode.powershell
code --install-extension ms-vscode.azure-account
code --install-extension ms-azuretools.azure-dev
code --install-extension ms-azuretools.vscode-azureresourcegroups
code --install-extension redhat.vscode-yaml
I also did a few things to ensure consistency between my defaults on GitHub and my local Git configuration:
git config --global init.defaultBranch main
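If you are on a fresh machine, it's also worth making sure your commit identity matches your GitHub account (replace the placeholder values with your own, obviously):
git config --global user.name "Your Name"
git config --global user.email "you@example.com"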
Let's start baking
Since our goal is to publish our solution on GitHub, we want to start the development the correct way by creating a project and cloning it locally. You could do it on github.com, but I am a big fan of the "CLI first" approach:
gh repo create azd-function-pwsh-itpro --public --clone --add-readme --description 'Azure Developer CLI template for IT Process automation'
git pull origin main
Init
Now that we have the repo created, cloned, and in sync, let's start by scaffolding it with a basic "azd template" structure. For the sake of traceability, I will create an Issue and a branch first:
gh issue create -t "Bootstrap azd template" --body "Use azd init to scaffold an empty structure for the template." --assignee "@me"
gh issue develop 1 --name "pazdedav/issue1" --checkout
Now it's time to initialize the project by using an empty template and to set a few parameters like the Azure location, the environment name (a prefix for the resource group that will be created), and your subscription ID.
azd init --location westeurope --subscription 00000000-0000-0000-0000-000000000000 --environment festcal --no-prompt
TIP: The documentation doesn't say what value should be used for the --template parameter when you want an empty template. What worked for me was to add the --no-prompt argument instead of specifying a template; then you won't get any prompt.
Notice that there is a new .azure directory with some config files generated by the tool. The content doesn't contain any secrets per se, but this directory is added to .gitignore anyway.
So far so good, so let's commit all the files we got via azd init to our branch, review the changes (ideally done by someone else 🌞) through a PR, and merge our code:
git add .
git commit -m "Add azd project scaffold."
git push origin pazdedav/issue1
gh pr create --title "Add azd project scaffold." --body "Closes #1" --base main
gh pr merge --merge --subject "Add azd project scaffold." --delete-branch
Adding infrastructure code
In this step, we will add the infra directory and cherry-pick some existing templates from our reference repository (see "The recipe" for more information).
Let's create a new issue and a branch:
gh issue create -t "Add Bicep IaC" --body "Add Bicep modules and main template." --assignee "@me"
gh issue develop 3 --name "pazdedav/issue3" --checkout
Note: If you wonder why I skipped "issue 2": I discovered that GitHub includes PRs in the same automatic numbering sequence, so the new issue created with the gh issue create command returned "Creating issue in pazdedav/azd-function-pwsh-itpro. https://github.com/pazdedav/azd-function-pwsh-itpro/issues/3".
With version 0.2.0-beta.2, azd introduced a new structure for Bicep infra code based on modules, and the template we are using as a reference is already aligned with it (with app and core subdirectories under the infra directory). And since it contains all the Azure resources we need for our project, I will simply download the codebase as a ZIP file and extract the content of its infra directory into my project. It should look like this:
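Roughly, expect a structure along these lines (a sketch based on the modules referenced from main.bicep; exact file names may differ between template versions):
infra/
├── main.bicep
├── main.parameters.json
├── abbreviations.json
├── app/
│   └── api.bicep
└── core/
    ├── host/
    ├── monitor/
    ├── security/
    └── storage/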
The template includes some additional components we don't need (like a database), so we need to refactor the main.bicep and main.parameters.json files to ensure we only provision what is required for the Azure PowerShell Function. At the same time, I would like to keep the monitoring part in place, so we can get more insight into how the function works (or doesn't work).
Before we start with refactoring, it's good practice to stage and commit all the copied files to the local repo.
git add .
git commit -m "Add infra code from the sample"
It would be difficult to write about all the changes I made, so I am sharing the final state of the three files I changed here:
main.bicep:
targetScope = 'subscription'
@minLength(1)
@maxLength(64)
@description('Name of the environment which is used to generate a short unique hash used in all resources.')
param environmentName string
@minLength(1)
@description('Primary location for all resources')
param location string
// Optional parameters to override the default azd resource naming conventions. Update the main.parameters.json file to provide values. e.g.,:
// "resourceGroupName": {
// "value": "myGroupName"
// }
param apiServiceName string = ''
param applicationInsightsDashboardName string = ''
param applicationInsightsName string = ''
param appServicePlanName string = ''
param keyVaultName string = ''
param logAnalyticsName string = ''
param resourceGroupName string = ''
param storageAccountName string = ''
@description('Id of the user or app to assign application roles')
param principalId string = ''
var abbrs = loadJsonContent('./abbreviations.json')
var resourceToken = toLower(uniqueString(subscription().id, environmentName, location))
var tags = { 'azd-env-name': environmentName }
// Organize resources in a resource group
resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
name: !empty(resourceGroupName) ? resourceGroupName : '${abbrs.resourcesResourceGroups}${environmentName}'
location: location
tags: tags
}
// The application backend
module api './app/api.bicep' = {
name: 'api'
scope: rg
params: {
name: !empty(apiServiceName) ? apiServiceName : '${abbrs.webSitesFunctions}api-${resourceToken}'
location: location
tags: tags
applicationInsightsName: monitoring.outputs.applicationInsightsName
appServicePlanId: appServicePlan.outputs.id
keyVaultName: keyVault.outputs.name
storageAccountName: storage.outputs.name
appSettings: {
}
}
}
// Give the API access to KeyVault
module apiKeyVaultAccess './core/security/keyvault-access.bicep' = {
name: 'api-keyvault-access'
scope: rg
params: {
keyVaultName: keyVault.outputs.name
principalId: api.outputs.SERVICE_API_IDENTITY_PRINCIPAL_ID
}
}
// Create an App Service Plan to group applications under the same payment plan and SKU
module appServicePlan './core/host/appserviceplan.bicep' = {
name: 'appserviceplan'
scope: rg
params: {
name: !empty(appServicePlanName) ? appServicePlanName : '${abbrs.webServerFarms}${resourceToken}'
location: location
tags: tags
sku: {
name: 'Y1'
tier: 'Dynamic'
}
}
}
// Backing storage for Azure functions backend API
module storage './core/storage/storage-account.bicep' = {
name: 'storage'
scope: rg
params: {
name: !empty(storageAccountName) ? storageAccountName : '${abbrs.storageStorageAccounts}${resourceToken}'
location: location
tags: tags
}
}
// Store secrets in a keyvault
module keyVault './core/security/keyvault.bicep' = {
name: 'keyvault'
scope: rg
params: {
name: !empty(keyVaultName) ? keyVaultName : '${abbrs.keyVaultVaults}${resourceToken}'
location: location
tags: tags
principalId: principalId
}
}
// Monitor application with Azure Monitor
module monitoring './core/monitor/monitoring.bicep' = {
name: 'monitoring'
scope: rg
params: {
location: location
tags: tags
logAnalyticsName: !empty(logAnalyticsName) ? logAnalyticsName : '${abbrs.operationalInsightsWorkspaces}${resourceToken}'
applicationInsightsName: !empty(applicationInsightsName) ? applicationInsightsName : '${abbrs.insightsComponents}${resourceToken}'
applicationInsightsDashboardName: !empty(applicationInsightsDashboardName) ? applicationInsightsDashboardName : '${abbrs.portalDashboards}${resourceToken}'
}
}
// App outputs
output APPLICATIONINSIGHTS_CONNECTION_STRING string = monitoring.outputs.applicationInsightsConnectionString
output AZURE_KEY_VAULT_ENDPOINT string = keyVault.outputs.endpoint
output AZURE_KEY_VAULT_NAME string = keyVault.outputs.name
output AZURE_LOCATION string = location
output AZURE_TENANT_ID string = tenant().tenantId
output REACT_APP_API_BASE_URL string = api.outputs.SERVICE_API_URI
output REACT_APP_APPLICATIONINSIGHTS_CONNECTION_STRING string = monitoring.outputs.applicationInsightsConnectionString
main.parameters.json:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"environmentName": {
"value": "${AZURE_ENV_NAME}"
},
"location": {
"value": "${AZURE_LOCATION}"
},
"principalId": {
"value": "${AZURE_PRINCIPAL_ID}"
}
}
}
api.bicep:
param name string
param location string = resourceGroup().location
param tags object = {}
param allowedOrigins array = []
param applicationInsightsName string = ''
param appServicePlanId string
param appSettings object = {}
param keyVaultName string
param serviceName string = 'api'
param storageAccountName string
module api '../core/host/functions.bicep' = {
name: '${serviceName}-functions-powershell-module'
params: {
name: name
location: location
tags: union(tags, { 'azd-service-name': serviceName })
allowedOrigins: allowedOrigins
alwaysOn: false
appSettings: appSettings
applicationInsightsName: applicationInsightsName
appServicePlanId: appServicePlanId
keyVaultName: keyVaultName
runtimeName: 'powershell'
runtimeVersion: '7.2'
storageAccountName: storageAccountName
scmDoBuildDuringDeployment: false
}
}
output SERVICE_API_IDENTITY_PRINCIPAL_ID string = api.outputs.identityPrincipalId
output SERVICE_API_NAME string = api.outputs.name
output SERVICE_API_URI string = api.outputs.uri
I want to validate my infra code by running the following checks:
az bicep build --file infra/main.bicep
az deployment sub validate --location westeurope --template-file infra/main.bicep --parameters environmentName=festcal location=westeurope
The first one "transpiles" our Bicep templates into a single JSON ARM deployment template and displays all errors and warnings found by Bicep linter. The second command checks our infra code against ARM API (also known as pre-flight). You need to be signed in to Azure in the console for the second command to work.
Since our Bicep code creates a Resource Group and provisions all resources, we are using 'subscription' as a scope for the deployment.
I didn't get any warnings or errors, so we could try to create our environment with azd:
azd provision
Voilà. We have our environment completely provisioned in Azure:
Note: the last azd command also updates the .env file in the .azure directory with all the Bicep outputs we declared in the code. But don't worry, all files under the .azure directory are excluded from Git tracking, so they won't be committed and pushed to origin.
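For illustration, the .azure/festcal/.env file ends up holding simple key-value pairs along these lines (values here are shortened and made up):
AZURE_ENV_NAME="festcal"
AZURE_LOCATION="westeurope"
AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
AZURE_KEY_VAULT_NAME="kv-xxxxxxxxxxxxx"
APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=..."
REACT_APP_API_BASE_URL="https://func-api-xxxxxxxxxxxxx.azurewebsites.net"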
Now let's commit, push, and merge everything to GitHub:
git add .
git commit -m "Refactor infra code."
git push origin pazdedav/issue3
gh pr create --title "Add infrastructure code to the template." --body "Closes #3" --base main
gh pr merge --merge --subject "Add infra code to the template." --delete-branch
Adding PowerShell function
Now we can scaffold our functions project (using Azure Functions Core Tools), add some code, and see if azd can deploy it or not.
Let's create a new issue and a branch:
gh issue create -t "Add PowerShell Function" --body "Scaffold a new function project and add PowerShell code." --assignee "@me"
gh issue develop 5 --name "pazdedav/issue5" --checkout
We want to keep our project neat and structured, so the function code should be placed in this path: src/api. Let's create this path and initialize the project:
mkdir src\api
cd src\api
func init --worker-runtime powershell --managed-dependencies
We can see what files were generated by func init in our editor:
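In my case (PowerShell worker with managed dependencies), the scaffold boils down to a handful of files, roughly:
host.json
local.settings.json
profile.ps1
requirements.psd1
.gitignore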
For local functions development, you can either install a local storage emulator or update the local.settings.json file to use an existing storage account in Azure. I don't want to install even more tools on my development machine, so I will use the storage account we provisioned a while ago. Simply run the following command with the Function App name you got:
func azure functionapp fetch-app-settings func-api-c4dvlakzaax5o
Now we need some actual PowerShell function code. I didn't want to use the boilerplate code, so I was looking for a good example for our "IT Pros" scenario in the Serverless Library, and I chose this one created by Eamon O'Reilly. It's a timer-based function that can stop Azure VMs on a schedule.
First, we need to create a new function:
func new --name StopVMOnTimer --template "Timer trigger"
This function will need Az modules to work. Luckily, PowerShell Functions have a feature called "managed dependencies" that can download required modules for us. All we need is to:
- check in the PowerShell Gallery what version we want to add. Currently, it is v9.2.0.
- edit the requirements.psd1 file, so it looks like this:
# This file enables modules to be automatically managed by the Functions service.
# See https://aka.ms/functionsmanageddependency for additional information.
#
@{
# For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
# To use the Az module in your function app, please uncomment the line below.
'Az' = '9.*'
}
Depending on how often we want this function to run, we might want to update the schedule key in the function.json configuration file. The value uses the six-field NCRONTAB format ({second} {minute} {hour} {day} {month} {day-of-week}). I want to shut down my VMs every day at 7 PM, so this is how it looks in my case:
{
"bindings": [
{
"name": "Timer",
"type": "timerTrigger",
"direction": "in",
"schedule": "0 0 19 * * *"
}
]
}
Now let's replace the boilerplate code in the run.ps1 file with the same file from Eamon's repo, so it looks like this:
# Input bindings are passed in via param block.
param($Timer)
# Specify the VMs that you want to stop. Modify or comment out below based on which VMs to check.
$VMResourceGroupName = "test-vms-rg"
#$VMName = "ContosoVM1"
$TagName = "AutomaticallyStop"
# Stop on error
$ErrorActionPreference = 'stop'
# Check if managed identity has been enabled and granted access to a subscription, resource group, or resource
$AzContext = Get-AzContext -ErrorAction SilentlyContinue
if (-not $AzContext.Subscription.Id) {
Throw ("Managed identity is not enabled for this app or it has not been granted access to any Azure resources. Please see https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity for additional details.")
}
try {
# Get a single vm, vms in a resource group, or all vms in the subscription
if ($null -ne $VMResourceGroupName -and $null -ne $VMName) {
Write-Information ("Getting VM in resource group " + $VMResourceGroupName + " and VMName " + $VMName)
$VMs = Get-AzVM -ResourceGroupName $VMResourceGroupName -Name $VMName
}
elseif ($null -ne $VMResourceGroupName) {
Write-Information("Getting all VMs in resource group " + $VMResourceGroupName)
$VMs = Get-AzVM -ResourceGroupName $VMResourceGroupName
}
else {
Write-Information ("Getting all VMs in the subscription")
$VMs = Get-AzVM
}
# Check if VM has the specified tag on it and filter to those.
If ($null -ne $TagName) {
$VMs = $VMs | Where-Object { $_.Tags.Keys -eq $TagName }
}
# Stop the VM if it is running
$ProcessedVMs = @()
foreach ($VirtualMachine in $VMs) {
$VM = Get-AzVM -ResourceGroupName $VirtualMachine.ResourceGroupName -Name $VirtualMachine.Name -Status
if ($VM.Statuses.Code[1] -eq 'PowerState/running') {
Write-Information ("Stopping VM " + $VirtualMachine.Id)
$ProcessedVMs += $VirtualMachine.Id
Stop-AzVM -Id $VirtualMachine.Id -Force -AsJob | Write-Information
}
}
# Sleep here a few seconds to make sure that the command gets processed before the script ends
if ($ProcessedVMs.Count -gt 0) {
Start-Sleep 10
}
}
catch {
throw $_.Exception.Message
}
I made two small modifications:
- updated the $VMResourceGroupName variable to test-vms-rg
- uncommented the $TagName variable, so only VMs with this resource tag will eventually be stopped
Test environment in Azure
If we want to test this function, we need an actual environment with a Resource Group (called test-vms-rg) and some VMs carrying the AutomaticallyStop tag.
We also need to make sure our function can perform "VM Stop" operations using PowerShell. Since we are using a Managed Identity, and our azd provision step created a system-assigned managed identity for us, it is only a matter of grabbing the correct object ID and creating a role assignment.
Something like this:
az group create --name test-vms-rg --location westeurope
$assigneeObjectId = az ad sp list --display-name func-api-c4dvlakzaax5o --query "[0].id" --output tsv
az role assignment create --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --role "Virtual Machine Contributor" --scope /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-vms-rg
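To double-check that the role assignment landed where we expect it, we can list assignments scoped to the test resource group:
az role assignment list --assignee $assigneeObjectId --resource-group test-vms-rg --output table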
Ideally, we should have at least one VM in our Resource Group to test the functionality. Let's create one quickly:
az vm create --resource-group test-vms-rg --name myVM01 --image UbuntuLTS --admin-username azureuser --generate-ssh-keys --tags AutomaticallyStop=true
Test locally
One of the many benefits of Azure Functions is the ability to test them locally before you publish them to Azure. For all functions other than HTTP and Event Grid triggers, you can trigger your functions locally over REST by calling a special administration endpoint. It has the following URI format: http://localhost:{port}/admin/functions/{function_name}
Calling this endpoint with an HTTP POST request on the local server triggers the function.
Let's run the function host and use the Invoke-RestMethod cmdlet to trigger the function. We will need to open two windows in Windows Terminal for this:
First terminal session:
func start
Second terminal session:
$headers = @{
'Content-Type' = 'application/json'
}
$uri = "http://localhost:7071/admin/functions/StopVMOnTimer"
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers
For some reason, I got a "400 (Bad Request)" response, and the log running in the first terminal did not provide any clues about what I was missing.
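A likely culprit (I haven't verified it in this setup) is the missing request body: the admin endpoint expects a JSON payload, so sending even an empty input object might be enough:
Invoke-RestMethod -Uri $uri -Method Post -Headers $headers -Body '{ "input": "" }'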
We will still try to deploy this function to Azure.
Update azd manifest and deploy
OK, let's update the azure.yaml file and check whether we can use azd deploy or not:
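For reference, the service entry in azure.yaml follows this shape (sketched from the todo sample templates; the language field is exactly where things get tricky, because there is no value for PowerShell):
name: azd-function-pwsh-itpro
services:
  api:
    project: src/api
    host: function
    language: csharp # placeholder - no 'powershell' value exists at the time of writing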
Remember what I wrote earlier about a potential snag? It seems we need to wait for PowerShell language support for Functions.
For the time being, we will use the func CLI to publish our function to Azure, more specifically with:
func azure functionapp publish func-api-c4dvlakzaax5o
I feel good about the progress we made, so let's commit, push, and merge our code again:
git add .
git commit -m "Add function code."
git push origin pazdedav/issue5
gh pr create --title "Add PowerShell function code to the template." --body "Closes #5" --base main
gh pr merge --merge --subject "Add application code to the template." --delete-branch
Configure a GitHub workflow
One missing step is to enable integration with GitHub Actions, so we can automate both the provisioning and deployment of our solution.
Since we have been tracking our work from GitHub issues, through PRs, to merges, we won't skip this step for our last exercise either:
gh issue create -t "Add GitHub workflow." --body "Add GitHub workflow yaml file." --assignee "@me"
gh issue develop 7 --name "pazdedav/issue7" --checkout
This is what we will do:
- create a .github directory with a workflows subdirectory
- copy the azure-dev.yml file from the sample repo we have been working with
$uri = "https://github.com/Azure-Samples/todo-csharp-sql-swa-func/raw/main/.github/workflows/azure-dev.yml"
$workflowFile = (Invoke-WebRequest -Uri $uri).Content
mkdir .github\workflows
New-Item -Path ".github\workflows" -Name "azure-dev.yml" -ItemType "file" -Value $workflowFile
This workflow requires you to create a couple of secrets before it can run successfully, as you can see in the workflow file. We could create them manually, together with a Service Principal, like this (don't do it like this, keep on reading):
gh secret set -f .\.azure\festcal\.env --app actions
az ad sp create-for-rbac --name azd-festcal-temp --role contributor --scopes /subscriptions/00000000-0000-0000-0000-000000000000 --output json > gh_spn.json
$ghSpnValue = Get-Content -Path .\gh_spn.json -Raw
gh secret set AZURE_CREDENTIALS --body $ghSpnValue --app actions
Remove-Item .\gh_spn.json
Fortunately, azd can do everything for us; we simply need to run this command:
azd pipeline config
As you can see, everything is set:
We could say "Y" to get the pipeline started, but I'd like to do it manually, so I choose "n".
Let's close this last exercise with a couple of steps:
git add .
git commit -m "Add GitHub workflow file."
git push origin pazdedav/issue7
gh pr create --title "Add GitHub workflow to the template." --body "Closes #7" --base main
gh pr merge --merge --subject "Add azure-dev.yml to the template." --delete-branch
As expected, our pipeline run will fail due to the missing support for PowerShell in azd:
Summary
We could say this experiment wasn't, strictly speaking, successful. Our pipeline is failing, and we couldn't get the local test running.
On the other hand, we (hopefully) learned a lot about how azd works under the hood, with a bunch of CLI examples using git, gh, func, PowerShell, and az.
If you managed to get to the end of this post, I hope it wasn't a waste of time for you.
Have a great Christmas holiday and all the best in 2023.
Cleanup
If you followed along and created resources in Azure, it's a good idea to clean things up to avoid an unexpected bill from Microsoft.
Azd has a simple command for it:
azd down
You might be prompted a few times to confirm your plan, but in the end you should see something like this:
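If you'd rather skip the confirmation prompts (and also purge resources such as Key Vault that would otherwise linger in a soft-deleted state), azd down accepts additional flags; check azd down --help for what your version supports:
azd down --force --purge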
Other useful resources
For those of you who want to learn more about this awesome tool, here are a couple of resources I found especially useful:
- Azure Developer CLI YouTube channel
- Series of blog posts on Dev.to published by Christian Lechner