Azure DevOps as workflow automation for service management

Azure DevOps is a good fit for situations where you need a workflow management service for common tasks required by a service management process. The example below walks through setting up a workflow for a hypothetical "Rename VM" task requested by a service management tool.

The scenario being automated is a request to rename a VM in Azure, which is currently unsupported by the native control plane and requires a set of manual/semi-automated steps by personnel.

The entire process is documented in detail below. The basic steps are:

  • Run PowerShell to export the current VM to a template file
  • Delete the original VM
  • Verify the validity of the generated template
  • Deploy the template


Traditionally, a rename VM task is accomplished by removing the original VM while preserving its disks and NIC, then recreating a new VM as close as possible to the original. This approach is suboptimal since a lot of the original VM's metadata is lost (for example host caching for disks, tags, extensions, etc.). The approach taken below instead relies on pulling the current resource schema for the VM (its ARM template) and redeploying it with a new name. The highlighted lines below are required to account for situations when the VM was created from a marketplace image. The output of the PowerShell script is a template file with sanitized inputs, ready to be redeployed with a custom name.

param (
      [Parameter(Mandatory = $true)] [string] $vmName,
      [Parameter(Mandatory = $true)] [string] $resourceGroupName,
      [Parameter(Mandatory = $true)] [string] $newVMName
)
$ErrorActionPreference = "Stop"
$resource = Get-AzVM -ResourceGroupName $resourceGroupName -VMName $vmName
Export-AzResourceGroup -ResourceGroupName $resource.ResourceGroupName -Resource $resource.Id -IncludeParameterDefaultValue -IncludeComments -Path .\template.json -Force
$resource | Stop-AzVM -Force
$resource | Remove-AzVM -Force
$templateTextFile = [System.IO.File]::ReadAllText(".\template.json")
$TemplateObject = ConvertFrom-Json $templateTextFile -AsHashtable
# For VMs created from a marketplace image: attach the preserved OS disk
# instead of provisioning a fresh one from the image reference
$TemplateObject.resources[0].properties.storageProfile.osDisk.createOption = "Attach"
$TemplateObject | ConvertTo-Json -Depth 50 | Out-File (".\template.json")
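Assuming the script above is saved as Rename-AzVM.ps1 (a hypothetical file name), it can be invoked like this:

```powershell
# Hypothetical invocation; requires an authenticated Az session (Connect-AzAccount)
.\Rename-AzVM.ps1 -vmName "VM1" -resourceGroupName "temp" -newVMName "VM2"
```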

Azure DevOps

Create a classic build pipeline (until YAML build pipelines allow UI editing I would personally stay away from them).

  • Add the following variables (vmName, newVMName, resourceGroupName) to the build pipeline; they identify the VM name, the new VM name, and the resource group of the VM being worked on. Allow setting these variables at queue time.
  • Add an Azure PowerShell task to execute the PowerShell script above, pass the parameters set above to it, and make sure it is set to PowerShell Core.

Add an Azure Resource Group Deployment task to verify the validity of the generated template. Note the highlighted parameters below.

  • Add another Azure Resource Group Deployment task to perform the actual rename. The settings are the same as in the previous step, except the deployment mode shall be set to Incremental.

This completes the build pipeline. You can test it manually by providing values for the 3 parameters directly from the Azure DevOps UI.

Integration with service management

Azure DevOps provides a REST API to perform actions against the service. Documentation is available here.

To call the API you first need to generate a PAT token for your own or a service account by going to Azure DevOps and choosing PAT. The only permission needed is Build - Read & Execute.

To invoke a build via the API, call a URI similar to the following. Below is the POST body of the request, identifying the build by number along with the parameters which will be passed to the build at queue time.

"parameters": "{\"vmName\": \"VM1\", \"newVMName\": \"VM2\", \"resourceGroupName\": \"temp\"}"

The response to the build request also contains a link for the status of the build, which the front-end service can call to track progress.
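Putting it together, a build can be queued with a small PowerShell sketch like the one below. The organization, project, and definition ID are placeholders for your own values; the PAT goes into a basic-auth header with an empty username.

```powershell
# Queue an Azure DevOps build via REST; <organization>, <project> and the
# definition id are placeholders
$pat = "<personal access token>"
$headers = @{
    Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}
$body = @{
    definition = @{ id = 1 }   # build definition ID of the rename pipeline
    parameters = '{"vmName": "VM1", "newVMName": "VM2", "resourceGroupName": "temp"}'
} | ConvertTo-Json
$response = Invoke-RestMethod -Method Post -ContentType "application/json" `
    -Uri "https://dev.azure.com/<organization>/<project>/_apis/build/builds?api-version=5.1" `
    -Headers $headers -Body $body
$response._links.self.href   # poll this URL to track build status
```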

Azure Private Link in action

The Azure networking team just introduced a preview of Azure Private Link. It promises to bring functionality previously unavailable for bridging the networking gap between PaaS and VNETs, as well as between VNETs in different tenants/subscriptions.

There are 2 distinctive use cases for Private Link:

  1. Private Link for accessing Azure PaaS Services
  2. Private Link to Private Link Service connection for connectivity across tenants and subscriptions, even with overlapping IP address space across VNETs

Private Link for accessing Azure PaaS Services

Traditionally, if you wanted to access PaaS services securely from within a VNET you would enable a VNET service endpoint, which in turn enables routing of requests from within your VNET directly to your PaaS service. The PaaS service sees your requests coming from the private IP range of your VNET, as opposed to a public IP address before the enablement. You still go through the public IP of the PaaS service as a result, just not routed through the edge.

The Private Link solution creates an endpoint with a local IP address on your subnet through which you can access your PaaS service. You will in fact see a Network Interface resource created with an associated IP address once you enable this resource.

From a networking point of view it is similar to reverse NAT.

An example is below, where I created a storage account called privatelinkMSDN which has no integration into VNETs, so by default it denies all connections to blobs, whether external or internal.

Accessing the blob externally produces an HTTP error, as expected, due to the IP filtering on the storage account.

Trying to resolve the name externally produces the external IP address of the service:

PS C:\Users\174181> Resolve-DnsName -Type A

Name                           Type   TTL   Section    NameHost
----                           ----   ---   -------    --------
                               CNAME  53    Answer

Name       :
QueryType  : A
TTL        : 52
Section    : Answer
IP4Address :

Creation of the Private Endpoint is not covered here since it is well documented by Microsoft. The end result is shown below. The following resources are created as a result of creating the Private Endpoint:

  1. A DNS zone with a record pointing to your Private Endpoint
  2. Private Endpoint itself
  3. Network Interface resource associated with Private Endpoint
  4. Private IP address associated with Network Interface

While externally this URL resolves to an external IP address, resolving the same name within the VNET delegates resolution to the private DNS zone and returns the internal IP address of the NIC, and hence provides access to the image in the blob as expected.

PS C:\Users\cloudadmin> Resolve-DnsName -Type A

Name                                           Type   TTL   Section    IPAddress
----                                           ----   ---   -------    ---------
                                               A      1800  Answer

Private Link Service connection

The initial configuration I'm working with is described below:

  1. Azure Tenant 1, which is associated with Subscription 1. This is a hypothetical ISV which provides services (like VDI, for example) to Tenant 2 below. Subscription 1 contains a VNET called MSDN-VNET.
  2. Azure Tenant 2, which is associated with Subscription 2. This is a customer who would like to privately connect to your services. Subscription 2 contains a VNET called NTT-VNET (please note it has the same address space as the VNET in Subscription 1).

There is no trust between the 2 tenants (that is, there are no guest accounts in either directory from the other directory), so essentially these are completely separate Azure environments.

Traditionally, to connect from Azure tenant 2 to Azure tenant 1 you would have to either:

  1. Expose your services via a public IP address with restrictive NSG rules on it (poor security, and additional cost due to ingress traffic charges)
  2. Create VNET-to-VNET connectivity via a VPN gateway (costly, cannot have overlapping IP address space, cumbersome to set up and administer)
  3. Create VNET peering between the VNETs (cannot have overlapping IP address space)

The solution consists of the parts depicted in the image below:

In Subscription 2 you create:

  • A Private Link Service (PLS) which will be used as the endpoint connection target for your customers
  • A Network Interface resource with IP addresses which will be used for NAT
  • A Standard Load Balancer with a load balancing rule
  • A backend pool with IIS which you want to make accessible to your customer

In Subscription 1 you create:

  • A Private Endpoint which will connect to the PLS in Subscription 2
  • A Network Interface which will be used for connectivity to the PLS

Client 1, living in Subscription 1, can connect to the IIS resource in Subscription 2 via the Private Endpoint IP. IIS is configured to respond with information about the client connecting to it. Opening the web page serves a page from the IIS web server identifying that the HTTP connection originates from the NAT address.

PS C:\Users\cloudadmin> (Invoke-WebRequest

Azure Lighthouse vs guest tenant management

Traditionally, if you had to manage a customer's environment you had 2 choices:

  1. Ask the customer to add your account from your tenant as a guest user to their Azure Active Directory and assign specific RBAC roles on resources afterwards
  2. Have the customer create an account for you in their tenant. You would have to maintain 2 different username/passwords as a result and log on/off in management tools for each tenant

Traditional approach

For demo purposes, the following are the initial input parameters:

  • MSDN subscription called "Customer Subscription" (8211cd03-4f97-4ee6-af42-38cad1387992) in a tenant (c0de79f3-23e2-4f18-989e-d173e1d403d6).
  • I want to manage this subscription from my main tenant with my own account.
  • Add your account ID into a role in the customer's subscription.
  • An email will be dispatched with an invitation, which I must accept via the provided link.
  • Once the invitation is accepted, I can see the new tenant available for me to switch to in the portal.
  • Switching to the tenant allows me to view the managed subscription.

Problems with traditional approach:

  1. Requires end user interaction to accept the invitation to manage the customer's environment
  2. Can only invite individual team members, not groups
  3. The partner has to switch between tenants to manage each environment (cannot, for example, see all VMs from all managed tenants, or execute a single Azure Automation runbook across all tenants)
  4. The customer has to deal with user lifecycle management, that is, remove or add users any time something changes on the partner side

Lighthouse approach

The new way of managing this process is outlined below.

You can onboard a customer either through the Azure Marketplace or an ARM deployment. I will be using an ARM deployment below, since one has to be an Azure MSP partner to publish to the marketplace.

The JSON files for this post are located here.

You need to gather the following information before onboarding a customer:

  1. Tenant ID of your MSP Azure AD
  2. Principal ID of your MSP Azure AD group
  3. Role Definition ID which is set by Azure and available here

For my specific requirements the values are below: the role definition is Contributor, which has ID b24988ac-6180-42a0-ab88-20f7382dd24c; the group ID is e361eaed-1a02-4b06-9e12-04417f6e2a46 from tenant 65e4e06f-f263-4c1f-becb-90deb8c2d9ff.

{
      "$schema": "",
      "contentVersion": "",
      "parameters": {
            "mspName": {
                  "value": "NTTData Consulting"
            },
            "mspOfferDescription": {
                  "value": "Managed Services"
            },
            "managedByTenantId": {
                  "value": "65e4e06f-f263-4c1f-becb-90deb8c2d9ff"
            },
            "authorizations": {
                  "value": [
                        {
                              "principalId": "e361eaed-1a02-4b06-9e12-04417f6e2a46",
                              "principalIdDisplayName": "Hyperscale Team",
                              "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
                        }
                  ]
            }
      }
}

I deploy from Cloud Shell since it already logs me into the correct tenant. Switch to the correct subscription before running ARM deployments:

PS /home/gregory> Select-AzSubscription -SubscriptionId 8211cd03-4f97-4ee6-af42-38cad1387992

Name                                     Account                                         SubscriptionName                               Environment                                    TenantId
----                                     -------                                         ----------------                               -----------                                    --------
Customer Subscription (8211cd03-4f97-4e… MSI@50342                                       Customer Subscription                          AzureCloud                                     fb172512-c74c-4f0d-bb83-3e70586312d5

PS /home/gregory> New-AzDeployment -Name "MSP" -Location 'Central US' -TemplateFile ./template.json -TemplateParameterFile ./template.parameters.json
DeploymentName          : MSP
Location                : centralus
ProvisioningState       : Succeeded
Timestamp               : 9/3/19 3:24:26 PM
Mode                    : Incremental
TemplateLink            :
Parameters              :
                          Name                   Type                       Value
                          =====================  =========================  ==========
                          mspName                String                     NTTData Consulting
                          mspOfferDescription    String                     Managed Services
                          managedByTenantId      String                     65e4e06f-f263-4c1f-becb-90deb8c2d9ff
                          authorizations         Array                      [
                              "principalId": "e361eaed-1a02-4b06-9e12-04417f6e2a46",
                              "principalIdDisplayName": "Hyperscale Team",
                              "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"

Outputs                 :
                          Name              Type                       Value
                          ================  =========================  ==========
                          mspName           String                     Managed by NTTData Consulting
                          authorizations    Array                      [
                              "principalId": "e361eaed-1a02-4b06-9e12-04417f6e2a46",
                              "principalIdDisplayName": "Hyperscale Team",
                              "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"

DeploymentDebugLogLevel :

Log in to your customer environment and check that you now see "NTTData Consulting" under service providers.

Now if you want to add additional access (like accessing a second subscription) you can do it right from the portal, without the need for an ARM deployment. For example, below I'm adding access to a specific resource group in a separate subscription to be managed by the MSP.

In my MSP panel I can now see both access to the entire subscription and access to a specific resource group in another.

You shall be able to see resources in the portal just as if your account was part of the customer's tenant.

For example, I added tags to an existing storage account, and it appears as if I were a guest account in the customer's AD.
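The delegation can also be verified from PowerShell with the Az.ManagedServices module; a quick sketch, run in the context of the delegated customer subscription:

```powershell
# Lists the registration definition (the MSP offer) and its assignment
# to the currently selected subscription
Get-AzManagedServicesDefinition
Get-AzManagedServicesAssignment
```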

Automation at scale in Azure with Powershell Azure functions

The code for the article below is located here.

My current task was to execute a certain script within a big number of VMs (700+) on a periodic schedule to pull metadata information from the Azure Instance Metadata Service. This data is available ONLY within the running VM and there is no way to access it any other way. Specifically, I need the data about Scheduled Events, which informs the VM whether an Azure-initiated reboot is pending in one way or another.
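For reference, here is a minimal sketch of what querying the Scheduled Events endpoint looks like from inside a VM (the api-version shown is one of the published versions; treat it as an assumption for your environment):

```powershell
# Scheduled Events lives on the non-routable IMDS address and requires the Metadata header
$events = Invoke-RestMethod -Method Get -Headers @{ Metadata = "true" } `
    -Uri "http://169.254.169.254/metadata/scheduledevents?api-version=2017-11-01"
# Each entry describes a pending Azure-initiated operation (Reboot, Redeploy, Freeze, ...)
$events.Events | Where-Object { $_.EventType -eq "Redeploy" }
```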

Microsoft provides a solution called "Azure Scheduled Events Service" which has severe drawbacks, namely:

  1. You have to download and install the service on all machines
  2. It relies on the Invoke-RestMethod cmdlet to query the metadata service, which is not supported on PowerShell 2.0, and hence by default it will not run on Windows 2008
  3. It only runs on Windows, so none of the UNIX machines will be covered
  4. It logs data into the local Application log, which is not very useful since you now have to figure out how to centralize and query this information
  5. There is no centralized alerting on those events, as a result of point 4 above

My solution, outlined below, relies on Azure resources to install/maintain/query/alert on health events without the need for dedicated agents.

The solution consists of the following moving parts:

  1. Azure Powershell function
  2. Azure Storage Queue
  3. Azure Log Analytics Account
  4. Azure monitor

General flow is below

An Azure PowerShell function, executed on a timer or via HTTP request, populates a storage queue with all VM names in the subscription, their resource group, and the power state of each machine.

The Azure App Service hosting the PowerShell function has a scale-out condition to jump to 8 instances upon seeing the storage queue being populated, which in turn provides around 160 concurrently executing workers.

A second Azure PowerShell function is bound to the storage queue and spins up upon the presence of queue messages. It reads a queue message, pulls the VM, checks its operating system version, and based on that executes either a shell or PowerShell script to pull the metadata service via Invoke-AzVMRunCommand.
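The core of that second function might look like the following sketch; the binding name QueueItem and the script file names are assumptions, not the repository's actual code:

```powershell
param($QueueItem, $TriggerMetadata)

# Assumes the queue payload arrives as the JSON string produced by the populate function
$message = $QueueItem | ConvertFrom-Json
$vm = Get-AzVM -ResourceGroupName $message.ResourceGroup -Name $message.VMName

# Pick the built-in run command matching the guest OS
if ($vm.StorageProfile.OsDisk.OsType -eq "Windows") {
    $commandId = "RunPowerShellScript"; $scriptPath = ".\Get-Metadata.ps1"
} else {
    $commandId = "RunShellScript"; $scriptPath = "./get-metadata.sh"
}

$result = Invoke-AzVMRunCommand -ResourceGroupName $message.ResourceGroup `
    -VMName $message.VMName -CommandId $commandId -ScriptPath $scriptPath
$result.Value[0].Message   # stdout of the in-guest script
```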

Upon success or error, the script writes the returned data to the Log Analytics workspace.
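Writing to the workspace goes through the Log Analytics HTTP Data Collector API, which authenticates with an HMAC-SHA256 signature over the request. A sketch, assuming the workspace ID and shared key from the app settings shown later in this post:

```powershell
function Send-LogAnalytics {
    param([string]$WorkspaceId, [string]$SharedKey, [string]$LogType, [string]$Body)

    $rfc1123date   = [DateTime]::UtcNow.ToString("r")
    $contentLength = [Text.Encoding]::UTF8.GetBytes($Body).Length

    # Signature is HMAC-SHA256 over this canonical string, keyed with the decoded shared key
    $stringToSign = "POST`n$contentLength`napplication/json`nx-ms-date:$rfc1123date`n/api/logs"
    $hmac = New-Object System.Security.Cryptography.HMACSHA256
    $hmac.Key = [Convert]::FromBase64String($SharedKey)
    $hash = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign))
    $authorization = "SharedKey ${WorkspaceId}:$([Convert]::ToBase64String($hash))"

    Invoke-RestMethod -Method Post -ContentType "application/json" -Body $Body `
        -Uri "https://$WorkspaceId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" `
        -Headers @{ Authorization = $authorization; "Log-Type" = $LogType; "x-ms-date" = $rfc1123date }
}
```

Records land in a custom log named after the Log-Type value with a _CL suffix appended by the service.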

Azure Monitor is set up to act upon an Azure Log Analytics query.


Create the Function App which will host the 2 functions mentioned above. An example is below. Don't use the Consumption plan, since it does not scale well with PowerShell, and choose at least an S2 size so you can use multiprocessor capabilities to scale up locally, in addition to scaling the App Service out based on the queue.

Go to the storage account which was created and create 2 queues: one to hold messages and one for message rejects (poison).

Copy the storage account connection string from this storage account; it will be required for function setup.

Create a Log Analytics workspace to hold the messages.

Record the values of the Workspace ID as well as the primary key, to be used later in the function.

Update local.settings.json in your function folder to contain the settings you copied earlier. My example is below:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=mymetadatafuncta57d;AccountKey=9/jxdL3jdsrKED+ddQHByebGkzozxiLHrNeRUrvGWhO8//dzGm9m184n0VymQBTBlkfzIPkbx1+nTSXA/6HlZQ==",
    "FUNCTIONS_WORKER_RUNTIME": "powershell",
    "LogAnalyticsWorkspaceID": "02f2eb14-85d2-4069-9a1a-6b8cd91d783c",
    "LogAnalyticsSharedKey": "D0P2Z9D4U3k8xJFLzBnLg/Ns3oyEsEj4ivVxq5buGQN5BtYND/nleWGfrsc5SD6wajW/SbtqpvvgWCjQCfPdlw==",
    "QueueName": "metadataservicequeue"
  }
}

Deploy function to Azure from VSCode

Once the function is deployed, try to execute PopulateQueueWithVmNamesHTTP. You should expect a failure, since the function does not yet have the necessary permissions to access Azure resources.

2019-08-20T21:07:27.528 [Information] INFORMATION: getting Queue Account info
2019-08-20T21:07:28.062 [Information] INFORMATION: getting all VM Account info
2019-08-20T21:07:29.804 [Error] ERROR: No account found in the context. Please login using Connect-AzAccount.
Microsoft.Azure.WebJobs.Script.Rpc.RpcException : Result: ERROR: No account found in the context. Please login using Connect-AzAccount.
Exception: No account found in the context. Please login using Connect-AzAccount.

Assign a system-assigned identity to your function by going to the Identity option under Platform features.

Add the identity to the Reader and Virtual Machine Contributor roles in the subscription. The Reader role is needed to pull the list of all VMs in the subscription, and the Virtual Machine Contributor role is needed to be able to execute scripts on the VMs.

You shall now see successful output, with details of what queue messages were created:

2019-08-20T21:25:46  Welcome, you are now connected to log-streaming service. The default timeout is 2 hours. Change the timeout with the App Setting SCM_LOGSTREAM_TIMEOUT (in seconds). 
2019-08-20T21:25:49.448 [Information] Executing 'Functions.PopulateQueueWithVMNamesHTTP' (Reason='This function was programmatically called via the host APIs.', Id=3d49429c-63c9-4b8e-998b-d05514863f09)
2019-08-20T21:25:55.744 [Information] INFORMATION: PowerShell HTTP trigger function processed a request.
2019-08-20T21:25:55.761 [Information] INFORMATION: getting Storage Account info
2019-08-20T21:25:57.910 [Information] INFORMATION: getting Queue Account info
2019-08-20T21:25:58.183 [Information] INFORMATION: getting all VM Account info
2019-08-20T21:26:01.662 [Information] INFORMATION: Generating queue messages
2019-08-20T21:26:01.766 [Information] INFORMATION: Loop finished
2019-08-20T21:26:01.770 [Information] INFORMATION: Added 1 count {
"VMName" : "GregDesktop",
"ResourceGroup": "DEVTESTLAB-RG",
"State" : "VM running"
} to queue 1 records process
2019-08-20T21:26:01.920 [Information] Executed 'Functions.PopulateQueueWithVMNamesHTTP' (Succeeded, Id=3d49429c-63c9-4b8e-998b-d05514863f09)

You shall also see this queue message in your storage account

If you monitor the logs for MetadataFunction you'll see it wake up and process the messages posted in the queue:

2019-08-20T23:12:07.244 [Information] INFORMATION: Finished executing Invoke-AzureRMCommand with parameters GregDesktop, DEVTESTLAB-RG, VM running, return is {"DocumentIncarnation":0,"Events":[]} )
2019-08-20T23:12:07.255 [Information] INFORMATION: Outputing following to Log Analytics [
        "Return" : "{\"DocumentIncarnation\":0,\"Events\":[]}",
        "VMName" : "GregDesktop",
        "ResourceGroup" : "DEVTESTLAB-RG"

2019-08-20T23:12:07.588 [Trace] PROGRESS: Reading response stream... (Number of bytes read: 0)
2019-08-20T23:12:07.589 [Trace] PROGRESS: Reading web response completed. (Number of bytes read: 0)
2019-08-20T23:12:07.596 [Information] OUTPUT: 200
2019-08-20T23:12:07.644 [Information] Executed 'Functions.MetadataFunction' (Succeeded, Id=21111100-7a23-4374-93f1-9dfa5df76011)

You'll also see the output posted to the Log Analytics workspace under a custom log called MetaDataLog.

You can then set up alerting on scheduled redeploy events by executing the Kusto query below and tying a Monitor action to it:

MetaDataLog_CL
| project VMName_s, TimeGenerated, ResourceGroup, Return_s
| summarize arg_max(TimeGenerated, *) by VMName_s
| where Return_s contains "Redeploy"
| order by TimeGenerated desc


  1. The Consumption plan is impossible to use due to the scalability of PowerShell running on the single-core instances the Consumption plan provides. I was unable to use it in any form or capacity until I switched to an App Service plan instead.
  2. Increase the value of the PSWorkerInProcConcurrencyUpperBound parameter to increase concurrency, since the function is not CPU or IO bound. Mine is set to 20.
  3. In the App Service plan, also configure a Scale Out/In rule to scale the number of instances based on the size of the queue. Mine is set to 8, so once the application is triggered you'll get 160 instances of PowerShell executing in parallel.
  4. The project consists of 2 functions to populate the queue: one is HTTP triggered and the other executes on a timer.
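For reference, the queue-triggered function is wired to the queue through its function.json; a minimal sketch (the binding name QueueItem is an assumption) looks like this:

```json
{
  "bindings": [
    {
      "name": "QueueItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "%QueueName%",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

The %QueueName% token resolves to the QueueName app setting shown in local.settings.json above.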

Optimizing Azure Inventory with Powershell Azure Functions and Power BI

In a previous post I showed how to build an Azure inventory system utilizing an Azure Automation account, an Azure Logic App, and SharePoint.

Drawbacks of that approach:

  1. The solution is pretty cumbersome due to the use of Azure PowerShell to query for the necessary information, and it takes up to 30 minutes to execute
  2. Neither Azure Logic Apps nor Azure Automation are well integrated into source control, which makes them difficult to move around, as well as to use with modern CI/CD technologies
  3. The use of SharePoint Excel causes additional concurrency headaches, since each item is inserted one by one into Excel Online, which causes occasional timeouts that need to be handled with a retry option in the Azure Logic App
  4. The resulting Excel file is a single-dimensional database and non-interactive, which is suboptimal
  5. The report is always stale, since it is the last view of state when the report was successfully run; if you run it weekly you might be looking at stale data

An alternative solution which fixes the drawbacks above is based on a completely different set of technologies: the Azure Logic App is replaced with an Azure Function, Excel is replaced with Power BI, and the Azure PowerShell calls are replaced with Azure Resource Graph.

The flow of this setup is as follows:

  1. An Azure PowerShell function is created with a system-assigned identity to query Azure Resource Graph
  2. The resulting JSON is ingested as a web source into a Power BI report

Code for the function and the Power BI file is available here. This is just a proof-of-concept setup, so you'd need to modify the exact query script as well as the BI dashboard to fit your needs. The current incarnation just outputs disk information data.
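The function body boils down to a single Resource Graph call. A sketch, assuming the Az.ResourceGraph module is loaded and a disk-centric query (the column names here are illustrative, not the repository's exact query):

```powershell
# Query Azure Resource Graph across all subscriptions the managed identity can read
$query = @"
Resources
| where type =~ 'Microsoft.Compute/disks'
| project name, resourceGroup, subscriptionId, diskSizeGB = properties.diskSizeGB, managedBy
"@
Search-AzGraph -Query $query -First 1000 | ConvertTo-Json -Depth 10
```

The JSON emitted here is what the Power BI web source ingests.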


Create a Function App. It shall be a PowerShell function on Windows.

Create system assigned identity for application and assign read permissions to subscription

If you are using Azure Blueprints you can create this assignment in your management group

Deploy function app from VSCode

Test the application; it shall return information from all subscriptions the identity has access to.

Get the function URL, which will be called later via Power BI, by clicking "Get Function URL".

Open PBIX file from the same repo and click on File/Options and Settings/Data Source settings

Click “Change source” and point to URL you copied in step above

You shall see a report similar to the one below. It's interactive, so you can click on a VM and see which disk is used by that VM, or click on a location and filter only the VMs from that specific location, etc.

Building better Azure inventory system

The Azure portal provides a semi-usable ability to inventory your VMs for various folks at your company, but it has significant limitations, namely:

  • Users still require at least a ReadOnly role in your subscription
  • The available information is not well structured (for example, you cannot see the OS disk size for each machine in the list)
  • There is no historical information available about the state of the environment, since what you see is a snapshot at the time of inventory

The solution below outputs the required information into an Excel file hosted in Office 365. The following are the working pieces of the solution:

  1. An Azure Logic App which runs on a recurring schedule
  2. An Azure Automation job which pulls the necessary information out of Azure and outputs JSON for the Azure Logic App to consume
  3. An Office 365 workspace which holds the resulting Excel file written as part of the Logic App job

Automation Job

The Automation job consists of 2 runbooks: Get-AzureInventory, a graphical PowerShell runbook, which calls the child runbook Get-VMs.

The Get-AzureInventory runbook shown below is used to pull the Automation account credentials and log in to the target subscription. Download this file and import it into the Automation account. This runbook calls a child runbook called Get-VMs; download and import it into the Automation account as well.

The output of this job is JSON file which will be consumed downstream by Azure LogicApp and sent to Excel online.

Example of output is below.

[
    {
        "VMName":  "WinVM",
        "Location":  "canadacentral",
        "ResourceGroup":  "TEST",
        "OSType":  "Windows",
        "PowerState":  "VM running",
        "BootDiagnostics":  null,
        "OSDiskSizeGB":  127,
        "NumberOfDataDisks":  2,
        "Offer":  "WindowsServer",
        "Publisher":  "MicrosoftWindowsServer",
        "VMSize":  "Standard_DS1_v2",
        "DataDisksSize":  "10|10",
        "VnetName":  "Test-vnet",
        "Subnet":  "default",
        "privateIPs":  "",
        "publicIPs":  "",
        "EnvironmentTag":  "Production",
        "VMCores":  1,
        "VMmemory":  3.5
    },
    {
        "VMName":  "LinuxVM",
        "Location":  "westus",
        "ResourceGroup":  "TEST",
        "OSType":  "Linux",
        "PowerState":  "VM running",
        "BootDiagnostics":  null,
        "OSDiskSizeGB":  30,
        "NumberOfDataDisks":  0,
        "Offer":  "UbuntuServer",
        "Publisher":  "Canonical",
        "VMSize":  "Standard_B1ms",
        "DataDisksSize":  "",
        "VnetName":  "Testvnet646",
        "Subnet":  "default",
        "privateIPs":  "",
        "publicIPs":  "",
        "EnvironmentTag":  "Development",
        "VMCores":  1,
        "VMmemory":  2
    }
]

You can modify the Get-VMs script to your liking. For example, the current script looks for 4 specific tags and outputs them into the JSON, which might be different in your environment. The same goes for the custom output of data disk sizes, etc.

Azure Logic App

The Azure Logic App is what automates the entire process of pulling information out of Azure via the Automation runbook job, massaging it, and outputting it into Excel Online. The steps are shown below.

App consists of following major steps

  1. Recurring execution
  2. Instantiating the Automation job and getting its results in JSON
  3. Copying the Excel template file and populating it from the JSON obtained in step 2
  4. Copying the file into a History folder for historical reference

Create the following folder structure in SharePoint Online. You can get the Excel file here.

Add recurrence step with whatever recurrence you desire

Add Azure Automation job you created in previous step

Add Get Job Output job and pass JobID from previous step

Add a Parse JSON step, which will convert the output of the Automation job into a Logic App artifact, and paste the corresponding JSON schema.

Add Copy File Sharepoint task to copy template file into root folder

Add foreach block and map Excel fields to results of Parse JSON statement

Add step to copy file to history folder for historical reasons

If everything was done correctly, then running the Azure Logic App job manually will populate the Excel file with information from Azure.

Proper Azure policy to verify Azure hybrid benefit enabled

Azure Policy allows Azure admins to enforce or audit how Azure resources are deployed in an environment. It relies on an Azure Policy definition file written in JSON, which stipulates which conditions a resource shall meet to pass or fail the policy, and what effect this will have on the resource (deny or audit).

This is a very good way to prevent certain things before they happen (like deploying resources in unapproved locations), which is not possible to accomplish with plain RBAC controls.

For this specific case I needed to ensure that all VMs in the subscription have Azure Hybrid Benefit enabled, which saves up to 40% of Windows licensing costs if the company already has an EA agreement with Microsoft, so it does not pay double for OS licensing.

Searching the Microsoft samples actually yielded a result which worked fine for some time, until I was alerted that some machines were still passing the test despite the fact that they were not using Hybrid Benefit.

Looking at the policy, it becomes apparent what the issue is. The definition of the original policy is below; it is applicable only to images created from Azure gallery Windows images. So if you created a VM through ASR, or from a custom image, this policy will not apply. Another issue is that the policy does not apply to Windows client machines, if you happen to have those in your environment, since their license type is named differently.

 "if": {
                "allOf": [
                        {
                                "field": "type",
                                "in": [
                                ]
                        },
                        {
                                "field": "Microsoft.Compute/imagePublisher",
                                "equals": "MicrosoftWindowsServer"
                        },
                        {
                                "field": "Microsoft.Compute/imageOffer",
                                "equals": "WindowsServer"
                        },
                        {
                                "field": "Microsoft.Compute/imageSKU",
                                "in": [
                                ]
                        },
                        {
                                "field": "Microsoft.Compute/licenseType",
                                "notEquals": "Windows_Server"
                        }
                ]
        }

To find out all the aliases available for a specific resource you can execute the following PowerShell statement: (Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Compute').Aliases. It lists all aliases available for use in the Microsoft.Compute resource provider. To identify Windows-only boxes, we can narrow the search down to the osType property:

PS Azure:\> (Get-AzPolicyAlias -NamespaceMatch 'Microsoft.Compute').Aliases  | where Name -match ostype | select Name


From the list you can clearly see that Microsoft.Compute/virtualMachines/storageProfile.osDisk.osType is the proper alias to use for matching. To verify aliases on existing VMs you can use Azure Resource Graph in the preview portal by executing the following query:

where type=~'Microsoft.Compute/virtualMachines'
| where name =~ 'pr7-material'
| project aliases

So the resulting Azure Policy definition rule shall look like the one below, which identifies all Windows VMs regardless of how they were created in your subscription and ensures Hybrid Benefit is enabled on them.

    "policyRule": {
        "if": {
            "allOf": [
                {
                    "field": "type",
                    "equals": "Microsoft.Compute/virtualMachines"
                },
                {
                    "field": "Microsoft.Compute/virtualMachines/storageProfile.osDisk.osType",
                    "equals": "Windows"
                },
                {
                    "allOf": [
                        {
                            "field": "Microsoft.Compute/licenseType",
                            "notEquals": "Windows_Server"
                        },
                        {
                            "field": "Microsoft.Compute/licenseType",
                            "notEquals": "Windows_Client"
                        }
                    ]
                }
            ]
        },
        "then": {
            "effect": "audit"
        }
    }

Below is the result: a Windows box created not from a gallery image is now caught failing the audit.