Using Event Grid and Azure Functions to automatically assign a creator tag to Azure resources

The solution outlined below helps identify resource owners for Azure resources. A frequent issue in any Azure environment is figuring out why a resource exists and who created it. Unless an Azure Policy was enabled to enforce tagging rules, there is no built-in mechanism to find this information easily.

Create an Azure Functions app

An Azure Function is used to react to the Event Grid event tied to the Azure Activity Log, so a new function app must be created first. Use PowerShell Core as the runtime stack.

The rest of the Functions settings can be left at their defaults. Once the app is created, assign it a managed identity (either system- or user-assigned), then grant that identity the Tag Contributor role on the Azure subscription.
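Both steps can also be scripted. A rough sketch with Az PowerShell follows; the function app and resource group names are placeholders of my invention, and New-AzRoleAssignment requires Owner or User Access Administrator rights at the subscription scope:

```powershell
# Hypothetical names; replace with your function app, resource group and subscription.
$app = Update-AzFunctionApp -Name "tagging-func" -ResourceGroupName "tagging-rg" `
    -IdentityType SystemAssigned -Force

# Grant the new identity Tag Contributor at subscription scope so it can merge tags anywhere.
New-AzRoleAssignment -ObjectId $app.IdentityPrincipalId `
    -RoleDefinitionName "Tag Contributor" `
    -Scope "/subscriptions/<subscription-id>"
```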

Create function of type Azure Event Grid Trigger

Paste the following code into run.ps1

param($eventGridEvent, $TriggerMetadata)
# Dump the full event payload to the log for troubleshooting
Write-Output ($eventGridEvent.data | ConvertTo-Json -Depth 50)

$caller = "{0} ({1})" -f $eventGridEvent.data.claims.name, $eventGridEvent.data.claims."http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
if ($null -eq $eventGridEvent.data.claims.name) {
    if ($eventGridEvent.data.authorization.evidence.principalType -eq "ServicePrincipal") {
        $caller = (Get-AzADServicePrincipal -ObjectId $eventGridEvent.data.authorization.evidence.principalId).DisplayName
        if ($null -eq $caller) {
            Write-Host "MSI may not have permission to read the applications from the directory"
            $caller = $eventGridEvent.data.authorization.evidence.principalId
        }
    }
}
Write-Host "Caller: $caller"
$resourceId = $eventGridEvent.data.resourceUri
Write-Host "ResourceId: $resourceId"

if (($null -eq $caller) -or ($null -eq $resourceId)) {
    Write-Host "ResourceId or Caller is null"
    exit;
}

$ignore = @("providers/Microsoft.Resources/deployments", "providers/Microsoft.Resources/tags")

foreach ($case in $ignore) {
    if ($resourceId -match $case) {
        Write-Host "Skipping event as resourceId contains: $case"
        exit;
    }
}

# Get-AzTag returns no TagsProperty when the resource has never been tagged,
# so check for $null before calling ContainsKey
$tags = (Get-AzTag -ResourceId $resourceId).Properties

if (($null -eq $tags.TagsProperty) -or (-not $tags.TagsProperty.ContainsKey('Creator'))) {
    $tag = @{
        Creator = $caller
    }
    Update-AzTag -ResourceId $resourceId -Operation Merge -Tag $tag
    Write-Host "Added Creator tag with user: $caller"
}
else {
    Write-Host "Tag already exists"
}

Switch to the Function overview and make sure it's enabled.

Edit the requirements.psd1 file in the function app to import the PowerShell modules required for the function to run (Az.Accounts, Az.Resources), then restart the function app for the change to take effect.
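For reference, a minimal requirements.psd1 might look like this (the major-version pins are assumptions; pin whatever versions your function actually needs):

```powershell
@{
    # Pulling the whole 'Az' meta-module slows cold starts; import only what is needed.
    'Az.Accounts'  = '2.*'
    'Az.Resources' = '6.*'
}
```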

Event Grid setup

Event Grid ties the subscription's resource-creation events to the function. Navigate to the Event Subscription pane and create an event subscription on the Azure subscription, with the function as the endpoint.
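Creating the subscription can also be scripted. A sketch with the Azure CLI (names and IDs are placeholders); filtering to ResourceWriteSuccess events keeps the function from firing on every Activity Log entry:

```shell
az eventgrid event-subscription create \
  --name creator-tagging \
  --source-resource-id "/subscriptions/<subscription-id>" \
  --endpoint-type azurefunction \
  --endpoint "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app>/functions/<function-name>" \
  --included-event-types Microsoft.Resources.ResourceWriteSuccess
```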

Azure Active Directory setup

The Azure Activity Log contains the UPN of the user who created a resource, but only the object ID for resources created by a service principal. You have to assign the Global Reader role to the managed identity of the function in Azure AD so it can resolve the object ID into a service principal name.
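The assignment can be done in the portal (Roles and administrators → Global Reader → Add assignment) or via Microsoft Graph. A hedged sketch with the Azure CLI; the roleDefinitionId below is what I understand to be the well-known Global Reader role template ID, and the principal ID is a placeholder:

```shell
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
  --body '{
    "principalId": "<function-managed-identity-object-id>",
    "roleDefinitionId": "f2ef992c-3afb-46b9-b7cf-a126ee74c451",
    "directoryScopeId": "/"
  }'
```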

At this point you should be able to test the functionality by creating a new resource and monitoring the function execution.
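One quick way to test (resource names are placeholders): create a throwaway resource group and read its tags back a minute later.

```powershell
New-AzResourceGroup -Name "tag-test-rg" -Location "eastus"
# Wait for Event Grid delivery and the function run, then inspect the tags:
(Get-AzTag -ResourceId (Get-AzResourceGroup -Name "tag-test-rg").ResourceId).Properties.TagsProperty
# A 'Creator' key with your UPN should be present.
```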

Use Azure App Service as a simple layer 7 router with rewrite rules

Windows-hosted Azure App Service comes with the URL Rewrite and ARR modules installed. Together they make App Service usable as a cheap and effective L7 load balancer. It's available on the lowest possible SKUs and provides all the benefits of a PaaS service you'd expect from its more expensive cousins (Application Gateway and Azure Front Door).

Configuring App Service as an L7 gateway consists of 2 steps.

  1. Configure the App Service instance to enable the proxy feature of ARR via an XDT file transformation
  2. Configure a URL rewrite rule to rewrite incoming traffic

For the test purposes of this article, I will configure my App Service instance to rewrite all requests to http://www.w3schools.com, but restrict http://www.w3schools.com/js to be accessible only from certain IP addresses.

Steps

  • Create a Windows-based App Service plan in Azure; you can choose the cheapest plan (I chose D1 at $10/month)
  • Create a web app inside the App Service plan above with .NET Framework 4.8 as the runtime
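The two bullets above can also be scripted. A sketch with the Azure CLI (resource names are placeholders; D1 is the shared tier used here, and Windows is the default OS for a new plan):

```shell
az appservice plan create --name l7-plan --resource-group l7-rg --sku D1
az webapp create --name l7-webapp --resource-group l7-rg --plan l7-plan
```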

Create a file named applicationHost.xdt with the contents below. It instructs the App Service runtime to enable the proxy functionality, and allows additional HTTP headers to be set on second-leg requests for extra functionality.

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<system.webServer>
<proxy xdt:Transform="InsertIfMissing" enabled="true" preserveHostHeader="false" reverseRewriteHostInResponseHeaders="true" />
<rewrite>
<allowedServerVariables>
<add name="HTTP_ACCEPT_ENCODING" xdt:Transform="Insert" />
<add name="HTTP_X_ORIGINAL_HOST" xdt:Transform="Insert" />
</allowedServerVariables>
</rewrite>
</system.webServer>
</configuration>
  • Upload the file to the root of the App Service.
    • Go to Advanced Tools/Debug Console/PowerShell
    • Upload the file into the /site folder
  • Create and upload the 2 files below to wwwroot

A Web.config file which restricts access to the /js folder to only the IP addresses specified:

<configuration>
  <system.webServer>  
    <rewrite>  
        <rules>  
         <rule name="Deny access to /js directory" stopProcessing="true">
            <match url="^js/(.*)" />
            <conditions logicalGrouping="MatchAll">
              <add input="{REMOTE_ADDR}" pattern="47.188.89.221" negate="true" />
            </conditions>
            <action type="Rewrite" url="/deny.html"/>  
          </rule>  
        <rule name="Rewrite requests to default azure websites domain" stopProcessing="true">
            <match url="(.*)" />
            <action type="Rewrite" url="https://www.w3schools.com/{R:0}" appendQueryString="true" />  
          </rule>  
        </rules>  
    </rewrite>  
  </system.webServer>  
</configuration>

Deny.html, which is shown when somebody tries to access the /js folder from a forbidden IP address:

<H1>No access from your IP address</H1>

My website was deployed to https://l7-webapp.azurewebsites.net/

The test below confirms that website access works, but attempting to access any content under the /js folder fails with the specified error.

PS C:\Users\artis> Invoke-WebRequest https://l7-webapp.azurewebsites.net/ | select content

Content
-------
…

PS C:\Users\artis> Invoke-WebRequest https://l7-webapp.azurewebsites.net/js | select content

Content
-------
<H1>No access from your IP address</H1>

Send Windows container logs to different Log Analytics workspaces in AKS

The AKS built-in monitoring agent sends stdout logs to a single Log Analytics workspace. Business requirements might require you to separate logs from different namespaces into different Log Analytics workspaces for regulatory or security reasons. Currently this is impossible without hosting applications in separate AKS instances (with increased cost and complexity as a result).

The solution below allows you to send stdout from Windows containers to a dedicated Log Analytics table (you can assign RBAC permissions per table) or workspace as required.

The deployment YAML below deploys a fluentd-based daemonset on all Windows nodes, along with a service account that lets it query Kubernetes to enrich logs. It also deploys client1 and client2 demo workloads whose logs need to be separated into 2 different Log Analytics tables.

The daemonset is based on the public fluent/fluentd:v1.13-windows-ltsc2019-1 image and installs both the Log Analytics and Kubernetes plugins during bootstrap:

image: fluent/fluentd:v1.13-windows-ltsc2019-1
command: ["cmd"]
args:
  [
   "/c",
   "gem install fluent-plugin-azure-loganalytics fluent-plugin-kubernetes_metadata_filter &",
    "fluentd",
    "-c",
    "C:\\fluent\\conf\\K8\\fluentd.conf",
  ]

The daemonset maps the Docker logs produced on each host into the fluentd container for processing:

- name: fluentd
  volumeMounts:
    - name: config-volume
      mountPath: "c:\\fluent\\conf\\K8\\"
    - name: varlog
      mountPath: /var/log
    - name: progdatacontainers
      mountPath: /ProgramData/docker/containers

The configuration for fluentd is provided at runtime via a configmap, which directs traffic for each namespace to a different table within the Log Analytics workspace.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentbit
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
  - kind: ServiceAccount
    name: fluentbit
    namespace: kube-logging
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-logging
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
        - key: "windows"
          operator: "Equal"
          value: "2019"
          effect: "NoSchedule"
      serviceAccountName: fluentbit
      containers:
        - name: fluentd
          volumeMounts:
            - name: config-volume
              mountPath: "c:\\fluent\\conf\\K8\\"
            - name: varlog
              mountPath: /var/log
            - name: progdatacontainers
              mountPath: /ProgramData/docker/containers
          image: fluent/fluentd:v1.13-windows-ltsc2019-1
          command: ["cmd"]
          args:
            [
              "/c",
              "gem install fluent-plugin-azure-loganalytics fluent-plugin-kubernetes_metadata_filter &",
              "fluentd",
              "-c",
              "C:\\fluent\\conf\\K8\\fluentd.conf",
            ]
          env:
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
      volumes:
        - name: config-volume
          configMap:
            name: fluentd-configmap
        - name: varlog
          hostPath:
            path: /var/log
        - name: progdatacontainers
          hostPath:
            path: /ProgramData/docker/containers
---
apiVersion: v1
data:
  fluentd.conf: |
    <match fluent.**>
      @type null
    </match>
    #Target Logs (ex:nginx)
    <source>
      @type tail
      @id in_tail_container_logs_client1
      path /var/log/containers/*client1*.log
      pos_file /var/log/containers/fluentd-containers.client1.pos
      tag kubernetes.client1.*
      read_from_head false
      format json
      time_format %Y-%m-%dT%H:%M:%S.%N%Z
    </source>
    <source>
      @type tail
      @id in_tail_container_logs_client2
      path /var/log/containers/*client2*.log
      pos_file /var/log/containers/fluentd-containers.client2.pos
      tag kubernetes.client2.*
      read_from_head false
      format json
      time_format %Y-%m-%dT%H:%M:%S.%N%Z
    </source>
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
    </filter>
    <filter kubernetes.**>
      @type grep
      <exclude>
        key log
        pattern /Reply/
      </exclude>
    </filter>
    <match kubernetes.client1.**>
      @type azure-loganalytics
      customer_id d5ebf0b9-636b-41e9-b99a-d6f5f0f513f7
      shared_key fFNmE60918QH7M9C9BRFlOd4KAlsmM8uXUYYhJbYNArbLa56kKA8EK4FvgKuROhG2TbKa96JMo5NOYA7CduOYQ==
      log_type clientone
    </match>
    <match kubernetes.client2.**>
      @type azure-loganalytics
      customer_id d5ebf0b9-636b-41e9-b99a-d6f5f0f513f7
      shared_key fFNmE60918QH7M9C9BRFlOd4KAlsmM8uXUYYhJbYNArbLa56kKA8EK4FvgKuROhG2TbKa96JMo5NOYA7CduOYQ==
      log_type clienttwo
    </match>
kind: ConfigMap
metadata:
  name: fluentd-configmap
  namespace: kube-logging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logspewer
  namespace: client1
  labels:
    app: logspewer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: logspewer
  template:
    metadata:
      labels:
        app: logspewer
    spec:
      containers:
        - name: logspewer
          image: mcr.microsoft.com/powershell:nanoserver-1809
          command: ["pwsh"]
          args:
            - -c
            - while (1) {'{0} {1}' -f (Get-Date), $env:Computername; Start-sleep 10}
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
        - key: "windows"
          operator: "Equal"
          value: "2019"
          effect: "NoSchedule"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logspewer
  namespace: client2
  labels:
    app: logspewer
spec:
  replicas: 3
  selector:
    matchLabels:
      app: logspewer
  template:
    metadata:
      labels:
        app: logspewer
    spec:
      containers:
        - name: logspewer
          image: mcr.microsoft.com/powershell:nanoserver-1809
          command: ["pwsh"]
          args:
            - -c
            - while (1) {'{0} {1}' -f (Get-Date), $env:Computername; Start-sleep 10}
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
        - key: "windows"
          operator: "Equal"
          value: "2019"
          effect: "NoSchedule"
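Assuming the combined manifest above is saved as fluentd-logging.yaml (the file name is mine), deployment and a quick sanity check look like this; the client1 and client2 namespaces must exist before the demo deployments apply:

```shell
kubectl create namespace kube-logging
kubectl create namespace client1
kubectl create namespace client2
kubectl apply -f fluentd-logging.yaml
kubectl get pods -n kube-logging -o wide    # expect one fluentd pod per Windows node
kubectl logs daemonset/fluentd -n kube-logging    # plugin install and tail start-up messages
```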

The end result of this setup is that stdout from Windows containers appears in different tables in the Log Analytics workspace.
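Custom logs land in tables named after the log_type value with a _CL suffix, so a quick check in the workspace for this example would be:

```kusto
clientone_CL | take 10
clienttwo_CL | take 10
```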

Moving an Azure VM into an availability set after VM creation

VMs created in Azure can be put into an availability set (AS) only at creation time, and require complete recreation from scratch if you want to add them to an availability set later.

The script below allows you to move a VM into an AS after it was created. There are a bunch of similar scripts on the internet, but all of them rely on PS/CLI to detach/reattach the NIC etc. The script below instead relies on native Azure functionality: exporting the ARM representation of the VM and modifying it to add the availability set. The advantage of this method is that a bunch of resource properties are preserved which would otherwise be lost with the PS/CLI approach (tags, extensions, caching info for disks, etc.).

The file is available at https://raw.githubusercontent.com/artisticcheese/artisticcheesecontainer/master/update-as.ps1 or below.

#Requires -Version 7
# The script below adds a VM to an availability set after the VM was already deployed without one.
# The process consists of exporting the existing VM template, modifying its parameters and importing it again.
# The availability set must already exist and be in the same resource group as the VM.
# The following parameters are required: $vmName -> name of the VM, $resourceGroupName -> name of the resource group where the VM and availability set are located, $availabilitySet -> name of the availability set.
# Once the script is run the VM is removed and you are left with a .\template.deploy.json file, from which you create a new deployment with New-AzResourceGroupDeployment.
# Example:
# .\Update-AvailabilitySet.ps1 -vmName MyVm1 -resourceGroupName MyResourceGroup-RG -availabilitySet myAvailabilitySet
# New-AzResourceGroupDeployment -TemplateFile .\template.deploy.json -ResourceGroupName myResourceGroup-RG



[CmdletBinding()]
param (
   [Parameter(Mandatory = $true)] [string] $vmName,
   [Parameter(Mandatory = $true)] [string] $resourceGroupName,
   [Parameter(Mandatory = $true)] [string] $availabilitySet
)
$VerbosePreference = "Continue"
if ($null -eq (Get-AzContext)) { Connect-AzAccount }
$ErrorActionPreference = "Stop"
$resource = Get-AzVM -ResourceGroupName $resourceGroupName -VMName $vmName 
$fileName = Join-Path (Get-Location) ".\template.json"
Export-AzResourceGroup -ResourceGroupName $resource.ResourceGroupName -Resource $resource.Id -IncludeParameterDefaultValue -IncludeComments -Path $fileName -Force
$templateTextFile = [System.IO.File]::ReadAllText($fileName)
$TemplateObject = ConvertFrom-Json $templateTextFile -AsHashtable
$computerObject = $TemplateObject.resources.where{ $_.type -eq "Microsoft.Compute/virtualMachines" }   
$computerObject[0].apiVersion = "2020-06-01"
if ($null -eq $computerObject.properties.availabilitySet) {
   $computerObject.properties.Add("availabilitySet", "")
}
$computerObject.properties.availabilitySet = @{ "id" = "[resourceId('Microsoft.Compute/availabilitySets', '$availabilitySet')]" }      
$computerObject.properties.storageProfile.dataDisks.ForEach{ $_.createOption = "Attach" }
$computerObject.properties.storageProfile.osDisk.createOption = "Attach"
$computerObject.properties.storageProfile.Remove("imageReference")
$computerObject.properties.storageProfile.osDisk.Remove("name")
$computerObject.properties.Remove("osProfile")
$TemplateObject | ConvertTo-Json -Depth 50 | Out-File -Path (Join-path (Get-Location) ".\template.deploy.json")
$resource | Stop-AzVM -Force
$resource | Remove-AzVM -Force
if ($env:POWERSHELL_DISTRIBUTION_CHANNEL -eq "CloudShell") {
   New-AzResourceGroupDeployment -TemplateFile (Join-path (Get-Location) ".\template.deploy.json") -ResourceGroupName $resourceGroupName
}