Using VSTS for a complete CI/CD pipeline for multi-arch Docker images

I needed multi-arch Docker images that return all environment variables both on screen and in response headers (so they can be used in Fiddler or similar tools to extract data). The idea and implementation are inspired by Stefan Scherer's whoami image, available here

My base image is multi-arch itself, which allows me to have a single Dockerfile for both Linux and Windows builds. The entire code and additional artifacts are available at the following GitHub repo, in the whoami folder.

The Dockerfile is below; the same file is used for both Windows and Linux builds.

Building this Dockerfile on Windows will pull the current nanoserver-based image, and on Linux the current Linux-based image, with no changes necessary to the Dockerfile itself or to the build process.
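The actual file is in the repo; a minimal sketch of the multi-arch pattern looks like this (the base image and app name here are assumptions for illustration):

```dockerfile
# The base tag is multi-arch: Docker resolves it to the Linux variant on a
# Linux build host and to the nanoserver variant on a Windows build host.
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY ./app .
EXPOSE 80
ENTRYPOINT ["dotnet", "whoami.dll"]
```

Because platform selection happens at `FROM` resolution time, nothing else in the file needs to branch on OS.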

You can see how the image works by instantiating Windows and Linux containers in Azure Container Instances (Cloud Shell works fine) and examining the headers.
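For example, something like the following (resource group name and image tag are assumptions; the assigned IP comes from the command output):

```shell
# Linux container instance from the multi-arch tag
az container create -g demo-rg -n whoami-linux --image artisticcheese/whoami \
  --os-type Linux --ip-address Public --ports 80

# Windows container instance from the same multi-arch tag
az container create -g demo-rg -n whoami-win --image artisticcheese/whoami \
  --os-type Windows --ip-address Public --ports 80
```

Hitting each public IP and comparing the response headers shows the same image serving from two different OS bases.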

In addition to the response HTTP headers, the image outputs the information as HTML as well, which might be useful for troubleshooting/demo purposes.


CI/CD pipeline in VSTS

The build and release pipelines are exported as JSON files and are available in the GitHub repo (one file is the build definition and one is the release definition).

The build consists of the following steps:

  1. Download source code and artifacts from GitHub
  2. Run steps on a hosted Linux agent (provided for free by VSTS)
    • Build from the Dockerfile
    • Push to Docker Hub
  3. Run steps on a hosted Windows agent (provided for free by VSTS)
    • Build from the Dockerfile
    • Push to Docker Hub
    • Rebase images for 1709 and 1803 (currently a work in progress)
  4. Run a RegEx task to replace static data in the manifest file (which will be used to create the multi-arch image) with the current BuildVersion
  5. Publish the manifest as an artifact for the release pipeline
This is how it looks in the UI

You can also enable real CI (instead of manually invoking the build) under Options by checking "Enable Continuous Integration".


The result of a successful build is a YAML manifest file identifying the image tags for the Linux/Windows images in Docker Hub.


An example of the manifest file is below.
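An illustrative manifest in the format consumed by Stefan's manifest-tool (the tag names here are assumptions; the real file is generated by the build with the current BuildVersion):

```yaml
image: artisticcheese/whoami:latest
manifests:
  - image: artisticcheese/whoami:linux-1.0.42
    platform:
      architecture: amd64
      os: linux
  - image: artisticcheese/whoami:windows-1.0.42
    platform:
      architecture: amd64
      os: windows
```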

The release (CD) pipeline consists of the following steps:

  1. Install the manifest tool from Chocolatey on the agent (courtesy of Stefan again)
  2. Download the build artifact (manifest file) which contains information about the current image tags
  3. Run the tool from step 1 to update Docker Hub with the latest multi-arch image

The CD pipeline is automatically triggered by a successful build.
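Step 3 above boils down to a single invocation of manifest-tool against the manifest produced by the build (the file name is an assumption):

```shell
# Push the multi-arch manifest described in the YAML spec to Docker Hub
manifest-tool push from-spec manifest.yml
```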




ARM template for deploying a Windows-based Docker swarm in Azure

Below is an ARM template as well as instructions for deploying a fully managed Docker swarm into Azure, based on Windows hosts for both managers and workers.

The solution along with all required files is available at the following GitHub repo: artisticcheese/dockerswarmarm

Clone that repo and follow the steps below.

End Results

The result of following the steps below will be:

  • A Virtual Machine Scale Set with worker nodes joined to the swarm
  • A VM hosting the Docker swarm manager role
  • An Application Gateway pointing to the worker nodes for layer 7 load balancing as well as HTTP/HTTPS termination, web application firewall, etc.
  • An Azure Load Balancer with mapped entries for RDP access to the worker nodes
  • An Azure Key Vault which will hold secrets
  • An Azure Automation account which will hold DSC configurations for both the worker nodes and the swarm manager

Operation of ARM template and resources

  1. The ARM template consists of a main template and a nested template. The main template deploys:
    • A Virtual Machine Scale Set (VMSS) with worker nodes
    • An Application Gateway with a backend pointing to the VMSS for HTTP/HTTPS termination and L7 load balancing
    • An Azure Load Balancer pointing to the VMSS for RDP access to worker nodes and as an alternative way to load balance at L4
    • A network security group allowing RDP connectivity to both the swarm manager and the VMSS
    • A DSC configuration tied to the VMSS with the swarm manager IP, which is an output of the nested template below
  2. The nested template contains the deployment of artifacts for the swarm manager:
    • Deploys the swarm manager VM
    • Deploys a DSC configuration for the swarm manager which in turn deploys:
      • The xNetworking module to the Automation Account (firewall operation)
      • cChoco (third-party software installation)
      • The cDSCdockerswarm module (automation of Docker swarm operations)
    • Creates the node configuration in the Automation Account and compiles it
    • Outputs the internal IP of the swarm manager VM, which is used in the main template to compile the DSC configuration for the VMSS

Once the ARM template is completely deployed, the following steps are performed on both the swarm manager VM and the VMSS machines:

  1. The swarm manager VM boots up, registers with the Automation Account using the provided automation account key, and pulls its DSC configuration. You can find the DSC script here. DSC performs the following:
    • Pulls the TLS server CA, cert, and key as well as the TLS client cert and key from the Automation Account, and puts them into the specified local file system location so the local Docker daemon uses them for a secure local TLS endpoint
    • Configures the environment variable DOCKER_CERT_PATH to point to the client TLS certs above
    • Disables the Windows firewall
    • Uses the cDockerSwarm resource to initialize the swarm
    • Installs the following packages via the cChocoPackageInstallerSet resource:
      • Classic-Shell
      • 7zip
      • visualstudiocode
      • sysinternals
  2. The VMSS nodes boot up, register with the Automation Account using the provided automation account key, and pull their respective DSC configuration. You can find the DSC script here. DSC performs the following:
    • Copies the TLS client certificates from the Automation Account and saves them to the local file system
    • Configures the environment variable DOCKER_CERT_PATH to point to the folder where the TLS client certs were saved
    • Disables the Windows firewall
    • Uses the cDockerSwarm resource to connect to the existing swarm; nodes promote themselves to managers if the number of managers is below the specified threshold


Before the ARM template can be executed, some prerequisites need to be created manually. The reason is that these are resources you want to treat like pets rather than cattle: both the Azure Automation account and the Key Vault are not worth automating via the ARM template.

Create a resource group to hold all closely guarded artifacts for the Docker swarm. This resource group will hold the Azure Key Vault as well as the Automation Account.

PS C:\gd\Documents\dockerswarmarm> New-AzureRmResourceGroup -Location SouthCentralUS -Name Utility-RG

ResourceGroupName : Utility-RG
Location : southcentralus
ProvisioningState : Succeeded
Tags :
ResourceId : /subscriptions/b55607ab-c703-4044-a526-72bd701b0d48/resourceGroups/UtilityRG

Create a Key Vault in the group to store all the secrets. Make sure you use a unique name for the vault.

PS C:\gd\Documents\dockerswarmarm> New-AzureRmKeyVault -VaultName GregKeyVault -ResourceGroupName Utility-RG -Location SouthCentralUS -EnabledForTemplateDeployment

Vault Name : GregKeyVault
Resource Group Name : Utility-RG
Location : SouthCentralUS
Resource ID : /subscriptions/b55607ab-c703-4044-a526-72bd701b0d48/resourceGroups/Utility-RG/providers/Microsoft.KeyVault/vaults/GregKeyVault
Vault URI :
Tenant ID : c0de79f3-23e2-4f18-989e-d173e1d403d6
SKU : Standard
Enabled For Deployment? : False
Enabled For Template Deployment? : True
Enabled For Disk Encryption? : False
Soft Delete Enabled? :
Access Policies :
 Tenant ID : c0de79f3-23e2-4f18-989e-d173e1d403d6
 Object ID : 6c19805a-8757-42ae-92de-02897cd7ccf9
 Application ID :
 Display Name : Gregory Suvalian (
 Permissions to Keys : get, create, delete, list, update, import, backup, restore, recover
 Permissions to Secrets : get, list, set, delete, backup, restore, recover
 Permissions to Certificates : get, delete, list, create, import, update, deleteissuers, getissuers, listissuers, managecontacts,
 manageissuers, setissuers, recover
 Permissions to (Key Vault Managed) Storage : delete, deletesas, get, getsas, list, listsas, regeneratekey, set, setsas, update

Create a WindowsPasswordSecret and add it to the Key Vault; it will be used for logging in to both the swarm manager nodes and the VMSS machines.

PS C:\gd\Documents\dockerswarmarm> Set-AzureKeyVaultSecret -VaultName GregKeyVault -Name WindowsPasswordSecret -SecretValue (ConvertTo-SecureString A123456! -AsPlainText -Force)

Vault Name : gregkeyvault
Name : WindowsPasswordSecret
Version : fbaf487667d5495e8b15c6d564f53e38
Id :
Enabled : True
Expires :
Not Before :
Created : 4/19/2018 4:10:07 PM
Updated : 4/19/2018 4:10:07 PM

Create a new Azure Automation account, which will be used both as a pull server and as a reporting server for all nodes in the Docker swarm.

PS C:\gd\Documents\dockerswarmarm> New-AzureRMAutomationAccount -Name AzureAutomation -Location SouthCentralUS -ResourceGroupName Utility-RG

SubscriptionId : b55607ab-c703-4044-a526-72bd701b0d48
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Location : SouthCentralUS
State : Ok
Plan : Basic
CreationTime : 4/19/2018 11:14:56 AM -05:00
LastModifiedTime : 4/19/2018 11:14:56 AM -05:00
LastModifiedBy :
Tags : {}

Get the PrimaryKey from the Automation Account and create a secret in the Key Vault to provide to swarm nodes during build so they can pull their information.

PS C:\> $PrimaryKey = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation).PrimaryKey
PS C:\> Set-AzureKeyVaultSecret -VaultName GregKeyVault -Name AzureAutomationKey -SecretValue (ConvertTo-SecureString $PrimaryKey -AsPlainText -Force)

Vault Name : gregkeyvault
Name : AzureAutomationKey
Version : 2bbab4453863413880d1607f06dc3c18
Id :
Enabled : True
Expires :
Not Before :
Created : 4/19/2018 4:34:32 PM
Updated : 4/19/2018 4:34:32 PM
Content Type :
Tags :

The VMSS members use a TLS connection to the swarm manager to pull information on how to join the swarm, since the swarm manager Docker daemon is TLS secured. For this architecture to work, 5 files are needed. You can find details on how to create them in the following post.

A total of 5 files need to be added to the Automation Account. I have them under /certs for the server and under /certs/clientcerts for the VMSS worker nodes.

Create encrypted Automation variables to hold the certificates and keys used for communication between the nodes and the swarm manager.

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>> -Name ca -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\ca.pem | Out-string) -Encrypted:$true

Value :
Encrypted : True
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Name : ca
CreationTime : 4/19/2018 11:59:31 AM -05:00
LastModifiedTime : 4/19/2018 11:59:31 AM -05:00
Description :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>>  -Name privatekey -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\key.pem | Out-string) -Encrypted:$true

Value                 :
Encrypted             : True
ResourceGroupName     : Utility-RG
AutomationAccountName : AzureAutomation
Name                  : privatekey
CreationTime          : 4/19/2018 5:34:49 PM -05:00
LastModifiedTime      : 4/19/2018 5:34:49 PM -05:00
Description           :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>>  -Name servercert -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\cert.pem | Out-string) -Encrypted:$true

Value                 :
Encrypted             : True
ResourceGroupName     : Utility-RG
AutomationAccountName : AzureAutomation
Name                  : servercert
CreationTime          : 4/19/2018 5:35:10 PM -05:00
LastModifiedTime      : 4/19/2018 5:35:10 PM -05:00
Description           :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>> -Name VMSSclientkey -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\clientcerts\key.pem | Out-string) -Encrypted:$true

Value :
Encrypted : True
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Name : VMSSclientkey
CreationTime : 4/19/2018 12:02:14 PM -05:00
LastModifiedTime : 4/19/2018 12:02:14 PM -05:00
Description :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>> -Name VMSSclientcert -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\clientcerts\cert.pem | Out-string) -Encrypted:$true

Value :
Encrypted : True
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Name : VMSSclientcert
CreationTime : 4/19/2018 12:02:39 PM -05:00
LastModifiedTime : 4/19/2018 12:02:39 PM -05:00
Description :

Get the following information, required to populate the ARM template parameters file:

  • KeyVault Resource ID (Get-AzureRMKeyVault | select ResourceId)
  • Automation Account Endpoint (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation | select Endpoint)

ARM template and instructions

The ARM template expects the following pieces of information:

  • vmssName The name of the scale set; each node also gets it as a prefix to its name
  • instanceCount The number of nodes to create by default in the VMSS
  • adminPassword The password assigned to the administrator account for the worker nodes and the swarm manager; by default pulled from KeyVault (KeyVault Resource ID above)
  • registrationURL The URL used for DSC registration from the steps above (Automation Account endpoint)
  • registrationKey The key to register nodes with the DSC pull server (PrimaryKey obtained above)
  • hostVMProfile The VM size to use for the virtual machine scale set
  • LicenseType Whether to use Hybrid Benefit for the servers being deployed
  • AutomationAccountName The name of the Automation Account from the steps above
  • AutomationAccountRGName The resource group name of the Automation Account
  • WorkerNodeDSCConfigURL The URL of the DSC script containing the desired state for worker nodes
  • SwarmManagerNodeDSCConfigURL The URL of the DSC script containing the desired state for the swarm manager
  • swarmanagerdeploymenturi The URL for the nested deployment of the swarm manager

An example of the template file with the relevant information filled in is below.

The deployment script below creates the initial resource group and then deploys the entire solution into it.
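A sketch of such a deployment script (the resource group name, location, and file names here are assumptions; the real script is in the repo):

```powershell
# Create the resource group for the swarm and deploy the main ARM template into it
New-AzureRmResourceGroup -Name DockerSwarm-RG -Location SouthCentralUS
New-AzureRmResourceGroupDeployment -ResourceGroupName DockerSwarm-RG `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json
```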

The entire solution deployment takes about 1.5 hours because certain steps must run sequentially; for example, you cannot create the VMSS for worker nodes before the swarm manager is initialized.

Working with deployed swarm

You can find out if your swarm is properly deployed by examining the Automation Account and verifying that all nodes have a green checkmark identifying successful application of the DSC configuration.


Log into the manager and verify that your swarm looks healthy and shows the 3 initial nodes.

Since the Application Gateway and Azure Load Balancer are bound only to the VMSS, it is necessary to drain the primary swarm manager node to prevent it from hosting any containers.

Create a service with global distribution mode and host-based port mapping:
docker service create --name iis --publish published=80,target=80,mode=host --mode global microsoft/iis:windowsservercore-1709

Verify that the service was successfully created and distributed by checking the number of active replicas (you should see 2/2, since out of 3 nodes only 2 are active).

Check if worker nodes respond to HTTP calls

Access those containers through the provisioned Application Gateway.

Expand the Virtual Machine Scale Set to 5 members. Since new machines in the scale set use the same ARM template as the original 2, they will automatically provision all necessary software and join the swarm.

You can verify that the nodes joined the Docker swarm and that the service is running on all nodes.


Monitoring Windows Docker containers using Application Insights Status Monitor

There is a severe shortage of tools for monitoring Windows containers in general, and specifically from inside the running container OS. The steps outlined below will allow you to get basic OS health information (perfmon counters) as well as application-level monitoring from an ASP.NET application.

An overview of the steps is below:

  1. Create an Application Insights resource in Azure
  2. Install and enable the Application Insights Status Monitor inside the container
  3. Modify the configuration file to add additional monitoring counters
  • Create an Application Insights resource in Azure

New-AzureRmApplicationInsights -ResourceGroupName artisticcheese -Name appinsights -Location SouthCentralUS -Kind web

After the resource is created, find its instrumentation key to supply to the Status Monitor at runtime.

PS Azure:\> Get-AzureRmApplicationInsights -Name appinsights -ResourceGroupName artisticcheese | select InstrumentationKey


  • Install Application Insights Status Monitor inside container

Application Insights Status Monitor is distributed via the WebPI installer, which depends on a UI for installation; that obviously is not going to work for either a Server Core installation or a Windows Docker container. Below are the extracted parts necessary to make it work in a Windows container (it should work in a Server Core install as well).


  • Test application and integration

If you onboarded Azure AppInsights successfully, you should be able to see your live container under Live Metric Stream. It shows the current vital stats of your container, like memory/CPU use, as well as request information in real time.


Make a couple of hits to the application to populate the Application Insights data in Azure.

PS C:\> Invoke-WebRequest | select Content

Sustenance! Your health is always the best prescription. 

If you go to any of the captured transactions now, you should be able to see a breakdown of the entire pipeline (including the call to a backend dependency).


The API being used (quote of the day) is limited to 10 hits an hour; hit the web service 10 more times and you will start seeing call failures reflected inside AppInsights as well.



  • Modify the configuration file to add additional monitoring data

You can add additional perfmon data to be logged to the Application Insights portal, beyond the default set for ASP.NET. Open the ApplicationInsights.config file in the root of your application and check the commented part under the <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector"> tag.

You can check how this works in the repo. The image is also published to Docker Hub as artisticcheese/appinsights. Since I put my AppInsights instrumentation key there it is not going to help much, but you can clone the repo, put in your own key, and see how it works.


Step-by-step process of setting up a geographically distributed SQL HA cluster on Azure


Start with 2 VMs running SQL Server 2016 in different geo regions and 2 different VNETs.


Create a connection between the 2 VNETs. There are 2 ways to do that: VNET peering and a VNET gateway. The example below uses VNET peering.

  • Go to CanadaCentral VNET and click on peering and click “Add”

  • Create VNET peering between CanadaCentral and US West Central

  • Do the same from US West Central VNET

  • Verify connectivity between subnets by logging in to SQLCanada and testing the connection to port 1433 of SQLWestCentral
PS C:\Users\cloudadmin> Test-NetConnection -Port 1433

ComputerName :
RemoteAddress :
RemotePort : 1433
InterfaceAlias : Ethernet
SourceAddress :
TcpTestSucceeded : True

Enable SQL Always On High Availability Group

Create failover cluster

  • Install the failover cluster tools and features on Windows on both SQL servers
PS C:\Users\cloudadmin> Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True No Success {Failover Clustering, Remote Server Admini...
  • Install DNS service on both nodes (external DNS service can be used as well)
PS C:\Users\cloudadmin> Install-WindowsFeature -Name DNS -IncludeManagementTools
WARNING: The following recommended condition is not met for DNS: No static IP addresses were found on this computer. If
the IP address changes, clients might not be able to contact this server. Please configure a static IP address on this
computer before installing DNS Server.

Success Restart Needed Exit Code Feature Result
------- -------------- --------- --------------
True No Success {DNS Server, DNS Server Tools, Role Admini...
  • Open DNS management tool on SQLCanada and create a new primary zone

Name it anything you want, in my case it’s “cluster.local”

Enable "non-secure dynamic updates"

Right-click on the properties of the newly created zone, switch to the "Zone Transfers" tab, and allow transfers "to any server"

Go to the CanadaCentral virtual network, click on the DNS servers tab, choose Custom, and enter the internal IP addresses of the SQLCanada and SQLWestCentral servers

Do the same for USWestCentral VNET

Reboot both SQL servers for new DNS servers to take effect

Create the DNS zone on SQLWestCentral: right-click on "Forward Lookup Zones/New Zone" and choose "Secondary" as the type

Enter zone name

Put IP address of SQLCanada

Finish wizard and you will see zone with records in it

  • Open failover cluster manager on SQLCanada

  • Choose “Create Cluster”

  • Enter server name in wizard when prompted

  • Run through verify cluster configuration wizard
  • If everything is successful, choose a cluster name

  • Wizard shall be completed now

Configure cloud witness

  • Create storage account in any region in Azure

  • Go to “Access Keys” and copy storage account name as well as Key

  • Go back to Failover Cluster Manager, right-click on the cluster, and choose "Configure Cluster Quorum Settings"

  • Choose “Select Quorum Witness” in wizard

  • Choose “configure a cloud witness”

  • Paste in values you copied from storage account settings page earlier. Finish wizard

Configure SQL HAG

  • On both the SQLWestCentral and SQLCanada servers, add the same DNS suffix in the computer properties

  • Configure static IP addresses on both computers and in Azure portal

Example below from SQLWestCentral

  • Add SQLWestCentral server to Failover cluster. Go to Failover Cluster Manager and choose “Add node”

  • Run the validation steps and make sure there are no errors reported (ignore Active Directory errors)
  • Add node to cluster. End result shall show both nodes online and healthy

Create certificates for mirroring endpoint

Since the servers are not domain joined, you need to use certificate-based authentication between the instances.

  • On the SQLCanada server, execute the following T-SQL
  • Create an encrypted certificate in the master database (create a database master key first if one does not already exist)
USE master;
CREATE CERTIFICATE SQLcanada
WITH SUBJECT = 'HOST_A certificate for database mirroring',
EXPIRY_DATE = '11/30/2113';
  • Delete the existing endpoint if it exists
SELECT name FROM sys.database_mirroring_endpoints;
DROP ENDPOINT Hadr_endpoint;
  • Create a new endpoint based on the certificate (the listener port below is illustrative)
CREATE ENDPOINT Endpoint_Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 7024, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE SQLcanada, ENCRYPTION = REQUIRED ALGORITHM AES, ROLE = ALL);
  • Export certificate and copy to server SQLWestCentral
BACKUP CERTIFICATE SQLcanada TO FILE = 'C:\sqlcanada.cer';
  • Perform the similar steps above on SQLWestCentral (drop the existing endpoint on it first with `DROP ENDPOINT Hadr_endpoint`)
USE master;
--Create the database master key, if needed (the password is a placeholder).
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<SomeStrongPassword>';
--Make a certificate on the sqlwestcentral server instance.
CREATE CERTIFICATE sqlwestcentral
WITH SUBJECT = 'HOST_B certificate for database mirroring',
EXPIRY_DATE = '11/30/2113';
--Create a mirroring endpoint for the server instance on sqlwestcentral (port is illustrative).
CREATE ENDPOINT Endpoint_Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 7024, LISTENER_IP = ALL)
    FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE sqlwestcentral, ENCRYPTION = REQUIRED ALGORITHM AES, ROLE = ALL);
--Back up the sqlwestcentral certificate.
BACKUP CERTIFICATE sqlwestcentral TO FILE = 'C:\sqlwestcentral.cer';
--Using any secure copy method, copy C:\sqlwestcentral.cer to SQLcanada.
  • On SQLCanada create login for SQLWestCentral
USE master;
CREATE LOGIN SQLWestCentral_login
WITH PASSWORD = '1Sample_Strong_Password!@#';
  • Create user for that login
USE master;
 CREATE USER SQLWestCentralUser FOR LOGIN SQLWestCentral_login;
  • Associate the certificate you exported from SQLWestCentral with this user (the certificate name below is illustrative)
USE master;
CREATE CERTIFICATE SQLWestCentral_cert
    AUTHORIZATION sqlwestcentraluser
    FROM FILE = 'C:\sqlwestcentral.cer';
  • Grant CONNECT permission to login to remote mirroring endpoint
USE master;
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO [SQLWestCentral_login];
  • Perform similar steps on SQLWestCentral
USE master;
--On SQLWestCentral, create a login for SQLCanada.
CREATE LOGIN SQLCanada_login WITH PASSWORD = 'AStrongPassword!@#';
--Create a user, SQLCanada_user, for that login.
CREATE USER SQLCanada_user FOR LOGIN SQLCanada_login;
--Associate the SQLCanada certificate with the user, SQLCanada_user (certificate name is illustrative).
CREATE CERTIFICATE SQLCanada_cert
    AUTHORIZATION SQLCanada_user
    FROM FILE = 'C:\sqlcanada.cer';
--Grant CONNECT permission for the server instance on SQLCanada.
GRANT CONNECT ON ENDPOINT::Endpoint_Mirroring TO SQLCanada_login;
  • Run the following T-SQL on both servers to allow the Local System account to create the High Availability group
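A sketch of the grants typically required for this (per Microsoft's guidance for letting [NT AUTHORITY\SYSTEM] manage availability groups; verify against your setup):

```sql
-- Allow the Local System account to create and manage availability groups
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM];
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM];
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM];
```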
  • Go to SQL Server Configuration Manager and restart SQL Server, then right-click on the server and go to Properties/AlwaysOn High Availability on both servers

  • Configure SQL Server to run as “Local System” on both nodes

  • Open SSMS and restore your DB as usual

  • Go to Always On High Availability setting and choose “New Availability Group Wizard”

  • Select DB

  • Click “Add Replica”

  • Accept defaults on next screen

  • If everything is correct, your new HAG as well as the database SQLHA shall appear under Always On High Availability on both servers

  • You can monitor health of high availability group by right clicking on group and choosing “Show Dashboard”

  • You can use the script below to verify connectivity and health of replication. It periodically executes the same query against the SQL cluster

Import-Module sqlserver
function QueryDB {
    $mirrorConnString = "Data Source=sqlcanada;Failover Partner=sqlwestcentral;Initial Catalog=sqlha;user id=contained_user;password=A1234567890!;"
    (Invoke-Sqlcmd -Query "SELECT @@SERVERNAME" -ConnectionString $mirrorConnString -AbortOnError:$false).Column1 + " " + (Invoke-Sqlcmd -Query "SELECT COUNT(*) AS Count FROM ID" -ConnectionString $mirrorConnString).Count
}
while ($true) {
    QueryDB
    Start-Sleep 3
}

Extending Windows authentication in Docker containers to access cross-container resources


Below is a walkthrough of how to enable an IIS application pool identity to access a SQL server running in a separate container on a separate host, using Integrated Windows Authentication.

Please follow my previous blog post here to setup your environment for integrated windows authentication.

My environment consists of the following:

  • An Active Directory domain called ad.local running on a single domain controller called DC1
  • Hosts called IIS and SQL with the Windows containers feature installed
  • The rest of the setup is the same as in the blog post mentioned above

All the scripts and Docker-related files are in the GitHub repo here.

IIS setup

The IIS dockerfile is called iis.dockerfile.

It is based on the microsoft/iis image, with the addition of installing ASP.NET and copying over 2 files which will be used to access the SQL server running in a container on a separate host.

The ENTRYPOINT is modified to add the ad.local\Domain Admins group to local Administrators. I was surprised to find that the container actually considers itself a rightful member of the Active Directory domain, and you can perform the tasks you would expect of a domain-joined member server inside the container.

For example, you can use WMI straight from the host into the running container using the domain credentials you are logged on to the host with. The example below shows me using my domain account ad.local\gregory to execute WMI against the running Windows container, which reports itself as a domain-joined member server with the name containerhost; as you can see, no -Credential parameter needs to be specified.

The sql.aspx file contains a single line of code which outputs information about what account was used to authenticate against the remote SQL server and what the IP address of that server is.

The image is available on Docker Hub as artisticcheese/crosscontaineriis if you don't want to build it yourself.

SQL setup

The SQL server is based on the microsoft/mssql-server-windows-developer image, with the ENTRYPOINT modified to create the containerhost$ gMSA account and add it to the sysadmin role on the server. This is expressed in 2 lines added to the ENTRYPOINT start.ps1 file:

invoke-sqlcmd -Query 'create login [ad\containerhost$] from windows'

invoke-sqlcmd -Query 'ALTER SERVER ROLE sysadmin ADD MEMBER [ad\containerhost$]'
The image is available on Docker Hub as artisticcheese/crosscontainersql if you don't want to build it yourself.

Setting up the environment and results

Run the container on the IIS host:

docker run -d --rm -p 80:80 -h containerhost --name iis --security-opt "credentialspec=file://win.json" artisticcheese/crosscontaineriis

Run SQL Server on the SQL host:

docker run -d --rm -h containerhost --name sql -e sa_password=A123456! -e ACCEPT_EULA=Y --security-opt "credentialspec=file://win.json" artisticcheese/crosscontainersql

Accessing the sql.aspx file on the IIS host results in the following message, showing that IIS running inside the container with the default application pool identity successfully connected.

Storing arbitrary text files in Azure Key Vault as secrets (SSH keys, CER files, etc.)

Azure Key Vault provides auditable, RBAC-controlled access to Azure primitives like secrets, which by default are usually a simple string such as a password or connection string.

It is possible to store complete text files in secrets, which is useful if you want to store SSH keys and the like and still have all the benefits of Azure Key Vault.

Powershell way

To store any text file in an Azure Key Vault secret, use the Set-AzureKeyVaultSecret cmdlet and pass the contents of the file as a SecureString to the SecretValue parameter.

For example, the following PowerShell script stores the rootCA.cer file as a secret in the vault.
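A sketch of that script (file path and vault name are assumptions, matching the retrieval example below):

```powershell
# Read the whole file as one string and wrap it in a SecureString for the secret value
$content = Get-Content C:\test\rootCA.cer -Raw
$secret = ConvertTo-SecureString -String $content -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName MyKeyVault -Name rootca -SecretValue $secret
```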

To retrieve it, we can use the help of a PSCredential object to convert the SecureString to plaintext and save it as a file.

You can then save it to the file system and have a certificate identical to the one which was uploaded.

[PSCredential]::new("user",(Get-AzureKeyVaultSecret -Name rootca -VaultName MyKeyVault).SecretValue).GetNetworkCredential().Password | Out-File 'c:\test\retrieved.cer' -Encoding utf8

Azure CLI way

A somewhat easier way to perform the entire manipulation is with the Azure CLI.

To upload the secret:
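A sketch of the upload command (vault name and file path are assumptions, mirroring the download example):

```shell
# Store the file contents directly as a secret value
az keyvault secret set --name rootca --vault-name mykeyvault --file C:\test\rootCA.cer
```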

To download the secret:

PS C:\> az keyvault secret download --name rootca --vault-name mykeyvault --file C:\test\retrieved.cer

Use group claims for easy authorization in Azure Active Directory

The Azure Active Directory application manifest by default does not populate claims pertaining to user group membership, to save on network traffic and avoid possible group bloat. In a lot of cases this is not a major concern for a well-managed Azure Active Directory environment.

Enabling groupClaims along with other claims greatly simplifies authorization, which would otherwise require use of the Microsoft Graph.

The example below shows how to enable group claims in an Azure Active Directory-enabled application, using an Azure Function as the example, but the same approach works for any other type of application.

To start, create an Azure Function app


Navigate to newly created function and choose “Authentication/Authorization” link.


Enable App Service Authentication and choose Azure AD and settings below.


Add a new function by pressing the + sign and choosing the "Custom Function" link


Choose the "HTTP Trigger – C#" type, name your function, and choose an Authorization level of "Function"


Paste the following code into the function
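A sketch of such a function (not necessarily the exact code from the original post): with App Service Authentication enabled, the signed-in user's claims are available on ClaimsPrincipal.Current, so dumping them shows whether group claims are present.

```csharp
using System.Net;
using System.Linq;
using System.Security.Claims;

// HTTP-triggered C# function (v1 csx style) that lists the caller's claims.
// Group claims, once enabled in the manifest, show up with the type "groups".
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    var claims = ClaimsPrincipal.Current.Claims
        .Select(c => $"{c.Type} : {c.Value}");
    return req.CreateResponse(HttpStatusCode.OK, string.Join("\n", claims));
}
```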

Click on "Get Function URL" and paste the resulting URL into a new browser window with no cookies (Incognito mode in Chrome)


Log in with your AD account; the resulting page will contain no group information, since the default claim set does not include group memberships.

Navigate to your Azure Active Directory/Application Registration pane and choose your application


Click on "Manifest" at the top and verify that groupMembershipClaim is set to the default null


Click "Edit" and change it to one of 2 values: SecurityGroup or All

The first returns only security groups, while a setting of All returns both security groups and distribution lists.

Save the file, navigate in a new incognito window to the function URL, and authenticate again. This time you should see the group SIDs populated.


You can find the SID-to-group mapping inside your Azure Active Directory.


This setup allows you to perform role-based authorization without resorting to the complicated steps of calling the Graph API.