Automating login to AzureRM for multiple contexts

If you work with the AzureRM module and frequently have to switch contexts, it's worth automating the process, since the standard authentication through the pop-up dialog becomes tedious at some point. The steps below use your stored credentials directly without going through the pop-up prompt. Credentials are saved securely in Windows Credential Manager (https://docs.microsoft.com/en-us/windows/desktop/secauthn/credentials-management).

Below is a modified version of Willem Kasdorp's script (https://blogs.technet.microsoft.com/389thoughts/2018/02/11/logging-on-to-azure-for-your-everyday-job/), adapted for non-interactive login.

  1. Create the following script somewhere on the file system. This script will be dot-sourced from your profile script to automate login to multiple Azure accounts. In my case I called it login.ps1
#Requires -Module CredentialManager
function profile_logon_azure { 
    param (
        [Parameter(mandatory = $true)]
        [string]$parentfolder,
        [Parameter(mandatory = $true)]
        [string] $accountname
    )

    $validlogon = $false
    $contextfile = Join-Path $parentfolder "$accountname.json"
    if (-not (Test-Path $contextfile)) {
        Write-Host "No existing Azure Context file in '$parentfolder', please log on now for account '$accountname'." -ForegroundColor Yellow
    }
    else {
        $context = Import-AzureRmContext $contextfile -ErrorAction stop
        #
        # check for token expiration by executing an Azure RM command that should always succeed.
        #
        Write-Host "Imported AzureRM context for account '$accountname', now checking for validity of the token." -ForegroundColor Yellow
        $validlogon = (Get-AzureRmSubscription -SubscriptionName $context.Context.Subscription.Name -ErrorAction SilentlyContinue) -ne $null
        if ($validlogon) {
            Write-Host "Imported AzureRM context '$contextfile', current subscription is: $($context.Context.Subscription.Name)" -ForegroundColor Yellow
        }
        else {
            Write-Host "Logon for account '$accountname' has expired, please log on again." -ForegroundColor Yellow
        }
    }
    if (-not $validlogon) {

        $credential = (Get-StoredCredential -Target $accountName -AsCredentialObject)
        if ($null -eq $credential) {
            Write-Host "No stored credentials exist for $accountname, create one first." -ForegroundColor Yellow
            # return instead of exit so a failed lookup does not close the dot-sourcing PowerShell session
            return
        }
    
        $account = $null
        $password = ConvertTo-SecureString -String $credential.Password -AsPlainText -Force
        $account = Add-AzureRmAccount -Credential (New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $credential.UserName, $password)
        if ($account) {
            Save-AzureRmContext -Path $contextfile -Force
            Write-Host "logged on successfully, context saved to $contextfile." -ForegroundColor Yellow
        }
        else {
            Write-Host "log on to AzureRM for account '$accountname' failed, please retry." -ForegroundColor Yellow
        }
    }
}

2. Edit your profile script to dot-source the script above and add functions to log in to your Azure environments. Pass the name of the credential object and the folder where the context file will be saved between sessions.

code $profile.CurrentUserAllHost

My profile script below defines logins for Customer1 and Customer2:

Write-Host "Loading profile script" -ForegroundColor Yellow
. C:\gd\Documents\profile\login.ps1
function Login-Customer1 {
    profile_logon_azure -parentfolder C:\gd\Documents\profile\ -accountname "customer1"
}
function Login-Customer2 {
    profile_logon_azure -parentfolder C:\gd\Documents\profile\ -accountname "customer2"
}

  3. Add a generic credential to Credential Manager for both `customer1` and `customer2`, as shown below.
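If you prefer to script this step, the same CredentialManager module required by login.ps1 also provides a New-StoredCredential cmdlet. A minimal sketch with placeholder user names and password (adjust to your own accounts):

# Store generic credentials that Get-StoredCredential in login.ps1 will look up by target name
New-StoredCredential -Target "customer1" -UserName "admin@customer1.onmicrosoft.com" `
    -Password "PlaceholderPassword" -Type Generic -Persist LocalMachine
New-StoredCredential -Target "customer2" -UserName "admin@customer2.onmicrosoft.com" `
    -Password "PlaceholderPassword" -Type Generic -Persist LocalMachine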

  4. Restart your PowerShell session and you will be able to log in to any of your contexts by simply executing Login-Customer1 or Login-Customer2.


Docker image layers lessons learned

A common misconception is that deleting files in later image layers during docker build reduces the final image size. For example, with a Dockerfile like the one below, you might expect the total image size to be close to the base image, since the downloaded file is deleted at a later stage of the build.

FROM mcr.microsoft.com/windows/nanoserver:1809
ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]
RUN del 100mb.zip

To see what effect the deletion has on final image size, it's best to start with just the ADD statement in the Dockerfile.

FROM mcr.microsoft.com/windows/nanoserver:1809
ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]

The resulting build shows that the image size increased by about 100 MB compared to the base image:

PS C:\docker\LayerTest> docker build -t layers:add .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM mcr.microsoft.com/windows/nanoserver:1809
 ---> a5034827da99
Step 2/2 : ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]
Downloading [==================================================>]  104.9MB/104.9MB
 ---> feba369ecb4e
Successfully built feba369ecb4e
Successfully tagged layers:add
PS C:\docker\LayerTest> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
layers                                 add                 feba369ecb4e        48 seconds ago      410MB
mcr.microsoft.com/windows/nanoserver   1809                a5034827da99        2 weeks ago         305MB

As expected, the image grew by about 100 MB with the addition of the file. Now let's try to delete that file with `RUN del 100MB.zip`. The common misconception is that this removes the file inside the image and the total size therefore drops back to the original base image size. The results are below.

FROM mcr.microsoft.com/windows/nanoserver:1809
ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]
RUN del 100mb.zip

The build results below show that not only did the final image size not decrease, it actually increased by about 1 MB!

PS C:\docker\LayerTest> docker build -t layers:delete .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM mcr.microsoft.com/windows/nanoserver:1809
 ---> a5034827da99
Step 2/3 : ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]
Downloading [==================================================>]  104.9MB/104.9MB
 ---> Using cache
 ---> feba369ecb4e
Step 3/3 : RUN del 100mb.zip
 ---> Running in 3307758a70ce
Removing intermediate container 3307758a70ce
 ---> 74d6679e81cf
Successfully built 74d6679e81cf
Successfully tagged layers:delete
PS C:\docker\LayerTest> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
layers                                 delete              74d6679e81cf        4 seconds ago       411MB
layers                                 add                 feba369ecb4e        48 seconds ago      410MB
mcr.microsoft.com/windows/nanoserver   1809                a5034827da99        2 weeks ago         305MB

You can see what each layer did with the docker history <imgid> command, which shows that the last layer removed nothing and instead added about 1 MB to the total size.

PS C:\docker\LayerTest> docker history 74
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
74d6679e81cf        17 minutes ago      cmd /S /C del 100mb.zip                         1.05MB
feba369ecb4e        17 minutes ago      cmd /S /C #(nop) ADD aa41da93b6f56103ecbc3fd…   105MB
a5034827da99        3 weeks ago         Install update 1809_amd64                       61.6MB
<missing>           2 months ago        Apply image 1809_RTM_amd64                      244MB

The takeaway is that you cannot decrease the total size of an image in any subsequent layer; you can only INCREASE it. That means you need to keep each layer as small as possible by cleaning up files inside the same RUN statement that created them. The example above can be rewritten as below, where the build downloads the file and then deletes it. Since both happen inside a single layer, the total image size does not change.

FROM mcr.microsoft.com/windows/nanoserver:1809
RUN curl http://ipv4.download.thinkbroadband.com/100MB.zip --output 100MB.zip &\
    del 100MB.zip
PS C:\docker\LayerTest> docker build -t layers:curl .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM mcr.microsoft.com/windows/nanoserver:1809
 ---> a5034827da99
Step 2/2 : RUN curl http://ipv4.download.thinkbroadband.com/100MB.zip --output 100MB.zip &    del 100MB.zip
 ---> Running in 58026752da9e
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  100M  100  100M    0     0  6023k      0  0:00:17  0:00:17 --:--:-- 7782k
Removing intermediate container 58026752da9e
 ---> b9de8a8a5077
Successfully built b9de8a8a5077
Successfully tagged layers:curl
PS C:\docker\LayerTest> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
layers                                 curl                b9de8a8a5077        7 seconds ago       307MB
layers                                 delete              74d6679e81cf        15 minutes ago      411MB
layers                                 add                 feba369ecb4e        16 minutes ago      410MB
mcr.microsoft.com/windows/nanoserver   1809                a5034827da99        2 weeks ago         305MB

This is especially important if you run MSI/EXE based installations in the full servercore image, since MSI installers leave uninstallation/repair files behind that inflate the image size unnecessarily. Consider the common scenario below: you add an MSI package to your image, run the installation, then delete the MSI installer. The example adds the MariaDB installation package to the base servercore image, installs it and then deletes the installation files, which is all wrong based on the discussion above.

FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR prep
ADD ["https://downloads.mariadb.com/Bundles/TX/mariadb-tx-3.0-10.3.11-windows.zip", "mariadb.zip"]
RUN powershell -Command Expand-Archive mariadb.zip
RUN powershell -command "Start-Process -filepath 'msiexec' -ArgumentList @('/i', 'c:\prep\mariadb\mariadb-tx-3.0-10.3.11-windows\mariadb-10.3.11-winx64.msi', '/qn') -PassThru | wait-process"
RUN del /f /s /q .

The resulting image history shows that each layer significantly increases the total image size.

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
d26180fd56b8        25 seconds ago      cmd /S /C del /f /s /q .                        5.22MB
17b2bad012a1        3 minutes ago       cmd /S /C powershell -command "Start-Process…   482MB
bd9161ad3507        8 minutes ago       cmd /S /C powershell -Command Expand-Archive…   99.3MB
dd3ccba3a5ff        9 minutes ago       cmd /S /C #(nop) ADD 7d838d807796b908c08ade6…   71.8MB
94bfd0c4d09f        12 minutes ago      cmd /S /C #(nop) WORKDIR C:\prep                41kB
670f5c41d658        3 weeks ago         Install update ltsc2019_amd64                   509MB
<missing>           2 months ago        Apply image 1809_RTM_amd64                      3.47GB

A better version of the same process confines everything to a single layer, like below.

FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR prep
RUN curl "https://downloads.mariadb.com/Bundles/TX/mariadb-tx-3.0-10.3.11-windows.zip" --output mariadb.zip &  \
    powershell -command "Expand-Archive mariadb.zip" & \
    powershell -command "Start-Process -filepath 'msiexec' @('/i', 'c:\prep\mariadb\mariadb-tx-3.0-10.3.11-windows\mariadb-10.3.11-winx64.msi', '/qn') -PassThru | Wait-Process" & \
    del /f /s /q .

The resulting image history shows that the size added by the installation process decreased from 661 MB to 447 MB.

IMAGE               CREATED              CREATED BY                                      SIZE                COMMENT
e727a4dd3642        About a minute ago   cmd /S /C curl "https://downloads.mariadb.co…   447MB
94bfd0c4d09f        18 minutes ago       cmd /S /C #(nop) WORKDIR C:\prep                41kB
670f5c41d658        3 weeks ago          Install update ltsc2019_amd64                   509MB
<missing>           2 months ago         Apply image 1809_RTM_amd64                      3.47GB

Now back to the point that the MSI installer always leaves cleanup binaries behind. These binaries are located in the c:\windows\installer folder. You can verify they are there by checking the contents of that folder inside the improved image:

PS C:\docker\LayerTest> docker run --rm e7 cmd /c dir c:\windows\installer
 Volume in drive C has no label.
 Volume Serial Number is 069E-146F

 Directory of c:\windows\installer

11/16/2018  09:05 PM        54,956,032 6ee9.msi
12/02/2018  05:00 PM            20,480 SourceHash{D02C77A8-80E6-4CA2-8028-BC2AF9BE21B1}
               2 File(s)     54,976,512 bytes

These files can be deleted, since no uninstallation will ever be performed inside docker. The final Dockerfile below does exactly that, which results in an extra 55 MB of savings.

FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR prep
RUN curl "https://downloads.mariadb.com/Bundles/TX/mariadb-tx-3.0-10.3.11-windows.zip" --output mariadb.zip &  \
    powershell -command "Expand-Archive mariadb.zip" & \
    powershell -command "Start-Process -filepath 'msiexec' @('/i', 'c:\prep\mariadb\mariadb-tx-3.0-10.3.11-windows\mariadb-10.3.11-winx64.msi', '/qn') -PassThru | Wait-Process" & \
    del /f /s /q . & \
    del /f /s /q c:\windows\installer\*

By optimizing the image build we went from 661 MB of added layers down to 392 MB with no change in functionality.

Enabling Remote App on VMBus connected VM

The steps below let you use a RemoteApp connection (where the remote application appears as a stand-alone window instead of an entire remote desktop) over a VMBus connection on the local machine. This allows you to connect to VMs that are on a segregated network or, for that matter, completely disconnected. The steps below were performed on a Windows 10 client OS connecting to a Windows 10 client OS running inside Hyper-V on the same machine.

Here is the current Hyper-V state of my workstation, as shown by the Get-VM PowerShell command (the VM Id is needed below).

Steps

  • Log in to your VM and create the registry settings below, which allow powershell.exe to be launched as a RemoteApp remotely
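A minimal PowerShell sketch, assuming you simply disable the RemoteApp allow list so that any executable (including powershell.exe) can be started as a RemoteApp; the exact registry values used in the original setup may differ:

# Run inside the VM from an elevated PowerShell prompt
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\TSAppAllowList'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
# fDisabledAllowList = 1 disables the allow list, permitting any application as a RemoteApp
Set-ItemProperty -Path $key -Name fDisabledAllowList -Value 1 -Type DWord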

 

  • Create an RDP file like the one below. Replace the GUID in the first line with the VM Id from the Get-VM output above
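A minimal sketch of such an RDP file: pcb carries the VM GUID, port 2179 is the Hyper-V VMConnect (VMBus) port, and the remoteapplication* settings turn the session into a RemoteApp. Treat the exact set of settings as an approximation rather than a verbatim copy of the original file.

pcb:s:<VM GUID from Get-VM>
full address:s:localhost
server port:i:2179
negotiate security layer:i:0
remoteapplicationmode:i:1
remoteapplicationprogram:s:powershell.exe
remoteapplicationname:s:PowerShell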

  • Launch your RDP file as usual. The first password prompt is for your desktop and the second one is for the actual VM


You will see a PowerShell window launched as a RemoteApp (indicated by an overlay icon in your taskbar).


Once you have the window running you can create child processes by launching them from the PowerShell prompt with the start command, e.g. start cmd.exe or start notepad.exe, which launches those two instances on your desktop as separate applications.


You still have full access to normal RDP features such as the shared clipboard, printers etc., but you also get multi-monitor support and extra screen real estate, since you only dedicate desktop space to the applications you need and nothing else.

 

 

 

Enabling Azure Site Recovery with encrypted disks

As per the Azure-to-Azure support matrix (https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix), as of September 2018 replication of VMs that use encrypted disks is not supported in ASR scenarios.


This is because the Key Vault containing the encryption keys is not part of the ASR replication mechanism, and hence it's impossible to decrypt the volumes upon failover.

If you fail over an encrypted machine with ASR and boot it up in the failover location, you will be presented with the following screen informing you that the encryption keys are missing from the boot configuration.


The solution presented below lets you use ASR in conjunction with encrypted disks. It works around ASR's inability to handle encrypted disks by establishing an alternative Key Vault in the failover location and replicating the secrets necessary for volume decryption. A subsequent recovery plan step remaps keys from the original Key Vault to the failover Key Vault, which allows the machine to boot properly.

Steps

  • Create a Key Vault in the failover location.
  • Create an Automation account in the failover location.
  • Add the Azure KeyVault module to the Automation account.
  • Allow the Automation account to access both Key Vaults. For this:
    1. Navigate to the Automation account and note what its Azure Run As account is.
    2. Add this account to both the source Key Vault and the failover Key Vault with all permissions on Secrets.
  • Create the following runbook in the Automation account. Its purpose is to replicate all secrets between the source and failover Key Vaults.
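A minimal sketch of such a runbook, assuming the Run As connection is named AzureRunAsConnection and the two vault names are passed in as parameters:

param (
    [Parameter(Mandatory = $true)] [string] $SourceVaultName,
    [Parameter(Mandatory = $true)] [string] $TargetVaultName
)

# Log in with the automation account's Run As service principal
$connection = Get-AutomationConnection -Name 'AzureRunAsConnection'
Add-AzureRmAccount -ServicePrincipal -TenantId $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint | Out-Null

# Copy every secret from the source vault into the failover vault
foreach ($secret in (Get-AzureKeyVaultSecret -VaultName $SourceVaultName)) {
    $value = (Get-AzureKeyVaultSecret -VaultName $SourceVaultName -Name $secret.Name).SecretValue
    Set-AzureKeyVaultSecret -VaultName $TargetVaultName -Name $secret.Name -SecretValue $value | Out-Null
}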

  • Start the runbook and pass the names of the source and failover Key Vaults as parameters


  • Make sure it completes successfully by examining the job output and status


Verify that the secrets now exist in the failover Key Vault.

  • Create 2 additional runbooks. The ReEncryptVM runbook contains the post-action for the failover plan covering encrypted machines and runs after failover completes.

ReEncryptVM.ps1

The EncryptVMOSDisk runbook is the script that performs the actual process of shutting down the VM and updating the location of the decryption secret to point to the failover Key Vault.

EncryptVMOSDisk.ps1

  • Create recovery plan as usual


  • Create a separate group for the encrypted machines (in this case the SQL and app servers)


  • Choose a post action for Group 2 and set it to the ReEncryptVM runbook


  • Create a variable in the Automation account with the name of the failover Key Vault holding the keys


  • Perform a test failover to verify functionality. If the procedure worked as expected you should see the job complete successfully in the Automation account, and you can verify the disk encryption status by executing the PowerShell statement `Get-BitLockerVolume -MountPoint c: | fl * -force`


 

Migrating from docker swarm to Service Fabric for windows container orchestration

As of July 2018, Microsoft's official support statement for Windows containers designates ONLY Azure Service Fabric as the supported platform for running Windows containers. My company has been running Windows containers in docker swarm for more than a year now and has to migrate to Service Fabric as a result. Below are some observations about this process and the caveats and limitations I encountered along the way.

What is Azure Service Fabric

Azure Service Fabric, despite the name, is actually a downloadable and installable application which can run anywhere and on any OS (not only Azure). A big part of Azure itself runs on Service Fabric (SQL PaaS, for example, is one of those). Recently Service Fabric also added support for container hosting and orchestration (this is a bolted-on feature rather than something Service Fabric was designed from the ground up to support, so it's a little rough around the edges, but fully functional and supported nonetheless).

As far as container support goes, in comparison with docker swarm the following things are missing or done differently:

  1. No ingress routing mesh, so all ports have to be published directly on the host, and you therefore have to rely on an external load balancer for high availability
  2. As a result of number 1, you can have only one container of a given type per host. This is usually not a big issue since, per Microsoft, the ingress routing mesh is good only for development environments; for production, direct port mapping on the host is preferred
  3. No built-in support for secrets; configuration files have to be used instead
  4. No way to pass host variables directly to a container

Things Azure Service Fabric has which docker swarm does not:

  1. Support for Windows Active Directory credentials to authenticate to the cluster
  2. A built-in administration UI which supports Windows accounts, so you can use your corporate RBAC for access to the cluster
  3. A multitude of environment variables piped into the container by default, which provides better visibility into the underlying host and application environment (for example, the container host name is one of those)
  4. Ability to map certificates deployed to container hosts into a container via environment variables
  5. Ability to specify additional parameters (custom arguments passed to dockerd) as part of the Service Fabric deployment

Azure Service Fabric Deployment

The instructions below are for deploying SF to virtual machines. I used Azure DevTest Labs to deploy a single domain controller for the domain sf.local and 3 container hosts named containerhost00-02.

All container hosts have a data drive mapped as the F drive, where docker-root will be located. This relies on the option mentioned above (custom arguments passed to dockerd), which is not available in docker swarm.

Download and install the Azure Service Fabric SDK on your workstation. This provides the necessary modules for SF administration from your workstation.

To deploy Service Fabric, download the Service Fabric standalone package and extract it somewhere. I put it in c:\sf; the contents of the package are described here.

Azure Service Fabric can be secured either with certificates (similar to docker swarm) or with Windows Authentication in an Active Directory environment (no workgroup scenarios). The steps below target Service Fabric security based on Windows Authentication.

The template JSON for this scenario is located in your package folder and is called ClusterConfig.Windows.MultiMachine.json. Details about all settings are found here.

Settings to modify in the template are below:

  • name for the cluster name
  • nodename, which determines how each node is represented in the cluster UI
  • ipaddress, which can be either an IP address or a hostname
  • properties\diagnosticstore, which can be either an Azure storage account or a file share accessible from all SF machines
  • properties\security\windowsidentities\clusteridentity, which specifies the computer group used for inter-cluster communication
  • properties\security\windowsidentities\clientIdentities, which specifies the AD groups used for cluster administration; the IsAdmin property specifies whether the group can make changes to Service Fabric
  • nodetypes, which specifies the ports used for communication with the cluster

For my specific scenario I created a computer group called SF-HOSTS and added my SF computer accounts to it (you need to reboot the hosts for this to take effect). I created a folder for diagnostics on DC1 and shared it as diagstore. All my container hosts also have an F drive, and that's where I want to host my Service Fabric binaries as well as docker images, so I created folders in the root of the F drive called images (for docker images) and SF (to host the Service Fabric binaries). The complete template file for this scenario is below.
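An abbreviated sketch of the relevant sections (node list, diagnostics share, Windows security) is shown here; property names are approximate, so verify them against the ClusterConfig.Windows.MultiMachine.json shipped in the package:

{
  "name": "sfcluster",
  "clusterConfigurationVersion": "1.0.0",
  "apiVersion": "10-2017",
  "nodes": [
    { "nodeName": "vm0", "iPAddress": "containerhost00.sf.local", "nodeTypeRef": "NodeType0", "faultDomain": "fd:/dc1/r0", "upgradeDomain": "UD0" },
    { "nodeName": "vm1", "iPAddress": "containerhost01.sf.local", "nodeTypeRef": "NodeType0", "faultDomain": "fd:/dc1/r1", "upgradeDomain": "UD1" },
    { "nodeName": "vm2", "iPAddress": "containerhost02.sf.local", "nodeTypeRef": "NodeType0", "faultDomain": "fd:/dc1/r2", "upgradeDomain": "UD2" }
  ],
  "properties": {
    "diagnosticsStore": {
      "metadata": "Diagnostics share on the domain controller",
      "dataDeletionAgeInDays": "7",
      "storeType": "FileShare",
      "connectionstring": "\\\\dc1\\diagstore"
    },
    "security": {
      "ClusterCredentialType": "Windows",
      "ServerCredentialType": "Windows",
      "WindowsIdentities": {
        "ClusterIdentity": "sf\\SF-HOSTS$",
        "ClientIdentities": [ { "Identity": "sf\\Domain Admins", "IsAdmin": true } ]
      }
    }
  }
}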

To deploy Service Fabric, first run the template verification script and pass the path to your template as a parameter.


PS C:\Users\cloudadmin\Documents\sf\artisticcheesecontainer\sf> C:\sf\TestConfiguration.ps1 .\ClusterConfig.Windows.MultiMachine.json
Trace folder already exists. Traces will be written to existing trace folder: C:\Users\cloudadmin\Documents\sf\artisticcheesecontainer\sf\DeploymentTraces
Running Best Practices Analyzer...
Best Practices Analyzer completed successfully.

LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : True
FirewallAvailable : True
RpcCheckPassed : True
NoConflictingInstallations : True
FabricInstallable : True
DataDrivesAvailable : True
NoDomainController : True
Passed : True

The actual installation can now be started with C:\sf\CreateServiceFabricCluster.ps1 .\ClusterConfig.Windows.MultiMachine.json

If no errors are thrown, you should be able to connect to the Service Fabric UI by navigating to any cluster member on the default port 19080.


To deploy your first container from the UI, go to Applications, click the "Actions" button and choose the "compose" application option.

You can also deploy from the command line using PowerShell and a compose file.

Connect to the Service Fabric endpoint with:


PS C:\sf> Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000 -WindowsCredential
True

 

ConnectionEndpoint : {localhost:19000}
FabricClientSettings : {
ClientFriendlyName : PowerShell-69620b90-99d6-4d85-a4c6-ef8c7a604c88
PartitionLocationCacheLimit : 100000
PartitionLocationCacheBucketCount : 1024
ServiceChangePollInterval : 00:02:00
ConnectionInitializationTimeout : 00:00:02
KeepAliveInterval : 00:00:20
ConnectionIdleTimeout : 00:00:00
HealthOperationTimeout : 00:02:00
HealthReportSendInterval : 00:00:00
HealthReportRetrySendInterval : 00:00:30
NotificationGatewayConnectionTimeout : 00:00:30
NotificationCacheUpdateTimeout : 00:00:30
AuthTokenBufferSize : 4096
}
GatewayInformation : {
NodeAddress : containerhost00.sf.local:19000
NodeId : 85772935593a0315f92e3293832c5fe9
NodeInstanceId : 131758801394347228
NodeName : vm0
}
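Once connected, the compose deployment itself can be created from PowerShell. A minimal sketch, assuming the docker-stack.yml compose file and the deployment name whoami used further below:

New-ServiceFabricComposeDeployment -DeploymentName whoami `
    -Compose C:\users\cloudadmin\Documents\sf\artisticcheesecontainer\sf\docker-stack.yml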

You can find the status of the deployment that was done via the UI:


PS C:\sf> Get-ServiceFabricComposeDeploymentStatus

DeploymentName : whoami
ApplicationName : fabric:/whoami
ComposeDeploymentStatus : Ready
StatusDetails :

You can start an application upgrade via the following PowerShell:


PS C:\sf> Start-ServiceFabricComposeDeploymentUpgrade -DeploymentName whoami -Compose C:\users\cloudadmin\Documents\sf\artisticcheesecontainer\sf\docker-stack.yml -Monitored -FailureAction Rollback

DeploymentName : whoami
ComposeFilePaths : {C:\users\cloudadmin\Documents\sf\artisticcheesecontainer\sf\docker-stack.yml}
UpgradeKind : Rolling
ForceRestart : False
UpgradeMode : Monitored
UpgradeReplicaSetCheckTimeout : 49710.06:28:15
FailureAction : Rollback
HealthCheckRetryTimeout : 00:10:00
HealthCheckWaitDuration : 00:00:00
UpgradeDomainTimeout : 10675199.02:48:05.4775807
UpgradeTimeout : 10675199.02:48:05.4775807
ConsiderWarningAsError :
MaxPercentUnhealthyPartitionsPerService :
MaxPercentUnhealthyReplicasPerPartition :
MaxPercentUnhealthyServices :
MaxPercentUnhealthyDeployedApplications :
ServiceTypeHealthPolicyMap :

You can check the status of the upgrade by executing the PowerShell below:


PS C:\sf> Get-ServiceFabricComposeDeploymentUpgrade -DeploymentName whoami

 

DeploymentName : whoami
ApplicationName : fabric:/whoami
TargetApplicationTypeVersion : v8
StartTimestampUtc : 7/12/2018 4:55:11 PM
UpgradeState : RollingForwardCompleted
UpgradeStatusDetails : Deployment upgraded to version: v8.
UpgradeDuration : 00:03:00
CurrentUpgradeDomainDuration : 00:00:00
NextUpgradeDomain :
UpgradeDomainsStatus : { "UD0" = "Completed";
"UD1" = "Completed";
"UD2" = "Completed" }
UpgradeKind : Rolling
UpgradeMode : Monitored
FailureAction : Rollback
ForceRestart : False
UpgradeReplicaSetCheckTimeout : 49710.06:28:15
HealthCheckWaitDuration : 00:00:00
HealthCheckStableDuration : 00:02:00
HealthCheckRetryTimeout : 00:10:00
UpgradeDomainTimeout : 10675199.02:48:05.4775807
UpgradeTimeout : 10675199.02:48:05.4775807
ConsiderWarningAsError :
MaxPercentUnhealthyPartitionsPerService :
MaxPercentUnhealthyReplicasPerPartition :
MaxPercentUnhealthyServices :
MaxPercentUnhealthyDeployedApplications :
ServiceTypeHealthPolicyMap :

It can also be seen in the UI.


Using VSTS for complete CI/CD pipeline for multi-arch docker images

I needed multi-arch docker images that return all environment variables to the screen as well as in response headers (so they can be used in Fiddler or similar tools to extract data). The idea and implementation are inspired by Stefan Scherer's whoami image available at https://github.com/StefanScherer/whoami

My base image is https://hub.docker.com/r/microsoft/dotnet/ which is multi-arch itself and hence allows me to have a single DOCKERFILE for both UNIX and Windows builds. The entire code and additional artifacts are available in the following GitHub repo, in the whoami folder.

The DOCKERFILE is below; the same file is used for both Windows and UNIX builds.
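The actual Dockerfile lives in the repo linked above; a minimal multi-stage sketch of the pattern it relies on is shown here (the output name whoami.dll is a placeholder). Because microsoft/dotnet is a multi-arch image, the same FROM lines resolve to nanoserver-based layers on Windows and Linux-based layers on UNIX.

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o /app/out

FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "whoami.dll"]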

Building this DOCKERFILE on Windows will pull the current nanoserver-based image, and on UNIX the current UNIX-based image, with no code changes necessary to the DOCKERFILE itself or the build process.

You can see how the image works by instantiating Windows and UNIX containers in Azure Container Instances (cloud shell works fine) and examining the headers.

In addition to the response HTTP headers, the image outputs the information as HTML as well, which might be useful for troubleshooting/demo purposes.


CI/CD pipeline in VSTS

The build and release pipelines are exported as JSON files and available in the GitHub repo:

https://github.com/artisticcheese/artisticcheesecontainer/blob/master/whoami/WhoamI-ASP.NET%20Core-CI.json is the build definition and https://github.com/artisticcheese/artisticcheesecontainer/blob/master/whoami/Whoami-Release.json is the release definition.

The build consists of the following steps:

  1. Download source code and artifacts from GitHub
  2. Run steps on hosted UNIX agent (provided for free by VSTS)
    • Build from DOCKERFILE
    • Push to Docker Hub
  3. Run steps on hosted Windows agent (provided for free by VSTS)
    • Build from DOCKERFILE
    • Push to Docker Hub
    • Rebase images for 1709 and 1803 images (work in progress currently)
  4. Run RegEx task to replace static data in manifest file which will be used to create multi-arch image with current BuildVersion
  5. Publish manifest as artifact for release pipeline
This is how it looks in the UI:

You can also enable true CI (instead of manually invoking the build) under Options by checking "Enable Continuous Integration".


The result of a successful build is a YAML manifest file identifying the image tags for the UNIX/Windows images in docker hub.


An example of the manifest file is below.
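A sketch of the manifest-tool spec format such a file follows (image names and tags below are placeholders; the real ones carry the current BuildVersion):

image: artisticcheese/whoami:latest
manifests:
  - image: artisticcheese/whoami:linux-1.0.0
    platform:
      architecture: amd64
      os: linux
  - image: artisticcheese/whoami:windows-1.0.0
    platform:
      architecture: amd64
      os: windows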

The release (CD) pipeline consists of the following steps:

  1. Install the manifest tool from chocolatey on the agent (courtesy of Stefan again)
  2. Download the build artifact (manifest file) which contains information about the current image tags
  3. Run the tool from step 1 to update docker hub with the latest multi-arch image

CD is automatically triggered by successful build


 

ARM template for deploying windows based docker swarm in Azure

Below is an ARM template, along with instructions, for deploying a fully managed docker swarm into Azure based on Windows hosts for both managers and workers.

The solution along with all required files is available in the following GitHub repo: artisticcheese/dockerswarmarm

Clone that repo and follow the steps below.

End Results

The result of following through the steps below will be:

  • Virtual Machine Scale set with worker nodes which is joined to swarm
  • VM hosting docker swarm manager role
  • Application Gateway which will point to worker nodes for layer 7 load balancing as well as HTTP/HTTPS termination, web application firewall etc
  • Azure load balancer with mapped entries for RDP access to worker nodes
  • Azure Key Vault which will hold secrets
  • Azure Automation Account which will hold DSC configurations for both worker nodes and for a swarm manager

Operation of ARM template and resources

  1. The ARM template consists of a main template and a nested template. The main template deploys:
    • Virtual Machine Scale Set (VMSS) with worker nodes
    • Application Gateway with backend pointing to VMSS for HTTP/HTTPS based termination for L7 load balancing
    • Azure load balancer which points to VMSS for RDP access to worker nodes and alternative way to load balance on L4
    • Network security group allowing RDP connectivity to both swarm manager and VMSS
    • DSC configuration to be tied to VMSS with swarm manager IP which is output of nested template below
  2. The nested template contains the deployment of artifacts for the swarm manager:
    • Deploys the swarm manager VM
    • DSC configuration for the swarm manager which deploys:
      • xNetworking module to the Automation Account (firewall operation)
      • cChoco (third party software installation)
      • cDSCdockerswarm module (automation of docker swarm operations)
    • Creates the configuration for the node in the Automation Account and compiles it
    • The output of the nested template is the internal IP of the swarm manager VM, which is used in the main template to compile the DSC configuration for the VMSS

Once the ARM template is completely deployed, the following steps are performed on both the swarm manager VM and the VMSS machines:

  1. The swarm manager VM boots up, registers with the Automation Account using the provided automation account key and pulls its DSC configuration. You can find the DSC script here. The DSC configuration:
    • Pulls the TLS server CA, cert and key as well as the TLS client cert and key from the automation account and puts them into the specified local file system location so the local docker daemon can use them for a secure local TLS endpoint
    • Configures environment variable DOCKER_CERT_PATH to point to client TLS certs above
    • Disables Windows firewall
    • Uses cDockerSwarm resource to initialize swarm
    • Installs following packages via cChocoPackageInstallerSet resource
      • Classic-Shell
      • 7zip
      • visualstudiocode
      • sysinternals
  2. The VMSS nodes boot up, register with the Automation Account using the provided automation account key and pull their respective DSC configuration. You can find the DSC script here. The DSC configuration:
    • Copies TLS client certificates from Automation Account and saves them to local file system
    • Configures environmental variable DOCKER_CERT_PATH to point to folder where TLS client certs were saved.
    • Disables Windows Firewall
    • Uses the cDockerSwarm resource to connect to the existing swarm and promote nodes to managers if the number of managers is below the specified threshold

Prerequisites

Before the ARM template can be executed, some prerequisites need to be created manually. The reason they are created manually is that these are resources you want to treat as pets rather than cattle; neither the Azure Automation account nor the KeyVault is worth automating via the ARM template.

Create a resource group to hold all the closely guarded artifacts for the docker swarm. This resource group will hold the Azure KeyVault as well as the Automation Account.

PS C:\gd\Documents\dockerswarmarm> New-AzureRmResourceGroup -Location SouthCentralUS -Name Utility-RG

ResourceGroupName : Utility-RG
Location : southcentralus
ProvisioningState : Succeeded
Tags :
ResourceId : /subscriptions/b55607ab-c703-4044-a526-72bd701b0d48/resourceGroups/UtilityRG

Create a KeyVault in the group to store all the secrets. Make sure you use a unique name for the vault.


PS C:\gd\Documents\dockerswarmarm> New-AzureRmKeyVault -VaultName GregKeyVault -ResourceGroupName Utility-RG -Location SouthCentralUS -EnabledForTemplateDeployment

Vault Name : GregKeyVault
Resource Group Name : Utility-RG
Location : SouthCentralUS
Resource ID : /subscriptions/b55607ab-c703-4044-a526-72bd701b0d48/resourceGroups/Utility-RG/providers/Microsoft.KeyVault/vaults/GregKeyVault
Vault URI : https://GregKeyVault.vault.azure.net
Tenant ID : c0de79f3-23e2-4f18-989e-d173e1d403d6
SKU : Standard
Enabled For Deployment? : False
Enabled For Template Deployment? : True
Enabled For Disk Encryption? : False
Soft Delete Enabled? :
Access Policies :
 Tenant ID : c0de79f3-23e2-4f18-989e-d173e1d403d6
 Object ID : 6c19805a-8757-42ae-92de-02897cd7ccf9
 Application ID :
 Display Name : Gregory Suvalian (artisticcheese_gmail.com#EXT#@artisticcheesegmail.onmicrosoft.com)
 Permissions to Keys : get, create, delete, list, update, import, backup, restore, recover
 Permissions to Secrets : get, list, set, delete, backup, restore, recover
 Permissions to Certificates : get, delete, list, create, import, update, deleteissuers, getissuers, listissuers, managecontacts,
 manageissuers, setissuers, recover
 Permissions to (Key Vault Managed) Storage : delete, deletesas, get, getsas, list, listsas, regeneratekey, set, setsas, update

Create a WindowsPasswordSecret and add it to the KeyVault; it will be used to log in to both the swarm manager node and the VMSS machines.


PS C:\gd\Documents\dockerswarmarm> Set-AzureKeyVaultSecret -VaultName GregKeyVault -Name WindowsPasswordSecret -SecretValue (ConvertTo-SecureString A123456! -AsPlainText -Force)

Vault Name : gregkeyvault
Name : WindowsPasswordSecret
Version : fbaf487667d5495e8b15c6d564f53e38
Id : https://gregkeyvault.vault.azure.net:443/secrets/WindowsPasswordSecret/fbaf487667d5495e8b15c6d564f53e38
Enabled : True
Expires :
Not Before :
Created : 4/19/2018 4:10:07 PM
Updated : 4/19/2018 4:10:07 PM

Create a new Azure Automation account which will be used both as a pull server and as a reporting server for all nodes in the docker swarm.


PS C:\gd\Documents\dockerswarmarm> New-AzureRMAutomationAccount -Name AzureAutomation -Location SouthCentralUS -ResourceGroupName Utility-RG

SubscriptionId : b55607ab-c703-4044-a526-72bd701b0d48
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Location : SouthCentralUS
State : Ok
Plan : Basic
CreationTime : 4/19/2018 11:14:56 AM -05:00
LastModifiedTime : 4/19/2018 11:14:56 AM -05:00
LastModifiedBy :
Tags : {}

Get the PrimaryKey from the Automation Account and create a secret in the KeyVault, which is provided to the swarm nodes during build so they can pull their configuration.


PS C:\> $PrimaryKey = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation).PrimaryKey
PS C:\> Set-AzureKeyVaultSecret -VaultName GregKeyVault -Name AzureAutomationKey -SecretValue (ConvertTo-SecureString $PrimaryKey -AsPlainText -Force)

Vault Name : gregkeyvault
Name : AzureAutomationKey
Version : 2bbab4453863413880d1607f06dc3c18
Id : https://gregkeyvault.vault.azure.net:443/secrets/AzureAutomationKey/2bbab4453863413880d1607f06dc3c18
Enabled : True
Expires :
Not Before :
Created : 4/19/2018 4:34:32 PM
Updated : 4/19/2018 4:34:32 PM
Content Type :
Tags :

VMSS members use a TLS connection to the swarm manager to pull information on how to join the swarm, and the swarm manager docker daemon is TLS secured. For this architecture to work we need 5 files. You can find details on how to create them in the following post: https://artisticcheese.wordpress.com/2017/06/10/using-pure-powershell-to-generate-tls-certificates-for-docker-daemon-running-on-windows/

A total of 5 files need to be added to the Automation Account. I have them under the /certs folder for the server and under /certs/clientcerts for the VMSS worker nodes.

Create Automation variables which hold the certificates and keys used for communication between the nodes and the swarm manager; the nodes pull these secrets during configuration.


PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>> -Name ca -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\ca.pem | Out-string) -Encrypted:$true

Value :
Encrypted : True
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Name : ca
CreationTime : 4/19/2018 11:59:31 AM -05:00
LastModifiedTime : 4/19/2018 11:59:31 AM -05:00
Description :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>>  -Name privatekey -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\key.pem | Out-string) -Encrypted:$true

Value                 :
Encrypted             : True
ResourceGroupName     : Utility-RG
AutomationAccountName : AzureAutomation
Name                  : privatekey
CreationTime          : 4/19/2018 5:34:49 PM -05:00
LastModifiedTime      : 4/19/2018 5:34:49 PM -05:00
Description           :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>>  -Name servercert -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\cert.pem | Out-string) -Encrypted:$true

Value                 :
Encrypted             : True
ResourceGroupName     : Utility-RG
AutomationAccountName : AzureAutomation
Name                  : servercert
CreationTime          : 4/19/2018 5:35:10 PM -05:00
LastModifiedTime      : 4/19/2018 5:35:10 PM -05:00
Description           :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>> -Name VMSSclientkey -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\clientcerts\key.pem | Out-string) -Encrypted:$true

Value :
Encrypted : True
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Name : VMSSclientkey
CreationTime : 4/19/2018 12:02:14 PM -05:00
LastModifiedTime : 4/19/2018 12:02:14 PM -05:00
Description :

PS C:\> New-AzureRmAutomationVariable -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation `
>> -Name VMSSclientcert -Value (get-Content C:\gd\Documents\dockerswarmarm\certs\clientcerts\cert.pem | Out-string) -Encrypted:$true

Value :
Encrypted : True
ResourceGroupName : Utility-RG
AutomationAccountName : AzureAutomation
Name : VMSSclientcert
CreationTime : 4/19/2018 12:02:39 PM -05:00
LastModifiedTime : 4/19/2018 12:02:39 PM -05:00
Description :

Get the following information, required to populate the ARM template parameters file:

  • KeyVault Resource ID (Get-AzureRMKeyVault | select ResourceId)
  • Automation Account Endpoint (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName Utility-RG -AutomationAccountName AzureAutomation | select Endpoint)

ARM template and instructions

The ARM template expects the following pieces of information:

  • vmssName Name of the scale set; each node will have this as a prefix to its name
  • instanceCount Number of nodes to be created by default in the VMSS
  • adminPassword Password assigned to the administrator account for worker nodes and the swarm manager; by default pulled from KeyVault (KeyVault Resource ID above)
  • registrationURL URL to be used for DSC registration from the steps above (Automation Account Endpoint)
  • registrationKey Key to register nodes with the DSC pull server (PrimaryKey obtained above)
  • hostVMProfile Type of server to be used for the virtual machine scale set
  • LicenseType Whether to use Hybrid Benefit for the servers being deployed
  • AutomationAccountName Name of the automation account obtained from the steps above
  • AutomationAccountRGName Resource group name of the automation account
  • WorkerNodeDSCConfigURL URL of the DSC script which contains the desired state for worker nodes
  • SwarmManagerNodeDSCConfigURL URL of the DSC script which contains the desired state for the swarm manager
  • swarmanagerdeploymenturi URL for the nested deployment of the swarm manager

An example of the parameters file is below with the relevant information filled in.
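An abbreviated sketch of such a parameters file, with placeholder values (only a subset of the parameters listed above is shown) and the adminPassword wired to the KeyVault secret created earlier:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vmssName": { "value": "worker" },
    "instanceCount": { "value": 2 },
    "adminPassword": {
      "reference": {
        "keyVault": { "id": "/subscriptions/<subscription id>/resourceGroups/Utility-RG/providers/Microsoft.KeyVault/vaults/GregKeyVault" },
        "secretName": "WindowsPasswordSecret"
      }
    },
    "registrationURL": { "value": "<Automation Account endpoint>" },
    "registrationKey": { "value": "<Automation Account PrimaryKey>" },
    "AutomationAccountName": { "value": "AzureAutomation" },
    "AutomationAccountRGName": { "value": "Utility-RG" }
  }
}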

The deployment script is below; it creates the initial resource group and then deploys the entire solution into it.
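A minimal sketch, assuming the template and parameters files are named azuredeploy.json and azuredeploy.parameters.json and the swarm resources go into their own resource group:

# Create the resource group for the swarm and deploy the main template into it
New-AzureRmResourceGroup -Name Swarm-RG -Location SouthCentralUS
New-AzureRmResourceGroupDeployment -ResourceGroupName Swarm-RG `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json -Verbose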

The entire solution takes about 1.5 hours to deploy, due to the requirement that certain steps run sequentially; for example, you cannot create the VMSS for worker nodes before the swarm manager is initialized.

Working with deployed swarm

You can find out whether your entire swarm deployed properly by examining the Automation Account and verifying that all nodes have a green checkmark next to them, indicating successful application of the DSC configuration.


Log in to the manager and verify that your swarm looks healthy and shows the 3 initial nodes.

Since the application gateway and the Azure load balancer are bound only to the VMSS, it's necessary to drain the primary swarm manager node to prevent it from hosting any containers, as shown below.
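A minimal sketch of both checks from the manager node (the node name swarmmanager is a placeholder for whatever your manager VM is called):

# List swarm members and their status
docker node ls
# Drain the manager so no service tasks are scheduled on it
docker node update --availability drain swarmmanager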

Create a service with global distribution mode and host-based port mapping:

docker service create --name iis --publish published=80,target=80,mode=host --mode global microsoft/iis:windowsservercore-1709

Verify that the service was successfully created and distributed by checking the number of active replicas (you should see 2/2, since out of the 3 nodes only 2 are active).
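For example (iis being the service name created above):

docker service ls
docker service ps iis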

Check if worker nodes respond to HTTP calls

Access those containers through the provisioned Application Gateway.

Expand the Virtual Machine Scale Set to 5 members. Since new machines in the scale set use the same ARM template as the original 2, they automatically provision all necessary software and join the swarm, as shown below.
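A minimal PowerShell sketch of the scale-up, assuming the scale set is called worker and lives in the Swarm-RG resource group used above:

# Fetch the scale set, bump its capacity and push the change back
$vmss = Get-AzureRmVmss -ResourceGroupName Swarm-RG -VMScaleSetName worker
$vmss.Sku.Capacity = 5
Update-AzureRmVmss -ResourceGroupName Swarm-RG -VMScaleSetName worker -VirtualMachineScaleSet $vmss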

You can verify that the nodes joined the docker swarm and that the service is running on all nodes.