Publishing build artifacts from Azure VSTS (DevOps) to OneDrive

The steps below will allow you to publish the contents of your Azure VSTS (DevOps) repo to SharePoint Online and, by extension, to OneDrive as well.

There is no built-in task available in either the build or release pipeline to push files to OneDrive, so the solution below relies on Azure Logic Apps to perform that function.

The overall flow is:

  1. Azure DevOps completes a build which packages the code into a ZIP file as a build artifact
  2. The Azure DevOps project calls an Azure Logic App webhook
  3. The Azure Logic App retrieves the build artifact and extracts it to a SharePoint Online documents folder

Steps in detail:

Create a build pipeline in Azure DevOps

The YAML file as well as its UI representation is below. It packages the files in the scripts folder into a ZIP file placed in a subfolder called Powershell Scripts.



Create an Azure Logic App

Define a trigger of the HTTP request type. Use the following Request Body Schema, which you can download from https://gist.github.com/artisticcheese/2b5410ee65bc7b76273fdef47edd0c4b

Save the trigger, which will give you the HTTP POST URL you will need later in the Azure DevOps project.

Since you might have more than one build in your Azure DevOps project, you need conditional logic in your Logic App to only publish the results of a specific build definition ID.

The second step in the Logic App is a “Condition” based on the definition ID of your build. In my case it’s 5.

Condition

The next steps initialize two variables, holding the build ID number as well as the authorization information for Azure DevOps.

To create the authorization token you need to create a PAT in Azure DevOps and encode :{token} into Base64. For example, if my PAT is a123, go to https://www.base64encode.org/ and encode the value :a123 into OmExMjM=.
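
If you prefer not to paste the PAT into a web site, the same value can be produced locally with PowerShell (using the hypothetical PAT from above):

$pat = "a123"
[Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
# Returns OmExMjM=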

Add “Send an HTTP Request to Azure DevOps” as the next step and adjust the parameters for your values. The output of this step provides the URL to download the artifact.

Add two steps to parse the JSON and extract the value of downloadURI. The schema can be downloaded from https://gist.github.com/artisticcheese/0cf1d9c4b35e9fe3d01ea408555c3d15
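
For reference, what the HTTP step and the JSON parsing boil down to can be reproduced from PowerShell roughly as below; the organization, project and build ID are placeholders, and the Authorization value is the Base64 string created earlier:

$headers = @{ Authorization = "Basic OmExMjM=" }
$buildId = 1234   # placeholder; in the Logic App this comes from the webhook payload
$uri = "https://dev.azure.com/{organization}/{project}/_apis/build/builds/$buildId/artifacts?api-version=5.0"
$artifacts = Invoke-RestMethod -Uri $uri -Headers $headers
# Each artifact's download link lives under its resource object
# (the Parse JSON schema in the Logic App refers to it as downloadURI)
$downloadUrl = $artifacts.value[0].resource.downloadUrl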

The last three steps download the artifact from that URL and extract it to the SharePoint site.
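
Locally, that download-and-extract portion would look roughly like the snippet below; the Logic App instead uses its built-in HTTP action and the SharePoint connector's archive-extraction action (paths are placeholders):

$headers = @{ Authorization = "Basic OmExMjM=" }
Invoke-WebRequest -Uri $downloadUrl -Headers $headers -OutFile .\artifact.zip
Expand-Archive .\artifact.zip -DestinationPath .\extracted -Force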

Add a webhook to Azure DevOps

Go to project settings and add a service hook

Enter the webhook URI you got in the previous steps for the Logic App.

Automating login to AzureRM for multiple contexts

If you work with the AzureRM module and have to frequently switch contexts, it’s worth automating the process, since standard authentication through the pop-up dialog becomes tedious at some point. The steps below use your credentials directly without the need to go through the pop-up prompt. Credentials are saved securely in Credential Manager (https://docs.microsoft.com/en-us/windows/desktop/secauthn/credentials-management)

Below is a modified script by Willem Kasdorp (https://blogs.technet.microsoft.com/389thoughts/2018/02/11/logging-on-to-azure-for-your-everyday-job/) adapted for non-interactive login.

  1. Create the following script somewhere on the file system. This script will be dot-sourced from your profile script to automate login to multiple Azure accounts. In my case I called it login.ps1
#Requires -Module CredentialManager
function profile_logon_azure { 
    param (
        [Parameter(mandatory = $true)]
        [string]$parentfolder,
        [Parameter(mandatory = $true)]
        [string] $accountname
    )

    $validlogon = $false
    $contextfile = Join-Path $parentfolder "$accountname.json"
    if (-not (Test-Path $contextfile)) {
        Write-Host "No existing Azure Context file in '$parentfolder', please log on now for account '$accountname'." -ForegroundColor Yellow
    }
    else {
        $context = Import-AzureRmContext $contextfile -ErrorAction stop
        #
        # check for token expiration by executing an Azure RM command that should always succeed.
        #
        Write-Host "Imported AzureRM context for account '$accountname', now checking for validity of the token." -ForegroundColor Yellow
        $validlogon = (Get-AzureRmSubscription -SubscriptionName $context.Context.Subscription.Name -ErrorAction SilentlyContinue) -ne $null
        if ($validlogon) {
            Write-Host "Imported AzureRM context '$contextfile', current subscription is: $($context.Context.Subscription.Name)" -ForegroundColor Yellow
        }
        else {
            Write-Host "Logon for account '$accountname' has expired, please log on again." -ForegroundColor Yellow
        }
    }
    if (-not $validlogon) {

        $credential = (Get-StoredCredential -Target $accountName -AsCredentialObject)
        if ($null -eq $credential) {
            Write-Host "No stored credentials exists for $accountname, create one" -ForegroundColor Yellow
            exit     
        }
    
        $account = $null
        $password = ConvertTo-SecureString -String $credential.Password -AsPlainText -Force
        $account = Add-AzureRmAccount -Credential (New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $credential.UserName, $password)
        if ($account) {
            Save-AzureRmContext -Path $contextfile -Force
            Write-Host "logged on successfully, context saved to $contextfile." -ForegroundColor Yellow
        }
        else {
            Write-Host "log on to AzureRM for account '$accountname' failed, please retry." -ForegroundColor Yellow
        }
    }
}

2. Edit your profile script to dot-source the script above and add functions to log in to your Azure environments. Pass the name of the credential object and the folder where the context file should be saved for reuse between sessions.

code $profile.CurrentUserAllHosts

My profile script below logs in to Customer1 and Customer2.

Write-Host "Loading profile script" -ForegroundColor Yellow
. C:\gd\Documents\profile\login.ps1
function Login-Customer1 {
    profile_logon_azure -parentfolder C:\gd\Documents\profile\ -accountname "customer1"
}
function Login-Customer2 {
    profile_logon_azure -parentfolder C:\gd\Documents\profile\ -accountname "customer2"
}

3. Add a generic credential to your Credential Manager for both `customer1` and `customer2`
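
Those stored credentials can also be created from PowerShell using the same CredentialManager module the script relies on (user names and passwords below are placeholders):

Install-Module CredentialManager -Scope CurrentUser
New-StoredCredential -Target "customer1" -UserName "admin@customer1.onmicrosoft.com" -Password "P@ssw0rd1" -Persist LocalMachine | Out-Null
New-StoredCredential -Target "customer2" -UserName "admin@customer2.onmicrosoft.com" -Password "P@ssw0rd2" -Persist LocalMachine | Out-Null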

4. Restart your PowerShell session and you should be able to log in to any of your contexts by simply executing Login-Customer1 or Login-Customer2.

Docker image layers lessons learned

A common misconception is that deleting files in later image layers during a docker build reduces the final image size. For example, with a Dockerfile like the one below, you would think the total image size will be similar to the base image since we deleted the downloaded file at a later stage of the image build.

FROM mcr.microsoft.com/windows/nanoserver:1809
ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]
RUN del 100mb.zip

To see the effect deletion has on the final image size, it’s good to start with just the ADD statement in the Dockerfile.

FROM mcr.microsoft.com/windows/nanoserver:1809
ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]

The resulting image is in fact about 100 MB larger than the base image:

PS C:\docker\LayerTest> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
layers                                 add                 feba369ecb4e        48 seconds ago      410MB
mcr.microsoft.com/windows/nanoserver   1809                a5034827da99        2 weeks ago         305MB

As expected, the image increased by about 100 MB with the addition of the file. Now let’s try to delete that file with `RUN del 100MB.zip`. The misconception is that this removes the file inside the image and hence the total size decreases back to the original base image size. The results are below.

FROM mcr.microsoft.com/windows/nanoserver:1809
ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]
RUN del 100mb.zip

The build results below show that not only did the final image size not decrease, it in fact increased by 1 MB!

PS C:\docker\LayerTest> docker build -t layers:delete .
Sending build context to Docker daemon  2.048kB
Step 1/3 : FROM mcr.microsoft.com/windows/nanoserver:1809
 ---> a5034827da99
Step 2/3 : ADD ["http://ipv4.download.thinkbroadband.com/100MB.zip", "."]
Downloading [==================================================>]  104.9MB/104.9MB
 ---> Using cache
 ---> feba369ecb4e
Step 3/3 : RUN del 100mb.zip
 ---> Running in 3307758a70ce
Removing intermediate container 3307758a70ce
 ---> 74d6679e81cf
Successfully built 74d6679e81cf
Successfully tagged layers:delete
PS C:\docker\LayerTest> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
layers                                 delete              74d6679e81cf        4 seconds ago       411MB
layers                                 add                 feba369ecb4e        48 seconds ago      410MB
mcr.microsoft.com/windows/nanoserver   1809                a5034827da99        2 weeks ago         305MB

You can see what happened to the image layers with the docker history <imgid> command, which shows that the last layer did not change anything but still added 1 MB to the total size.

PS C:\docker\LayerTest> docker history 74
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
74d6679e81cf        17 minutes ago      cmd /S /C del 100mb.zip                         1.05MB
feba369ecb4e        17 minutes ago      cmd /S /C #(nop) ADD aa41da93b6f56103ecbc3fd…   105MB
a5034827da99        3 weeks ago         Install update 1809_amd64                       61.6MB
<missing>           2 months ago        Apply image 1809_RTM_amd64                      244MB

The takeaway is that you cannot decrease the total size of the image in any following layer, you can only INCREASE it. That means you need to keep each created layer as small as possible by cleaning up files inside the same RUN statement. The example above can be rewritten as below, where the build process downloads the file and then deletes it. Since it’s done inside a single layer, the total size of the image does not change.

FROM mcr.microsoft.com/windows/nanoserver:1809
RUN curl http://ipv4.download.thinkbroadband.com/100MB.zip --output 100MB.zip &\
    del 100MB.zip
PS C:\docker\LayerTest> docker build -t layers:curl .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM mcr.microsoft.com/windows/nanoserver:1809
 ---> a5034827da99
Step 2/2 : RUN curl http://ipv4.download.thinkbroadband.com/100MB.zip --output 100MB.zip &    del 100MB.zip
 ---> Running in 58026752da9e
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  100M  100  100M    0     0  6023k      0  0:00:17  0:00:17 --:--:-- 7782k
Removing intermediate container 58026752da9e
 ---> b9de8a8a5077
Successfully built b9de8a8a5077
Successfully tagged layers:curl
PS C:\docker\LayerTest> docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
layers                                 curl                b9de8a8a5077        7 seconds ago       307MB
layers                                 delete              74d6679e81cf        15 minutes ago      411MB
layers                                 add                 feba369ecb4e        16 minutes ago      410MB
mcr.microsoft.com/windows/nanoserver   1809                a5034827da99        2 weeks ago         305MB

This is especially important if you use MSI/EXE based installations in the full servercore image, since MSI installers leave uninstallation/reinstallation files behind which inflate the image size unnecessarily. Consider the common scenario below: you add an MSI package to your image, you run the installation, you delete the MSI installer. The example below adds the MariaDB installation package to the base servercore image, installs it and then deletes the installation file, which is all wrong based on the discussion above.

FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR prep
ADD ["https://downloads.mariadb.com/Bundles/TX/mariadb-tx-3.0-10.3.11-windows.zip", "mariadb.zip"]
RUN powershell -Command Expand-Archive mariadb.zip
RUN powershell -command "Start-Process -filepath 'msiexec' -ArgumentList @('/i', 'c:\prep\mariadb\mariadb-tx-3.0-10.3.11-windows\mariadb-10.3.11-winx64.msi', '/qn') -PassThru | wait-process"
RUN del /f /s /q .

The resulting image history shows that each layer significantly increases the total image size.

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
d26180fd56b8        25 seconds ago      cmd /S /C del /f /s /q .                        5.22MB
17b2bad012a1        3 minutes ago       cmd /S /C powershell -command "Start-Process…   482MB
bd9161ad3507        8 minutes ago       cmd /S /C powershell -Command Expand-Archive…   99.3MB
dd3ccba3a5ff        9 minutes ago       cmd /S /C #(nop) ADD 7d838d807796b908c08ade6…   71.8MB
94bfd0c4d09f        12 minutes ago      cmd /S /C #(nop) WORKDIR C:\prep                41kB
670f5c41d658        3 weeks ago         Install update ltsc2019_amd64                   509MB
<missing>           2 months ago        Apply image 1809_RTM_amd64                      3.47GB

A better version of the same process confines everything to a single layer, as below.

FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR prep
RUN curl "https://downloads.mariadb.com/Bundles/TX/mariadb-tx-3.0-10.3.11-windows.zip" --output mariadb.zip &  \
    powershell -command "Expand-Archive mariadb.zip" & \
    powershell -command "Start-Process -filepath 'msiexec' @('/i', 'c:\prep\mariadb\mariadb-tx-3.0-10.3.11-windows\mariadb-10.3.11-winx64.msi', '/qn') -PassThru | Wait-Process" & \
    del /f /s /q .

The resulting image history shows that the size added by the installation process decreased from 661 MB to 447 MB.

IMAGE               CREATED              CREATED BY                                      SIZE                COMMENT
e727a4dd3642        About a minute ago   cmd /S /C curl "https://downloads.mariadb.co…   447MB
94bfd0c4d09f        18 minutes ago       cmd /S /C #(nop) WORKDIR C:\prep                41kB
670f5c41d658        3 weeks ago          Install update ltsc2019_amd64                   509MB
<missing>           2 months ago         Apply image 1809_RTM_amd64                      3.47GB

Now back to the note that the MSI installer always leaves cleanup binaries behind. These binaries are located in the c:\windows\installer folder. You can verify they are there by checking the contents of that folder inside the improved image, as below.

PS C:\docker\LayerTest> docker run --rm e7 cmd /c dir c:\windows\installer
 Volume in drive C has no label.
 Volume Serial Number is 069E-146F

 Directory of c:\windows\installer

11/16/2018  09:05 PM        54,956,032 6ee9.msi
12/02/2018  05:00 PM            20,480 SourceHash{D02C77A8-80E6-4CA2-8028-BC2AF9BE21B1}
               2 File(s)     54,976,512 bytes

These files can be deleted since no uninstallation will ever be performed inside docker. So the final Dockerfile looks like below, which results in an extra 55 MB of savings.

FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR prep
RUN curl "https://downloads.mariadb.com/Bundles/TX/mariadb-tx-3.0-10.3.11-windows.zip" --output mariadb.zip &  \
    powershell -command "Expand-Archive mariadb.zip" & \
    powershell -command "Start-Process -filepath 'msiexec' @('/i', 'c:\prep\mariadb\mariadb-tx-3.0-10.3.11-windows\mariadb-10.3.11-winx64.msi', '/qn') -PassThru | Wait-Process" & \
    del /f /s /q . & \
    del /f /s /q c:\windows\installer\*

By optimizing the image build we went from 661 MB to 392 MB with no change in functionality.

Enabling Remote App on VMBus connected VM

The steps below will allow you to use a RemoteApp connection (where your remote application appears as a standalone application instead of the entire remote desktop) over a VMBus connection on the local machine. This allows you to connect to VMs which are on a segregated network or, for that matter, completely disconnected. The steps below were performed on a Windows 10 client OS connecting to a Windows 10 client OS running inside Hyper-V on the same machine.

Here is the current Hyper-V state of my workstation.

Steps

  • Log in to your VM and create registry settings that allow powershell.exe to be launched remotely as a RemoteApp (a sketch is shown after this list)


  • Create an RDP file; replace the GUID in its first line with the Id returned by the PowerShell command Get-VM (see the sketch after this list)
  • Launch your RDP file as usual. The first password prompt is for your desktop and the second one is for the actual VM
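
A minimal sketch of both pieces is below, assuming a common approach: the registry tweak (run inside the guest VM) sets fDisabledAllowList so that any program can be started as a RemoteApp, and the .rdp file is generated on the Hyper-V host using standard RDP file properties. The author's exact settings were shown in screenshots, so treat the details as assumptions; the VM name is a placeholder.

# Inside the guest VM: allow arbitrary programs (including powershell.exe) to be started as RemoteApp
$key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\TSAppAllowList"
New-ItemProperty -Path $key -Name fDisabledAllowList -Value 1 -PropertyType DWord -Force | Out-Null

# On the Hyper-V host: build an .rdp file that connects over VMBus (port 2179)
# and starts powershell.exe as a RemoteApp
$vmId = (Get-VM -Name "Win10Guest").Id
@"
pcb:s:$vmId
full address:s:localhost
server port:i:2179
negotiate security layer:i:0
remoteapplicationmode:i:1
remoteapplicationprogram:s:powershell.exe
remoteapplicationname:s:PowerShell
disableremoteappcapscheck:i:1
"@ | Set-Content .\vm-powershell.rdp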


You will see a PowerShell window launched as a RemoteApp (indicated by an overlay icon in your taskbar).


Once you have the window running, you can create child processes by launching them from the PowerShell prompt with the start command, like start cmd.exe or start notepad.exe, which will launch those two instances on your desktop as separate applications.


You still have full access to normal RDP functions like the shared clipboard, printers, etc., but with the added advantage of multi-monitor support and extra screen real estate, since you only use desktop space for the applications you need and nothing else.


Enabling Azure Site Recovery with encrypted disks

As per the support matrix (https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix), as of September 2018 replication of VMs using encrypted disks is not supported for ASR scenarios.


This is due to the fact that the KeyVault which contains the encryption keys is not part of the ASR replication mechanism, and hence it’s impossible to decrypt volumes upon failover.

If you fail over an encrypted machine with ASR and boot it up in the failover location, you will be presented with a screen informing you that encryption keys are missing from the boot configuration.


The solution presented below will allow you to use ASR in conjunction with encrypted disks. It works around ASR’s inability to handle encrypted disks by establishing an alternative KeyVault in the ASR target location and replicating the secrets necessary for volume decryption. The recovery plan then remaps keys from the original keyvault to the failover keyvault, which allows the machine to boot properly.

Steps

  • Create a KeyVault in the failover location.
  • Create an automation account in the failover location.
  • Add the Azure KeyVault module to the automation account
  • Allow the automation account to access the keyvaults. For this:
    1. Navigate to the automation account and note what its Azure Run As Account is
    2. Add this account to both the Source keyvault and the Failover keyvault with all permissions for accessing Secrets
  • Create the following runbook in the automation account. The purpose of this runbook is to replicate all secrets between the Source and Failover keyvaults (a sketch is shown after this list)
  • Start the runbook and pass the names of the Source and Failover keyvaults as parameters
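
The author's runbook was embedded inline; a minimal sketch of such a runbook, assuming the AzureRM and AzureRM.KeyVault modules of that era and the standard Run As connection, could look like this:

param (
    [Parameter(Mandatory = $true)]
    [string]$SourceVaultName,
    [Parameter(Mandatory = $true)]
    [string]$FailoverVaultName
)

# Log in with the automation account's Run As service principal
$connection = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal -TenantId $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint | Out-Null

# Copy every secret (value, content type and tags) from the source vault to the failover vault
foreach ($item in Get-AzureKeyVaultSecret -VaultName $SourceVaultName) {
    $secret = Get-AzureKeyVaultSecret -VaultName $SourceVaultName -Name $item.Name
    $tags = @{}
    if ($item.Tags) { foreach ($key in $item.Tags.Keys) { $tags[$key] = $item.Tags[$key] } }
    Set-AzureKeyVaultSecret -VaultName $FailoverVaultName -Name $item.Name `
        -SecretValue $secret.SecretValue -ContentType $item.ContentType -Tag $tags | Out-Null
    Write-Output "Replicated secret '$($item.Name)' to '$FailoverVaultName'"
}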


  • Make sure it completes successfully by examining the output and status of the job


Verify that the secrets now exist in the failover keyvault.

  • Create two additional runbooks. The ReEncryptVM runbook contains the post-action procedure for the failover plan covering encrypted machines and runs after the failover completes.

ReEncryptVM.ps1

The EncryptVMOSDisk runbook is the script which performs the actual work of shutting down the VM and updating the location of the secret key used for decryption to point to the failover keyvault.

EncryptVMOSDisk.ps1

  • Create a recovery plan as usual


  • Create a separate group for the encrypted machines (in this case the sql and app servers)


  • Choose a post action for Group2 and set it to the ReEncryptVM runbook


  • Create a variable in the automation account holding the name of the failover KeyVault that contains the keys (a PowerShell equivalent is shown below)
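
The same variable can be created from PowerShell; the resource group, account and variable names below are placeholders and must match whatever your runbooks read:

New-AzureRmAutomationVariable -ResourceGroupName "asr-rg" -AutomationAccountName "asr-automation" `
    -Name "FailoverKeyVaultName" -Value "failover-keyvault" -Encrypted $false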


  • Perform a test failover to verify functionality. If the procedure worked as expected you should see a successful job output in the automation account, and you can verify the disk encryption status by executing the PowerShell statement `Get-BitLockerVolume -MountPoint c: | fl * -Force`



Migrating from docker swarm to Service Fabric for windows container orchestration

As of July 2018, Microsoft’s official support statement for Windows containers designates ONLY Azure Service Fabric as a supported platform for operating Windows containers. My company has been running Windows containers in docker swarm for more than a year now and has to migrate to Service Fabric as a result. Below are some observations about this process and the caveats and limitations I encountered along the way.

What is Azure Service Fabric

Azure Service Fabric, despite the name, is actually a downloadable and installable application which can run anywhere and on any OS (not only Azure). A big part of Azure itself runs on Service Fabric (SQL PaaS, for example, is one of those). Recently Service Fabric also added support for container hosting and orchestration (this is a bolted-on feature and not something Service Fabric was designed from the ground up to support, so it’s a little rough around the edges, but fully functional and supported nonetheless).

As far as container support goes, in comparison with docker swarm the following things are missing or done differently:

  1. No ingress routing mesh, so all ports must be published directly on the host, and hence you have to rely on an external load balancer for high availability
  2. As a result of number 1 you can have only one container of a given type per host. This is usually not a big issue since, per Microsoft, the ingress routing mesh is good only for development environments and direct port mapping on the host is preferred for production
  3. No built-in support for secrets; configuration files must be used instead
  4. No way to pass host variables directly to a container

Things which Azure Service Fabric has that docker swarm does not:

  1. Support for Windows Active Directory credentials to authenticate to the cluster
  2. A built-in UI for administration which supports Windows accounts, so you can use your corporate RBAC for access to the cluster
  3. A multitude of environment variables piped into the container by default, which provides better visibility into the underlying host and application environment (the container hostname, for example, is one of those)
  4. The ability to map certificates deployed to container hosts via environment variables passed to a container
  5. The ability to specify additional parameters as part of the Service Fabric docker deployment

Azure Service Fabric Deployment

The instructions below are for deploying SF to virtual machines. I use Azure DevTest Labs to deploy a single domain controller for the domain sf.local and have 3 container hosts named containerhost00-02.

All container hosts have a data drive mapped as the F drive where docker-root will be located. This relies on the option mentioned above (custom arguments passed to dockerd) which is not available in docker swarm.

Download and install the Azure Service Fabric SDK on your workstation. This provides the necessary modules for SF administration from your workstation.

To deploy Service Fabric, download the standalone Service Fabric package and extract it somewhere. I put it in c:\sf; the contents of the package are described here.

Azure Service Fabric can be secured both using certificates (similar to docker swarm) and with Windows Authentication in an Active Directory environment (no workgroup scenarios). The steps below target Service Fabric security based on Windows Authentication.

The template JSON for this scenario is located in your package folder and called ClusterConfig.Windows.MultiMachine.json. Details about all settings are found here.

The settings to modify in the template are below:

  • name for the cluster name
  • nodename, which corresponds to how each node is represented in the cluster UI
  • ipaddress, which will be either an IP or a hostname
  • properties\diagnosticstore, which can be either an Azure storage account or a file share accessible from all SF machines
  • properties\security\windowsidentities\clusteridentity provides the option to specify which computer group will be used for communication between cluster nodes
  • properties\security\windowsidentities\clientIdentities provides the option to specify which AD groups are used for cluster administration. The IsAdmin property specifies whether that group can make changes to Service Fabric
  • nodetypes specifies which ports are used for communication with the cluster

For my specific scenario I created a computer group called SF-HOSTS and added my SF computer accounts to it (you need to reboot the hosts for this to take effect). I created a folder for diagnostics on DC1 and shared it as diagstore. All my container hosts also have an F drive, and that’s where I want to host the Service Fabric binaries as well as the docker images. You have to create folders in the root of the F drive called images for docker images and SF for the Service Fabric binaries. The complete template file for this scenario is below.

To deploy Service Fabric, first run the template verification script and pass the path to your template as a parameter:


PS C:\Users\cloudadmin\Documents\sf\artisticcheesecontainer\sf> C:\sf\TestConfiguration.ps1 .\ClusterConfig.Windows.MultiMachine.json
Trace folder already exists. Traces will be written to existing trace folder: C:\Users\cloudadmin\Documents\sf\artisticcheesecontainer\sf\DeploymentTraces
Running Best Practices Analyzer...
Best Practices Analyzer completed successfully.

LocalAdminPrivilege : True
IsJsonValid : True
IsCabValid :
RequiredPortsOpen : True
RemoteRegistryAvailable : True
FirewallAvailable : True
RpcCheckPassed : True
NoConflictingInstallations : True
FabricInstallable : True
DataDrivesAvailable : True
NoDomainController : True
Passed : True

The actual installation can now be started with C:\sf\CreateServiceFabricCluster.ps1 .\ClusterConfig.Windows.MultiMachine.json

If no errors were thrown, you should be able to connect to the Service Fabric UI by navigating to any cluster member on the default port 19080.


To deploy your first container from the UI, go to Applications, click the “Actions” button and choose to compose an application.

You can also deploy from the command line using PowerShell and a compose file.

Connect to the Service Fabric endpoint with:


PS C:\sf> Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000 -WindowsCredential
True

 

ConnectionEndpoint : {localhost:19000}
FabricClientSettings : {
ClientFriendlyName : PowerShell-69620b90-99d6-4d85-a4c6-ef8c7a604c88
PartitionLocationCacheLimit : 100000
PartitionLocationCacheBucketCount : 1024
ServiceChangePollInterval : 00:02:00
ConnectionInitializationTimeout : 00:00:02
KeepAliveInterval : 00:00:20
ConnectionIdleTimeout : 00:00:00
HealthOperationTimeout : 00:02:00
HealthReportSendInterval : 00:00:00
HealthReportRetrySendInterval : 00:00:30
NotificationGatewayConnectionTimeout : 00:00:30
NotificationCacheUpdateTimeout : 00:00:30
AuthTokenBufferSize : 4096
}
GatewayInformation : {
NodeAddress : containerhost00.sf.local:19000
NodeId : 85772935593a0315f92e3293832c5fe9
NodeInstanceId : 131758801394347228
NodeName : vm0
}

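With the connection established, the compose deployment itself can also be created from PowerShell. A minimal sketch, with the compose file path as a placeholder:

New-ServiceFabricComposeDeployment -DeploymentName whoami -Compose .\docker-stack.yml
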
You can find the status of the deployment which was done via the UI:


PS C:\sf> Get-ServiceFabricComposeDeploymentStatus

DeploymentName : whoami
ApplicationName : fabric:/whoami
ComposeDeploymentStatus : Ready
StatusDetails :

You can start an application upgrade via the following PowerShell:


PS C:\sf> Start-ServiceFabricComposeDeploymentUpgrade -DeploymentName whoami -Compose C:\users\cloudadmin\Documents\sf\artisticcheesecontainer\sf\docker-stack.yml -Monitored -FailureAction Rollback

DeploymentName : whoami
ComposeFilePaths : {C:\users\cloudadmin\Documents\sf\artisticcheesecontainer\sf\docker-stack.yml}
UpgradeKind : Rolling
ForceRestart : False
UpgradeMode : Monitored
UpgradeReplicaSetCheckTimeout : 49710.06:28:15
FailureAction : Rollback
HealthCheckRetryTimeout : 00:10:00
HealthCheckWaitDuration : 00:00:00
UpgradeDomainTimeout : 10675199.02:48:05.4775807
UpgradeTimeout : 10675199.02:48:05.4775807
ConsiderWarningAsError :
MaxPercentUnhealthyPartitionsPerService :
MaxPercentUnhealthyReplicasPerPartition :
MaxPercentUnhealthyServices :
MaxPercentUnhealthyDeployedApplications :
ServiceTypeHealthPolicyMap :

You can check the status of the upgrade by executing the PowerShell below:


PS C:\sf> Get-ServiceFabricComposeDeploymentUpgrade -DeploymentName whoami

 

DeploymentName : whoami
ApplicationName : fabric:/whoami
TargetApplicationTypeVersion : v8
StartTimestampUtc : 7/12/2018 4:55:11 PM
UpgradeState : RollingForwardCompleted
UpgradeStatusDetails : Deployment upgraded to version: v8.
UpgradeDuration : 00:03:00
CurrentUpgradeDomainDuration : 00:00:00
NextUpgradeDomain :
UpgradeDomainsStatus : { "UD0" = "Completed";
"UD1" = "Completed";
"UD2" = "Completed" }
UpgradeKind : Rolling
UpgradeMode : Monitored
FailureAction : Rollback
ForceRestart : False
UpgradeReplicaSetCheckTimeout : 49710.06:28:15
HealthCheckWaitDuration : 00:00:00
HealthCheckStableDuration : 00:02:00
HealthCheckRetryTimeout : 00:10:00
UpgradeDomainTimeout : 10675199.02:48:05.4775807
UpgradeTimeout : 10675199.02:48:05.4775807
ConsiderWarningAsError :
MaxPercentUnhealthyPartitionsPerService :
MaxPercentUnhealthyReplicasPerPartition :
MaxPercentUnhealthyServices :
MaxPercentUnhealthyDeployedApplications :
ServiceTypeHealthPolicyMap :

It can also be seen in the UI.


Using VSTS for complete CI/CD pipeline for multi-arch docker images

I needed multi-arch docker images that return all environment variables to the screen as well as in response headers (so they can be used in Fiddler or similar tools to extract data). The idea and implementation are inspired by Stefan Scherer’s whoami image available at https://github.com/StefanScherer/whoami

My base image is https://hub.docker.com/r/microsoft/dotnet/, which is multi-arch itself and hence allows me to have a single DOCKERFILE for both UNIX and Windows builds. The entire code and additional artifacts are available in the following GitHub repo in the whoami folder.

The DOCKERFILE is below; the same file is used for both Windows and UNIX builds.

Building this DOCKERFILE on Windows will pull the current nanoserver-based image, and on UNIX the current UNIX-based image, with no code changes necessary to the DOCKERFILE itself or the build process.

You can see how the image works by instantiating Windows and UNIX containers in Azure Container Instances (cloud shell will work fine) and examining the headers.
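
For example, the Windows variant could be spun up with something like the following; the resource group is a placeholder and the image name should be replaced with whatever your pipeline pushed to Docker Hub:

New-AzureRmContainerGroup -ResourceGroupName "demo-rg" -Name "whoami-win" `
    -Image "<dockerhubuser>/whoami" -OsType Windows -IpAddressType Public -Port 80
# Then inspect the headers returned by the public IP, e.g.
# (Invoke-WebRequest "http://<public-ip>").Headers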

In addition to the response HTTP headers, the image outputs the information as HTML as well, which might be useful for troubleshooting/demo purposes.


CI/CD pipeline in VSTS

The build and release pipelines are exported as JSON files and available in the GitHub repo. The file names are:

https://github.com/artisticcheese/artisticcheesecontainer/blob/master/whoami/WhoamI-ASP.NET%20Core-CI.json is the build definition and https://github.com/artisticcheese/artisticcheesecontainer/blob/master/whoami/Whoami-Release.json is the release definition.

The build consists of the following steps:

  1. Download source code and artifacts from GitHub
  2. Run steps on hosted UNIX agent (provided for free by VSTS)
    • Build from DOCKERFILE
    • Push to Docker Hub
  3. Run steps on hosted Windows agent (provided for free by VSTS)
    • Build from DOCKERFILE
    • Push to Docker Hub
    • Rebase images for 1709 and 1803 (currently work in progress)
  4. Run a RegEx task to replace static data in the manifest file (which will be used to create the multi-arch image) with the current BuildVersion
  5. Publish the manifest as an artifact for the release pipeline

This is how it looks in the UI:

You can also enable real CI (instead of manually invoking the build) in Options by checking “Enable Continuous Integration”.


The result of a successful build is a YAML manifest file identifying the image tags for the UNIX/Windows images in Docker Hub.


Example of manifest file is below

The release (CD) pipeline consists of the following steps:

  1. Install the manifest tool from Chocolatey on the agent (courtesy of Stefan again)
  2. Download the build artifact (manifest file) which contains information about the current image tags
  3. Run the tool from step 1 to update Docker Hub with the latest image (a rough sketch of both commands is shown below)
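
Roughly, steps 1 and 3 boil down to the following on the agent, assuming the Chocolatey package is named manifest-tool and Docker Hub credentials are already configured for the tool (the spec file name is a placeholder):

choco install manifest-tool -y
manifest-tool push from-spec .\manifest.yml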

CD is automatically triggered by a successful build.
