Replacing ServiceMonitor.exe with IIS error log events in a Windows IIS container

ServiceMonitor.exe is used as the default ENTRYPOINT in the official Microsoft IIS image. It does not seem to do much beyond checking whether the W3SVC service is running, and there is a multitude of issues with this EXE on GitHub (https://github.com/Microsoft/iis-docker/issues). Given the very limited functionality this EXE provides, I decided to replace it with something more useful.
Starting with IIS 8.5 it's possible to output IIS logs not only to log files but also as ETW events. You can consume those via the event log named Microsoft-IIS-Logging/Logs, which has to be enabled to receive these events. The steps below will send all errors logged on your web server to docker logs, instead of the default ENTRYPOINT of the IIS image, which does not provide any valuable information.
I also decided to base my images on servercore instead of the IIS image, since the latter only installs the Web-Server Windows feature and creates an ENTRYPOINT for ServiceMonitor.exe, neither of which is really necessary.
To accomplish this, 2 things need to be changed relative to the base Microsoft image.

  1. Enable IIS to log to ETW (in addition to, or instead of, file logging)
  2. Enable the log called Microsoft-IIS-Logging/Logs

Step 1 is accomplished by executing the following PowerShell:

Import-module WebAdministration
$splat = @{
    pspath = "MACHINE/WEBROOT/APPHOST"
    filter = "system.applicationHost/sites/siteDefaults/logFile"
    name =  "logTargetW3C"
    value = "File,ETW"
}
Set-WebConfigurationProperty @splat

Step 2 is accomplished with the snippet below:

$IISOpsLog = Get-WinEvent -ListLog Microsoft-IIS-Logging/logs
$IISOpsLog.IsEnabled = $true
$IISOpsLog.SaveChanges()

Both entries are made inside the website_config.ps1 file in the artifacts directory.

This setup outputs ETW events to Microsoft-IIS-Logging/logs, which are read repeatedly by a PowerShell script called entrypoint.ps1, shown below:

$VerbosePreference = "SilentlyContinue"
$sleep = 5
while ($true)
{
    $datediff = (New-TimeSpan -Seconds $sleep).TotalMilliseconds
    $filter = "*/System/TimeCreated[timediff(@SystemTime) <= $datediff] and *[EventData/Data[@Name='sc-status'] >'400']"
    Get-WinEvent -MaxEvents 10 -FilterXPath $filter -ProviderName "Microsoft-Windows-IIS-Logging" -ErrorAction SilentlyContinue | 
    Select-Object @{Name = "time"; e = {$_.Properties[2].value}}, @{Name = "VERB"; e = {$_.Properties[8].value}}, 
    @{Name = "ClientIP"; e = {$_.Properties[3].value}}, @{Name = "URI"; e = {$_.Properties[9].value}}, 
    @{Name = "Query"; e = {$_.Properties[10].value}}, @{Name = "Status"; e = {$_.Properties[11].value}}, 
    @{Name = "host"; e = {$_.Properties[21].value}} | Format-Table
    Start-Sleep $sleep
}

I restrict the number of events returned by the query at the source, both by the time since the last request and by the specific status codes that identify web server errors (status codes above 400).

The last step is to make this script the ENTRYPOINT in the Dockerfile:

ENTRYPOINT powershell.exe C:\startup\entrypoint.ps1
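Put together, the surrounding Dockerfile might look roughly like this. This is only a sketch based on the pieces described above (servercore base, Web-Server feature, the two scripts in C:\startup); the actual Dockerfile lives in the repository linked below:

```dockerfile
FROM microsoft/windowsservercore
# Install only the Web-Server feature, as the official IIS image does
RUN powershell.exe -Command Install-WindowsFeature Web-Server
# website_config.ps1 enables ETW logging and the event log channel;
# entrypoint.ps1 is the log-tailing loop shown above
COPY website_config.ps1 entrypoint.ps1 C:/startup/
RUN powershell.exe C:\startup\website_config.ps1
ENTRYPOINT powershell.exe C:\startup\entrypoint.ps1
```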

The entire code base along with additional files is available on the GitHub page (https://github.com/artisticcheese/IISadmin).
The image is on Docker Hub at https://hub.docker.com/r/artisticcheese/iis-admin/

You can test the functionality as follows:

docker run -it artisticcheese/iis-admin

Issue a request for a non-existent file to the local container (the container IP, 172.30.163.51 in this example, will differ in your environment):

invoke-webrequest http://172.30.163.51/asda

You will see output in the container's stdout:

time     VERB ClientIP     URI   Query Status host         
----     ---- --------     ---   ----- ------ ----         
18:28:08 GET  172.30.160.1 /asda -        404 172.30.163.51


Using pure powershell to generate TLS certificates for Docker daemon running on Windows

The steps below will allow you to create the necessary PKI infrastructure to secure your docker daemons, with no requirement to download any external tools. The snippets below are not a complete script but tidbits you can use to procure the CA, server and client certificates, and to reuse pieces for issuing additional certificates down the road.

Docker daemon requires 3 files on the server for a secure TLS connection:

  • tlscacert, which is the Base64 encoded public key of the CA certificate
  • tlscert, which is the Base64 encoded public key of the server certificate
  • tlskey, which is the Base64 encoded private key of the server certificate

Docker daemon lists those keys in a file called daemon.json under $env:programdata\docker\config

Example of that file is below

{
    "group": "Network Service",
    "graph": "E:\\images",
    "tlscacert": "C:\\ProgramData\\docker\\certs.d\\rootCA.cer",
    "tlskey": "C:\\ProgramData\\docker\\certs.d\\privateKey.cer",
    "hosts": [
        "tcp://0.0.0.0:2376",
        "npipe://"
    ],
    "tlscert": "C:\\ProgramData\\docker\\certs.d\\serverCert.cer",
    "tlsverify": true
}

Similar files are required on the client to connect to the server; the difference is the certificate used to connect, which has a different EKU (Enhanced Key Usage) specified:

  • tlscacert, which is the Base64 encoded public key of the CA certificate
  • tlscert, which is the Base64 encoded public key of the client certificate
  • tlskey, which is the Base64 encoded private key of the client certificate

The docker client uses the syntax below to connect to a TLS secured docker endpoint:
& docker --tlsverify --tlscacert=c:\test\rootca.cer --tlscert=c:\test\clientPublicKey.cer --tlskey=c:\test\clientPrivateKey.cer -H=tcp://containerhost1:2376 version

Creating CA certificate

The snippet below creates the CA certificate and exports its public key to c:\test\rootCA.cer. The private key stays in your Windows certificate store and is exportable, for backup purposes and for issuing new server and client certificates later. The only parameter you may need to modify for your environment is Subject.

    $splat = @{
        type = "Custom";
        KeyExportPolicy = "Exportable";
        Subject = "CN=Docker TLS Root";
        CertStoreLocation = "Cert:\CurrentUser\My";
        HashAlgorithm = "sha256";
        KeyLength = 4096;
        KeyUsage = @("CertSign", "CRLSign");
        TextExtension = @("2.5.29.19 ={critical} {text}ca=1")
    }
    $rootCert = New-SelfSignedCertificate @splat

After the CA certificate is generated we need to export its public key to a file; the only changeable part here is Path.

    $splat = @{
        Path = "c:\test\rootCA.cer";
        Value = "-----BEGIN CERTIFICATE-----`n" + [System.Convert]::ToBase64String($rootCert.RawData, [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END CERTIFICATE-----";
        Encoding = "ASCII";
    }
    Set-Content @splat
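The same PEM envelope is rebuilt for each export in this post, so it can be factored into a small helper. This is my own sketch, not part of the original script (ConvertTo-Pem is a hypothetical name):

```powershell
function ConvertTo-Pem {
    param(
        [Parameter(Mandatory)][byte[]]$Bytes,
        [string]$Label = "CERTIFICATE"   # use "RSA PRIVATE KEY" for the key exports below
    )
    # Base64 with line breaks, matching the InsertLineBreaks formatting used in the snippets
    $b64 = [System.Convert]::ToBase64String($Bytes, [System.Base64FormattingOptions]::InsertLineBreaks)
    "-----BEGIN $Label-----`n$b64`n-----END $Label-----"
}
```

Usage would then shrink each export to one line, e.g. `Set-Content -Path c:\test\rootCA.cer -Encoding ASCII -Value (ConvertTo-Pem $rootCert.RawData)`.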

Creating Server Certificate to secure TLS on container host

The code is similar to the CA certificate generation, with a few notable changes: we specify which certificate is used to sign it as well as the type of certificate, and we export the key after the cert is generated. Changeable parameters are DnsName and Path.

    $splat = @{
        CertStoreLocation = "Cert:\CurrentUser\My";
        DnsName = "swarmmanager1", "localhost", "containerhost1";
        Signer = $rootCert ;
        KeyExportPolicy = "Exportable";
        Provider = "Microsoft Enhanced Cryptographic Provider v1.0";
        Type = "SSLServerAuthentication";
        HashAlgorithm = "sha256";
        TextExtension = @("2.5.29.37= {text}1.3.6.1.5.5.7.3.1");
        KeyLength = 4096;
    }
    $serverCert = New-SelfSignedCertificate @splat
    $splat = @{
        Path = "c:\test\serverCert.cer";
        Value = "-----BEGIN CERTIFICATE-----`n" + [System.Convert]::ToBase64String($serverCert.RawData, [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END CERTIFICATE-----";
        Encoding = "Ascii"
    }
    Set-Content @splat

Exporting private key for server certificate to a file

The last step for TLS connectivity on the docker host is to export the private key to a Base64 encoded file. This used to be pretty difficult with off the shelf PowerShell/.NET Framework until version 4.6, which provides a method to export that key. The implementation is below:

    $privateKeyFromCert = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($serverCert)
    $splat = @{
        Path = "c:\test\privateKey.cer";
        Value = ("-----BEGIN RSA PRIVATE KEY-----`n" + [System.Convert]::ToBase64String($privateKeyFromCert.Key.Export([System.Security.Cryptography.CngKeyBlobFormat]::Pkcs8PrivateBlob), [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END RSA PRIVATE KEY-----");
        Encoding = "Ascii";
    }
    Set-Content @splat

Creating client certificate

The code below performs the same tasks for the client certificate as the server certificate code above:

    $splat = @{
        CertStoreLocation = "Cert:\CurrentUser\My";
        Subject = "CN=clientCert";
        Signer = $rootCert ;
        KeyExportPolicy = "Exportable";
        Provider = "Microsoft Enhanced Cryptographic Provider v1.0";
        TextExtension = @("2.5.29.37= {text}1.3.6.1.5.5.7.3.2") ;
        HashAlgorithm = "sha256";
        KeyLength = 4096;
    }
    $clientCert = New-SelfSignedCertificate  @splat
    $splat = @{
        Path = "c:\test\clientPublicKey.cer" ;
        Value = ("-----BEGIN CERTIFICATE-----`n" + [System.Convert]::ToBase64String($clientCert.RawData, [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END CERTIFICATE-----");
        Encoding = "Ascii";
    }
    Set-Content  @splat
    $clientprivateKeyFromCert = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($clientCert)
    $splat = @{
        Path = "c:\test\clientPrivateKey.cer";
        Value = ("-----BEGIN RSA PRIVATE KEY-----`n" + [System.Convert]::ToBase64String($clientprivateKeyFromCert.Key.Export([System.Security.Cryptography.CngKeyBlobFormat]::Pkcs8PrivateBlob), [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END RSA PRIVATE KEY-----");
        Encoding = "Ascii";
    }
    Set-Content  @splat

If everything worked correctly you should see the 3 certificates listed below on your machine: the CA certificate, the server certificate and the client certificate. You should also have a private key for each of them (indicated by the key icon).

(screenshot: the three certificates with private keys in the certificates MMC snap-in)

You should also have 5 files created in the c:\test folder, like below:

    Directory: C:\test

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        6/10/2017   4:09 PM           3314 clientPrivateKey.cer
-a----        6/10/2017   4:09 PM           1858 clientPublicKey.cer
-a----        6/10/2017   4:09 PM           3310 privateKey.cer
-a----        6/10/2017   4:09 PM           1812 rootCA.cer
-a----        6/10/2017   4:09 PM           1936 serverCert.cer

Add your CA root certificate to your trusted root certification authorities in certmgr.msc. Make sure you copy and paste rather than move the CA root certificate, since if you move it you will not be able to sign any more keys. This is not a requirement, but it will allow you to use native Windows tools for working with certificates instead of relying on a file based store the way openSSL does. For example, you will be able to use HTTPS to call the docker REST API both in a browser and via Invoke-WebRequest.

Deploy server certificate to docker container host

  • Open the file named daemon.json under $env:programdata\docker\config and paste the lines in the snippet below into it.
    "tlscacert":  "C:\\ProgramData\\docker\\certs.d\\rootCA.cer",
    "tlskey":  "C:\\ProgramData\\docker\\certs.d\\privateKey.cer",
    "hosts":  [
                  "tcp://0.0.0.0:2376",
                  "npipe://"
              ],
    "tlscert":  "C:\\ProgramData\\docker\\certs.d\\serverCert.cer",
    "tlsverify":  true
  • Copy files rootCA.cer, privateKey.cer, serverCert.cer to $env:programdata\docker\certs.d
  • Restart the docker service: Restart-Service docker
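The daemon.json fragment above can also be generated from PowerShell rather than hand-edited. Below is a sketch; the paths mirror those used earlier in this post, and the write-out/restart lines are commented since they only apply on the container host:

```powershell
# Build the daemon configuration as a hashtable (values are the ones used in this post)
$daemon = [ordered]@{
    hosts     = @("tcp://0.0.0.0:2376", "npipe://")
    tlsverify = $true
    tlscacert = "C:\ProgramData\docker\certs.d\rootCA.cer"
    tlscert   = "C:\ProgramData\docker\certs.d\serverCert.cer"
    tlskey    = "C:\ProgramData\docker\certs.d\privateKey.cer"
}
$daemonJson = $daemon | ConvertTo-Json
# On the container host, write it out and restart the service:
# Set-Content "$env:ProgramData\docker\config\daemon.json" -Value $daemonJson -Encoding ASCII
# Restart-Service docker
```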

At this point you should not be able to connect to the daemon via HTTPS without providing a valid certificate. Try the following: Invoke-WebRequest https://containerhost1:2376. It should fail, complaining that an SSL client certificate is required for the connection:

The request was aborted: Could not create SSL/TLS secure channel.
At line:1 char:1
+ Invoke-WebRequest https://containerhost1:2376
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand

Attach the client certificate to the request: first find the thumbprint of your client cert in the certificates manager (or use tab completion to iterate to it in get-item cert:\CurrentUser\My\), then assign it to a variable which you will attach to the REST request: $cert = get-item Cert:\CurrentUser\My\350A62B64152D9B85673E902A1F1C2CB6766598E

Issue the same request as above and it will succeed, returning information about the available images on the remote system:

PS >(Invoke-WebRequest https://containerhost1:2376/images/json -Certificate $cert -UseBasicParsing).Content | convertfrom-json

Containers : -1
Created : 1494389426
Id : sha256:242b8694ed621610a27746e0075c95e87f1a239e1800a4ea55e753010a49d9d5
Labels :
ParentId :
RepoDigests : {stefanscherer/dockertls-windows@sha256:5fe358a57cb31f18d2d148b0481898d530a5547c4d5d6f9ce5e0334ed8d3de19}
RepoTags : {stefanscherer/dockertls-windows:latest}
SharedSize : -1
Size : 1049291645
VirtualSize : 1049291645

One last thing is to try to use docker CLI to query the same information.

PS C:\admin> docker --tlsverify --tlscacert=c:\test\rootca.cer --tlscert=c:\test\clientPublicKey.cer --tlskey=c:\test\clientPrivateKey.cer -H=tcp://containerhost1:2376 images

time="2017-06-10T16:46:14-05:00" level=info msg="Unable to use system certificate pool: crypto/x509: system root pool is not available on Windows"
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
stefanscherer/dockertls-windows   latest              242b8694ed62        4 weeks ago         1.05 GB

The full script is below:

$ErrorActionPreference = "Stop"
if ([int](Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"  -Name Release).Release -lt 393295) {
    throw "Your version of .NET framework is not supported for this script, needs at least 4.6+"
}
function GenerateCerts {
    $splat = @{
        type = "Custom" ;
        KeyExportPolicy = "Exportable";
        Subject = "CN=Docker TLS Root";
        CertStoreLocation = "Cert:\CurrentUser\My";
        HashAlgorithm = "sha256";
        KeyLength = 4096;
        KeyUsage = @("CertSign", "CRLSign");
        TextExtension = @("2.5.29.19 ={critical} {text}ca=1")
    }
    $rootCert = New-SelfSignedCertificate @splat
    $splat = @{
        Path = "c:\test\rootCA.cer";
        Value = "-----BEGIN CERTIFICATE-----`n" + [System.Convert]::ToBase64String($rootCert.RawData, [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END CERTIFICATE-----";
        Encoding = "ASCII";
    }
    Set-Content @splat
    $splat = @{
        CertStoreLocation = "Cert:\CurrentUser\My";
        DnsName = "swarmmanager1", "localhost", "containerhost1";
        Signer = $rootCert ;
        KeyExportPolicy = "Exportable";
        Provider = "Microsoft Enhanced Cryptographic Provider v1.0";
        Type = "SSLServerAuthentication";
        HashAlgorithm = "sha256";
        TextExtension = @("2.5.29.37= {text}1.3.6.1.5.5.7.3.1");
        KeyLength = 4096;
    }
    $serverCert = New-SelfSignedCertificate @splat
    $splat = @{
        Path = "c:\test\serverCert.cer";
        Value = "-----BEGIN CERTIFICATE-----`n" + [System.Convert]::ToBase64String($serverCert.RawData, [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END CERTIFICATE-----";
        Encoding = "Ascii"
    }
    Set-Content @splat

    $privateKeyFromCert = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($serverCert)
    $splat = @{
        Path = "c:\test\privateKey.cer";
        Value = ("-----BEGIN RSA PRIVATE KEY-----`n" + [System.Convert]::ToBase64String($privateKeyFromCert.Key.Export([System.Security.Cryptography.CngKeyBlobFormat]::Pkcs8PrivateBlob), [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END RSA PRIVATE KEY-----");
        Encoding = "Ascii";
    }
    Set-Content @splat

    $splat = @{
        CertStoreLocation = "Cert:\CurrentUser\My";
        Subject = "CN=clientCert";
        Signer = $rootCert ;
        KeyExportPolicy = "Exportable";
        Provider = "Microsoft Enhanced Cryptographic Provider v1.0";
        TextExtension = @("2.5.29.37= {text}1.3.6.1.5.5.7.3.2") ;
        HashAlgorithm = "sha256";
        KeyLength = 4096;
    }
    $clientCert = New-SelfSignedCertificate  @splat
    $splat = @{
        Path = "c:\test\clientPublicKey.cer" ;
        Value = ("-----BEGIN CERTIFICATE-----`n" + [System.Convert]::ToBase64String($clientCert.RawData, [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END CERTIFICATE-----");
        Encoding = "Ascii";
    }
    Set-Content  @splat
    $clientprivateKeyFromCert = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($clientCert)
    $splat = @{
        Path = "c:\test\clientPrivateKey.cer";
        Value = ("-----BEGIN RSA PRIVATE KEY-----`n" + [System.Convert]::ToBase64String($clientprivateKeyFromCert.Key.Export([System.Security.Cryptography.CngKeyBlobFormat]::Pkcs8PrivateBlob), [System.Base64FormattingOptions]::InsertLineBreaks) + "`n-----END RSA PRIVATE KEY-----");
        Encoding = "Ascii";
    }
    Set-Content  @splat
}
GenerateCerts

& docker --tlsverify --tlscacert=c:\test\rootca.cer --tlscert=c:\test\clientPublicKey.cer --tlskey=c:\test\clientPrivateKey.cer -H=tcp://containerhost1:2376 images

How to download individual files from GitHub Enterprise

Hello,

Below are instructions on how to download individual files from a private GitHub repository using PowerShell.

Assumptions:

  1. Your GitHub repository is hosted at github.mycompany.com
  2. Your organisation name is my-org
  3. Your repository name is my-repo
  4. Path to file you are trying to download is /myfiles/file.txt

Steps

  1. Obtain a personal access token for your account by navigating to your account and choosing Settings.
  2. Go to the Personal Access Tokens settings and choose "Generate New Token". Copy the resulting token key.
  3. Construct the URL to the file you are trying to download in the following format: http://github.mycompany.com/api/v3/repos/my-org/my-repo/contents/myfiles/file.txt
  4. The PowerShell command to download the file is below; the token, org, repo and file path will be different in your environment.

Invoke-WebRequest http://github.mycompany.com/api/v3/repos/my-org/my-repo/contents/myfiles/file.txt -Headers @{"Authorization" = "token 8d795936d2c1b2806587719b9b6456bd16549ad8"; "Accept" = "application/vnd.github.v3.raw"}

If you need to download the entire contents of your master branch, the request will look like below:

Invoke-WebRequest http://github.mycompany.com/api/v3/repos/my-org/my-repo/zipball/master -Headers @{"Authorization" = "token 8d795936d2c1b2806587719b9b6456bd16549ad8"} -OutFile out.zip
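For repeated use, the URL construction from step 3 can be wrapped in a small helper. This is a hypothetical convenience function of my own (Get-GitHubFileUri is not an existing cmdlet), using the base URI from the assumptions above:

```powershell
function Get-GitHubFileUri {
    param(
        [Parameter(Mandatory)][string]$Repo,   # e.g. "my-org/my-repo"
        [Parameter(Mandatory)][string]$Path,   # e.g. "myfiles/file.txt"
        [string]$BaseUri = "https://github.mycompany.com/api/v3"
    )
    # GitHub Enterprise contents endpoint: {base}/repos/{org}/{repo}/contents/{path}
    "$BaseUri/repos/$Repo/contents/$Path"
}

# Usage (token header exactly as in the request above):
# Invoke-WebRequest (Get-GitHubFileUri -Repo "my-org/my-repo" -Path "myfiles/file.txt") `
#     -Headers @{ Authorization = "token $token"; Accept = "application/vnd.github.v3.raw" }
```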

IIS WebDav hosting using IIS Manager Users for authentication

Enabling IIS WebDav functionality by using IIS Manager Users

Setting up IIS WebDav functionality is pretty trivial if one relies on Windows user accounts for authentication, but this architecture causes massive issues, namely:

  1. Accounts have to be precreated in Windows and are in fact real Windows accounts with permissions throughout the system. While troubleshooting WebDav authentication issues I frequently see people adding those users to various groups (in addition to the default Users group), including Administrators.
  2. It's difficult to maintain, since those user accounts are specific to the machine where they live, and hence it's not trivial to extend the setup to several servers without keeping all accounts in sync.

Instead we can rely on IIS Manager to store and maintain users; it was designed to allow hosting providers to offer remote IIS management functionality to customers. This setup removes all the drawbacks of using Windows users as the authentication provider: it's easily scalable (since IIS shared configuration can be used) and does not provide any sort of access to the underlying operating system.
The solution consists of the 2 DSC scripts below. DSC was chosen instead of the UI since it's easily replicated at scale and provides reproducible, consistent behavior.
Prerequisites.ps1, which performs the following:
  1. Installs basic IIS features
  2. Enables remote management, which the IIS Manager Users feature requires
  3. Installs the NuGet and Chocolatey providers to pull the DSC resources required to create the website and manipulate NTFS permissions

Startup.ps1, which performs the following:

  1. Enables WebDav and the necessary features
  2. Configures IIS Manager to accept both Windows and IIS Manager credentials
  3. Modifies permissions to allow IIS_IUSRS users to read the configuration file
  4. Creates the website and binds it to the default ports
  5. Creates the IIS Manager users with passwords
  6. Modifies the IIS configuration to allow WebDav publishing based on the IIS Manager credentials provider
  7. Assigns WebDav permissions to the newly created users to access the website

Prerequisites.ps1


Startup.ps1

Running Windows Nano server on QNAP NAS device

How to run Windows Nano server on QNAP

Prerequisites:

Steps.

1. Download and extract the Windows 2016 ISO somewhere on your HDD (I use 7-Zip for this purpose)
2. Build the WIM image by running the script below. At the end of the script you shall end up with a c:\nanoserver folder with a bunch of subfolders beneath it

$Target_Drive = "C:"
$cd_drive = "C:\win2016"
###################
$NanoTarget = Join-Path $Target_Drive "Nanoserver"
$NanoServer = Join-Path $cd_drive "Nanoserver"
$Nanosource = Join-Path $cd_drive "Sources"
$DismPath = Join-Path $NanoTarget "DISM"
New-Item -ItemType Directory $NanoTarget
New-Item -ItemType Directory $DismPath
foreach ($Filter in "*api*downlevel*.dll", "*dism*", "*provider*") {
    Get-ChildItem -Filter $Filter -Path $Nanosource | Copy-Item -Destination $DismPath -PassThru
}
Copy-Item "$NanoServer\*" $NanoTarget -Recurse


3. Convert the WIM image into a VHD file with the PowerShell command below

.\convert-windowsimage.ps1 -SourcePath .\NanoServer.wim -Edition CORESYSTEMSERVER_INSTALL -VHDPath .\nano.vhd -VHDFormat VHD -DiskLayout BIOS

4. You will end up with a VHD file in your nano server directory
5. Update your VHD image with the OEM drivers below. Make sure the "mountdir" folder is created first in your build folder.

dism\dism /Mount-Image /ImageFile:.\Nano.vhd /Index:1 /MountDir:.\mountdir
dism\dism /Add-Package /PackagePath:.\packages\Microsoft-NanoServer-OEM-Drivers-Package.cab /Image:.\mountdir  
dism\dism /Add-Package /PackagePath:.\packages\en-US\Microsoft-NanoServer-OEM-Drivers-Package.cab /Image:.\mountdir  
dism\dism /Unmount-Image /MountDir:.\MountDir /Commit  

6. You now have a fully working VHD which you can import into Hyper-V if you want, but we need to convert it to the qcow2 format used by QNAP, using the qemu-img.exe tool

.\qemu\qemu-img.exe convert -O qcow2 .\nano.vhd dest.img

7. Create a new VM in QNAP with this image as the HDD and you have yourself a working Nano server running on QNAP

Move any physical/virtual servers to Azure with free tools

Below are steps which can be taken to move physical/virtual servers to Azure. All tools used are freely available.

Depending on which architecture is being moved (physical or virtual) you might start at any of the steps below, bypassing some of the earlier ones (for example if you want to move a Hyper-V managed server). I assume we are moving either from VMware or a physical machine for this flow.

1. Download the disk2vhd tool and run it on your target machine. Uncheck "VHDX", since Azure supports only VHD files.

2. Create a new virtual machine and attach the generated VHD file to it (Generation 1). Boot the machine and uninstall any software which will not be needed in Azure (VMware Tools, for example).

3. Enable the firewall for all networks and make an exception for Remote Desktop

4. If you have a System Reserved partition, delete it using the instructions available at this link
5. Install or update the Hyper-V Integration Services components and the Azure VM agent (https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-extensions-agent-about/)
6. Make sure your HDD is not bigger than it needs to be once it's in Azure. For this you need to defrag the disk and move all the files to the start of the HDD so you can shrink it to the desired size. In some cases you will have to do an offline defrag to move all the files to the start of the HDD. I used Puran Defrag for this purpose.
7. After the defrag you need to shrink the OS partition in Windows to the desired final size.
8. The final 2 steps for the VHD are to shrink it and convert it to fixed size. I used VHDResizer for this purpose.
9. Upload your VHD to Azure storage. I use CloudBerry Explorer. For this you need to register an account in CloudBerry by providing the account name and key, which you can find in the Azure portal.
10. Upload your VHD file as Page Blob
11. After the upload is complete, go to the classic portal and add the VHD as a disk
12. The last step is to create a VM based on this VHD.
If everything was done right you will have an exact image of your machine running in the Azure cloud.