Often I prefer building container images and running them on a docker host that is separate from my dev laptop. There are several reasons for this, but chief among them are that these images tend to take up a lot of space and that I often find myself working with both Windows and Linux containers. I use a Mac and cannot run Windows containers on it; even if I could, I would rather avoid the hassle of switching between the two modes every so often. Instead, I set up two separate Azure VMs, install docker on them, and use them as remote docker hosts, content to run just the client on my Mac. If you prefer this approach, then follow along and I will show you how to get going with this setup.

Remote docker host

To run containers on Azure VMs you need to choose a VM size on which nested virtualization is enabled (for example, the Dv3 or Ev3 series).


For creating the Windows VM that will act as a remote docker host, I prefer the Microsoft-supplied Windows Server 2016 with Containers image.

    1. Create the Windows VM. Whitelist TCP port 3389 inbound on the NSG to allow RDP connectivity to the VM.

    2. Log in to the VM, open PowerShell, and check whether Windows containers are enabled by typing the command below.

      PS C:\> docker images

    3. You should see a response that lists two Windows container images.

      • microsoft/windowsservercore

      • microsoft/nanoserver

    4. Create a new file named daemon.json inside C:\ProgramData\docker\config with the information below. This makes the daemon listen on TCP port 2375 in addition to the default local named pipe.

      { "hosts": ["tcp://0.0.0.0:2375", "npipe://"] }

    5. Update your NSG rules to whitelist TCP traffic to ports 2375 and 2376. For more information, see the section titled Security below.

    6. Restart your docker service with the following command.

      PS C:\> Restart-Service docker

    7. Check connectivity from the docker client on your laptop to the remote host by running the command below.

      computer:~ username$ docker -H "tcp://remote-host-ip:2375" info
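Once connectivity checks out, you can save yourself the -H flag by exporting DOCKER_HOST on your laptop, which the docker client reads automatically (remote-host-ip is a placeholder for your VM's address):

```shell
# All subsequent docker commands in this shell now target the remote daemon,
# e.g. a plain `docker info` or `docker build`.
export DOCKER_HOST="tcp://remote-host-ip:2375"
```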


I typically use Debian Stretch for running my Linux containers. Hence, this article illustrates setting up a remote docker host using that flavor of Linux.

  1. Create an Azure VM running Debian Stretch. Whitelist TCP port 22 on the NSG so you can SSH into the machine.

  2. Install docker on the VM by following Docker's official installation instructions for Debian.

  3. Set up docker to run at startup using the following command.

    username@computer:~$ sudo systemctl enable docker

  4. Configure docker for remote access by editing its systemd unit file. Type the following command.

    username@computer:~$ sudo systemctl edit docker.service

  5. Enter the following information in the override file that opens up. The empty ExecStart= line clears the packaged default before the new value is set.

    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376

  6. Reload the configuration.

    username@computer:~$ sudo systemctl daemon-reload

  7. Restart the docker service.

    username@computer:~$ sudo systemctl restart docker.service

  8. Connect to the remote host from your laptop.

    computer:~ username$ docker -H "tcp://remote-host-ip:2376" info


Security

It is important to secure the communication channel between your laptop and your remote docker host machines. Three mechanisms can be used to accomplish this. Ideally use all three together; at the very least, pair the NSG rule with either Point to Site or certificate-based security.

NSG rule

Consider restricting inbound access on ports 2375 and 2376 to your IP address only.
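As a sketch, such a rule would look roughly like this inside the NSG's securityRules in an ARM template (the rule name and source address are placeholders for your own values):

```json
{
  "name": "allow-docker-from-my-ip",
  "properties": {
    "description": "Docker daemon ports, reachable only from my laptop",
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "203.0.113.50/32",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRanges": ["2375", "2376"]
  }
}
```

If your laptop's public IP changes (as it will on most home connections), remember to update the sourceAddressPrefix accordingly.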


Point To Site Network

Consider establishing a secure Point to Site connection between your laptop and the Azure virtual network in which your docker hosts have been created. Instructions for setting this up are in the Azure documentation. The connection is tunneled over an encrypted transport (SSTP or IPsec/IKEv2) and ensures secure connectivity.

Docker’s certificate-based security

To use this method, you will have to create the following:

  • root cert
  • server cert
  • client cert
  • server private key
  • client private key

You will then have to copy the certificates onto both servers. Assuming your dev laptop is a Mac or another Linux-based machine, you could use a script to generate those certs.

The script takes Location (the location of your Azure VM), PassPhrase (a password to protect your certificate) and Host IP (the host's IP address).
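If you do not have such a script handy, a minimal sketch using openssl might look like the following. HOST_IP and PASSPHRASE are placeholders, and the key sizes and validity period are illustrative; the point is that the server cert is bound to the host's static IP.

```shell
#!/bin/sh
# Sketch only: generates a CA, a server cert bound to HOST_IP, and a client cert.
set -e
HOST_IP="203.0.113.10"   # placeholder: your VM's static IP
PASSPHRASE="changeme"    # placeholder: protects the CA key

# Root private key and self-signed root cert (ca.pem)
openssl genrsa -aes256 -passout pass:"$PASSPHRASE" -out ca-key.pem 2048
openssl req -new -x509 -days 365 -sha256 -subj "/CN=docker-ca" \
  -key ca-key.pem -passin pass:"$PASSPHRASE" -out ca.pem

# Server key (server-key.pem) and cert (server-cert.pem), tied to HOST_IP
openssl genrsa -out server-key.pem 2048
openssl req -new -sha256 -subj "/CN=$HOST_IP" -key server-key.pem -out server.csr
echo "subjectAltName = IP:$HOST_IP" > server-ext.cnf
openssl x509 -req -days 365 -sha256 -in server.csr \
  -CA ca.pem -CAkey ca-key.pem -passin pass:"$PASSPHRASE" -CAcreateserial \
  -extfile server-ext.cnf -out server-cert.pem

# Client key (key.pem) and cert (cert.pem)
openssl genrsa -out key.pem 2048
openssl req -new -sha256 -subj "/CN=client" -key key.pem -out client.csr
echo "extendedKeyUsage = clientAuth" > client-ext.cnf
openssl x509 -req -days 365 -sha256 -in client.csr \
  -CA ca.pem -CAkey ca-key.pem -passin pass:"$PASSPHRASE" -CAcreateserial \
  -extfile client-ext.cnf -out cert.pem
```

Of the generated files, ca.pem, server-cert.pem and server-key.pem are destined for the host, while ca.pem, cert.pem and key.pem stay on the client.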

When using this method, you have to ensure that the remote host's IP address does not change, because the certificates are bound to a static IP. If you are using Point to Site connectivity, set the private IP address of the host machine to static; if you are connecting over public IP, provision a static public IP for the host machine. Otherwise you may lose the ability to connect to your remote host.


Copy ca.pem, server-cert.pem and server-key.pem onto the server. For Linux, copy them to the ~/.docker folder, creating it if it does not exist.

Update the docker.service override with the following information.

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=/home/username/.docker/ca.pem --tlscert=/home/username/.docker/server-cert.pem --tlskey=/home/username/.docker/server-key.pem

Reload and restart docker service.

From your client machine, run the following command to connect.

computer:~ username$ docker -H "tcp://remote-host-ip:2376" --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem info

Verify that it works.
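The TLS flags can likewise be lifted into environment variables that the docker client honors (again, remote-host-ip is a placeholder):

```shell
# DOCKER_CERT_PATH must contain ca.pem, cert.pem and key.pem; with these
# set, a plain `docker info` performs the verified TLS connection.
export DOCKER_HOST="tcp://remote-host-ip:2376"
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$HOME/.docker"
```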


Copy ca.pem, server-cert.pem and server-key.pem onto the server. For Windows, copy them over to C:\ProgramData\docker\certs.d.

Update daemon.json with the following information.

"hosts": ["tcp://", "npipe://"],
"tlsverify": true,
"tlscert": "C:\\ProgramData\\docker\\certs.d\\server-cert.pem",
"tlskey": "C:\\ProgramData\\docker\\certs.d\\server-key.pem",
"tlscacert": "C:\\ProgramData\\docker\\certs.d\\ca.pem"

Restart the docker service.

From your client machine, run the following command to connect.

computer:~ username$ docker -H "tcp://remote-host-ip:2375" --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem info

Verify that it works.