
Intro to Docker Swarm Mode and Azure - Part 2

In part 1, I introduced Docker Swarm Mode, walked through creating a local swarm cluster, and showed it in action. That should be enough to get you started, but when you want to get more serious about running containers in a development or production environment, you can’t keep everything running locally. That’s where Azure comes into play.

Azure Offerings

The two main options for deploying containers in Azure are Azure Container Service (ACS) and Docker for Azure. Both offerings make it easier to create, configure, and manage VMs that are preconfigured to run containers. If you are interested in something other than Docker for container orchestration, ACS also lets you use DC/OS with Marathon, or Kubernetes. For this blog post, I’ve chosen Docker for Azure simply because ACS does not support Docker Swarm Mode (as of this writing) and because Docker for Azure is a bit more intuitive to use.

Prerequisites

  1. Access to an Azure account with admin privileges.
  2. An SSH public/private key pair; the public key is installed on the Azure VMs so you can gain access to them. It’s fairly simple to create one on Linux or Windows, as sketched below.
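If you don’t already have a key pair, the standard ssh-keygen tool (available on Linux, macOS, and Windows 10+ with the OpenSSH client) can create one; the file path and comment below are just placeholder choices:

> ssh-keygen -t rsa -b 4096 -f ~/.ssh/docker4azure_rsa -C "docker-for-azure"

This writes the private key to ~/.ssh/docker4azure_rsa and the public key to ~/.ssh/docker4azure_rsa.pub; the contents of the .pub file are what you’ll paste into the deployment template later.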

Setting Up Docker for Azure

  1. Creating an AD Service Principal (SP) – The service principal is required to make Azure API calls when scaling nodes up or down, or when deploying apps on your swarm cluster that require Azure Load Balancer configuration.

    1. Get the docker4x/create-sp-azure container. This container just runs a helper script to create the SP. Download and run it with the following commands at either a command prompt or PowerShell:
    2. > docker pull docker4x/create-sp-azure:latest
      > docker run -ti docker4x/create-sp-azure [sp-name] [rg-name] [rg-region]
      1. Replace sp-name with any name you want. The name is not important, but it’s something you’ll recognize in the Azure portal.
      2. Replace rg-name with the name of the Azure resource group you want to create. If you have an existing resource group you would like to use, enter that name instead.
      3. Replace rg-region with the name of the Azure region you want to deploy to (e.g. – eastus)

    3. If successful, the Service Principal should be created. The two important items in the output are the SP App ID and App Secret. You will need these when creating the Docker swarm cluster in the next step.


    [Screenshot: output shown after successful SP creation]

  2. Creating Your Nodes
    1. Go to https://docs.docker.com/docker-for-azure/ and click on the link for the Stable CE version. This will take you directly to the Azure portal to deploy a custom ARM template that sets up your Azure services to run Docker Swarm Mode. You will be asked for the following:
      1. Subscription – If you have more than one subscription available, select the one you would like to use. (Note: once deployment completes, you will be charged, so make sure you select the correct subscription.)
      2. Resource Group/Location – Select the same group and location used when the SP was created.
      3. AD Service Principal App ID & Secret – Enter in the values from SP creation.
      4. Enable System Prune – Leave as is.
      5. Manager Count – Default is 1. Leave as is for the purposes of this blog post. You would want at least 3 manager nodes for production.
      6. Manager VM Size – The Azure VM size used when manager nodes are created. The default is sufficient for this blog post.
      7. SSH Public Key – Enter in your SSH public key.
      8. Swarm Name – Leave as is.
      9. Worker Count – You can create up to 15. Select 3 for now.
      10. Worker VM Size – The Azure VM size used when worker nodes are created. The default is sufficient for this blog post.

    2. Once you agree to the terms and conditions and click the purchase button, it can take a few minutes to create everything. Azure will:
      1. Create all the resources needed (storage accounts, VM scale sets, load balancers, etc.).
      2. Initialize swarm mode on all the manager nodes.
      3. Connect the worker nodes to the manager nodes.
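    If you have the Azure CLI installed, an optional way to sanity-check the deployment from your own machine is to list what landed in the resource group; the group name below is a placeholder for whatever you chose earlier:
      > az resource list --resource-group <rg-name> --output table
    You should see the storage accounts, VM scale sets, load balancers, and related networking pieces mentioned above.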



  3. Connecting to Your Manager Node
    1. You can connect to the manager node by navigating to the resource group specified during deployment and opening the externalSSHLoadBalancer. The overview section shows the public IP address you need.
    2. Take your SSH private key (the one paired with the public key you generated earlier) and make sure it’s properly loaded before trying to SSH in; an example connection is sketched after this list. On Windows, for example, you can specify the key in PuTTY or load it into Pageant.
      1. The host address is docker@<external-ssh-lb-public-ip>
      2. The SSH Port is 50000. By default the inbound NAT rules of the external SSH load balancer map 50000 to 22.
      3. When connecting, allow agent forwarding. This needs to be enabled to pass your SSH private key through if you want to SSH into a worker node later. If you are using PuTTY, make sure your private key is loaded via Pageant as well.
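    With an OpenSSH client, the connection might look like the sketch below; the key path is a placeholder, -A enables agent forwarding, and -p 50000 targets the load balancer’s SSH NAT rule. The optional az command is one way to look up the public IP without the portal.
      > az network public-ip list --resource-group <rg-name> --output table
      > ssh -i ~/.ssh/docker4azure_rsa -A -p 50000 docker@<external-ssh-lb-public-ip>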

  4. Connecting to Your Worker Node(s) (optional)
    1. All of your service/task deployments are done from your manager node. However, if you want to SSH into a worker node, you must do so through the manager. For security reasons, the deployment prevents incoming external connections to worker nodes out of the box.
    2. Once connected to the manager node, run the following commands:
    3. > cat /etc/resolv.conf
      > docker node ls
      > ssh <node-hostname>.<internal-domain-name>
      1. The first command gives you the internal domain all nodes are running on.
      2. The second command gives you the host names of all nodes in the swarm cluster.
      3. The third command SSHes to the desired node (its hostname followed by the internal domain).

So now you have a Docker swarm cluster up and running in Azure. From here, feel free to deploy some services and test things out; a quick example follows below. Don’t forget to delete everything when you’re done, otherwise you will keep getting charged! The easiest way to do that is to delete the resource group.
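As a quick smoke test from the manager node, a throwaway nginx service like the sketch below should work; the service name, replica count, and image are arbitrary choices, and publishing port 80 is what prompts the Azure load balancer to route traffic to the swarm. The last command, run from your own machine with the Azure CLI, deletes the entire resource group; the group name is a placeholder.

> docker service create --name web --replicas 3 --publish 80:80 nginx
> docker service ls
> az group delete --name <rg-name> --yes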


About The Author

Software Developer

As a software developer, Timothy supports the App Dev team in Cardinal's Raleigh office. The majority of his experience has been with the Microsoft stack, but he has also been working on RESTful service development on current mobile initiatives.