In part 1, I introduced Docker Swarm Mode, created a local swarm cluster, and showed it in action. That should be enough to get started, but when you get more serious about running containers in a development or production environment, you can't keep everything on your local machine. That's where Azure comes into play.
The two main options for deploying containers in Azure are Azure Container Service (ACS) and Docker for Azure. Both offerings make it easier to create, configure, and manage VMs that are preconfigured to run containers. If you are interested in an orchestrator other than Docker, ACS also lets you use DC/OS with Marathon, or Kubernetes. For this blog post, I've chosen Docker for Azure, simply because ACS does not support Docker Swarm Mode (as of this writing) and Docker for Azure is a bit more intuitive to use.
- Access to an Azure account with admin privileges.
- An SSH public/private key pair; the public key will be installed on the Azure VMs for access. It's fairly simple to create one on Linux or Windows.
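If you don't already have a key pair, `ssh-keygen` (available on Linux, macOS, and Windows via Git Bash or the OpenSSH feature) can generate one. The file path and comment below are example values, not required names:

```shell
# Generate a 4096-bit RSA key pair; you'll be prompted for an
# optional passphrase. The path and comment are example values.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/docker-azure -C "docker-for-azure"

# Print the public key - this is what you'll paste into the
# deployment template later.
cat ~/.ssh/docker-azure.pub
```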
Setting Up Docker for Azure
- Creating an AD Service Principal (SP)
The service principal is required to make Azure API calls when scaling nodes up or down, or when deploying apps on your swarm cluster that require Azure Load Balancer configuration.
- Get the docker4x/create-sp-azure container. This container just runs a helper script to create the SP. Download and run it with the following commands at either a command prompt or PowerShell:
> docker pull docker4x/create-sp-azure:latest
> docker run -ti docker4x/create-sp-azure [sp-name] [rg-name] [rg-region]
- Replace sp-name with any name you want. The name is not important, but it’s something you’ll recognize in the Azure portal.
- Replace rg-name with the name of the Azure resource group you want to create. If you have an existing resource group you would like to use, enter that name instead.
- Replace rg-region with the name of the Azure region you want to deploy to (e.g., eastus).
- If successful, the Service Principal will be created. The two items of importance in the output are the SP App ID and App Secret. You will need both when creating the Docker swarm cluster in the next step.
What should show after successful SP creation
- Creating Your Nodes
- Go to https://docs.docker.com/docker-for-azure/ and click on the link for the Stable CE version. This will take you directly to the Azure portal to deploy a custom ARM template that sets up your Azure services to run Docker Swarm Mode. The following will be asked of you:
- Subscription – If you have more than one subscription available, select the one you would like to use. (Note: once deployment completes, you will be charged, so make sure you select the correct subscription.)
- Resource Group/Location – Select the same group and location used when the SP was created.
- AD Service Principal App ID & Secret – Enter in the values from SP creation.
- Enable System Prune – Leave as is.
- Manager Count – Default is 1. Leave as is for the purposes of this blog post. You would want at least 3 manager nodes for production.
- Manager VM Size – The Azure VM size used when manager nodes are created. The default is sufficient for this blog post.
- SSH Public Key – Enter in your SSH public key.
- Swarm Name – Leave as is.
- Worker Count – You can create up to 15. Select 3 for now.
- Worker VM Size – The Azure VM size used when worker nodes are created. The default is sufficient for this blog post.
- After agreeing to the terms and conditions and clicking the purchase button, it can take a few minutes for everything to be created. Azure will:
- Create all the resources needed (Storage accounts, VM scale sets, Load Balancer, etc.).
- Initialize swarm mode on all the manager nodes.
- Connect worker nodes to the manager nodes.
- Connecting to Your Manager Node
- You can connect to the manager node by navigating to the resource group specified during deployment and opening the externalSSHLoadBalancer. The overview section shows the public IP address you need.
- Take your SSH private key (which you should have from when your public key was generated) and make sure it's properly loaded before trying to SSH in. On Windows, for example, you can specify it in PuTTY or load it into Pageant.
- The host address is docker@<external-ssh-lb-public-ip>
- The SSH Port is 50000. By default the inbound NAT rules of the external SSH load balancer map 50000 to 22.
- When connecting, allow agent forwarding. This needs to be enabled to pass your SSH private key through if you need/want to SSH into a worker node. If you are using PuTTY, make sure your private key is loaded via Pageant as well.
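On Linux or macOS, the steps above boil down to two commands. The key path is an example value, and the load balancer IP placeholder must be replaced with the address from your Azure portal:

```shell
# Load your private key into the SSH agent so it can be
# forwarded to worker nodes later (path is an example).
ssh-add ~/.ssh/docker-azure

# Connect to the manager through the external SSH load balancer.
# -A enables agent forwarding; -p 50000 is the NAT'd SSH port.
# Replace <external-ssh-lb-public-ip> with the IP from the portal.
ssh -A -p 50000 docker@<external-ssh-lb-public-ip>
```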
- Connecting to Your Worker Node(s) (optional)
- All of your service/task deployments are done from your manager node. However, if you want to SSH into a worker node, you must do so through the manager. For security reasons, Azure prevents incoming external connections to worker nodes out of the box.
- Once connected to the manager node, run the following commands:
> cat /etc/resolv.conf
> docker node ls
> ssh <node-hostname>.<internal-domain-name>
- The first command gives you the internal domain all nodes are running on.
- The second command lists the hostnames of all nodes in the swarm cluster.
- The third command SSHes into the desired node.
So now you have a Docker swarm cluster up and running in Azure. From here, feel free to deploy some services and test things out. Don't forget to delete everything when you're done, otherwise you will keep getting charged! The easiest way to do that is to delete the resource group.
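As a quick smoke test, you could deploy a simple replicated service from the manager node and then clean everything up. The service name and resource group name below are example values, not ones the deployment created for you:

```shell
# On the manager node: run a small replicated web service
# that Swarm will spread across your worker nodes.
docker service create --name web --replicas 3 -p 80:80 nginx

# Verify the service and its replica count.
docker service ls

# Remove the service when finished testing.
docker service rm web

# From your own machine, delete the whole resource group with the
# Azure CLI (assumes the group was named "docker-swarm-rg" - use
# the name you chose during SP creation).
az group delete --name docker-swarm-rg --yes
```

Deleting the resource group tears down the VM scale sets, load balancers, and storage accounts in one step, which is why it's the easiest way to stop the charges.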