Docker 1.12 has been released!
Last June, during the DockerCon keynote, we saw a demo in which a three-node swarm cluster was created in 30 seconds using the new SwarmKit integration in Docker Engine 1.12.
Impressive, but of course I needed to try it myself to see if the demo holds up.
Create the Nodes in your Swarm
I found the easiest way to get started with swarm was to use docker-machine to create hosts with Docker Engine installed. I used the virtualbox driver, but you can use whichever driver you want, such as amazonec2.

docker-machine create -d virtualbox node1
docker-machine create -d virtualbox node2
docker-machine create -d virtualbox node3

docker-machine ssh node1 # or 2, or 3
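Before initializing the swarm, it helps to confirm the machines are up and to point your Docker client at the node you intend to make the manager. A short sketch, assuming the node names from above:

```shell
# List the machines and confirm all three are running
docker-machine ls

# Point the local Docker client at node1, which will become the manager
eval "$(docker-machine env node1)"

# Grab node1's IP; you'll pass it to --advertise-addr in the next step
docker-machine ip node1
```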
Initialize your manager
docker swarm init --advertise-addr [advertise ip]:2377
Join your workers
Copy the join command that docker swarm init printed on the manager and run it on each worker to join the cluster.
docker swarm join --token [token] [manager ip]:[manager port]
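If you didn't copy the join command when it scrolled past, the manager can reprint it at any time, and you can verify the cluster membership from there as well. A sketch (the token and IP shown in the comment are illustrative placeholders):

```shell
# Reprint the worker join command from the manager
docker swarm join-token worker
# Output looks roughly like:
#   docker swarm join --token SWMTKN-1-<token> <manager ip>:2377

# After the workers have joined, all three nodes should be listed
docker node ls
```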
You now have a Swarm cluster!
Let's Deploy Something
To launch an application, we use the docker service create command. With the --replicas flag you can specify how many copies of the container the service should run.
First, we want to create an overlay network to deploy our app on.
docker network create -d overlay mynetwork
Before Docker Engine 1.12, overlay networks required an external key/value store. With the built-in distributed store in Docker 1.12, that is no longer required.
Let's deploy a simple Apache application and publish port 5001. I am using a special Apache image I found on Docker Hub that prints the ID of the container that serves the request. This will be useful to demonstrate load balancing later.
docker service create --name web --network mynetwork --replicas 3 -p 5001:80 francois/apache-hostname
You can use these commands to inspect your new service.
docker service ls
docker service ps web
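Once the service is running, the replica count can be changed without recreating anything. A quick sketch, assuming the web service from above:

```shell
# Scale the web service from 3 to 5 replicas
docker service scale web=5

# Equivalent form using service update
docker service update --replicas 5 web

# Watch the new tasks get scheduled across the nodes
docker service ps web
```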
The built-in routing mesh and decentralized architecture in Docker 1.12 allow any node in the cluster to accept traffic for a published port and route it to a node that is running the service.
In our web service above, we exposed a cluster-wide port 5001. You can send a request to any node at port 5001, and the routing mesh will route the request to a node that is running the container.
Whenever a new service is created, a virtual IP is created with that service. IPVS, a high-performance layer 4 load balancer built into the Linux kernel, handles the load balancing.
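You can see the virtual IP (and the published port mapping) for yourself by inspecting the service. A sketch, assuming the web service created earlier:

```shell
# Show the virtual IPs assigned to the web service (one per attached network)
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' web

# Show the published port mapping handled by the routing mesh
docker service inspect --format '{{json .Endpoint.Ports}}' web
```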
To show this, curl the app multiple times to see the container ID change.
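The loop below hits one node repeatedly; the container ID in each response should vary as IPVS spreads requests across the replicas. Node names are the ones assumed earlier, and the chosen node doesn't even need to be running one of the containers:

```shell
# Any node's IP works, thanks to the routing mesh
NODE_IP=$(docker-machine ip node2)

# Each request should print a (possibly different) container ID
for i in 1 2 3 4 5; do
  curl -s "http://$NODE_IP:5001/"
  echo
done
```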
Load balancing with Docker 1.12 is container-aware. With traditional load balancers like nginx or haproxy, adding or removing containers means updating the configuration and reloading the service. A useful library called Interlock listens to the Docker Events API and updates the config and restarts these services on the fly, but with the new built-in load balancing in Docker 1.12 that tool is no longer necessary.
No way it can be this easy...
This picture from Nigel Poulton sums up the difference between old swarm and new swarm really nicely.
With Docker 1.12, you no longer have to install an external discovery service (Consul, etcd, ZooKeeper) or a separate scheduling service. TLS is configured end-to-end out of the box; there is no "insecure mode". There is no doubt in my mind that the new Docker Swarm is the quickest way to get a production-ready, docker-native cluster up and running.
What about at larger scales? Docker Captain Chanwit Kaewkasi's DockerSwarm2000 project showed that you can create a cluster of 2,384 nodes running 96,287 containers.
Swarm Mode is Optional
Enabling "swarm mode" is completely optional. You can think of swarm mode as a set of dormant goroutines that are only activated when you run
docker swarm init.
Swarm in Docker 1.12 also supports desired-state reconciliation, rolling image updates, global services, and scheduling services based on constraints. I will cover these in a future post.
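As a taste of what those features look like, here is a hedged sketch of each; the image tag and service names are illustrative, not from the original walkthrough:

```shell
# Rolling update: replace the image one task at a time, pausing 10s between
docker service update \
  --update-parallelism 1 \
  --update-delay 10s \
  --image francois/apache-hostname:v2 \
  web

# Global service: run exactly one task on every node in the cluster
docker service create --name monitor --mode global alpine top

# Constraint: only schedule tasks on worker nodes
docker service create --name workers-only \
  --constraint 'node.role == worker' \
  alpine top
```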