Docker Swarm on AWS

Docker Swarm is Docker’s native clustering solution. It turns a group of Docker Engines into a single, virtual Docker Engine using an API proxy system. Here are some of the benefits of using Docker Swarm, along with advice on deploying a Docker Swarm into AWS.

Benefits of a Docker Swarm

Benefits of using Docker Swarm include:
  • Security: Communication between services and nodes in the swarm is encrypted with TLS by default.
  • Built-in service discovery: Services and nodes can find each other easily using DNS, which is set up automatically. Containers can be queried through the internal DNS.
  • Mesh routing: Every node in the cluster knows where individual containers are running, thanks to Swarm’s decentralized design. Any node can run any service, and every node can be load-balanced equally, reducing the complexity and the number of resources needed for an application.
  • APIs: Docker Swarm uses the same APIs Docker users are familiar with.
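The built-in service discovery above can be sketched with a couple of commands. This assumes a swarm has already been initialized with `docker swarm init`; the service and network names are our own examples:

```shell
# Create an overlay network that spans the swarm.
docker network create --driver overlay appnet

# Two services attached to the same overlay network.
docker service create --name web --network appnet nginx:alpine
docker service create --name client --network appnet alpine:3.18 sleep 1d

# From inside any "client" container, the name "web" resolves via
# Swarm's internal DNS to the web service's virtual IP:
#   nslookup web
```

No manual linking or external service registry is required; the DNS entries are created and removed as services come and go.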

Docker Swarm on an AWS template

We’ll now use an AWS CloudFormation template that sets up a Docker Swarm. There are two ways the template can deploy the swarm:
  • On an existing VPC
  • On a newly created VPC
In the example below, we’ll be using option two: the template creates a new VPC along with all the necessary components.


The CloudFormation template creates the following objects in AWS, and therefore needs permissions to manage each of them:
  • EC2 instances + Auto Scaling groups
  • IAM profiles
  • DynamoDB tables
  • SQS Queue
  • VPC + subnets and security groups
  • ELB
  • CloudWatch Log Group
An SSH key is also required to provision access to the Docker manager and workers.
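If you don’t already have a key pair in the target region, one can be created with the AWS CLI. The key name and region below are examples:

```shell
# Create an EC2 key pair named "swarm-key" and save the private key.
# Region and key name are examples; adjust to your environment.
aws ec2 create-key-pair \
  --region us-east-1 \
  --key-name swarm-key \
  --query 'KeyMaterial' \
  --output text > swarm-key.pem

chmod 400 swarm-key.pem   # SSH refuses private keys with open permissions
```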


Here are some of the configurations required by the CloudFormation template:
Configuration                               Description                                                   Example Value
Number of Swarm managers?                   The number of manager nodes                                   1
Number of Swarm worker nodes?               The number of worker nodes                                    3
Which SSH key to use?                       The SSH key used to access the EC2 instances
Enable daily resource cleanup?              Cleans up unused images, containers, networks, and volumes    Yes
Use CloudWatch for container logging?       Sends all container logs to CloudWatch                        Yes
Swarm manager instance type?                EC2 instance type for manager nodes                           t2.micro
Manager ephemeral storage volume size?      Size of the managers’ ephemeral storage volume, in GiB        20
Manager ephemeral storage volume type?      Volume type for the managers’ ephemeral storage               Standard
Agent worker instance type?                 EC2 instance type for worker nodes                            t2.micro
Worker ephemeral storage volume size?       Size of the workers’ ephemeral storage volume, in GiB         20
Worker ephemeral storage volume type?       Volume type for the workers’ ephemeral storage                Standard
Launching the CloudFormation template

As usual, launching the CloudFormation template can be triggered from the configuration screen in AWS itself, or from the Release Notes page.

Figure 1: Launching the CloudFormation Stack

This diagram describes the setup once the CloudFormation template completes:

Figure 2: Setup diagram
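The stack can also be launched from the command line instead of the console. This is a hedged sketch: the template URL is a placeholder, and the ParameterKey names are assumptions that should be verified against the Parameters section of the actual template you use:

```shell
# Launch the Swarm stack via the AWS CLI.
# TEMPLATE_URL and the ParameterKey names are assumptions -- check
# them against the template before running.
TEMPLATE_URL="https://example.com/docker-swarm.template"  # placeholder

aws cloudformation create-stack \
  --stack-name docker-swarm \
  --template-url "$TEMPLATE_URL" \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=KeyName,ParameterValue=swarm-key \
    ParameterKey=ManagerSize,ParameterValue=1 \
    ParameterKey=ClusterSize,ParameterValue=3

# Wait until the stack finishes building before using the swarm.
aws cloudformation wait stack-create-complete --stack-name docker-swarm
```

The `--capabilities CAPABILITY_IAM` flag is needed because the template creates IAM profiles, as listed above.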

Launching a web app

A web app can now be launched on the Docker Swarm. We’ll be using a Docker Compose file, ghost-docker-compose.yml, described below. To use it, SSH into any manager node as the user “docker” and run the following commands:
curl -O 
docker stack deploy -c ghost-docker-compose.yml ghost
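The original post showed the Compose file only as an image. Here is a sketch of what ghost-docker-compose.yml likely contains, reconstructed from the surrounding text; the published ports, network names, and placement of the visualizer service are assumptions:

```yaml
# Sketch of ghost-docker-compose.yml -- ports, networks, and the
# visualizer's manager placement are assumptions.
version: "3"

services:
  ghost:
    image: ghost:latest
    ports:
      - "80:2368"          # Ghost listens on 2368 inside the container
    networks:
      - frontend
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == worker

  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager   # needs a manager's Docker socket

networks:
  frontend:
```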
A stack is a collection of services that make up an application in a specific environment. In ghost-docker-compose.yml, we’re starting the “ghost” image. The Compose file format used here is version 3, which adds the “deploy” key for specifying configuration related to deploying and running services; this key is only used when deploying to a swarm with the “docker stack deploy” command. In this case, we’re specifying that there will be two replicas in the swarm, and that they will only run on worker nodes.

We’ve also thrown in a service called visualizer that runs “dockersamples/visualizer.” This will give us some insight into what is happening with the containers in our swarm.

This is the output of “docker stack deploy”:
~ $ docker stack deploy -c ghost.yml ghost
Creating network ghost_default
Creating network ghost_frontend
Creating service ghost_ghost
Creating service ghost_visualizer
~ $ docker stack ps ghost
ID            NAME                IMAGE                            NODE                                         DESIRED STATE  CURRENT STATE        ERROR  PORTS
ixiuebesowo6  ghost_visualizer.1  dockersamples/visualizer:stable  Running        Running 2 hours ago
k1ddq0jsqgp4  ghost_ghost.1       ghost:latest             Running        Running 2 hours ago
is0op2snl0m8  ghost_ghost.2       ghost:latest              Running        Running 2 hours ago 
This is how “dockersamples/visualizer” (running on port 8080) shows the current state of the swarm:

Figure 3: Visualizing the Docker Swarm

Scaling your application

To scale the Ghost blog application, let’s run this command:

~ $ docker service scale ghost_ghost=4
ghost_ghost scaled to 4

This is the result of increasing the ghost service to four replicas:

Figure 4: Increased ghost services running to four

Docker Swarm allows multiple containers of the same service to run on the same node. With the ability to easily resize EC2 instances, you can scale as many services as needed within the constraints of the server. Also, if the Compose file is ever updated for any reason, the containers will be updated the next time the stack deploy command is executed:

~ $ docker stack deploy -c ghost.yml ghost

Another way to scale is to increase the number of nodes in the Docker Swarm. This can be done by editing/updating the current CloudFormation stack, or by increasing the desired number of instances in the Auto Scaling groups (under EC2). This is the result after increasing it to four:
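Updating the stack to add worker nodes can also be done from the CLI. As before, the stack name and the ParameterKey names (particularly "ClusterSize") are assumptions to be checked against your template:

```shell
# Grow the worker pool by updating the existing CloudFormation stack.
# "ClusterSize" is an assumed parameter name -- verify it against the
# template's Parameters section.
aws cloudformation update-stack \
  --stack-name docker-swarm \
  --use-previous-template \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=ClusterSize,ParameterValue=4 \
    ParameterKey=ManagerSize,UsePreviousValue=true \
    ParameterKey=KeyName,UsePreviousValue=true

aws cloudformation wait stack-update-complete --stack-name docker-swarm
```

Updating the stack (rather than editing the Auto Scaling group directly) keeps the CloudFormation view of the infrastructure in sync with what is actually running.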


I’ve demonstrated an example of how to set up a web application in Docker Swarm mode on AWS. The combination of Docker’s native clustering functionality with the infrastructure capabilities of AWS is a perfect match.

P.S. Don’t forget to turn off or delete all the objects created in this tutorial. The fastest way is to delete the CloudFormation stack.

For more information on Go2Group’s hosting services, or for technical assistance with AWS, contact us.

Timothy Chin

Software Consultant at Go2Group
Having had a hand in everything from development to support and infrastructure management, the chief firefighter is here to save the day! Currently focusing on the Atlassian suite of tools and dabbling in all sorts of technologies.