The flying whale and the free falling petunias: part 2

In the first part of this article, we ran through the components that make up the Docker Container Stack. In this second and final part of the article, we will look at a simple example that covers the full breadth of the platform.

The aim here is to complement the content provided in the first part of this article, not to cover every option the platform offers in exhaustive detail.

Note: the examples in this article were built and tested on an Ubuntu 16.04 box with Docker runtime 17.06.1-ce, Docker Compose 1.16.1, Docker Machine 0.12.2 and Oracle VirtualBox 5.1.26 r117224. It is advisable to match the Docker component versions listed here and to stick to a Linux distribution when trying out the examples. The Internet should be reachable from both the host network and the VM network.

The article starts off with a description of the example application, then dives into containerizing and deploying it using the components of the Docker stack introduced in the previous article. Finally, it wraps up with a brief discussion of how the application can be orchestrated to meet various needs that may arise.

Docker services abstract away some aspects of containerized applications so that they can communicate with each other while preserving a loosely coupled relationship between the varied components that make up the whole. As discussed in the previous article, this type of relationship is of paramount importance in an environment where atomic components need to be spun up and killed off at a moment’s notice to meet rapid changes in demand, as is the case with the Cloud. This capability is referred to as elasticity in Cloud computing literature.

Provisioning the Infrastructure

Docker Machine allows virtualized infrastructure to be provisioned in a uniform way regardless of the underlying virtualization platform or service being used. The example presented here uses Oracle’s virtualization product, VirtualBox[1], though other alternatives may be used just as easily with the correct docker-machine driver[2]. Going with a virtualization tool such as VirtualBox also bypasses the need to configure and manage networking between the Docker hosts.

The code block below shows how docker-machine can be used to create three VMs pre-configured with the Docker runtime. At line # 4, a container is created from the registry:2 image hosted on Docker Hub and bound to the VM host’s network interface. Take note of the IPs of the VMs provisioned by docker-machine (you should see something similar to Screenshot 1).

CODE BLOCK 1
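
A minimal sketch of what this step could look like, assuming the VirtualBox driver, the registry listening on port 5000, and the insecure-registry engine option referenced later in the article (the actual listing may differ):

# Sketch only: VM names follow the text; driver and registry options are assumptions.
docker-machine create --driver virtualbox --engine-insecure-registry registrybox:5000 swarmmanager1
docker-machine create --driver virtualbox --engine-insecure-registry registrybox:5000 worker1
docker-machine create --driver virtualbox registrybox
# Run a registry:2 container on the registrybox VM, bound to its network interface.
docker-machine ssh registrybox "docker run -d -p 5000:5000 --restart=always --name registry registry:2"
# List the provisioned VMs together with their IPs.
docker-machine ls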

Log in to the swarmmanager1 and worker1 VMs and add the IP of the registrybox VM to the /etc/hosts file so that the hostname registrybox resolves correctly. Refer to Code Block 2.

CODE BLOCK 2
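
A hedged sketch of the idea behind this step (the exact wording of the original listing may differ):

# Resolve the registrybox IP once, then append a hosts entry on each swarm node.
REGISTRY_IP=$(docker-machine ip registrybox)
docker-machine ssh swarmmanager1 "echo '$REGISTRY_IP registrybox' | sudo tee -a /etc/hosts"
docker-machine ssh worker1 "echo '$REGISTRY_IP registrybox' | sudo tee -a /etc/hosts"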

Verify the infrastructure as shown in Code Block 3. If everything is good to go, the echo statements on lines # 2 and 4 will print 0s, indicating successful process exits. Line # 5 verifies that the Docker runtimes set up in the two swarm-related VMs have added the Docker registry to their insecure registry lists. Secure registry connectivity has been omitted from this example for brevity.

CODE BLOCK 3
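
One way to express the checks described above, as a sketch; the registry port and the grep pattern are assumptions:

# Query the registry from each swarm node and print the exit status.
docker-machine ssh swarmmanager1 "curl -s registrybox:5000/v2/_catalog > /dev/null"; echo $?
docker-machine ssh worker1 "curl -s registrybox:5000/v2/_catalog > /dev/null"; echo $?
# Confirm the registry appears in the runtime's insecure registry list.
docker-machine ssh swarmmanager1 "docker info" | grep -A 2 "Insecure Registries"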

Creating the Swarm Cluster

Now that the infrastructure needs are met, we can go about setting up the swarm cluster.

As discussed in the previous article, users can make use of Docker Swarm to carry out container orchestration tasks. This example sets up a two-node cluster. In Code Block 4, the swarm manager is initialized through a shell command passed in as a docker-machine ssh argument. Executing Code Block 4 will produce a swarm join token along with the command needed to register a worker with the manager; it will take the shape of Output Block 4. Take this output and substitute it for the <join token> placeholder in Code Block 5 to complete the cluster. The output of Code Block 5 should list both nodes as Active, indicating success.

CODE BLOCK 4
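
A sketch of the manager initialization described above; advertising the manager VM's own IP is an assumption:

# Initialize the swarm on the manager VM, advertising its docker-machine IP.
docker-machine ssh swarmmanager1 "docker swarm init --advertise-addr $(docker-machine ip swarmmanager1)"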

OUTPUT BLOCK 4

CODE BLOCK 5
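
A sketch of the worker registration and the verification step; <join token> is kept as a placeholder exactly as in the text:

# Join the worker to the swarm, then list the nodes from the manager.
docker-machine ssh worker1 "docker swarm join --token <join token> $(docker-machine ip swarmmanager1):2377"
docker-machine ssh swarmmanager1 "docker node ls"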

The diagram below depicts the architecture we have created for our containerized application.

Deploying and Managing the Clustered Application

Let’s deploy the application in our swarm cluster. Clone the example application repository[3] to the host, navigate into the repository, build the Heartbeat Application image using the docker build command, and push it to the registry set up in the registrybox VM. To complete this step, the Docker registry will need to be added to the insecure registry list of the host’s Docker runtime and the runtime restarted (the command needed to do this can vary based on the Linux distribution being used).

Once the image is made available to the swarm, the application may be deployed using the docker stack command as shown in line # 4.

CODE BLOCK 6
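
A hedged sketch of the build, push and deploy steps; the image name heartbeat, the compose file name and the stack name are illustrative assumptions, the actual names live in the example repository:

# Run from the cloned repository on the host; registrybox must resolve from the
# host (e.g. via /etc/hosts) and be listed as an insecure registry there too.
docker build -t registrybox:5000/heartbeat:latest .
docker push registrybox:5000/heartbeat:latest
# Point the local client at the swarm manager and deploy the stack.
eval $(docker-machine env swarmmanager1)
docker stack deploy -c docker-compose.yml heartbeat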

That’s all!

Verify the application deployment using the service ls command, scale the Heartbeat Application using the service scale command, and check the logs of a container using the exec command. Note that the exec command should be run after logging into a node of the swarm cluster that is running a Heartbeat Application container.

CODE BLOCK 7
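
A sketch of those management commands; the service name heartbeat_heartbeat, the replica count and the log file path placeholder are assumptions:

# Inspect and scale the service from the manager.
docker-machine ssh swarmmanager1 "docker service ls"
docker-machine ssh swarmmanager1 "docker service scale heartbeat_heartbeat=3"
# Log into a node that is running a Heartbeat container before using exec.
docker-machine ssh worker1
docker ps
docker exec <container id> cat <path to log file>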

[1] – https://www.virtualbox.org/
[2] – https://docs.docker.com/machine/drivers/
[3] – https://github.com/handakumbura/DockerStackExample

