Docker Monitoring: Challenges arising from the nature of containers
Since its official launch in 2013, Docker has reported significant annual growth in its number of users, which is a good reason why most monitoring tools have made an effort to implement solutions for Docker monitoring.
But does the adoption of Docker imply that the way we monitor servers and applications has to be rethought?
In this article we review Docker basics and identify the characteristics of containerized environments that require changes to previously established monitoring practices.
Docker is an open source project for building and running distributed applications in software containers. Container technology existed before the Docker project, but Docker shaped it into a form simple enough to make containers viable.
A Docker container, unlike a virtual machine, does not require its own operating system. In fact, several Docker containers can run on the same physical server while sharing the same operating system.
The Docker engine allows us to create, start, stop and distribute containers.
Images are read-only templates from which containers are created. An image is a static representation of an application or service, including its configuration, environment and dependencies.
A container is an instance of an image. Each container runs an application process. Everything we do in a container stays in that container and does not affect the image in any way; in fact, once created, images are immutable.
Developers can maintain all their images in a registry that functions as an image library, or they can use public registries.
One of the keys to Docker's success is the public registry service called Docker Hub. On this cloud-based site, anyone can find and use images for many products and projects, such as PostgreSQL, Ubuntu or MySQL. Likewise, anyone can push their own images and share them with the Docker community.
A cluster is a collection of Docker hosts exposed as if they were a single virtual Docker host. There are many tools designed specifically to establish and manage Docker clusters: Docker Swarm and Kubernetes, known as orchestrators, allow us to manage load balancing and service discovery, and to define high-availability structures.
Finally, we have to review a concept closely related to Docker and container technology: the microservices architecture.
Microservices is a development architecture in which an application is structured as a collection of fine-grained services that communicate over lightweight protocols such as HTTP.
This architectural style is relevant here because implementing it on top of Docker containers is increasingly popular.
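To make this concrete, here is a minimal sketch (service names and statuses are hypothetical) of how an application's overall health could be derived from the health of its fine-grained services, each of which would normally be queried over HTTP through something like a /health endpoint:

```python
def application_health(service_statuses):
    """Return ('ok', []) only if every microservice reports 'ok';
    otherwise return ('degraded', [list of failing services])."""
    degraded = [name for name, status in service_statuses.items()
                if status != "ok"]
    return ("ok", []) if not degraded else ("degraded", degraded)

# Hypothetical statuses, as they might be returned by each service's
# health endpoint over HTTP.
statuses = {"auth": "ok", "catalog": "ok", "payments": "error"}
state, failing = application_health(statuses)
```

The point of the sketch is that in a microservices application, health is a property of the collection of services, not of any single server.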
When we try to define the strategies that will guide how we do Docker monitoring, we must consider some challenges that stem from the very nature of this kind of environment:
Challenges due to the transience of containers
In monolithic client-server applications, we think about covering the application itself, all the servers involved, and the communications platform that connects them.
But if we are concerned about the performance of an application built on a microservices architecture and Docker containers, this approach may not work.
For this kind of application, developers can easily create several containers to implement a microservice, and they can obviously create or remove containers as needed.
Likewise, developers can create fault-tolerance schemes in which the work of one container can easily be taken over by another.
Because containers are transient, it is often not necessary to monitor each individual container. Monitoring focuses instead on hosts, container clusters, microservices and applications, in addition to the usual interest in networks and communications.
This approach can be understood as an adaptation of service-oriented monitoring to Docker and microservices environments. More information about Pandora FMS's service-oriented monitoring can be found here.
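A service-oriented focus essentially means rolling per-container metrics up to the service level. The following sketch (service labels and CPU values are invented for illustration) aggregates CPU readings from transient containers into per-service averages:

```python
from collections import defaultdict

def service_cpu(container_metrics):
    """Aggregate per-container CPU percentages into per-service averages.

    container_metrics: list of (service_label, cpu_percent) tuples, where
    several containers may carry the same service label."""
    by_service = defaultdict(list)
    for service, cpu in container_metrics:
        by_service[service].append(cpu)
    return {service: sum(vals) / len(vals)
            for service, vals in by_service.items()}

# Two "web" containers and one "db" container; individual containers may
# come and go, but the service-level view remains meaningful.
samples = [("web", 12.0), ("web", 18.0), ("db", 40.0)]
```

With this kind of aggregation, a container being replaced by another does not invalidate the metric, because the unit of observation is the service.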
Challenges with metrics
In more traditional environments, we are used to thinking that the more elements we monitor, and the more diverse those elements are, the more metrics we need to collect.
With the introduction of Docker containers and microservices, the number of metrics to collect has never been greater.
Around Docker container technology, a large number of tools that collect monitoring metrics have been developed. However, evaluating them can be hard work, considering that the market offers everything from self-hosted to cloud-based tools.
We recommend starting by checking what the Docker Engine itself can do. A simple command like docker stats reports each running container's consumption of CPU, memory, network and disk I/O, and can be a nice starting point before feeding that data into a third-party tool.
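For example, docker stats accepts a --format option with Go-template fields such as {{.Name}}, {{.CPUPerc}} and {{.MemPerc}}, which makes its output easy to parse. The sketch below (the sample output string is invented, but mimics that format) turns such output into structured records a third-party tool could consume:

```python
def parse_stats(output):
    """Parse lines of 'name;cpu%;mem%' (as produced by, e.g.,
    docker stats --no-stream --format "{{.Name}};{{.CPUPerc}};{{.MemPerc}}")
    into dicts with numeric values."""
    rows = []
    for line in output.strip().splitlines():
        name, cpu, mem = line.split(";")
        rows.append({"name": name,
                     "cpu": float(cpu.rstrip("%")),
                     "mem": float(mem.rstrip("%"))})
    return rows

# Hypothetical output for two containers.
sample = "web_1;1.50%;2.30%\ndb_1;12.00%;25.10%"
metrics = parse_stats(sample)
```

In a real setup, the sample string would come from invoking the command with something like subprocess.run rather than being hard-coded.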
Before you start analyzing Docker monitoring tools, it may be better to consider whether you already have a monitoring tool installed and running. If you do, evaluate how that tool takes on the challenge of monitoring the ecosystem around Docker containers. It can also be useful to check out other IT professionals' experiences using that monitoring tool in Docker environments.
Another important point when planning metric collection is the orchestrator in use. Since orchestrators are the components responsible for creating and managing container clusters, it makes sense to establish a close relationship between monitoring tools and orchestrators, taking advantage of the monitoring potential built into them.
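As an illustration of tapping orchestrator data, Kubernetes exposes pod resource usage through commands like kubectl top pods. The sketch below parses that command's tabular text output (the pod names and values here are invented) into tuples a monitoring tool could ingest:

```python
def parse_top_pods(output):
    """Parse `kubectl top pods` text output into
    (pod_name, cpu_millicores, memory_mebibytes) tuples."""
    pods = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        name, cpu, mem = line.split()
        pods.append((name,
                     int(cpu.rstrip("m")),    # e.g. "120m" -> 120
                     int(mem.rstrip("Mi"))))  # e.g. "64Mi" -> 64
    return pods

# Hypothetical output mimicking the command's columns.
sample = """NAME CPU(cores) MEMORY(bytes)
web-5d4f 120m 64Mi
db-7c2a 250m 512Mi"""
pods = parse_top_pods(sample)
```

The same idea applies to other orchestrators: the cluster manager already knows which containers exist and what they consume, so monitoring can build on that inventory instead of rediscovering it.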
In this post on this same blog, the reader can see how Docker Swarm and Pandora FMS can be combined.
Challenges in troubleshooting
In traditional environments, troubleshooting and determining the root cause of a problem is already a challenging activity; with the introduction of containers and microservices, it can be even more complicated.
Given an event, one key aspect is successfully tracking and correlating, at low cost, the metrics and logs coming from a large number of elements.
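A very simplified sketch of such correlation is grouping events from different containers that occur close together in time (the events and the five-second window below are invented for illustration):

```python
def correlate(events, window=5):
    """Group events (timestamp, container, message) so that each group
    holds events within `window` seconds of the group's first event."""
    groups, current = [], []
    for ts, container, msg in sorted(events):
        if current and ts - current[0][0] > window:
            groups.append(current)
            current = []
        current.append((ts, container, msg))
    if current:
        groups.append(current)
    return groups

# A timeout in one container and a slow query in another, two seconds
# apart, likely belong to the same incident; the later restart does not.
events = [(100, "web_1", "timeout"),
          (102, "db_1", "slow query"),
          (300, "web_2", "restart")]
groups = correlate(events)
```

Real tools apply far more sophisticated correlation, but the principle is the same: events from many short-lived elements must be stitched back into a single incident timeline.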
Even network analysis on this kind of platform can be difficult, since Docker introduces a new networking layer. When Docker is installed, a default network is created to control which containers can communicate with each other.
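The practical consequence is that container-to-container reachability depends on network attachments rather than on physical topology. The sketch below (the attachment map is hypothetical, as could be built from the output of docker network inspect) checks whether two containers share a network:

```python
def can_communicate(attachments, a, b):
    """True if containers a and b are attached to at least one
    common Docker network."""
    return bool(attachments.get(a, set()) & attachments.get(b, set()))

# Hypothetical container -> networks map: "web" and "api" share "front",
# while "web" and "db" share nothing.
attachments = {"web": {"bridge", "front"},
               "db":  {"back"},
               "api": {"front", "back"}}
```

A monitoring tool that ignores this layer may report two containers as "on the same host" while they cannot actually reach each other.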
Many tools are responding to this challenge by working on automatic discovery of the relationships between containers and by introducing artificial intelligence algorithms that learn usage patterns and correlations between events.
Finally, we can say that defining strategies for a successful Docker monitoring discipline implies reformulating traditional monitoring practices, and also going deeper and faster down certain paths that had already been traced.