
Docker: Game changer in the world of Cloud and Virtualization

Docker is an open-source container platform that provides a lightweight alternative to heavy, complex virtual machines. It allows developers to package an entire application, along with its dependencies and stack, into a single unit called a container, which runs in an isolated environment. Everything required to run the application is present inside the container, making it independent of the underlying operating system and environment. Docker changes application deployment completely, helping developers ship, test, and deploy code faster, which greatly shortens the cycle between writing code and running it. Many workloads traditionally run in virtual machines can be run with Docker at a fraction of the resource usage and with far greater speed and efficiency. Docker originally used LXC (Linux Containers) as its base technology.
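
To make this concrete, here is a minimal sketch using the Docker SDK for Python (the docker package), assuming a local Docker daemon is running; the small public alpine image is used only as an example:

```python
import docker

client = docker.from_env()   # connect to the local Docker daemon

# Run a throwaway container: the image bundles everything the command needs,
# so the same call behaves the same on any machine with Docker installed.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from a container"],
    remove=True,             # delete the container once it exits
)
print(output.decode().strip())
```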

One of the biggest problems developers face is dependency hell: the application works perfectly in the development environment, but when deployed to QA or production it throws errors and fails, leaving developers to hunt down the cause and figure out exactly what is missing. Docker eliminates this problem by providing a strong guarantee of execution, because everything needed to run the application is already present inside the container, which is an isolated environment and does not depend on the underlying platform.

Those who already know what Linux containers are might be wondering what the big deal about Docker is. Linux containers have been around for quite a few years, but their complexities and limitations have kept them from being widely useful. What does Docker offer that makes it superior to its base technology? We will get back to that after a brief refresher on what exactly LXC is in the next section.


LXC (Linux Containers)

LXC (Linux Containers) is an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux control host. LXC combines the kernel's cgroups with support for isolated namespaces to provide an isolated environment for applications. Docker can also use LXC as one of its execution drivers, adding image management and deployment services on top.

A namespace is a kernel mechanism that separates groups of processes so that they cannot see resources belonging to other groups. For example, a PID namespace provides a separate enumeration of process identifiers within each namespace. Besides PID namespaces, there are mount, UTS, network, user, and System V IPC namespaces.
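
A quick way to see namespaces in action on a Linux machine is to look at the namespace links the kernel exposes for every process; the sketch below simply lists them for the current process (Linux only):

```python
import os

# Every process has symlinks under /proc/<pid>/ns, one per namespace it
# belongs to. Two processes in different PID namespaces show different
# inode numbers for the "pid" entry.
for ns in sorted(os.listdir("/proc/self/ns")):
    target = os.readlink(f"/proc/self/ns/{ns}")
    print(f"{ns:10s} -> {target}")   # e.g. pid -> pid:[4026531836]
```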

Cgroups, also known as control groups, are a Linux kernel feature that can be used to restrict, prioritize, and monitor resource usage at the process level. Every group has a profile where limits can be specified, and all processes under that group have access only to the allotted resources. Group profiles cover parameters such as CPU, memory, disk I/O throughput, and network bandwidth usage.
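
Docker surfaces these cgroup controls as options when a container is started. A sketch using the Docker SDK for Python, with an illustrative memory cap and CPU pinning:

```python
import docker

client = docker.from_env()
container = client.containers.run(
    "alpine:3.19", ["sleep", "60"],
    detach=True,
    mem_limit="256m",     # memory cgroup: hard limit of 256 MB
    cpuset_cpus="0",      # cpuset cgroup: restrict the container to CPU core 0
)
print(container.status)

container.stop(timeout=1)
container.remove()
```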


Now that we know what LXC is, let us look at what more Docker offers on top of existing Linux containers and why it is more efficient to use.

LXC refers to capabilities of the Linux kernel, namely namespaces and control groups, which allow us to isolate processes from one another and to control the resources allocated to them. Docker carries several advantages over LXC and provides features that LXC does not, making it developer friendly. Docker is a high-level tool that builds a lot of compelling functionality on top of this low-level foundation of kernel features.


Advantages of Docker over LXC

Docker provides great portability by making deployment easy across machines: it defines a format for bundling an application with all its dependencies into a single object that can be transferred to any machine running Docker, with guaranteed execution. LXC provides process sandboxing, which is an important ingredient of Docker's portability too, but LXC alone cannot guarantee that an application bundled in an LXC container will run on any machine, because it remains tied to a specific machine's configuration for networking, logging, storage, and so on. Docker eliminates these complexities, letting developers concentrate entirely on their application rather than on machine configuration.
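
As a rough sketch of that bundling idea with the Docker SDK for Python, an image can be exported to a tar archive and loaded on any other machine running Docker (the image name and file path are just examples):

```python
import docker

client = docker.from_env()
image = client.images.pull("alpine:3.19")

# Save the image (all layers plus metadata) to a portable tar archive,
# equivalent to `docker save`.
with open("alpine.tar", "wb") as f:
    for chunk in image.save():
        f.write(chunk)

# On the target machine, the archive can be loaded back (like `docker load`):
#   with open("alpine.tar", "rb") as f:
#       client.images.load(f.read())
```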

Docker's main purpose is to concentrate on the application and eliminate the risks and errors that can cause problems during deployment. Docker is optimized completely for application deployment, which is not the case with LXC. Docker's API, user interface, design philosophy, and documentation all show how committed Docker is to making deployment easy. LXC, on the other hand, focuses on making containers behave like lightweight machines that take less RAM and boot faster, and it stays tied to machine configuration.

Automatic building of containers can be achieved using a Dockerfile. A Dockerfile lets developers build a container image from scratch, directly from their source code, with complete control over the application's dependencies, build tools, and packages. Developers are free to use Chef, make, Puppet, Maven, Debian packages, Salt, RPMs, source tarballs, or any combination of the above, regardless of the configuration of the underlying machine.
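
A sketch of such an automated build with the Docker SDK for Python, assuming a hypothetical source folder ./myapp that already contains a Dockerfile:

```python
import docker

client = docker.from_env()
image, build_logs = client.images.build(
    path="./myapp",    # hypothetical source directory containing a Dockerfile
    tag="myapp:1.0",
    rm=True,           # remove intermediate containers after the build
)
for entry in build_logs:
    if "stream" in entry:
        print(entry["stream"], end="")   # echo the build output
print("built:", image.tags)
```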

Docker's versioning system is a lot like Git's. Docker images are layered like an onion: changes to a container are added as new layers on top of the base image instead of creating a whole new image, so only the differences are stored, much like Git. Docker provides capabilities for tracking the current version of a container, inspecting the differences between the existing and the new version, committing changes as new versions, and rolling back in case of errors. Docker also provides a history showing how an image was created and by whom.
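
A sketch of that Git-like workflow using the Docker SDK for Python (the image and repository names are illustrative):

```python
import time
import docker

client = docker.from_env()
container = client.containers.run(
    "alpine:3.19",
    ["sh", "-c", "echo hi > /note.txt && sleep 30"],
    detach=True,
)
time.sleep(1)                                  # give the shell a moment to write the file

print(container.diff())                        # filesystem changes vs. the base image
new_image = container.commit(repository="alpine-notes", tag="v1")  # like a git commit
for layer in new_image.history():              # how the image was assembled, layer by layer
    print(layer.get("CreatedBy", ""))

container.stop(timeout=1)
container.remove()
```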

Existing Docker images can be used as a base for creating other, more specialized images for an application. For example, an existing Ubuntu image with Nginx installed can be used to create ten more specialized containers on top of it, each running its own separate web application. This can be done manually or as part of an automated build, i.e. a Dockerfile.
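
For instance, here is a sketch with the Docker SDK for Python that starts a few containers from the same official nginx image, each published on its own host port (the names and ports are arbitrary):

```python
import docker

client = docker.from_env()
client.images.pull("nginx:1.25")

webs = [
    client.containers.run(
        "nginx:1.25",
        name=f"web-{i}",               # hypothetical container names
        detach=True,
        ports={"80/tcp": 8080 + i},    # host ports 8080, 8081, 8082
    )
    for i in range(3)
]

for c in webs:
    print(c.name, c.status)

for c in webs:                         # clean up
    c.stop(timeout=1)
    c.remove()
```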

Docker has a public registry where people have uploaded thousands of useful images that anyone can use for their own application requirements. The Docker team maintains a "standard library" of useful images, which is also included in the registry. Private images can be stored and shared by deploying your own registry, since the registry software is open source.
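
A sketch of the pull/push flow with the Docker SDK for Python; the private registry address below is a placeholder, and pushing assumes you are already authenticated to that registry:

```python
import docker

client = docker.from_env()

image = client.images.pull("redis:7")                     # pull from the public Docker Hub registry
image.tag("registry.example.com/myteam/redis", tag="7")   # placeholder private registry address
# Requires prior authentication (e.g. `docker login registry.example.com`).
result = client.images.push("registry.example.com/myteam/redis", tag="7")
print(result.splitlines()[-1])                            # last status line of the push output
```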


Best Features of Docker

Docker containers are extremely lightweight and boot much faster, which makes scaling fast and easy. When required, more containers can be launched within a few seconds and scaled back down when they are no longer needed.
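
A small sketch with the Docker SDK for Python that scales a pool of identical containers up and back down (the count and image are arbitrary):

```python
import time
import docker

client = docker.from_env()

start = time.time()
workers = [
    client.containers.run("alpine:3.19", ["sleep", "300"], detach=True)
    for _ in range(5)
]
print(f"started {len(workers)} containers in {time.time() - start:.1f}s")

for w in workers:      # scale back down when no longer needed
    w.stop(timeout=1)
    w.remove()
```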

Docker containers are extremely portable, as they can be moved very easily without any hassle or worries about the underlying machine. Images and registries are used to manage containers: a snapshot of any environment can be taken and uploaded to a registry (private or public, based on the user's preference) and later downloaded to create new containers.

Deployment is one of the main strengths of Docker, as containers will run on any machine irrespective of its environment. We do not have to worry about whether an application will execute, because Docker containers guarantee execution.

Docker makes sure that resources are utilized efficiently. Its lightweight nature means many containers can run on a single machine, making the best use of available resources and significantly reducing costs (licensing, for example).


Comparison with Virtual Machines

Docker separates the application from its underlying operating system, whereas a virtual machine separates the operating system from the underlying hardware. Docker containers are extremely lightweight and do not require many resources; virtual machines are heavy and require their own guaranteed resources. Docker containers take seconds to boot up and start running, whereas a virtual machine takes much longer to start. There is no hypervisor involved with Docker containers. Virtual machines can grow very large once all the required dependencies and packages are installed, and they consume large amounts of CPU and memory, which makes scaling complex. Docker, on the other hand, is built mainly to eliminate this, delivering faster results with lower resource utilization.

[Figure: Docker vs. virtual machines]


Dockerfile

A Dockerfile is simply a set of instructions in a file that are executed step by step while building a container image. The commands that would otherwise be run manually to build a container are put together into one Dockerfile, which executes them step by step; the end result is an image from which we can create containers. Other scripts can also be called from a Dockerfile, but those scripts must be in the same folder. The best practice for building a Docker image from a Dockerfile is to place the Dockerfile in an empty folder and then add to that folder only the files needed by the commands in the Dockerfile.
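
A minimal sketch of that practice using the Docker SDK for Python: a tiny application file and Dockerfile are written into an otherwise empty build folder, built into an image, and run (the file names and tag are hypothetical):

```python
import pathlib
import docker

build_dir = pathlib.Path("build-context")
build_dir.mkdir(exist_ok=True)

# Only the files placed in the build folder are available to COPY.
(build_dir / "app.py").write_text('print("hello from inside the image")\n')
(build_dir / "Dockerfile").write_text(
    "FROM python:3.12-slim\n"    # base image layer
    "WORKDIR /app\n"
    "COPY app.py .\n"
    'CMD ["python", "app.py"]\n'
)

client = docker.from_env()
image, _ = client.images.build(path=str(build_dir), tag="hello-dockerfile:latest")
print(client.containers.run("hello-dockerfile:latest", remove=True).decode())
```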


Docker's parent company was earlier named dotCloud, and the project was developed with the aim of revolutionizing cloud computing technology. Docker's versioning system is very user friendly and fast: changes in a new version are reflected against the base version by uploading only the changes, not the whole image all over again, so there are often only a few MBs of data to upload. Unlike VMs, Docker containers are extremely lightweight and can be shared with others far more quickly. Docker Hub is a public registry where we can upload Docker images for later use, and private registries can be created for sharing images within an organization. All of this typically fits in well under a gigabyte, with the added advantage of a Git-like versioning system. These features solve a lot of complicated problems for developers, which is why the whole world is adopting Docker containers and the technology is growing exponentially day by day.

 

WRITTEN BY CloudThat
