
Docker is the best thing to happen to Linux since the GNU bread

It has become an oft-repeated cliche that Docker is the latest buzzword, both in DevOps circles and in the open source community. But being ever the cynic and contrarian, I wasn't convinced of its merits until quite recently; I thought it was just another hype that would come and go like everything else.

After all, software configuration management had been going on nicely without these kinds of virtualization tools for ages. Since I can install and configure php/python/apache/nginx/whatever on any given LAMP box, and VirtualBox already lets me create a parallel LAMP setup on my laptop, why exactly do I need Docker?

But some months ago, I came to understand that the contributors to the Docker project include some of the biggest industry leaders such as Google, Microsoft, IBM, Cisco and Red Hat, and that Linux is their primary development platform! All these companies couldn't possibly have bet on the wrong horse, could they? So I got interested, started exploring Docker a bit more, and learned that it sits somewhere between full hardware virtualization like VirtualBox/VMware and slow software emulation like QEMU.

In fact, Docker uses existing Linux tools and libraries like iptables, AppArmor, SELinux and libvirt to run your apps efficiently in containers. This parallel environment is called a container, and unlike that other parallel environment, the VM, a container is far more efficient and conservative with resources (memory and cores), since it doesn't have to emulate or spin up an entirely new virtual kernel or operating system. The isolation is at the application level, not at the operating system level, which is exactly what is needed in about 90% of use cases for software engineers. The diagram below illustrates this point very well:

Docker Containers vs VirtualBox VMs (Image Source: Docker.com)
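
You can see this shared-kernel design for yourself with a quick sketch, assuming you already have Docker installed: a container reuses the host's kernel instead of booting its own, so both of the following commands should print the same kernel release:

## A container shares the host kernel rather than emulating its own
uname -r                           ## kernel release on the host
docker run --rm alpine uname -r    ## kernel release inside an Alpine container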

Emulating a system or running an app in a jailed or containerized environment is just one use case of Docker; another, and probably more popular, use case is deployment and integration testing. The advantage of containerized deployment is that you don't actually have to install PHP and nginx to run WordPress, and you don't actually have to install Python and Flask to run a Flask app!
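
As a minimal illustration of this, here is how you could serve nginx's default page without nginx ever being installed on the host (the port mapping is just an example):

## Run the official nginx image from Docker Hub on host port 8080
docker run --rm -p 8080:80 nginx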

All the developer does is build a docker image (of which a container is an instance), and that image can derive from one of the several official pre-built images on Docker Hub such as Python, PHP, nginx, etc. The developer defines a thing called a Dockerfile (the source code used to build an image), adds a reference to Python or whatever base is needed (there are variants like python:slim or python:<version> to take care of specific cases), adds the app's source files (including a requirements.txt to pull packages from PyPI if need be), then builds, runs, tests and pushes the image to Docker Hub and is done with it!

The container the developer runs is entirely self-contained and entirely separated from his actual system. You don't need to have Python, PHP or any of the image's dependencies on your actual system; you've developed your entire application without installing a single piece of software on your machine, and without resorting to heavy virtualization tools like VMs. Can you even imagine how magical that feels!
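
For illustration, a minimal Dockerfile sketch for a small Flask app might look like this (the file names app.py and requirements.txt are assumptions of this sketch, not anything Docker prescribes):

# Hypothetical Dockerfile for a small Flask app
FROM python:3-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application source
COPY . .

EXPOSE 5000
CMD ["python", "app.py"]

Running docker build against this file then produces an image that runs the app without Python ever touching the host.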

Once the developer pushes the image to Docker Hub, the DevOps engineer or sysadmin can pull that image from there onto a production, testing or whatever cloud instance and just run it. Without a single configuration change, the container will run exactly as it ran in the developer's environment, which is just magical! You've essentially abstracted away your application development and deployment to such an extent that it's no longer necessary for the developer and DevOps to communicate or agree on any conventions, or even know each other at all!
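
In practice that round trip looks something like this (the image name myname/myapp and the ports are hypothetical):

## Developer side: build, tag and publish the image
docker build -t myname/myapp:1.0 .
docker push myname/myapp:1.0

## Ops side: pull the exact same image and run it on any Docker host
docker pull myname/myapp:1.0
docker run -d -p 80:5000 myname/myapp:1.0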

The separation of concerns between a docker container or image and the underlying Linux system is what makes this whole magic work:

Docker Engine Components

Docker runs as a systemd service or daemon on your Linux system and is responsible for keeping this separation intact and for basic errands like building an image from a Dockerfile, running a container from an image, listing, tracking and cleaning up images and containers, pushing images to Docker Hub, and so on. All of this happens through a single docker command which is easy to grasp once you start using it:

## List CLI commands
docker
docker container --help

## Display version and info
docker --version
docker version
docker info

## Execute image
docker run hello-world

## List images
docker image ls    ## or: docker images

## List containers (running, all, all in quiet mode)
docker container ls
docker container ls --all
docker container ls -aq
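
## Housekeeping sketch (container and image names are hypothetical)
docker stop mycontainer
docker rm mycontainer
docker image rm myname/myapp:1.0

## Clean up stopped containers, dangling images and unused networks
docker system prune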

Of course, an entire tutorial on Docker is out of scope for this article, but I'd like to leave some useful links to help you install, learn and use it:

Published on System Code Geeks with permission by Prahlad Yeri, partner at our SCG program. See the original article here: Docker is the best thing to happen to Linux since the GNU bread


Prahlad Yeri

Prahlad is a freelance software developer working on web and mobile application development. He also likes to blog about programming and contribute to open source.