Containers - Let's get moving!

I recently posted an article about Docker and how it changed the way we perceive virtualization and deployment tasks. If you haven't read that one, I strongly recommend reading it before continuing with this one: Who are you, Docker?

In this short post, we are going to talk a little bit more about Containers. No, not the ones we use to ship cars, clothes, electronic items and other goods. We mean the ones that make it easier for us to manage virtualization and deployment tasks.

What is a Container?
Well, if we go to Google, we will find this definition:

"A container is a standard unit of software that packages up the code and all its dependencies so the application runs quickly and reliably from one computing environment to another."

Now let's try to better understand how it works. Take a look at the drawing below.

Let's say we have a machine with a Linux OS.
As we can see in the drawing above, there are several processes running on that machine.
In this case, we launched the processes and they run just fine, but what if we need to deploy them on another machine, or on 1,000 different servers?
The target host machine might have different environment settings, a different OS version, and many other ecosystem differences that can potentially make the process behave differently than it does on our machine.

Isolate and containerize...
This is the most basic notion behind a Container. The process we isolate inside the Container's "sandbox" gets its own namespace and its own restrictions on what it can do and which resources it can access at runtime: CPU, directories, ports, and so on.
The Container's lifecycle is aligned with the process's lifecycle: once you start the container, it starts the process; once you stop it, it stops the process.
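As a quick sketch of both ideas, assuming Docker is installed (the image name, container name and limit values below are just illustrative):

```shell
# Start a container, restricting what the sandboxed process may consume:
# --cpus and --memory cap its resources, -p maps host port 8080 to container port 80.
docker run -d --name web --cpus="1.0" --memory="256m" -p 8080:80 nginx

# The container lives as long as its main process:
docker stop web    # stopping the container stops the process
docker start web   # starting it again restarts the process
docker rm -f web   # remove the container when we are done
```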
The Container itself "sits" on top of your operating system and uses its resources. On Linux-based operating systems such as Ubuntu, the container runs directly on your machine's kernel. On Windows, Docker requires an additional hypervisor-type layer, so your Containers will run inside a "Docker machine".

This containerized approach allows us to deploy applications that run consistently across environments while staying lightweight and easy to use.
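To make this concrete, here is a minimal, illustrative Dockerfile sketch for packaging an application together with its dependencies (the file names and the Python app are just an assumed example, not from the original article):

```dockerfile
# Base image: a specific runtime version, so the environment is identical everywhere
FROM python:3.12-slim

# Copy the application and install its dependencies inside the image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# The process the container runs; its lifecycle is the container's lifecycle
CMD ["python", "app.py"]
```

Build the image once with `docker build -t myapp .`, and it will run the same way on one machine or on a thousand servers, as long as Docker is available there.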

Although Docker is the best known and most popular one, there are other Container engines as well, for example rkt (Rocket).

This was a high-level overview of Docker-Containers and the way they work. I invite you to follow my blog for more Testing, Automation and DevOps content.

