Containers - Let's get moving!




I recently posted an article about Docker and how it changed the way we perceive virtualization and deployment tasks. If you haven't read it yet, I strongly recommend starting with that one before continuing to this one: "Who are you, Docker?"

In this short post, we are going to talk a little bit more about Containers. No, not the ones we use to ship cars, clothes, electronic items and other goods. The ones we use to make it easier for us to manage virtualization and deployment tasks.

What is a Container?
Well, if we go to Google, we will find this definition:

"A container is a standard unit of software that packages up the code and all its dependencies so the application runs quickly and reliably from one computing environment to another."

Now let's try to better understand how it works. Take a look at the drawing below.



Let's say we have a machine with a Linux OS.
As we can see in the drawing above, there are several processes running on that machine.
In this case, we launched the processes and they run just fine, but what if you need to deploy these processes on another machine, or on 1,000 different servers?
The target host machine might have different environment settings, a different OS version, and many other ecosystem differences that could make the process behave differently than it does on our machine.

Isolate and containerize...
This is the most basic notion and concept of a Container. The process we isolate inside the Container's "sandbox" gets its own namespace and its own restrictions on what it can do and which resources it can access at runtime, for example CPU, directories, ports, etc.
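To make that a bit more concrete, here is a rough sketch using the Docker CLI (the container name, image and limit values are placeholders I picked for illustration, not something from this post):

    # Run a container in its own restricted "sandbox":
    # - limit it to one CPU and 256 MB of memory
    # - expose only host port 8080, mapped to port 80 inside the container
    # - mount a single host directory read-only instead of the whole filesystem
    docker run -d \
      --name my-sandboxed-app \
      --cpus="1.0" \
      --memory="256m" \
      -p 8080:80 \
      -v /host/data:/app/data:ro \
      nginx:latest

Inside the container, the process sees only its own isolated view of the system (its own network ports, its own filesystem mounts) and cannot use more CPU or memory than it was granted.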
The Container lifecycle is aligned with the process lifecycle: once you start the Container, it starts the process; once you stop the Container, it stops the process.
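A short illustrative sequence (the container name "web" is hypothetical) that shows how starting and stopping the Container starts and stops the process:

    docker run -d --name web nginx:latest   # starting the container starts the nginx process
    docker ps                                # the container (and its process) is running
    docker stop web                          # stopping the container stops the process
    docker ps -a                             # the container now shows an "Exited" status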
The Container itself "sits" on top of your operating system and uses its resources. On Linux-based operating systems such as Ubuntu, the Container runs directly on your machine's kernel. On Windows, Docker requires an additional hypervisor-type layer, so your Containers will run inside a "Docker machine".
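You can see the shared kernel for yourself. On a Linux host, the kernel version reported from inside a container matches the host's kernel; on Windows or macOS it reports the kernel of the small Linux VM that Docker runs containers in (a rough sketch, the output will vary per machine):

    uname -r                         # kernel version of the host
    docker run --rm alpine uname -r  # the same kernel version reported from inside the container (on Linux)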

This containerized approach allows us to deploy applications that run consistently across environments while remaining very lightweight and easy to use.
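As an illustration of "packaging up the code and all its dependencies", here is a hypothetical minimal Dockerfile for a small Python app (app.py and requirements.txt are assumed files, not part of this post); the resulting image runs the same way on any machine with a Container Engine:

    FROM python:3.11-slim                  # base image with the interpreter included
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt    # dependencies baked into the image
    COPY app.py .
    CMD ["python", "app.py"]               # the process whose lifecycle the container follows

Building it once (docker build -t my-app .) and running it anywhere (docker run my-app) gives you the same environment on your laptop or on 1,000 servers.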

Although Docker is the most well-known and popular one, there are other Container Engines as well, for example rkt (Rocket).

This was a high-level overview of Docker Containers and the way they work. I invite you to follow my blog for more Testing, Automation and DevOps content.






