Kubernetes (K8S) introduction

So, by now chances are you have heard about Kubernetes at least once. In this article, we will try to demystify Kubernetes and turn this buzzword into something we can comprehend.
For those of you who haven't had a chance to read my earlier article introducing the key concepts of Docker (Who are you, Docker?), I strongly suggest you read it before moving on to this one.

So what is Kubernetes?
Kubernetes (commonly stylized as K8s) is an open-source container orchestration system originally designed by Google. It automates application deployment, scaling, and management.

Cluster who? - Terminology that will help us move forward.
First, let's understand what a cluster is.
If you have ever played around with Docker containers, you know how easy it is to spin up several containers or a multi-container application. Maybe you even got a chance to experiment with docker-compose. For something very small, this kind of deployment can be sufficient, but some architectures go way beyond a handful of containers that you can keep track of in your head. Complex deployments can include many containers spread across multiple servers. This kind of container-based architecture is called a cluster. So, how would you go from 5 servers to 50? How would you remember, or keep track of, which machine you launched your Redis server on, for example? How would you manage this kind of scale?

Basic Kubernetes Skeleton (Architecture)
On a very high level, these are the key components of Kubernetes.
First, you have a "master node". The master node runs the processes that manage the cluster.
There are also "worker nodes". The worker nodes are the machines (physical or virtual) that make up the cluster. There can be more than one master node, and there can easily be thousands of worker nodes.
The worker nodes run "Pods". A Pod is the basic unit of scheduling, and it can hold one or more containers. With the help of Pods, we can deploy multiple dependent containers together.
You can say that the Pod acts as a wrapper around these containers.
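To make this concrete, here is a minimal sketch of a Pod manifest (the names and images are hypothetical) that wraps two containers so they are always scheduled together on the same worker node:

```yaml
# pod.yaml - a hypothetical Pod wrapping two containers.
# Both containers share the Pod's network and lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: cache
      image: redis:7
```

You would hand this file to the cluster (for example with `kubectl apply -f pod.yaml`), and Kubernetes takes care of finding a worker node to run it on.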
The master's role in all of this super cool structure is to manage and monitor the cluster and its components.
Kubernetes keeps track of its deployed components and has a built-in monitoring mechanism: it performs "health checks" on them and knows when something fails.
One more important component in Kubernetes is the API server. The API server is the gateway to the entire cluster: if you want to create, display, update, or delete any Kubernetes object, you have to go through this API.
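For example, all of these everyday `kubectl` commands are simply requests sent to the API server (the object and file names here are hypothetical):

```shell
kubectl get pods                  # display: list the Pods in the current namespace
kubectl apply -f redis-pod.yaml   # create or update an object from a manifest
kubectl describe pod my-redis     # display details of a single object
kubectl delete pod my-redis       # delete an object
```

Whether you use `kubectl`, a dashboard, or a client library, they all go through that same API server.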

3 main things Kubernetes simplifies for us:

Deployment - With an orchestration tool like Kubernetes, large, repeatable deployments become much easier.

Scaling - It is reasonable to assume that you do not spin up a dedicated server every time you want to deploy a container; that is just not a sensible way to manage your resources. Instead, Kubernetes will "find" the best place for your Pods given their CPU, memory, and other resource requirements.

Monitoring - As we mentioned, monitoring a huge cluster is not a simple task. With Kubernetes and its built-in monitoring mechanism, we get much better visibility and control over our architecture.
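The last two points can be sketched in a single Deployment manifest (all names and values here are hypothetical): it asks Kubernetes for three replicas, states the CPU and memory requests the scheduler uses to place each Pod, and defines the health check Kubernetes uses to restart a container when it fails.

```yaml
# deployment.yaml - a hypothetical Deployment illustrating
# scaling (replicas, resource requests) and monitoring (liveness probe).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3                  # scaling: run three identical Pods
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          resources:
            requests:          # the scheduler "finds" a node with this much room
              cpu: 100m
              memory: 128Mi
          livenessProbe:       # monitoring: restart the container if this check fails
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
```

Notice that you never say which worker node anything runs on; you declare the desired state, and Kubernetes does the placement and the health checking for you.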

This is obviously a high-level introduction to K8s. You can visit Kubernetes.io for more helpful information.
