When you land a job as a consultant or a technical manager, it’s natural to imagine a life revolving around sophisticated meetings, intense discussions and so on. But the dream of a high-visibility job overlooks one tiny detail: becoming a part of those much-anticipated discussions, meetings and consulting engagements requires an understanding of the products and services that are in demand. Having a technical background would solve this dilemma, some might think. In reality, though, everything is quite different from what most of us study in college.
The plethora of tools and functionalities seems confusing at first, and then they all start to look the same. Chasing cost analysis won’t help much either, because in the end cost is not the only thing that drives user decisions. A business user should know a product or a service in an entirely different way from a technical expert: the business understanding need not be in-depth, but it should be expansive. A true business user knows the highs and lows of a product or a service.
Kubernetes is one such term that has become the most coveted jewel of containerized applications. So, let us first understand what containers actually are! Think of a container as a package that holds an application along with all of its dependencies, configurations and everything else needed to run it. Since containers carry their own runtime environment, there’s no need to worry about differences in OS versions or underlying infrastructure. Containers are like boats floating on the same water body: every boat has its own design and structure, but all of them utilize the water body in a similar manner.
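The “self-contained package” idea can be sketched as plain data: an image bundles the app with its dependencies and configuration, so “running” it does not depend on the host. A purely illustrative toy, with made-up names, not any real container API:

```python
from dataclasses import dataclass, field

@dataclass
class Image:
    """A toy 'container image': the app plus everything it needs to run."""
    app: str
    dependencies: list = field(default_factory=list)
    env: dict = field(default_factory=dict)

def run(image, host_os):
    # The host OS doesn't change the result: the image carries its own
    # runtime pieces, so it behaves the same on any host.
    return f"{image.app} running with {sorted(image.dependencies)} (host: {host_os})"

web = Image(app="webapp", dependencies=["python3.11", "flask"], env={"PORT": "8080"})
print(run(web, "ubuntu"))
print(run(web, "fedora"))  # same application behavior, different host
```

The point of the sketch: everything the app needs travels inside the image, which is what makes containers portable across hosts.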
Now that we have a basic understanding of containers, we can proceed to Kubernetes and why it’s so important. Since containers are self-sustaining applications that share an underlying infrastructure, you might be wondering where Kubernetes fits in. Imagine having a lot of boats on the river; managing the demand for boats becomes difficult. A “boat system management” is needed in such cases to take care of scheduling, tracking and managing the boats, and to ensure each boat stays unaffected by the others. K8s plays the same role: it makes management easier while keeping each container isolated. In simple words, K8s manages the containerized application life cycle and improves predictability, scalability and availability. Like the boat system management, a K8s user can define how an application runs and how it interacts with other applications.
How Does K8s Work?
The best thing about Kubernetes is that it can run anywhere, on any cloud: public, private, or a mix of both. A Kubernetes deployment is called a “cluster”, which is a combination of a “control plane” and “compute machines”, or “nodes”. Was this description a bit confusing? Let’s assume each boat has a boat operator and some passengers. The boat operator, or “compute machine/node”, is responsible for operating the boat, i.e., running the applications. The ticket master, or “control plane”, keeps track of the boats on the lake and maintains its best state. In other words, the control plane is responsible for maintaining the desired state of the cluster: which applications or other workloads should be running, which container images they use, which resources should be made available to them, and other such configuration details.
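This “desired state vs. current state” idea is the heart of Kubernetes: controllers continuously compare the two and act on the difference. A minimal sketch of such a reconciliation loop (purely illustrative; in real Kubernetes the desired state lives in etcd and controllers do this work):

```python
def reconcile(desired, current):
    """Drive the current state toward the desired state.

    desired/current: dict mapping app name -> replica count.
    Returns the list of actions taken on this pass.
    """
    actions = []
    for app, want in desired.items():
        have = current.get(app, 0)
        if have < want:
            actions.append(f"start {want - have} replica(s) of {app}")
        elif have > want:
            actions.append(f"stop {have - want} replica(s) of {app}")
        current[app] = want  # pretend the actions succeeded
    return actions

desired = {"web": 3, "db": 1}   # what we declared we want
current = {"web": 1}            # what is actually running
print(reconcile(desired, current))
# A second pass finds nothing to do: current now matches desired.
print(reconcile(desired, current))
```

Kubernetes applies this loop constantly, which is why a crashed pod gets replaced without anyone issuing a command: the current state simply drifted away from the desired state.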
The Kubernetes master, or control plane, is the brain of the cluster: it monitors the cluster and responds to cluster events, which might include scheduling, scaling or restarting an unhealthy pod. As we discussed before, it can be compared to a ticket master who oversees boat allocation, request management and so on. It consists of 5 components: kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and cloud-controller-manager.
- kube-apiserver: The frontend of the Kubernetes control plane; exposes the REST API endpoint to users. Several instances of it can be run to scale horizontally and balance traffic
- etcd: Key-value store for the cluster data (regarded as the single source of truth)
- kube-scheduler: Watches for new workloads/pods with no assigned node and assigns them to a node based on several scheduling factors (resource constraints, anti-affinity rules, data locality, etc.)
- kube-controller-manager: Central component that runs the controllers watching nodes, replica sets, endpoints (services), and service accounts
- cloud-controller-manager: Interacts with the underlying cloud provider to manage resources. It links your cluster to your cloud provider’s API, and separates the components that interact with that cloud platform from the components that only interact with your cluster
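Of these components, kube-scheduler is the easiest to picture in code. It roughly works in two phases: filter out nodes that cannot fit the pod’s resource request, then score the survivors and pick the best one. A toy sketch of that idea (the real scheduler weighs many more factors):

```python
def schedule(pod_cpu, nodes):
    """Pick a node for a pod requesting pod_cpu millicores of CPU.

    nodes: dict mapping node name -> free CPU (millicores).
    Returns the chosen node name, or None if no node fits.
    """
    # Phase 1 (filter): drop nodes that can't satisfy the request.
    feasible = {name: free for name, free in nodes.items() if free >= pod_cpu}
    if not feasible:
        return None  # the pod would stay in Pending state
    # Phase 2 (score): here, prefer the node with the most free CPU.
    return max(feasible, key=feasible.get)

nodes = {"node-a": 500, "node-b": 2000, "node-c": 100}
print(schedule(250, nodes))   # -> node-b (fits, and has the most headroom)
print(schedule(3000, nodes))  # -> None (no node can fit this pod)
```

The real kube-scheduler combines many scoring plugins (affinity, taints, topology spread, etc.), but the filter-then-score shape is the same.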
The node components, unlike the master components, run on every node; they maintain running pods and provide the Kubernetes runtime environment. There are 3 of them: kubelet, kube-proxy, and the container runtime.
- kubelet: Agent that runs on every node; it inspects container health, reports to the master and listens for new commands from the kube-apiserver. It makes sure containers are running in a pod
- kube-proxy: Maintains the network rules on nodes, allowing network communication to pods from network sessions inside or outside of a cluster
- container runtime: Software for running the containers (e.g. Docker, rkt, runc, CRI-O)
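Of the node components, kube-proxy’s job is easy to illustrate: a Service has one stable name, and traffic sent to it is spread across the pods backing it. A round-robin toy sketch of that idea (the real kube-proxy programs iptables/IPVS rules rather than routing in user code):

```python
import itertools

class ServiceProxy:
    """Toy stand-in for a Service: one stable entry point, many pod backends."""

    def __init__(self, endpoints):
        # Cycle through the backing pods in turn (round-robin).
        self._cycle = itertools.cycle(endpoints)

    def route(self, request):
        backend = next(self._cycle)
        return f"{request} -> {backend}"

proxy = ServiceProxy(["pod-1:8080", "pod-2:8080", "pod-3:8080"])
for i in range(4):
    print(proxy.route(f"GET /health #{i}"))
# Requests hit pod-1, pod-2, pod-3, then wrap around to pod-1 again.
```

The useful property to notice: callers only know the Service, so pods can be added, removed or rescheduled without clients ever changing their configuration.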
You can also have a look here to learn how each K8s unit works and manages incoming requests.
The Right Kubernetes provider!
There are more than 45 Kubernetes distributions listed at the CNCF (Cloud Native Computing Foundation), and many cloud providers offer fully managed Kubernetes clusters. A user who’s on the cloud can decide to use the K8s offering from their cloud provider or go for a different K8s distribution. Some of the most widely used managed K8s clusters are Google Kubernetes Engine (GKE), Microsoft Azure Kubernetes Service (AKS) and Amazon Elastic Kubernetes Service (EKS). For on-prem solutions, companies provide a management layer over Kubernetes that adds features without restricting the user to a specific vendor; these solutions can be deployed to almost any infrastructure, including the user’s own datacenter. Red Hat OpenShift is one such offering: a complete Kubernetes distribution that adds features such as multi-tenancy, extended support for CI/CD using Jenkins, improved networking, and a built-in private image registry.
Which offering a user should select depends entirely on the organization’s structure, specific needs and demands, cloud maturity, policies, etc. An in-depth analysis should be done before adopting any of the offerings. A detailed comparison can be seen here, looking at the current features and limitations of the managed Kubernetes services from the three largest cloud service providers: EKS, AKS, and GKE.
While learning about Kubernetes, one should never forget that even though it has become the de facto standard for cloud application development, the existing architecture and business needs should come first when deciding. No doubt it opens up innovation, speed to market and resilient capabilities for applications, but it’s evolving at a very fast rate, and staying updated is the key to making smart decisions!