Kubernetes Architecture and its Components | A Quick Guide

Navdeep Singh Gill | 02 February 2023



Introduction to Kubernetes

We are rapidly moving into the age of cloud computing, where every organization, small or large, wants its resources and data to be available no matter when or from where they are accessed. Applications today generate far more data than they used to, and keeping that data available at all times, regardless of the situation or the geographic location from which it is accessed, is crucial for businesses. It is also costly: installing physical machines everywhere a business needs to expand, or transferring data to servers in different locations, is an expensive undertaking.

This is why the cloud came to light. With the cloud, everything is readily available and can be used by any business, small or large, without the worries mentioned above. The major cloud providers are AWS, Google Cloud, and Microsoft Azure. A few more concepts came with the cloud, such as images and containers. Let's talk about containers. What are they? In simple words, a container is a lightweight package that bundles an application with everything it needs to run, making it reliable in any computing environment.

Yes, containers provide us with many things, but they also need to be managed and connected to the outside world for processes such as distribution, scheduling, and load balancing. This is done by a container orchestration tool like Kubernetes. So, when we talk about containers, Kubernetes comes attached: over the past few years it has made its place in the cloud world as the leading container orchestration tool. It was built by Google, drawing on their experience of running containers in production, and Google has made sure that it is the best in its field.


What is Kubernetes?

Kubernetes is an open-source container orchestration engine and an abstraction layer for managing full-stack operations of hosts and containers: deployment, scaling, load balancing, and rolling updates of containerized applications across multiple hosts within a cluster. Kubernetes makes sure that your applications are in the desired state. Kubernetes 1.8 was released on September 28, 2017, with new features aimed at the most demanding enterprise environments.

There are new features related to security, stateful applications, and extensibility. Since Kubernetes 1.7 we can store secrets in namespaces in a much better way; we'll discuss that below.


Why do we need Kubernetes?

There are several reasons listed below:

Moving from monolith to microservices

Monolithic applications are single-tiered applications in which different components are packaged together to form a single platform. This model has been used for years and is still in use because it is easy to develop and deploy, but it has drawbacks and limitations, so people are moving towards microservices as an alternative.

Microservices are an application development approach that builds an application as a set of highly distributed services, an iterative evolution of the service-oriented architecture and design style. In contrast to a monolith, a microservices architecture takes a modular design instead of combining everything in a single platform.

Increased usage of containers

The application container market is growing rapidly: it is projected to reach USD 7.6 billion by 2026, up from USD 1.5 billion in 2020. This rapid growth is mainly because of the advantages containers provide for development teams, such as agility, portability, development speed, efficiency, easy management with fault isolation, and strong security.

Managing a large number of containers

Container orchestration methods provide a structure for managing containers and microservices architectures at scale. There are multiple container orchestration tools for container life-cycle management; Docker Swarm, Kubernetes, and Apache Mesos are a few popular choices.

Orchestration enables the creation of application services that span multiple containers, the scheduling of containers across a cluster, the scaling of those containers, and the management of their health over time, covering everything from software deployments to infrastructure.


What are the benefits of Kubernetes?

The benefits are highlighted below:

Write once and run anywhere

Businesses today have to work across a wide range of infrastructures and clouds such as AWS, GCP, and Azure. Working across these platforms is made easy with Kubernetes, as it supports a wide range of cloud platforms.

Portability and flexibility

It is compatible with most container runtimes. Moreover, it can run on virtually any underlying infrastructure, whether a public cloud, a private cloud, or an on-premises server.

It is portable because it can be used on many infrastructures and environments. Most other orchestration tools lack portability because they are attached to specific runtime environments or infrastructures.

Multi-cloud capability

Partly because of its portability, it can host workloads that run on a single cloud as well as workloads spread across multiple clouds. Furthermore, its environment can easily be scaled from one cloud to another.

These characteristics indicate that it suits today's multi-cloud strategies many businesses are pursuing. Other orchestration tools may also work with multi-cloud infrastructures, but Kubernetes arguably goes above and beyond in terms of multi-cloud flexibility. When considering a multi-cloud strategy, there are additional requirements to consider.



Open source

It is an entirely open-source, community-driven project managed by the CNCF. It has several major corporate sponsors, but no single company "owns" it or has sole control over how the platform evolves. In the CNCF's 2019 Kubernetes Project Journey report, Weaveworks was named one of the top eight contributors.

To many businesses, its open-source strategy makes it preferable to orchestrators that are either closed-source (such as those built into public clouds) or open-source but closely associated with a single company.

Service discovery

You must be wondering what service discovery is. Simply put, it is the process of locating a service whenever it is needed to complete tasks within the application; such requests can be made many times, by users or by admins.
A Kubernetes Service provides a stable way of communicating with a pod or a set of pods during their runtime. Pods are the smallest deployable units of the system and carry out the essential tasks necessary for the application to function.

A Service can expose pods to the cluster or to the outside world through three principal types: ClusterIP, NodePort, and LoadBalancer.

Storage orchestration

Storage is a big problem, and handling it is an even bigger one: keeping data available at all times, backing it up, keeping secret data safe, and many more challenges besides. As the world moves rapidly into the digital era, these challenges only grow more complex, and the Kubernetes architecture provides the pieces that make them easier to handle. The k8s storage architecture is based on volumes, which can be either persistent or non-persistent. K8s also provides a requesting mechanism, known as volume claims, that allows containers to ask for storage as they need it.
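As a minimal sketch, a volume claim might look like the following (the claim name and storage size are illustrative, not from the original article):

```yaml
# pvc.yml - a hypothetical claim requesting 1Gi of storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-claim
spec:
  accessModes:
    - ReadWriteOnce      # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi       # amount of storage the containers need
```

A pod can then mount this claim as a volume, and Kubernetes binds it to a matching persistent volume behind the scenes.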

Automated rollouts and rollbacks

In the digital era, everything is about maintaining consistency, from service to quality. Businesses cannot afford downtime, as it results in huge losses.

Previously, updating an application manually was a hectic and lengthy process, and if the update contained bugs, rolling back to the previous version was just as hectic and could mean pausing the service for a period of time. Kubernetes provides automated rollouts and rollbacks, performed gracefully and without downtime. There are many more advantages, such as automatic bin packing, self-healing, and secret and configuration management, which simplify the cloud development environment.
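As an illustrative sketch, a Deployment can declare a rolling-update strategy like the following (the surge and unavailability limits are example values, not prescriptions):

```yaml
# Fragment of a hypothetical deployment.yml: replace pods gradually,
# keeping the application available throughout the update.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most one extra pod during the update
      maxUnavailable: 1   # at most one pod down during the update
```

If the new version misbehaves, `kubectl rollout undo deployment/<name>` returns the Deployment to its previous revision.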

What are the types of Kubernetes Architecture?

A managed Kubernetes cluster operates in a master-worker architecture, in which the Kubernetes master receives all management tasks and dispatches them to the appropriate Kubernetes worker node based on the given constraints.

  • Master Node
  • Worker Node


Kubernetes Components

Below we have created two sections so that you can better understand the components of the Kubernetes architecture and where exactly each of them is used.

Master Node Architecture


Kube API Server

The Kubernetes API server is the central point of contact for the Kubernetes cluster, handling authentication, authorization, and all other operations against the cluster. The API server stores all of its information in etcd, a distributed data store.


Setting up Etcd Cluster

Etcd is a database that stores data in the form of key-value pairs. It supports a distributed architecture and high availability with a strong consistency model. Etcd was developed by CoreOS and is written in Go. Kubernetes components store all kinds of information in etcd, such as metrics, configurations, and other metadata about the pods, services, and deployments of the Kubernetes cluster.



Kube Controller Manager

The kube-controller-manager is the component of a Kubernetes cluster that manages the replication and scaling of pods. It continuously works to bring the system to the desired state through the Kubernetes API server. There are other controllers in the Kubernetes system as well:

  • Replication controller
  • Endpoints controller
  • Namespace controller
  • Service accounts controller
  • DaemonSet Controller
  • Job Controller


Kube Scheduler

The kube-scheduler is another main component of the Kubernetes architecture. It checks the availability, performance, and capacity of the Kubernetes worker nodes and plans the creation and destruction of pods within the cluster so that the cluster remains stable in all respects: performance, capacity, and availability. It analyses the cluster and reports back to the API server, which stores the metrics on cluster resource utilisation, availability, and performance. It also schedules pods to specific nodes according to the submitted manifest for the pod.

Worker Node Architecture



Kubelet

The kubelet is the worker-node component of the Kubernetes architecture responsible for node-level pod management. The API server sends HTTP requests to the kubelet API to execute pod definitions from the manifest file on the worker nodes and to make sure the containers are running and healthy. The kubelet talks directly to container runtimes such as Docker or rkt.


Kube-Proxy

The kube-proxy is the networking component of the Kubernetes architecture. It runs on each and every node of the Kubernetes cluster.

  • It forwards traffic addressed to a Cluster/Service IP to the specified set of pods.
  • It maintains the network rules that give each service a stable virtual IP in front of a changing set of pods.
  • It alters iptables on all nodes so that different pods can talk to each other and to the outside world.
  • It works alongside the cluster DNS, which provides name resolution for services and pods.


Docker

Docker is an open-source container runtime developed by Docker, Inc. to build, run, and share containerized applications. Docker focuses on running a single application per container, with the container as the atomic unit of the building block.

  • Lightweight
  • Open-Source
  • Most Popular


rkt

rkt is another container runtime for containerized applications. Developed by CoreOS as a security-minded, standards-based container engine, it places a stronger focus on security and follows open standards (the project has since been archived).

  • Open-Source
  • Pod-native approach
  • Pluggable execution environment

Managed Kubernetes Supervisor

The Kubernetes supervisor is a lightweight process-management system that keeps the kubelet and the container engine in a running state.

Logging with Fluentd

Fluentd is an open-source data collector for Kubernetes cluster logs.


What are the basic concepts of Kubernetes?

The basic Kubernetes concepts are mentioned below:


Kubernetes Nodes

Kubernetes nodes are the worker nodes in the Kubernetes cluster. A worker node can be a virtual machine or a bare-metal server. A node has all the services required to run any kind of pod and is managed by the master node of the cluster. The following are a few of the services that run on a node:

  • Docker
  • Kubelet
  • Kube-Proxy
  • Fluentd

Docker Containers

A container is a standalone, executable package of a piece of software that includes everything needed to run it: code, runtime, libraries, and configuration. Containers support both Linux- and Windows-based apps and are independent of the underlying infrastructure. Docker and CoreOS are the main leaders in the container race.


Kubernetes Pods

Pods are the smallest unit of the Kubernetes architecture. A single pod can contain more than one container. A pod is modelled as a group of containers with shared namespaces and shared volumes. Example: pod.yml
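A minimal sketch of such a pod.yml (the pod name, labels, and image are illustrative assumptions):

```yaml
# pod.yml - a hypothetical single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25      # example image
      ports:
        - containerPort: 80  # port the container listens on
```

Applying this manifest with `kubectl apply -f pod.yml` asks the cluster to schedule the pod on a suitable node.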

Kubernetes Deployment

A Deployment is a JSON or YAML file in which we declare Pod and ReplicaSet definitions. We just describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate for us. With it we can:

  • Create new resources
  • Update existing resources

Example:- deployment.yml
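A hedged sketch of such a deployment.yml (the name, image, and replica count are illustrative, not from the original article):

```yaml
# deployment.yml - a hypothetical Deployment keeping three replicas running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                # desired state: three pod replicas
  selector:
    matchLabels:
      app: web               # manage pods carrying this label
  template:                  # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # example image
          ports:
            - containerPort: 80
```

The Deployment controller continuously reconciles the running pods against this declared state, recreating replicas that fail.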


Managed Kubernetes Service YAML/JSON

A Kubernetes Service definition is also written in YAML or JSON. It creates a logical set of pods along with a policy describing which ports and what type of IP address will be assigned to them. The Service identifies its set of target pods using a label selector. Example: service.yml
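As an illustrative sketch (the names and ports are assumptions), a service.yml selecting pods labelled `app: web` might look like:

```yaml
# service.yml - a hypothetical ClusterIP service in front of the "web" pods
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web          # label selector picking the target pods
  ports:
    - protocol: TCP
      port: 80        # port exposed by the service
      targetPort: 80  # port the pods listen on
```

Because no `type` is given, this defaults to ClusterIP; changing it to NodePort or LoadBalancer would expose the same pods outside the cluster.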

Replication Controller

A Replication Controller is a controller that ensures a specified number of pod "replicas" are running at any one time. It makes sure that:

  • Pods are running.
  • Pods match the desired replica count.
  • Pods are managed across all worker nodes of the managed Kubernetes cluster.

Example :- rc.yml
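A hedged sketch of such an rc.yml (the name, image, and replica count are illustrative assumptions):

```yaml
# rc.yml - a hypothetical Replication Controller keeping two pod replicas
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 2              # desired number of pod replicas
  selector:
    app: web               # pods this controller is responsible for
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image
```

In newer manifests this role is usually played by a ReplicaSet managed through a Deployment, but the reconciliation idea is the same.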


Labels

Labels are key/value pairs that can be added to any Kubernetes object, such as pods, services, and deployments. Labels are very simple to use in a Kubernetes configuration file and provide meaningful, relevant information to both operations and developer teams. They are very helpful when we want to roll out or restore an application in a specific environment only, and they can act as filter values for Kubernetes objects. Labels can be attached to objects at any time and modified at any time. Non-identifying information should be recorded using annotations instead.
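A minimal sketch of labels in a manifest (the keys and values are illustrative):

```yaml
# Labels live in an object's metadata; annotations hold non-identifying notes
metadata:
  name: web-pod
  labels:
    app: web
    environment: production
    tier: frontend
  annotations:
    owner: "team-alpha"   # non-identifying information, not used for selection
```

Tools and selectors can then filter on these labels, for example `kubectl get pods -l environment=production,tier=frontend`.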

Container Registry

A container registry is a private or public online storage that stores container images and lets us distribute them. There are many container registries on the market.

Microservices Application

Kubernetes is a collection of APIs that interact with compute, network, and storage. There are several ways to interact with a managed Kubernetes cluster:

  • API
  • Dashboard
  • CLI

The Kubernetes API can be used directly for every task on the cluster, from deployment to maintenance of anything inside it. The Kubernetes Dashboard is simple and intuitive for daily tasks, and we can also manage our cluster from it.


The Kubernetes CLI is also known as kubectl. It is written in Go and is the most-used tool for interacting with either a local or a remote Kubernetes cluster.


Continuous Delivery for Application

The deployment guides mentioned below can be applied to applications in most popular languages on Kubernetes.

What are the best practices of Kubernetes Monitoring?

Kubernetes makes infrastructure easier to manage by creating many levels of abstraction, such as nodes, pods, replication controllers, and services. Because of this, we no longer worry about where our applications are running or whether they have the resources to work properly. But in order to ensure good performance, we still need to monitor our deployed applications and containers.


There are many tools, such as cAdvisor and Grafana, available to monitor the Kubernetes environment with visualization. Grafana in particular is booming in the industry for monitoring Kubernetes environments.

Using cAdvisor to Monitor Kubernetes

cAdvisor is an open-source tool to monitor Kubernetes resource usage and performance. It discovers all the containers deployed on the Kubernetes nodes and collects information such as CPU, memory, network, and file-system usage, presenting it in a visual monitoring web dashboard.


Monitoring Using Grafana

Grafana is an open-source metrics analytics and visualization suite, commonly used for visualizing time-series data for application analytics. With Grafana, we need a time-series database such as InfluxDB and a cluster-wide aggregator of monitoring and event data such as Heapster. There are four steps to get Kubernetes information visualised on a Grafana dashboard:

  • Step 1: Heapster collects cluster-wide data from the Kubernetes environment.
  • Step 2: After collecting the data, Heapster writes it to InfluxDB.
  • Step 3: Grafana queries InfluxDB through the InfluxDB client to collect the required data.
  • Step 4: Grafana visualises the data in graphs. You can create a custom dashboard in Grafana as per your requirements.





Enterprise Solutions & Production Grade Cluster

The enterprise solutions and production-grade cluster features are outlined below:

Workloads API GA in Kubernetes 1.9

Kubernetes 1.9 introduced General Availability (GA) of the apps/v1 Workloads API, which is now enabled by default. The Apps Workloads API groups the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs together to form the foundation for long-running stateless and stateful workloads in Kubernetes. Deployment and ReplicaSet, two of the most commonly used objects in Kubernetes, are now stabilized after more than a year of real-world use and feedback.

Windows Support (Beta) For Kubernetes 1.9 

Kubernetes 1.9 introduces beta support, via SIG-Windows, for running Windows workloads.

Storage Enhancements in Kubernetes 1.9

Kubernetes 1.9 introduces an alpha implementation of the Container Storage Interface (CSI), which will make installing new volume plugins as easy as deploying a pod, and enable third-party storage providers to develop their solutions without the need to add to the core Kubernetes codebase.

Comprehensive Approach to Kubernetes

Kubernetes is an open-source container orchestration platform for managing full-stack operations and containers. Discover how XenonStack's Managed Kubernetes Consulting Solutions for enterprises and startups can help with migrating to cloud-native application architectures: re-platforming, re-hosting, recoding, rearchitecting, and re-engineering legacy software applications for current business needs. Application modernization services enable the migration of monolithic applications to a new microservices architecture. Deploy, manage, and monitor your big-data stack infrastructure on Kubernetes, and run large-scale multi-tenant Hadoop clusters and Spark jobs on Kubernetes with proper resource utilization and security.


Transform your
Enterprise With XS

  • Adapt to new evolving tech stack solutions to ensure informed business decisions.

  • Achieve Unified Customer Experience with efficient and intelligent insight-driven solutions.

  • Leverage the True potential of AI-driven implementation to streamline the development of applications.