Kubernetes Basics #1 - Getting Started

Last Edited: 3/11/2025

This blog post introduces the fundamental concepts needed to get started with Kubernetes.

DevOps

Imagine your web application has gone viral against all odds, gaining millions of users overnight. Your initial load balancing setup using Docker and Nginx, which was serving requests smoothly just moments before, now struggles to keep up with the massive influx of traffic. The server, limited in capacity, eventually crashes, prompting you to scale horizontally by purchasing new servers.

You replicate your configuration on each new server, pulling the repository and running Docker again, and set up an additional reverse proxy to direct traffic to multiple servers. However, as your application now spans across multiple containers in multiple servers, managing replicas and ensuring continuous accessibility becomes increasingly tedious. Every time a server crashes or needs maintenance, you are forced to manually update the reverse proxy configuration, add new servers, and reconfigure everything.

This is where Kubernetes comes into play: an open-source container orchestration tool released by Google in 2014, designed to automate container management and eliminate the manual work required for deployment and scaling. Thanks to its portability and robust features, Kubernetes has rapidly gained popularity among large corporations managing complex platforms. In this Kubernetes series, we will cover the foundations needed to get started with Kubernetes, so that we can avoid these struggles if we ever end up developing an application that attracts a large amount of traffic.

Kubernetes Cluster

Kubernetes, often abbreviated as k8s (since there are 8 letters between "k" and "s"), offers abstractions that make its architecture easier to reason about, though they can be a source of confusion at first. Here, we will cover the essential components. In Kubernetes, we create a cluster of nodes (where each node is a physical or virtual machine): a master node runs the control plane, which monitors and controls the entire cluster, while worker nodes run containerized applications in pods, the smallest deployable units, each wrapping one or more containers.

K8s Components

The control plane consists of an API server (the entry point for us to interact with the cluster), a controller manager (which watches the cluster and drives its actual state toward the desired state), a scheduler (which intelligently assigns pods to nodes), and etcd (a key-value store that holds the state of the cluster). We can use the command-line tool kubectl (Kube Control) to interact with the cluster through the API server. Each worker node is then equipped with a kubelet (which starts and manages pods based on instructions from the API server) and a kube-proxy (which maintains network rules so that traffic can reach pods across nodes), and runs a set of pods.

Resources

On top of the cluster components introduced above, we have another set of abstractions for configuring clusters: resources. Just like Docker Compose and Nginx, Kubernetes allows us to configure clusters by declaring resources in YAML files.
First, we need to set up a deployment, which serves as a blueprint for pods, similar to how a Docker Compose file serves as a blueprint for building and running Docker containers. It specifies the number of pod replicas the cluster should maintain, the container image to run, the container port, and so on.
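As a rough sketch, a deployment manifest might look like the following; the web-app name, the nginx image, and the port here are placeholders for illustration, not part of any particular setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                  # hypothetical name for our web application
  labels:
    app: web-app
spec:
  replicas: 3                    # number of pod replicas the cluster should maintain
  selector:
    matchLabels:
      app: web-app               # the deployment manages pods carrying this label
  template:                      # blueprint for each pod
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.27      # placeholder image; replace with your application's image
          ports:
            - containerPort: 80  # port the container listens on

Applying a manifest like this asks the cluster to keep three replicas of the pod running and to recreate them whenever they crash.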

K8s Configurables

Secondly, we need to set up a service, which provides a stable IP address and a single entry point, and works as a load balancer across multiple pods running the same container(s). By setting up a service, we can abstract away the details of networking with individual pods. The service can be external (accessible through a worker node's IP address and a port) or internal (only accessible from within the cluster). Lastly, we need to set up config maps and secrets, which store key-value pairs for configuration values and sensitive data, so that we don't have to rebuild and redeploy everything over simple configuration and secret changes.
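Continuing the sketch with the same hypothetical web-app pods, a service and a config map might look like this; the NodePort type makes the service externally reachable through each worker node, while omitting it (ClusterIP, the default) would keep the service internal. The names, ports, and key-value pair are again made up:

apiVersion: v1
kind: Service
metadata:
  name: web-app-service          # hypothetical service name
spec:
  type: NodePort                 # external service reachable at <node IP>:<nodePort>
  selector:
    app: web-app                 # forwards traffic to pods carrying this label
  ports:
    - port: 80                   # port exposed inside the cluster
      targetPort: 80             # container port the traffic is sent to
      nodePort: 30080            # port opened on every worker node (default range 30000-32767)
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-app-config           # hypothetical config map name
data:
  APP_MODE: "production"         # example key-value pair that pods can consume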

In addition, we can optionally set up an Ingress, which is responsible for exposing URLs, terminating TLS/SSL, and routing traffic to services based on the request path. The Ingress rules are enforced by an Ingress controller, which runs as a reverse proxy or load balancer in a pod. The diagram above visualizes the resources we covered. In future articles, we will discuss them in more detail, including how we can configure them in YAML files and apply the configuration with kubectl.
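As a quick preview, and assuming the hypothetical web-app-service above, an NGINX-based Ingress controller, and a placeholder domain, an Ingress manifest might look roughly like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress              # hypothetical ingress name
spec:
  ingressClassName: nginx            # assumes an NGINX Ingress controller is installed
  tls:
    - hosts:
        - example.com                # placeholder domain
      secretName: web-app-tls        # secret holding the TLS certificate and key
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix         # match every path under /
            backend:
              service:
                name: web-app-service    # route matching traffic to the service above
                port:
                  number: 80

Each of these manifests could then be applied with kubectl apply -f <file>.yaml, which we will walk through step by step later in the series.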

Conclusion

In this article, we covered the motivations behind Kubernetes and the basic components and resources needed to get started with it. There are still some concepts we haven't introduced yet, but we will cover them in future articles as we delve deeper into the topics touched on here.