Kubernetes Basics #2 - Kube Control

Last Edited: 3/15/2025

This blog post introduces the basics of kubectl in Kubernetes.

DevOps

In the previous article, we covered the fundamental components and resources of Kubernetes needed to get started. In this article, we will dive deeper into those resources and demonstrate how to configure them and control the cluster with kubectl.

Minikube

Before getting started with kubectl, we need to set up an environment to experiment in, which can be done easily with minikube. Minikube implements a local Kubernetes cluster on multiple operating systems, letting developers experiment with Kubernetes without having to use production-scale cloud environments. Minikube can be installed by following the instructions in the official documentation.

Depending on the installation method, kubectl may be installed alongside minikube; if not, it needs to be installed separately (or invoked through the bundled version with minikube kubectl --). We can start a cluster using the Docker driver with minikube start --driver=docker, which automatically configures kubectl to use minikube. The cluster consists of only one node, which works as both the master and the worker. In a production environment, we usually set up multiple master nodes and separate worker nodes, but a single-node cluster is sufficient for learning kubectl. We can confirm that the setup is complete with minikube status, and we can use minikube delete --all to delete everything after experimenting.
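
As a quick reference, a typical minikube session looks roughly like the following (commands as described above; the exact output depends on your setup):

minikube start --driver=docker   # create a single-node local cluster using Docker
minikube status                  # confirm the cluster components are running
kubectl get nodes                # kubectl is already pointed at the minikube cluster
minikube delete --all            # tear everything down after experimenting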

Setting Up Deployments

As we discussed in the previous article, configuring a Kubernetes cluster means setting up resources such as deployments, services, config maps, and secrets (and optionally an ingress). First, we can set up a deployment as a blueprint for pods using a YAML file like the following.

frontend.yaml
apiVersion: apps/v1
kind: Deployment
# Metadata of deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
# Spec of deployment
spec:
  replicas: 3 # number of pods to run in a cluster
  # identify pods that belong to the deployment by matching labels
  selector:
    matchLabels:
      app: frontend
  # Pod template
  template:
    metadata:
      labels: # matchLabels look at this
        app: frontend
    spec:
      containers:
        - name: frontend
          image: frontend-image # placeholder image; must be available to the cluster (e.g., from a registry or loaded into minikube)
          ports:
            - containerPort: 3000

The template field contains the blueprint for each pod, assigning its metadata and containers (each with a name, image, and container port). The labels under the template's metadata are key-value pairs attached to the pods, which makes it possible to identify pods running the same containers even though each pod gets a different generated name. The selector field tells Kubernetes which pods belong to the deployment by matching those labels, so it can check that the specified number of replicas is running, among other things.

The labels field under the top-level metadata is optional, but including it is standard practice. It is also conventional to use an app:<name> label, although any key-value pair would work. To apply the deployment configuration to the cluster, we can use kubectl apply -f frontend.yaml. Notice that the command does not mention deployment anywhere; kubectl determines the resource type from the kind field in the YAML file, so kubectl apply works the same way for any kind of resource.
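
For example, after applying the file we can verify that the rollout finished and that the pods carry the expected label (a small sketch using the names from the YAML above):

kubectl apply -f frontend.yaml                          # create or update the deployment
kubectl rollout status deployment/frontend-deployment   # wait until all replicas are ready
kubectl get pods -l app=frontend                        # list only pods matching the label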

Setting Up Services

Next, we can set up a service for the deployment above. The file for a service has the same structure as one for a deployment, containing metadata and spec. The following is an example YAML file for configuring a service.

frontend.yaml
# ... Deployment config
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: ClusterIP # (ClusterIP = internal service, NodePort = external service)
  selector:
    app: frontend # forward to pods with this label
  ports:
    - name: frontend-port
      protocol: TCP
      port: 3000 # service's port
      targetPort: 3000 # container port of deployment
      #nodePort: 30000 
      # (if type=NodePort, service is accessible from Node's IP:nodePort)
      # (It needs to be in range 30000~32767)

The above sets up an internal service for the deployment. Since we usually create a service for each deployment, we can include the service configuration in the same file as the corresponding deployment, separated by ---, and apply them together. We can set the service's type to NodePort to expose it externally for testing, although in practice we usually keep all services internal and restrict external access to the ingress controller or a load balancer service (which we will discuss in the next article).
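
If you do switch the type to NodePort for a quick test, minikube can print a reachable URL for the service (a sketch assuming the service above; the IP and port depend on your cluster):

kubectl apply -f frontend.yaml            # re-apply after changing type to NodePort
minikube service frontend-service --url   # print a URL that reaches the service's node port
minikube ip                               # or get the node IP and open <node-ip>:<nodePort> manually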

Setting Up ConfigMaps & Secrets

To make it easier for us to manage environment variables and avoid rebuilding the entire cluster after minor changes in environment variables, we can make use of config maps and secrets, which can be easily set up using YAML files like below.

frontend-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
data:
  backend-url: backend-service # configured in backend.yaml
 
frontend-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: frontend-secret
type: Opaque # arbitrary key-value pairs defined by the user
data: # base64-encoded strings (note: base64 is an encoding, not encryption)
  user: ZGF0YWJhc2V1c2Vy
  password: ZGF0YWJhc2VwYXNzd29yZA==
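
The encoded values above can be produced with any base64 tool, for example from a shell:

echo -n 'databaseuser' | base64       # ZGF0YWJhc2V1c2Vy
echo -n 'databasepassword' | base64   # ZGF0YWJhc2VwYXNzd29yZA==

The -n flag matters: without it, echo appends a newline that would become part of the encoded value.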

It is essential to note that secrets have no built-in encryption by default; base64 is only an encoding, so sensitive data should be encrypted or otherwise protected before being stored. Also, the data field expects base64-encoded strings for its values; the stringData field can be used instead to supply plain strings. When we store a service's name in a config map, the cluster's DNS resolves that name to the service's IP at runtime, meaning we can change the service's network configuration without triggering a rebuild of the other services and deployments that reference it. The following shows how containers in pods can use those environment variables.

frontend.yaml
# ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: frontend
          image: frontend-image
          ports:
            - containerPort: 3000
          env:
            - name: BACKEND_URL
              valueFrom:
                configMapKeyRef:
                  name: frontend-config # name of the configmap
                  key: backend-url # key to reference the value from
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: frontend-secret
                  key: password  

The only difference between referencing environment variables stored in config maps and secrets is the field name (configMapKeyRef vs secretKeyRef). Since deployments might depend on environment variables in config maps and secrets, it is common practice to apply config maps and secrets before applying deployments and services.
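
Putting this into practice, applying the files in dependency order looks like the following (file names follow the examples in this article):

kubectl apply -f frontend-config.yaml   # ConfigMap first
kubectl apply -f frontend-secret.yaml   # then the Secret
kubectl apply -f frontend.yaml          # finally the deployment and service that consume them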

Monitoring Cluster

Using kubectl, we can not only apply configurations defined in YAML files but also monitor the status of components and resources within a cluster. To see everything, we can use kubectl get all, and we can inspect a specific resource type with kubectl get <resource>. For more detailed information, we can use kubectl get <resource> -o wide.
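
For example (output omitted; the resources are the ones defined earlier in this article):

kubectl get all                   # overview of pods, services, deployments, and replica sets
kubectl get pods                  # only pods
kubectl get deployments -o wide   # deployments with extra columns such as containers, images, and selector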

We can view the logs of an individual pod with kubectl logs <pod-name>, and stream them by adding the -f flag. We can check the details of a resource with kubectl describe <resource> <resource-name>. We can also change the number of replicas for a deployment, either based on a file with kubectl scale --replicas=3 -f frontend.yaml or by name with a command like kubectl scale --replicas=3 deployment/frontend-deployment. There are many options and flags available for these commands, so I recommend checking out the official documentation cited at the bottom of this article for more information.
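
A few concrete invocations against the frontend deployment might look like this (pod names are generated, so substitute one from kubectl get pods):

kubectl logs <pod-name> -f                                  # stream logs from a single pod
kubectl describe deployment frontend-deployment             # events, strategy, and pod template details
kubectl scale --replicas=5 deployment/frontend-deployment   # change the replica count by name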

Conclusion

In this article, we introduced how minikube allows us to set up a local cluster, how we can configure resources with YAML files, and how we can apply configurations to the cluster using kubectl. You can try these commands to get hands-on experience with using kubectl and test if you can correctly set up a cluster by creating an external service, obtaining the node IP and port, and accessing it through your browser. In the next article, we will discuss how to set up the entry point of a cluster in a production-like environment.

Resources