This blog post introduces ways to set up load balancers and Ingress in Kubernetes.

In the previous article, we covered the basics of setting up resources and controlling the cluster with kubectl, and we used a NodePort service to expose the pods. However, NodePort allows anyone to access the pods via node IP addresses and ports, which is not ideal for production environments. Production setups typically need to scale the cluster dynamically, balance load across the pods on the worker nodes, and track access and error logs by exposing access only through a limited set of reverse proxies. In this article, we'll discuss two alternative ways to expose the cluster that are better suited for production.
LoadBalancer Service
The alternative service we can use is a LoadBalancer service. This allows us to configure an external load balancer, typically provided by cloud providers (AWS, Google Cloud, etc.), which provides a static public IP address for the deployment. It also offers features like TLS termination and health checks (depending on the load balancer chosen) and routes traffic to the pods.
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - name: frontend-port
      protocol: TCP
      port: 3000
      targetPort: 3000
      # nodePort: 30000  # Some load balancers require this
The above is an example YAML file for configuring a LoadBalancer service using an NLB in AWS. For details on setting up your Kubernetes cluster on AWS and choosing between the different load balancers (ELB, ALB, and NLB), I recommend checking out AWS's official resources. Although the implementation details vary, modern solutions typically avoid using NodePort services, hiding node IPs and ports in production. A LoadBalancer service is also well suited for handling large volumes of traffic and providing high availability.
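Once the manifest is applied, the provisioned endpoint appears in the service's EXTERNAL-IP column. Below is a minimal sketch, assuming the manifest above is saved as frontend-service.yaml; the output values are hypothetical and depend on the cloud provider:
kubectl apply -f frontend-service.yaml
kubectl get service frontend-service
# NAME               TYPE           CLUSTER-IP     EXTERNAL-IP                              PORT(S)          AGE
# frontend-service   LoadBalancer   10.100.21.87   k8s-frontend-example.elb.amazonaws.com   3000:30712/TCP   2m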
Ingress
Instead of using external service types like NodePort and LoadBalancer, we can use Ingress to expose the cluster's services. We can define Ingress rules in a YAML file, which are enforced by an Ingress controller implementation (either a load balancer from a cloud provider or the Kubernetes NGINX implementation). The following is an example YAML file that configures the Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-example-ingress
  # Add below for exposing multiple services like example.com/<different-names>
  # annotations:
  #   nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  # Add below for TLS termination
  # tls:
  #   - hosts:
  #       - example.com
  #     secretName: example-tls-secret
  rules:
    # Exposing multiple services via subdomains
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 3000
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 4000
As we can see from the above, we can easily configure TLS termination and domain-based routing. However, it's important to note that the TLS certificate needs to be stored as a Secret named example-tls-secret with a type field of kubernetes.io/tls and a data field containing the keys tls.crt and tls.key, mapping to the base64-encoded TLS certificate and key.
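For reference, such a Secret can be created from an existing certificate and private key with kubectl create secret tls example-tls-secret --cert=server.crt --key=server.key (the file names here are placeholders). A minimal sketch of the equivalent manifest, with placeholder values:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>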
To demonstrate Ingress in Minikube, you can run minikube addons enable ingress, which starts the Kubernetes NGINX implementation of the Ingress controller, and then apply the above with kubectl.
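A minimal walkthrough might look as follows, assuming the manifest above is saved as ingress.yaml and that www.example.com and api.example.com are mapped to the Minikube IP for local testing (e.g., via /etc/hosts entries):
minikube addons enable ingress              # start the NGINX Ingress controller
kubectl apply -f ingress.yaml               # create the Ingress resource
kubectl get ingress http-example-ingress    # check the rules and assigned address
minikube ip                                 # node IP to map the hostnames to
curl http://www.example.com/                # should be routed to web-service on port 3000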
Depending on the solution, the Ingress controller is often exposed through a LoadBalancer or NodePort service that serves as the external load balancer or access point; that service routes traffic to the Ingress controller, which then performs the routing to internal services. However, unlike exposing multiple load balancers or nodes directly, we expose only one entry point that redirects traffic to the Ingress controller, along with its routing and TLS features, which is more suitable for a production environment.
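To see how the controller itself is exposed, you can inspect its Service. Below is a sketch assuming the common ingress-nginx installation used by the Minikube addon; the namespace, service name, and output values vary between installations and are illustrative only:
kubectl get service -n ingress-nginx ingress-nginx-controller
# NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
# ingress-nginx-controller   NodePort   10.98.114.200   <none>        80:31584/TCP,443:30915/TCP   5m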
Conclusion
In this article, we introduced the LoadBalancer service and Ingress as alternative ways of exposing the cluster. We can also use them in combination, for example by exposing the Ingress controller through a LoadBalancer service for high availability, although this will likely involve setting up additional network policies to manage traffic to the exposed services. There are other (perhaps better) ways to manage internal and external traffic than those introduced here, and they will be covered in future articles.
Resources
- Kubernetes. n.d. Kubernetes Documentation. Kubernetes.
- TechWorld with Nana. 2022. Kubernetes Ingress Tutorial for Beginners | simply explained | Kubernetes Tutorial 22. YouTube.