Network Engineering #9 - Nginx

Last Edited: 3/6/2025

This blog post introduces Nginx as a tool for setting up a web server and reverse proxy.

DevOps

In the previous article, we covered how horizontal scaling can easily be achieved with Docker Compose, but it cannot work without a reverse proxy that distributes traffic across the servers, which we have not covered yet. In this article, we will cover the basics of nginx, one of the most popular tools for setting up a reverse proxy, which can also serve static content.

Nginx

We can set up a server with nginx by installing nginx and its modules, which provide the functionality for serving static content and handling traffic based on a configuration file, located at /etc/nginx/nginx.conf by default. Hence, we can do almost anything just by learning how to write that configuration file. The following is an example nginx.conf for serving HTML or images.

nginx.conf
user nginx; # user to execute process
worker_processes auto; # number of workers (auto=number of cores)
 
error_log /var/log/nginx/error.log notice; # path to the error log file (notice is the severity level)
pid /var/run/nginx.pid; # path to the process id file
 
# Modules for handling events
events {
    worker_connections 1024; # max number of connections per worker
    multi_accept on; # accept multiple new connections at once
}
 
# Modules for handling http connections
http {
    include /etc/nginx/mime.types; # default MIME type mappings shipped with nginx
    default_type text/plain; # fallback MIME type for files not matched above
 
    # define log format called main with variables like $remote_addr
    log_format main ' [$time_local] $remote_addr - $remote_user "$request" $status'; 
 
    access_log /var/log/nginx/access.log main; # use main log format to log access details
 
    keepalive_timeout 65; # seconds an idle keep-alive connection stays open
 
    server {
        listen 80; # port to listen on
        server_name example.com; # matched against the Host header of a request
        root /usr/share/nginx/html; # default root directory
        location / {
            root /data/www; # maps http://localhost:80/ to /data/www on the filesystem
        }
        location /images/ {
            root /data; # maps http://localhost:80/images/ to /data/images/ on the filesystem
        }
    }
}

In the configuration file, directives are written as space-separated names and values ending with a semicolon, {} delimits blocks (contexts), and # starts a comment. You can study the example above to understand the basics of setting up a server with nginx; if any parts are unclear, I recommend looking them up online. You can also set up a reverse proxy to distribute traffic using proxy_pass in the location context as follows.

nginx.conf
http {
    ...
    upstream backend {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://backend;
        }
    }
}

The proxy_pass directive resolves the backend upstream and distributes traffic across the servers listed in it. By default, nginx uses round robin, but it can be configured to use least connections and other load-balancing algorithms, and we can assign weights to individual servers in the upstream context by adding a weight=<number> parameter, as in the sketch below.
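For illustration, the upstream above could be switched to least connections with weighted servers like this (a minimal sketch; the addresses are the same placeholders as above):

nginx.conf
upstream backend {
    least_conn;                     # pick the server with the fewest active connections
    server 127.0.0.1:3000 weight=2; # receives roughly twice as many requests as the others
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

You might also notice that configuration files can get quite long. In such cases, we can split them using the include directive as follows.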

nginx.conf
...
 
http {
    ...
    include /etc/nginx/conf.d/*.conf;
}

The above pulls all configuration files in the /etc/nginx/conf.d/ directory into the http context. Hence, we can keep the configurations for the server and upstream contexts in separate .conf files copied to /etc/nginx/conf.d/. This is also useful when we want to use environment variables in our configurations, since the official nginx Docker image substitutes environment variables in any template file placed under /etc/nginx/templates/ (for example, default.conf.template) when the container starts and writes the result to /etc/nginx/conf.d/, where it is picked up by the include directive.
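As a small illustration (the port value is a placeholder, assuming NGINX_PORT=8080 is set in the container's environment), a template like the first block below would be rendered into the second when the container starts:

default.conf.template
# /etc/nginx/templates/default.conf.template
server {
    listen ${NGINX_PORT};
}
 
# Rendered to /etc/nginx/conf.d/default.conf at container startup
server {
    listen 8080;
}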

Nginx with Docker

By using Docker and the official nginx base image, we can easily containerize a reverse proxy configured with nginx. To do so, we first need to prepare nginx.conf and upstream.conf, which are specified as follows.

# ./conf/nginx.conf
user nginx;
worker_processes auto;
 
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
 
events {
    worker_connections 1024;
}
 
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
 
    log_format main ' [$time_local] $remote_addr - $remote_user "$request" $status'; 
 
    access_log /var/log/nginx/access.log main;
 
    keepalive_timeout 65;
 
    include /etc/nginx/conf.d/*.conf;
}
 
# ./conf/upstream.conf
upstream backend {
    server ${PROJECT_NAME}-server-1:${BACKEND_PORT};
    server ${PROJECT_NAME}-server-2:${BACKEND_PORT};
    server ${PROJECT_NAME}-server-3:${BACKEND_PORT};
    server ${PROJECT_NAME}-server-4:${BACKEND_PORT};
}
 
server {
    listen ${NGINX_PORT};
    server_name ${NGINX_CONTAINER_NAME};
 
    root /usr/share/nginx/html;
 
    location /api {
        proxy_pass http://backend;
    }
}

Here, it's essential to note that upstream.conf uses environment variables defined in the .env file.
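As a point of reference, a hypothetical .env might look like the following (all names and values are placeholders, not taken from the original project):

.env
PROJECT_NAME=myapp
BACKEND_PORT=3000
NGINX_PORT=8080
NGINX_CONTAINER_NAME=myapp-reverse-proxy

For these configuration files, we can use a Dockerfile as follows.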

FROM nginx:latest
 
# Copy static files
COPY ./public /usr/share/nginx/html
 
# Copy nginx.conf
COPY ./conf/nginx.conf /etc/nginx/nginx.conf
 
# Copy to default.conf.template so environment variables are substituted and the result is stored in conf.d
COPY ./conf/upstream.conf /etc/nginx/templates/default.conf.template

We can then configure how to build the reverse proxy and servers in the docker-compose.yaml file as follows. (We set up servers using the same method as we did in the previous article.)

docker-compose.yaml
name: ${PROJECT_NAME}
 
services:
  server:
    build: ./server
    deploy:
      replicas: 4
  reverse-proxy:
    build: ./reverse-proxy # this is where conf dir lives
    container_name: ${NGINX_CONTAINER_NAME}
    env_file: .env
    ports:
      - ${NGINX_PORT}:${NGINX_PORT}
    depends_on:
      - server

When we run docker compose up, we should see all four servers being built and then the reverse proxy spinning up. When accessing http://localhost:${NGINX_PORT}, we should reach one of the servers via the reverse proxy. Using the docker stats command, we can view each container's resource usage and confirm that the server containers are receiving requests after sending a bunch of dummy requests to the reverse proxy (see the sketch below). We can even open a shell in the reverse-proxy container with docker exec -it and inspect the error and access logs with cat.
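As a rough sketch of that check, assuming NGINX_PORT=8080 and the container name myapp-reverse-proxy from the hypothetical .env above, and that the backend exposes an /api route:

# Send 100 dummy requests through the reverse proxy
for i in $(seq 1 100); do curl -s http://localhost:8080/api > /dev/null; done
 
# Watch per-container CPU, memory, and network usage
docker stats
 
# Inspect the nginx access log inside the reverse-proxy container
docker exec -it myapp-reverse-proxy cat /var/log/nginx/access.log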

Configuring HTTPS

By now, we have covered how to use Docker to containerize applications, use nginx with Docker to set up a containerized reverse proxy for horizontal scaling, use Git for version control of applications, and GitHub for hosting remote repositories and collaborating with others effectively using CI/CD pipelines. We've also learned how to use some bash commands for configuring Linux-based containers and monitoring logs.

To expose the application from a local device (instead of renting a remote server and configuring it via SSH, which is more common), we can simply use the router's NAT (port forwarding) functionality to map the public IP address and port to the local port that is mapped to the reverse proxy container's port. Optionally, we can purchase a domain name from a registrar to enable access via a domain name. However, when users type the domain into a browser to access the service, they will see a "Not Secure" warning, since we have only set up an HTTP connection. To configure HTTPS, we can edit the nginx configuration as follows.

upstream.conf
server {
    listen ${NGINX_PORT} ssl;
    server_name ${NGINX_CONTAINER_NAME};
 
    ssl_certificate /etc/nginx-certs/example.com/fullchain.pem;
    ssl_certificate_key /etc/nginx-certs/example.com/privatekey.pem;
 
    location / {
        ...
    }
}

Here, we add the ssl parameter to the listen directive and point the ssl_certificate and ssl_certificate_key directives to the certificate and private key files. The private key and a certificate signing request (CSR) can be generated with OpenSSL, e.g. openssl genrsa -out example.com.key 2048 and openssl req -new -key example.com.key -out example.com.csr -subj "<subject>", and the CSR can then be sent to a trusted certificate authority to obtain a valid SSL certificate. Alternatively, we can use a tool like Certbot to obtain a free certificate from Let's Encrypt. (For demonstration purposes, we can generate a self-signed certificate with OpenSSL, as sketched below. For more information on Certbot, refer to the official documentation.)
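As a rough sketch for local testing only, a self-signed certificate and key matching the paths used in the configuration above could be generated in one step (the subject is a placeholder):

# Generate a self-signed certificate (fullchain.pem) and private key (privatekey.pem) valid for 365 days
mkdir -p certs/example.com
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout certs/example.com/privatekey.pem \
    -out certs/example.com/fullchain.pem \
    -subj "/CN=example.com"
 
# Copy or mount the certs directory into the container at /etc/nginx-certs

Note that browsers will still flag a self-signed certificate as untrusted; a certificate from a trusted CA (for example, via Certbot) is needed to remove the warning.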

Conclusion

In this article, we introduced the basics of nginx and how it can be used with Docker to set up a reverse proxy for horizontal scaling. We also combined the tools we have covered so far to self-host (or host on a remote server) a reasonably scalable and maintainable web application with HTTPS configured. However, there are many other problems to address in our development and operations, especially when the service needs to scale massively to accommodate many users. Hence, we will continue discussing those issues and the tools for solving them in this DevOps series.
