Docker Basics #1 - Getting Started

Last Edited: 2/21/2025

This blog post introduces the fundamental concepts needed to get started with Docker.

DevOps

If you've been following this DevOps series, you have already set up an Ubuntu environment using a then-mysterious tool called Docker. Docker is one of the most important and widely used developer tools in the world, and virtually every developer should know what it is and how to use it. Even the GitHub Actions workflows we covered in the last article use Docker to run jobs. Hence, this short article series introduces Docker in a clear manner to help you get started.

What Is Docker?

Before diving into the specifics, let's cover what Docker is and why it is essential for modern development and deployment. In the past, we employed a "one app per server" approach to ensure resource isolation between applications. As apps grew in complexity, we would upgrade servers with more memory and better CPUs (vertical scaling) until reaching capacity limits, at which point we would switch to horizontal scaling using load balancers. However, this strategy often resulted in under-utilized resources and inefficient use of server space.

VMs vs Containers

To address these limitations, we turned to virtualizing hardware, emulating separate operating systems, or virtual machines, on a single server, effectively creating isolated environments for each app. This approach relies on a hypervisor that manages resource allocation between the guest operating systems. Since it can run different operating systems side by side, virtualization remains useful in many cases, such as testing cross-platform software and running legacy applications. However, simulating an entire operating system is computationally expensive and slow to boot, which makes it impractical for many use cases.

In response, we turned to container engines running on top of the host OS, which efficiently manage resources between lightweight containers that hold only the essentials for running an application: the runtime environment, source code, and dependencies. This containerization approach offers significant advantages over traditional virtualization, including faster boot times, improved resource utilization, and easier horizontal scaling. Moreover, it simplifies development by giving developers a consistent environment across machines, eliminating the need to replicate identical local settings or run resource-intensive virtual machines.

Docker is the leading open-source containerization project, and it has revolutionized how software is developed and deployed, which is why Docker is one of the most used developer tools and why it is worth learning. It offers a containerization engine called Docker Engine and a GUI for developers working with Docker on their desktop, called Docker Desktop. If you haven't installed Docker yet, you can do so by following the official installation instructions.

Docker Images & Containers

Working with Docker involves several terms and concepts that often confuse beginners, chief among them images and containers. Docker images serve as blueprints for containers, containing everything needed to run an application: a runtime environment, source code, dependencies, and the commands to execute. Docker containers are isolated processes, running instances of a Docker image. Therefore, to set up containers, we first need to create images for them.

The concept of an image lets us install and configure everything a container needs ahead of time, which makes spinning up containers fast: a container only needs to run the predetermined commands configured in its image to start an isolated process. This also makes it easy to duplicate identical containers as needed. Since changing an image while its containers are running would produce inconsistency, images become read-only once they are created. Hence, when we want a different configuration, we need to build a new image.

Image Layers

Another important point is that images are made up of several layers, each performing a task at a different level. Typically, we start by configuring the operating system or runtime, then copy the source code into the image, install the dependencies declared in the source code, and finally configure the commands to run. (The order matters: for example, we cannot install dependencies before copying the source code, because the source code specifies which dependencies to install.) The standard way to configure the operating system or runtime is to use a parent image publicly released on Docker Hub.
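To make the layer ordering concrete, here is a minimal sketch of a Dockerfile, the file that defines an image. The choice of a Node.js app, the node:20-alpine parent image, and the file names are illustrative assumptions, not something prescribed by this series:

```dockerfile
# Layer 1: configure the runtime via a parent image from Docker Hub
# (assumption: a Node.js app; any runtime image works the same way)
FROM node:20-alpine

# Set the working directory inside the image
WORKDIR /app

# Layer 2: copy the source code into the image
COPY . .

# Layer 3: install the dependencies declared in the source code
# (here, whatever package.json lists)
RUN npm install

# Layer 4: configure the command the container runs on startup
CMD ["node", "index.js"]
```

Each instruction produces a layer, and the order mirrors the sequence described above: runtime first, then source code, then dependencies, then the startup command.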

When you visit the Docker Hub website, you will see a list of available images that you can download and use. For example, searching for "ubuntu", opening the official ubuntu image, and running the command docker pull ubuntu shown on the page will install the image. After installing the image, we can simply run a container from it (as we did in the Linux series) or use it as a parent image and add more layers to run our own application. The details of how to create an image will be covered in the next article.
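As a sketch, the workflow above looks like this in a terminal (it requires a running Docker installation, and the interactive shell at the end is just one way to use the container):

```shell
# Download the official ubuntu image from Docker Hub
docker pull ubuntu

# List the images installed locally to confirm the download
docker images

# Start an interactive container from the image and open a shell in it
docker run -it ubuntu bash
```

Running the same docker run command again would start a second, independent container from the same image, which is exactly the duplication benefit described earlier.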

Conclusion

In this article, we covered what Docker is compared to its alternatives, why it is useful, and the fundamental concepts needed to get started with it. In the next articles, we will discuss and demonstrate in more detail how Docker can be used.

Resources