Building Custom Docker Images

January 06, 2025

Custom Docker Images

If you have ever deployed something in Docker, you know that getting a new container up and running can be as simple as docker run hello-world. But what if you didn’t want it to say “hello world”? What if you wanted a custom version that said hello in another language? You could start from scratch and set up everything yourself, or you could extend the container image that is already there. To do this, you will need to create a custom Docker image using a Dockerfile.

Dockerfile

The official documentation for custom Dockerfiles can be found here.

A Dockerfile is basically just a text document, in a format that Docker expects. Create a blank text document named Dockerfile, with no file extension. This can easily be done in an editor like VSCodium, which can also provide you with nice tools and syntax highlighting, but technically you can use any text editor.
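If you prefer working from the terminal, here is a minimal sketch for creating the project (the folder name is just an example):

# Make a folder for the project and create an empty Dockerfile inside it
mkdir my-custom-image && cd my-custom-image
touch Dockerfile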

In this file, we are going to put our instructions for the container builder.

# This line tells Docker that we want to start with the latest Ubuntu image as our base image.
FROM ubuntu:latest

# This tells it what directory to start working from.
# We can move around as we wish, but until we change directories, any relative paths, i.e. `file/path/here`, will actually reference `/usr/local/bin/file/path/here`.
WORKDIR /usr/local/bin

# RUN tells it to run a command in the container. This one will tell it to update its package info, which will get us the most recent packages, as-of build time.
RUN apt update && apt upgrade -y

# This will install some custom packages for us, namely, Vim and Git.
# You could really install anything you want here; this is just an example.
RUN apt install vim git -y

Now, when you build an image from this file, it will do the following steps in order (a quick build-and-test sketch follows the list):

  1. Check for a local version of ubuntu:latest
  2. If it is out of date, or non-existent, it will download the version of Ubuntu that currently has the latest tag.
  3. The working directory is set to /usr/local/bin. (This can be helpful if you are adding custom executables to the container).
  4. The builder runs apt update && apt upgrade -y, refreshing the package lists and installing any available updates. (Even though the base image is tagged latest, its packages can still be out of date. apt update is recommended before installing any packages, but apt upgrade -y is optional.)
  5. Install vim and git from the APT repository.
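If you already have Docker installed (installation is covered in the Build Time section below), here is a quick sketch for building and poking around in this minimal image; the image name is just an example:

# Build the image from the Dockerfile in the current directory
docker build -t ubuntu-tools:latest .

# Open an interactive shell inside it, removing the container when you exit
docker run -it --rm ubuntu-tools:latest bash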

These packages are really not a good example, as there is nothing else in the container that would make use of them yet. You would be left with a custom version of Ubuntu, with a few command line tools, but not much else. Let’s take a look at something that may be a little more useful.

RUN: When using RUN, it is executing a command in the context of the container that is being built. Here, we are using apt because APT is the built-in package manager for Ubuntu. If we had instead started with FROM archlinux:latest, we would want to run RUN pacman -Syu --noconfirm to achieve the same update and upgrade outcome (the --noconfirm flag keeps pacman from waiting for input during the build).
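As a quick sketch, an Arch-based version of the same minimal Dockerfile might look like this (archlinux is the official Arch Linux image on Docker Hub):

# Start from the official Arch Linux image instead of Ubuntu
FROM archlinux:latest

# Sync the package databases, upgrade, and install the same tools, non-interactively
RUN pacman -Syu --noconfirm vim git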

Here is an excerpt from the official docs that give an idea of what each of the commands does, and what their options are:

Common instructions

Some of the most common instructions in a Dockerfile include:

  • FROM <image> - this specifies the base image that the build will extend.
  • WORKDIR <path> - this instruction specifies the "working directory" or the path in the image where files will be copied and commands will be executed.
  • COPY <host-path> <image-path> - this instruction tells the builder to copy files from the host and put them into the container image.
  • RUN <command> - this instruction tells the builder to run the specified command.
  • ENV <name> <value> - this instruction sets an environment variable that a running container will use.
  • EXPOSE <port-number> - this instruction sets configuration on the image that indicates a port the image would like to expose.
  • USER <user-or-uid> - this instruction sets the default user for all subsequent instructions.
  • CMD ["<command>", "<arg1>"] - this instruction sets the default command a container using this image will run.

To read through all of the instructions or go into greater detail, check out the Dockerfile reference.
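To see how these fit together, here is a small illustrative Dockerfile that uses most of them. This is not the image we are building in this post; app.py, the port, and the user are just placeholders:

# Base image to extend
FROM ubuntu:latest

# Directory that the following COPY and RUN steps operate in
WORKDIR /app

# Copy a (hypothetical) script from the host into the image
COPY ./app.py /app/app.py

# Install a runtime for the script
RUN apt update && apt install python3 -y

# Environment variable available to the running container
ENV GREETING="hello"

# Document the port the application expects to listen on
EXPOSE 8080

# Run as a non-root user from here on
USER nobody

# Default command when a container starts from this image
CMD ["python3", "/app/app.py"]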

Custom Entry Scripts

What if you wanted your container to do more than just install some packages? The image that we just built is functional, but without some more setup it would only really be useful for interactive use from the command line. What about going beyond what we can do with the RUN command? For that, we can use Docker ENTRYPOINT. An ENTRYPOINT tells Docker that whenever a container is started from the image, it should run a specified command or a custom script. This opens up a whole new world of options, as a script can do far more than the Dockerfile alone can. The script runs as if it is “in” the container, because it is! If we consider the Dockerfile as a recipe for making a new container, the ENTRYPOINT is like your whisk, oven, dishes, etc.

Consider a docker container where you would like it to send you an email every hour, on the hour. Without getting into the depths of how email works, let’s just pretend we have a Python script named send-me-an-email.py. We also want to make something for our ENTRYPOINT to use. Let’s call it start-script.sh. Our start script is going to use bash in order to set up some things in our new docker container that the Dockerfile cannot, such as cron entries. Cron will let us run our script on a schedule, as long as the container is still alive, while requiring minimal setup.

OK, let’s set up our project files. Once again, we will need a blank file named Dockerfile. We will also pretend that we have our email script in the same folder, and name it send-me-an-email.py. Lastly, we need our entry point script start-script.sh, also in the same folder. Your project folder should look something like this:

  • Project-Files
    • Dockerfile
    • send-me-an-email.py
    • start-script.sh

To create our Dockerfile, we are going to change a few things:

# Start with the latest stable Ubuntu Build again
FROM ubuntu:latest

# Here we are now updating the repository info, upgrading all installed packages, and then installing the necessary packages for our container, all in one line.
RUN apt update && apt upgrade -y && apt install cron python3 python3-pip -y

# Changing the working directory to make sure that the files are going where we expect them to. This is the equivalent of the `cd` command.
WORKDIR /usr/local/bin

# This copies the send-me-an-email.py script from the current directory into the working directory of the container. This only works because the script is in the same folder as the Dockerfile. Otherwise, you would need the full path to the script instead of ./send-me-an-email.py.
# It is also setting the file permissions with --chmod=755, which marks the script as executable inside the container.
COPY --chmod=755 ./send-me-an-email.py /usr/local/bin/send-me-an-email.py

# This is doing the same thing to our start script, and is copying it out of the current directory of the Dockerfile, into the working directory of the container.
# It is also being set to allow execution, using the --chmod=755 flag.
COPY --chmod=755 ./start-script.sh /usr/local/bin/start-script.sh

# Now we are telling it to run the start-script every time the docker container is created.
ENTRYPOINT /usr/local/bin/start-script.sh
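One aside: the line above uses the shell form of ENTRYPOINT, which wraps the script in /bin/sh -c. The exec form is generally recommended, because the script then runs as the container’s main process and receives stop signals directly; if you prefer that, the last line could instead be:

# Exec form: runs the script directly, without a wrapping shell
ENTRYPOINT ["/usr/local/bin/start-script.sh"]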

This is just a quick example, but to get more in depth, check out the Dockerfile Best Practices docs on Docker’s official website.

Here is what our start script is going to look like:

#!/bin/bash

# This will echo to the docker logs
echo "starting docker container"

# Save the container's current environment variables (minus read-only shell variables) to a file, so that cron jobs can load the same environment later.
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env

# Set which python we are using
PYTHON=$(which python3)

# Write the crontab: set the shell, point at the saved environment, and run the email script every hour
echo "SHELL=/bin/bash
BASH_ENV=/container.env
@hourly $PYTHON /usr/local/bin/send-me-an-email.py >> /var/log/cron.log 2>&1
# Final line for the cron entry" > crontab.txt

crontab crontab.txt
cron -f

This saves the container’s environment so cron jobs can use it, asks the system which python3 to use, and finally installs a crontab entry that tells the container to run our custom send-me-an-email.py script every hour, using the @hourly schedule. The closing cron -f runs cron in the foreground, which is what keeps the container alive.
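For reference, @hourly is just shorthand for a standard five-field cron schedule; if you wanted a different schedule, the same line in crontab.txt could be written with explicit fields instead (these variants are only illustrations):

# Equivalent to @hourly: minute 0 of every hour
0 * * * * $PYTHON /usr/local/bin/send-me-an-email.py >> /var/log/cron.log 2>&1

# Example: every day at 08:30 instead
30 8 * * * $PYTHON /usr/local/bin/send-me-an-email.py >> /var/log/cron.log 2>&1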

All of this has covered how to set up your files for building custom docker images, but how do we actually build them?

Build Time

Up until this point, the system that you are building from has not mattered at all. All of the previous steps should be basically the same, regardless of your development computer. However, once we are ready to build, the steps are going to be slightly different depending on your operating system.

The official install instructions for Docker can be found here for any supported OS. The docs also list system requirements and recommended specs for running Docker.

Linux

If the computer that you are using to build docker containers is some form of Linux, you can use the command line to build your container.

Linux Command Line Docker

Open your terminal application of choice, and install the following packages using your distro’s package manager. For Ubuntu, we can just use APT:

sudo apt install docker.io docker-buildx-plugin -y
sudo docker buildx install
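We need the regular docker runtime as well as the build tools, which is what these two packages provide. Once they are installed, you can optionally sanity-check the setup before building anything:

# Confirm that the client can reach the Docker daemon
sudo docker version

# Run the tiny test image mentioned at the start of this post
sudo docker run hello-world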

With the runtime and build tools installed, go to the project directory, where we made the Dockerfile and scripts, and run the following command:

docker build -t custom_docker_image_name:latest -f Dockerfile .

Here we are using -t to give our image a custom name (its tag), and we are using -f to tell it which file to use to build our image. Our docker file is just named Dockerfile, so that is what we are setting as the target for -f. (When the file is named Dockerfile, the -f flag is technically optional, since that is the default name the builder looks for.)
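Once the build finishes, a quick way to check the result is to start a container from it and watch its logs; the container name here is just an example:

# Start a container from the new image in the background
docker run -d --name email-test custom_docker_image_name:latest

# Follow the container's output (the echo from start-script.sh should show up here)
docker logs -f email-test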

When using -t in your builds, if you reuse the same name for multiple builds, the old versions will still be available on your system, but they become untagged (shown as <none> in the image list) and are only reachable by their image IDs, rather than your “production” name. However, you can also set up versions using the following syntax during build: docker build -t custom_docker_image_name:development -f Dockerfile . This would tag this build as a development build, without squashing your previous latest build, for instance.
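A couple of commands that are handy when juggling tags; untagged leftovers from old builds show up as <none>, and dangling images can be cleaned up in one go:

# List local images, including untagged <none> leftovers from rebuilds
docker images

# Remove dangling images that no tag points to anymore (it will ask for confirmation)
docker image prune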

Linux Docker Desktop

To keep it consistent, I am going to link to the Ubuntu install instructions for Docker Desktop, but I would highly recommend just using the command line version instead. The command line has a larger learning curve, but it is way easier to use once you get the hang of it. Nevertheless, the install instructions for Docker Desktop on Ubuntu can be found here. To be honest, I don’t know how to build in Docker Desktop on Linux, it has just never been something I have had any reason to do 🤫.

MacOS

MacOS Command Line Docker

We can install the command line tools using the Homebrew package manager.

brew update && brew install docker docker-buildx

The build process here is then very similar to building in the Linux command line.

docker buildx build -f Dockerfile -t custom_docker_image_name:latest .

Using buildx build is also an option on Linux, where the two commands behave almost identically. On Mac OS, buildx build opens up some additional options that are useful when working with Apple’s M series processors, such as building images for more than one CPU architecture.
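For example, a cross-platform build from an Apple Silicon machine can target both architectures with the --platform flag. Depending on your setup, you may need a builder that supports multi-platform output (and --push to a registry) to keep the result:

# Build the image for both amd64 and arm64 in one invocation
docker buildx build --platform linux/amd64,linux/arm64 -t custom_docker_image_name:latest .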

MacOS Docker Desktop

There is also a Desktop version of Docker on Mac OS, if you prefer, which can be downloaded directly from the Docker website.

There is a Builds tab in the app, but I cannot get mine to open. Thankfully, what we can do is just open a Terminal within the app, and use the same command that we already used above for command line build. It may complain that it is missing things, and you may need to use full paths for the -f option, and . build location.

Windows

Windows Docker Desktop

In Windows, we can use the Desktop version of Docker, which can be downloaded directly from Docker’s website. When installing, it asked if I would like to use WSL 2 or Hyper-V for the containerization backend. I decided to go with WSL 2, as this was the recommended path.

In Windows Docker Desktop, there is also an option to see your Builds from within the app. You can also manage your builders there. However, to build an image, we are going to do the same thing as we did on Mac OS. Open a terminal from within Docker Desktop, using the Terminal button in the lower right hand corner of the Docker Desktop window. Make sure that the built-in terminal feature is enabled in Docker Desktop’s settings.

We still want to make sure that we are building from the same location as our docker file, just to keep things simple. If needed, you can do the following to get to your project files before building:

cd ~\Documents\Project-Files

The above assumes that your folder containing the Dockerfile, and other scripts, is in your user’s Documents folder. Then, all that is left to do is run the following command in the built in Terminal:

docker buildx build -f Dockerfile -t custom_docker_image_name:latest .

Just like on MacOS, the -f option tells it that our custom docker file is named Dockerfile. The -t option is our tag, where we are giving it the name of the image to be built followed by a colon and a version tag. Lastly, the . indicates that it should build in the current directory.

In the Docker Desktop app in the Builds tab, you can see all of your completed builds and any errors if applicable. This can be useful for troubleshooting a broken build.

Wrap up

All in all, the build options across each operating system remain fairly similar. What really changes the most is actually installing the docker engine and build tools. From there, building docker containers can be very simple, or can get quite complex depending on your goals. I have not touched on building for different platforms, though that is also possible with the docker build commands, e.g. building an ARM-compatible docker container from an x86 host. We also have not covered anything here regarding actually running docker containers, adding environment variables, or passing through volumes from your host for persistence. I will end with one good nugget, though, for after your docker container is running; this has helped me immensely. I know that in the Docker Desktop app you can connect a terminal session from the app, without needing any command line. However, if you are working from the command line, this command will give you an interactive terminal inside of your container. This can be great for troubleshooting, reading config files, checking file paths, and more.

docker exec -it your-container-name /bin/bash

Along with that, here is an awesome resource for quick reference of docker commands to get you started on your docker journey. Enjoy!

Chris Allen Lane Docker Cheatsheet


Written by Grant Brinkman, amateur coffee, tech, and film enthusiast.