14 September 2023
Docker has revolutionized the way developers build, share and run applications using containers. Setting up a Docker environment enables you to develop applications in isolated containers, ship them as portable images, and deploy them anywhere consistently. This article is a support resource for our popular Docker Training. To find out how you can enroll or get your team trained, feel free to get in contact today.
In this comprehensive guide, you'll learn how to get started with Docker by installing it on Linux, Windows 10, and macOS. We'll cover key Docker concepts like images, containers, volumes, networks and more. You'll learn how to build, manage and share applications using containers and deploy multi-container apps.
We'll also look at best practices for optimizing Docker for development and CI/CD workflows. By the end, you'll have the fundamental skills needed to start productively using Docker for all your development projects.
Before we dive into setting up Docker, let's start with a quick overview of what Docker is and its benefits.
Docker is an open platform for developing, shipping, and running applications using containers. Containers allow you to package an application's code, configurations, and dependencies into a single standardized unit. This guarantees that the application will always run the same, regardless of its environment.
The Docker platform revolves around the Docker Engine, which leverages containerization to automate application deployment. The Docker Engine includes:
- A server daemon (dockerd) that builds, runs and manages containers
- A REST API that programs can use to talk to the daemon
- A command-line client (docker) that sends commands to the daemon
With Docker, you can quickly build container images from application source code and deploy them anywhere. The container abstraction makes your applications highly portable across environments.
Here are some of the main reasons to use Docker:
- Portability: containers run the same on any machine with Docker installed
- Consistency: packaged dependencies eliminate "works on my machine" problems
- Isolation: each container runs in its own sandboxed environment
- Scalability: containers are lightweight and quick to start, stop and replicate
As you can see, Docker provides immense value like portability, consistency, isolation and scalability. Next, let's get Docker installed on our system.
Docker provides an easy installation experience for Linux, macOS and Windows systems. Let's go through how to get Docker up and running on each platform.
On Linux, Docker installation is straightforward since it uses native Linux kernel features.
Most mainstream Linux distributions like Ubuntu, Debian, Fedora, and CentOS have Docker available in their package repositories. The steps below install Docker Community Edition (CE) on Ubuntu:
# Update the package index and install prerequisites
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add the stable Docker repository
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine, CLI and containerd
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
Verify the installation:
sudo docker version
# Example
# Client: Docker Engine - Community
# Version: 20.10.12
# API version: 1.41
# Go version: go1.16.12
# Git commit: e91ed57
# Built: Mon Dec 13 11:45:33 2021
# OS/Arch: linux/amd64
# Experimental: false
# Server: Docker Engine - Community
# Engine:
# Version: 20.10.12
# API version: 1.41 (minimum version 1.12)
# Go version: go1.16.12
# Git commit: 459d0df
# Built: Mon Dec 13 11:43:42 2021
# OS/Arch: linux/amd64
# Experimental: false
# containerd:
# Version: 1.4.12
# GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
That's it! Docker CE is now installed on your Linux system.
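As an optional follow-up, you can run docker without prefixing sudo by adding your user to the docker group. Note this effectively grants root-equivalent access, so whether it suits your setup is an assumption to weigh:
# Add the current user to the docker group (log out and back in for it to take effect)
sudo usermod -aG docker $USER
# Smoke-test the installation with Docker's hello-world image
docker run hello-world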
On Windows 10, Docker Desktop provides the simplest setup experience. It includes Docker Engine, Docker CLI, Docker Compose, Docker Content Trust, Kubernetes, Credential Helper and more.
Follow these steps to get Docker Desktop running on Windows 10:
1. Download the Docker Desktop installer from the Docker website.
2. Run the installer and follow the prompts (the WSL 2 backend is the recommended option).
3. Launch Docker Desktop and wait for it to report that Docker is running.
4. Open a terminal and verify the installation:
docker version
That's all there is to it. Docker Desktop bundles everything you need in one simple package.
The installation process on macOS is similar to Windows using the Docker Desktop app. Here are the steps:
1. Download Docker Desktop for Mac (Docker.dmg) from the Docker website.
2. Open the .dmg and drag Docker into your Applications folder.
3. Launch Docker Desktop and wait for it to start.
4. Run docker version in a terminal to verify Docker is running properly:
docker version
# Example
# Client: Docker Engine - Community
# Version: 20.10.12
# API version: 1.41
# Go version: go1.16.12
# Git commit: 459d0df
# Built: Mon Dec 13 11:42:54 2021
# OS/Arch: darwin/amd64
# Experimental: false
# Server: Docker Engine - Community
# Engine:
# Version: 20.10.12
# API version: 1.41 (minimum version 1.12)
# Go version: go1.16.12
# Git commit: 87a90dc
# Built: Mon Dec 13 11:41:26 2021
# OS/Arch: linux/amd64
# Experimental: false
# containerd:
# Version: 1.4.12
# GitCommit: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
Docker Desktop on Mac provides a seamless way to work with Docker on Apple silicon or Intel chips.
Now that we have Docker installed, let's go over some key concepts and architecture that are important to understand before we start using it.
Docker follows a client-server architecture:
- The Docker client (docker) is the CLI you interact with
- The Docker daemon (dockerd) builds, runs and manages containers
- Registries such as Docker Hub store and distribute images
So when you run docker image pull or docker container run, your local Docker client talks to the daemon, which pulls images from a registry like Docker Hub.
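As a quick illustration of that flow, pulling and running a public image exercises all three parts (client, daemon, registry); the nginx image here is just a convenient example:
# The client asks the daemon to pull nginx from Docker Hub
docker image pull nginx
# The daemon then creates and starts a container from the local image
docker container run -d nginx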
Docker includes several high-level abstractions referred to as Docker objects. The key objects are:
- Images: read-only templates used to create containers
- Containers: runnable instances of images
- Volumes: persistent storage for container data
- Networks: connectivity between containers and the outside world
These objects allow you to build distributed applications using Docker.
When building custom Docker images, you create a Dockerfile that defines the steps needed to build that image. Each step in the Dockerfile adds a layer to the image.
For example, a Dockerfile may look like:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y nginx
COPY index.html /var/www/html/
This Dockerfile:
- Starts from the ubuntu:18.04 base image.
- Runs apt-get to install Nginx.
- Copies index.html into the web root.
Each instruction adds a new layer to the image. Layers allow Docker images to share common dependencies and stay lightweight.
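To see those layers in practice, you can build the image and list its layer history; the tag my-nginx-image is an arbitrary name chosen for this example:
# Build the image from the Dockerfile in the current directory
docker image build -t my-nginx-image .
# Show the layers that make up the image, one per Dockerfile instruction
docker image history my-nginx-image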
Now that we understand the basics, let's start using Docker by running some containers.
Containers are running instances of Docker images. You can run, manage and orchestrate containers to develop your applications.
Use the docker container run command to start a new container from an image:
docker container run -d -p 80:80 --name my-nginx nginx
This runs an Nginx web server container in detached mode, forwards port 80 to the host, and names the container my-nginx.
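To confirm the container is up, you can list running containers and request the default Nginx page; this assumes nothing else on the host is already using port 80:
# List running containers
docker container ls
# Fetch the Nginx welcome page through the forwarded port
curl http://localhost:80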
You can manage the lifecycle of your containers:
- docker container start/stop to start and stop containers
- docker container rm to remove stopped containers
- docker container exec to run commands inside containers
For example, to execute a shell inside a running container:
docker container exec -it my-nginx bash
This opens up a Bash session within the container.
By default, Docker containers run isolated from the host network on a private bridge network. Port forwarding allows external access to containers.
You can also connect containers together or use custom Docker networks for more flexibility.
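For example, an already-running container can be attached to an additional network with docker network connect; my-net is a hypothetical network name, and my-nginx follows on from the earlier example:
# Create a custom network
docker network create my-net
# Attach the running my-nginx container to it without restarting it
docker network connect my-net my-nginx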
That covers the basics of working with containers! Next, let's look at how to build and manage custom images.
Docker images define the environments that containers are created from. You can build new images or download images from registries like Docker Hub.
To build your own application images, create a Dockerfile that defines the steps needed:
FROM node:14-alpine
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "server.js"]
This Dockerfile builds a Node.js app:
- Starts from the lightweight node:14-alpine base image
- Sets /app as the working directory
- Copies in the application source
- Installs production dependencies with yarn
- Runs server.js when the container starts
You can build this image with:
docker image build -t my-app .
This will generate a new Docker image named my-app from the Dockerfile.
You can list, tag, push and remove images using CLI commands like:
# List images
docker image ls
# Tag an image
docker image tag my-app mydockerhubid/my-app:1.0
# Push image to registry
docker image push mydockerhubid/my-app:1.0
# Remove image
docker image rm my-app
This allows you to maintain and distribute images properly.
Docker Hub is the default public registry for finding and sharing container images.
To share your own images, you can push them to Docker Hub repositories for others to access. For example:
# Tag image
docker image tag my-image mydockerhubusername/my-image:1.0
# Push to Docker Hub
docker image push mydockerhubusername/my-image:1.0
Now anyone can run your published image with:
docker run -it mydockerhubusername/my-image:1.0
Docker Hub integrates seamlessly with Docker for building and sharing images.
By default, Docker containers are ephemeral - the data doesn't persist when the container is removed. For persisting data, we can mount volumes.
Let's look at how to add volume mounts to containers.
You can add writable volumes when running containers using the -v flag:
docker run -d \
  -v mydata:/var/lib/mysql \
  mysql
This mounts the mydata volume from the host to the container location /var/lib/mysql. Any data written to /var/lib/mysql inside the container will be persisted to the mydata volume on the host.
You can also manage volumes outside the scope of containers:
# Create a volume
docker volume create my-vol
# List volumes
docker volume ls
# Remove volumes
docker volume rm my-vol
Volumes give containers access to durable storage while keeping the container layers lightweight.
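Besides named volumes, you can also bind-mount a host directory into a container; this sketch assumes a ./site directory with static files exists on the host:
# Serve files from a host directory, mounted read-only into Nginx's web root
docker run -d -p 8080:80 \
  -v "$(pwd)/site":/usr/share/nginx/html:ro \
  nginx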
In order for containers to communicate with each other and the outside world, we need to connect them to networks.
By default, Docker provides three networks:
- bridge: the default private network that containers attach to
- host: removes network isolation, sharing the host's network stack
- none: disables networking for the container
When you first docker run a new container with no network specified, it automatically connects to the default bridge network.
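You can confirm this by listing the available networks and inspecting the bridge network to see which containers are attached:
# List Docker networks
docker network ls
# Show details of the default bridge network, including connected containers
docker network inspect bridge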
You can create custom networks for containers to connect to:
# Create an overlay network
docker network create -d overlay my-multi-host-network

# Create a macvlan network
docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 pub_net
Connecting containers to these custom networks is useful for multi-tier applications and microservices.
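On a user-defined network, Docker's embedded DNS lets containers reach each other by container name; backend-net and api are hypothetical names here, and my-app is a placeholder image:
# Create a user-defined bridge network
docker network create backend-net
# Start a service container on it
docker container run -d --network backend-net --name api my-app
# Another container on the same network can resolve "api" by name
docker container run --rm --network backend-net alpine ping -c 1 api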
Running multi-container apps with interconnected services can get complex quickly. Fortunately, Docker Compose makes this easy.
Docker Compose allows you to define and run multi-container Docker applications using a simple YAML file.
A sample docker-compose.yml:
version: '3'
services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    depends_on:
      - db
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
This Compose file defines two services: WordPress and a MySQL database. The YAML configuration specifies the image to use, ports to expose, dependencies, environment variables and more.
With the docker-compose.yml defined, you can deploy the full multi-service app stack with:
docker-compose up -d
This launches both the WordPress and MySQL containers automatically linking them together.
Compose is a quick way to build and manage complex multi-container apps with Docker.
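Once the stack is up, a few other docker-compose subcommands cover the day-to-day lifecycle; the wordpress service name matches the Compose file above:
# Show the status of the stack's containers
docker-compose ps
# Follow logs from the wordpress service
docker-compose logs -f wordpress
# Stop and remove the stack's containers and networks
docker-compose down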
Now that we know how to build, run and connect containers, let's look at some best practices for optimizing Docker workflows.
When working with Docker in production, you should follow established best practices. For example, pin image versions rather than relying on the latest tag, which can contain unexpected changes. Following these and other Docker best practices will optimize the stability, efficiency and security of your applications.
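A minimal illustration of version pinning in a Dockerfile; 1.25.3 is just an example tag, not a recommendation:
# Avoid: the "latest" tag can change underneath you
# FROM nginx:latest
# Prefer: an explicit, reproducible version
FROM nginx:1.25.3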
Since containers share kernel resources, running unknown images or privileged containers introduces security risks. Follow these guidelines to enhance security:
- Only run images from trusted sources or official repositories
- Avoid privileged containers unless absolutely necessary
- Run container processes as a non-root user where possible
- Scan images regularly for known vulnerabilities
Docker provides security features and integrations to help mitigate the expanded attack surface of containers. Be sure to follow security best practices around container deployments.
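As a rough sketch of what a locked-down docker run can look like (the exact flags will need adjusting per workload, and my-app is a placeholder image):
# Run as an unprivileged user, drop all Linux capabilities,
# and prevent the process from gaining new privileges
docker run -d \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  my-app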
Despite best efforts, problems can always arise when running complex systems like Docker. Fortunately, Docker provides useful debugging tools.
The docker logs command fetches application output from running containers:
docker logs -f my-container
This allows you to easily retrieve logs for troubleshooting issues.
To run commands within containers, use docker exec:
docker exec -it my-container sh
This starts an interactive shell session in the container namespace.
The docker inspect command gives low-level information about a container:
docker inspect my-container
This can reveal network settings, mount points, and other metadata to aid debugging.
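Since the full inspect output is verbose JSON, a Go template via --format can pull out a single field; the IP address path shown applies to containers on the default bridge network:
# Print just the container's IP address on the default bridge network
docker inspect --format '{{ .NetworkSettings.IPAddress }}' my-container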
Leveraging these and other Docker troubleshooting techniques allows you to operate containers with confidence.
In this extensive guide you learned how to:
- Install Docker on Linux, Windows 10 and macOS
- Understand core concepts like images, containers, volumes and networks
- Run and manage containers from the command line
- Build custom images with Dockerfiles and share them on Docker Hub
- Persist data with volumes and connect containers with networks
- Deploy multi-container apps with Docker Compose
- Apply best practices and troubleshoot running containers
Docker empowers you to build and deploy applications consistently across environments. With the foundational concepts covered here, you are now ready to architect scalable containerized solutions.
Some recommended next steps are:
- Practice writing Dockerfiles and Compose files for your own projects
- Explore official images on Docker Hub
- Look into orchestration with Docker Swarm or Kubernetes
Docker has dramatically changed how we develop and run modern software. This guide provided the key skills to start leveraging Docker for all your projects. The container revolution is just getting started!
Here are some common FAQs about using Docker:
How do I access a running container?
Use docker exec -it your-container-name bash to start a Bash shell inside the container.
What happens when I remove a container?
The container is deleted but the image used to create it still exists on your system.
How can containers communicate?
Containers connected to the same Docker network can talk to each other.
How do I make configuration changes to a container?
Don't modify a running container; instead, rebuild the image with the changes.
What is the difference between Docker Engine and Docker Compose?
Docker Engine is the underlying technology that runs containers. Docker Compose is a tool that defines multi-container app environments using a Compose YAML file.
When should I use Docker Swarm vs Kubernetes?
Swarm provides basic clustering while Kubernetes offers production-grade orchestration capabilities.
Let us know if you have any other Docker questions! Or check out our first article, Comprehensive Guide: How to Use Docker for Beginners.
JBI Training offers a range of recommended Docker-related courses.