
14 September 2023

Setting up a Docker Development Environment on Linux / Windows / Mac

Docker has revolutionized the way developers build, share and run applications using containers. Setting up a Docker environment enables you to develop applications in isolated containers, ship them as portable images, and deploy them anywhere consistently. This article is a support resource for our popular Docker Training. To find out how you can enroll or get your team trained, feel free to get in contact today.

In this comprehensive guide, you'll learn how to get started with Docker by installing it on Linux, Windows 10, and macOS. We'll cover key Docker concepts like images, containers, volumes, networks and more. You'll learn how to build, manage and share applications using containers and deploy multi-container apps.

We'll also look at best practices for optimizing Docker for development and CI/CD workflows. By the end, you'll have the fundamental skills needed to start productively using Docker for all your development projects.

What is Docker and Why Use It?

Before we dive into setting up Docker, let's start with a quick overview of what Docker is and its benefits.

What is Docker?

Docker is an open platform for developing, shipping, and running applications using containers. Containers allow you to package an application's code, configurations, and dependencies into a single standardized unit. This guarantees that the application will always run the same, regardless of its environment.

The Docker platform revolves around the Docker Engine which leverages containerization to automate application deployment. The Docker Engine includes:

  • The Docker daemon which runs on the host machine.
  • REST API for interacting with the daemon.
  • Docker CLI client that talks to the daemon.

With Docker, you can quickly build container images from application source code and deploy them anywhere. The container abstraction makes your applications highly portable across environments.

Why Use Docker?

Here are some of the main reasons to use Docker:

  • Consistent environments - Containers ensure your application and all its dependencies are packaged together. This guarantees consistent behavior across dev, test, and prod.
  • Lightweight - Containers share the host OS kernel and run as isolated user space processes for efficient resource utilization.
  • Portable - You can build an application image on your laptop, push it to a registry like Docker Hub, and run it anywhere.
  • Agile development - Quickly iterate and test applications directly inside containers during development.
  • Microservices - The container model lends itself well to building distributed apps from small modular services.
  • CI/CD - Automate container image building and deployment in your CI/CD pipelines.
  • Isolation - Containers isolate applications from each other on a shared host for improved security.
  • Scalability - You can scale containerized apps across hosts for flexibility and high availability.

As you can see, Docker delivers real value in portability, consistency, isolation and scalability. Next, let's get Docker installed on our system.

Installing Docker

Docker provides an easy installation experience for Linux, macOS and Windows systems. Let's go through how to get Docker up and running on each platform.

Installing on Linux

On Linux, Docker installation is straightforward since it uses native Linux kernel features.

Most mainstream Linux distributions like Ubuntu, Debian, Fedora, and CentOS have Docker available in their package repositories. The steps below install Docker Community Edition (CE) on Ubuntu; other distributions follow the same pattern with their own package managers:

  1. Update package index and install prerequisite packages:
    sudo apt update
    
    sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
    
  2. Add Docker's official GPG key:
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg  
    
  3. Set up the Docker apt repository:
    echo \  
    "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
  4. Install Docker CE:
    sudo apt update 
    sudo apt install docker-ce docker-ce-cli containerd.io
    
  5. Verify Docker is installed and check the version:
    sudo docker version
    
    # Example
    # Client: Docker Engine - Community
    # Version:           20.10.12
    # API version:       1.41
    # Go version:        go1.16.12
    # Git commit:        e91ed57
    # Built:             Mon Dec 13 11:45:33 2021
    # OS/Arch:           linux/amd64
    # Experimental:      false
    
    # Server: Docker Engine - Community  
    # Engine:
    # Version:          20.10.12
    # API version:      1.41 (minimum version 1.12) 
    # Go version:       go1.16.12
    # Git commit:       459d0df
    # Built:            Mon Dec 13 11:43:42 2021
    # OS/Arch:          linux/amd64
    # Experimental:     false
    # containerd:
    # Version:          1.4.12
    # GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
    

That's it! Docker CE is now installed on your Linux system.
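
Two optional but common follow-up steps: allow your user to run Docker without sudo, and run the hello-world image as a smoke test:

# Add your user to the docker group (log out and back in to apply)
sudo usermod -aG docker $USER

# Smoke-test the installation
docker run hello-world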

Installing on Windows 10

On Windows 10, Docker Desktop provides the simplest setup experience. It includes Docker Engine, the Docker CLI, Docker Compose, Docker Content Trust, Kubernetes, a credential helper and more, and runs the Docker daemon in a lightweight VM using the WSL 2 backend (or Hyper-V).

Follow these steps to get Docker Desktop running on Windows 10:

  1. Download and install Docker Desktop from docker.com. Be sure to download the Windows installer.
  2. Once installed, Docker Desktop will automatically start the Docker daemon.
  3. Verify Docker is running by opening PowerShell and running:
    docker version
    
  4. When installation is successful, you will see Docker Desktop running as an application in the system tray.

That's all there is to it. Docker Desktop bundles everything you need in one simple package.

Installing on macOS

The installation process on macOS is similar to Windows, using the Docker Desktop app. Here are the steps:

  1. Download Docker Desktop from docker.com. Download the installer that matches your Mac's chip (Apple silicon or Intel).
  2. Install Docker Desktop just like any other Mac Application. It will automatically launch and start the Docker daemon.
  3. Open Terminal and run docker version to verify Docker is running properly.
    docker version
    
    # Example  
    # Client: Docker Engine - Community
    # Version:           20.10.12
    # API version:       1.41
    # Go version:        go1.16.12
    # Git commit:        459d0df
    # Built:             Mon Dec 13 11:42:54 2021
    # OS/Arch:           darwin/amd64
    # Experimental:      false
    
    # Server: Docker Engine - Community
    # Engine:  
    # Version:          20.10.12
    # API version:      1.41 (minimum version 1.12)
    # Go version:       go1.16.12
    # Git commit:       87a90dc
    # Built:            Mon Dec 13 11:41:26 2021
    # OS/Arch:          linux/amd64
    # Experimental:     false
    # containerd:
    # Version:          1.4.12
    # GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
    

Docker Desktop on Mac provides a seamless way to work with Docker on Apple silicon or Intel chips.

Docker Concepts and Architecture

Now that we have Docker installed, let's go over some key concepts and architecture that are important to understand before we start using it.

Docker Architecture

Docker follows a client-server architecture:

  • The Docker daemon runs on the host machine. It is a background service that manages building and running containers.
  • The Docker client is the command line tool that allows users to interact with the daemon. It uses the Docker API to communicate with the daemon.
  • A registry stores Docker images that can be downloaded to local daemons. Docker Hub is the default public registry with 100K+ images.

[Diagram: Docker client-server architecture]

So when you run docker image pull or docker container run, your local Docker client talks to the daemon which pulls images from a registry like Docker Hub.
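
For example, pulling an image explicitly and then listing what the local daemon now has:

# Download the official nginx image from Docker Hub
docker image pull nginx

# List images stored by the local daemon
docker image ls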

Docker Objects

Docker includes several high-level abstractions referred to as Docker objects. The key objects are:

  • Images - Read-only templates used to create containers. Images define what the container will contain.
  • Containers - Running instances of Docker images. You can run, start, stop and delete containers.
  • Services - Containers running together and managed as a group. Services define how containers behave in production.
  • Volumes - Directories and files for persisting data outside containers. Volumes enable data sharing between the host machine and containers.
  • Networks - Virtual networks for communication between containers (and, with overlay networks, across hosts). Networks control how containers talk to each other.
  • Swarm - Cluster of Docker hosts running in swarm mode. Enables orchestration of services across many physical or virtual machines.

These objects allow you to build distributed applications using Docker.

Dockerfile and Image Layers

When building custom Docker images, you create a Dockerfile that defines the steps needed to build that image. Each step in the Dockerfile adds a layer to the image.

For example, a Dockerfile may look like:

FROM ubuntu:18.04
 
RUN apt-get update && apt-get install -y nginx

COPY index.html /var/www/html/

This Dockerfile:

  1. Starts from an existing ubuntu:18.04 image.
  2. Runs apt-get to install Nginx.
  3. Copies over a custom index.html.

Each instruction adds a new read-only layer to the image; a container gets a thin writable layer on top only at runtime. Layers allow Docker images to share common dependencies and stay lightweight.
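
You can see these layers for yourself. A quick sketch, assuming the Dockerfile above is saved in the current directory (the custom-nginx tag is illustrative):

# Build the image from the Dockerfile in the current directory
docker image build -t custom-nginx .

# Show each layer and the instruction that created it
docker image history custom-nginx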

Now that we understand the basics, let's start using Docker by running some containers.

Working with Containers

Containers are running instances of Docker images. You can run, manage and orchestrate containers to develop your applications.

Running Containers

Use the docker container run command to start a new container from an image:

docker container run -d -p 80:80 --name my-nginx nginx  

This runs an Nginx web server container in detached mode, forwards port 80 to the host, and names the container my-nginx.
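
You can confirm the container is up and serving traffic:

# List running containers - my-nginx should appear
docker container ls

# The default Nginx welcome page is now served on host port 80
curl http://localhost:80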

Managing Container Lifecycle

You can manage the lifecycle of your containers:

  • docker container start/stop to start/stop containers
  • docker container rm to remove stopped containers
  • docker container exec to run commands inside containers

For example, to execute a shell inside a running container:

docker container exec -it my-nginx bash

This opens up a Bash session within the container.
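
Putting the lifecycle commands together with the my-nginx container from earlier:

# Stop the running container
docker container stop my-nginx

# Start it again
docker container start my-nginx

# Stop and remove it for good
docker container stop my-nginx
docker container rm my-nginx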

Container Networking

By default, Docker containers run isolated from the host network on a private bridge network. Port forwarding allows external access to containers.

You can also connect containers together or use custom Docker networks for more flexibility.
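
As a minimal sketch, two containers on a shared user-defined bridge network can reach each other by container name (the network and container names here are illustrative):

# Create a user-defined bridge network
docker network create app-net

# Attach two containers to it
docker container run -d --name web --network app-net nginx
docker container run -d --name cache --network app-net redis

# web can now reach the Redis server at cache:6379 by name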

That covers the basics of working with containers! Next, let's look at how to build and manage custom images.

Building Docker Images

Docker images define the environments from which containers are created. You can build new images or download images from registries like Docker Hub.

Building Custom Images with Dockerfiles

To build your own application images, create a Dockerfile that defines the steps needed:

Dockerfile
FROM node:14-alpine

WORKDIR /app  
COPY . .

RUN yarn install --production

CMD ["node", "server.js"]

This Dockerfile builds a Node.js app:

  • Uses a Node 14 base image.
  • Sets the working directory.
  • Copies the application code.
  • Installs dependencies.
  • Specifies the runtime command.

You can build this image with:

docker image build -t my-app .

This builds a new image named my-app from the Dockerfile in the current directory.
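
You can then run the image like any other; a sketch, assuming server.js listens on port 3000 (adjust the port to your app):

docker container run -d -p 3000:3000 --name my-app my-app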

Managing Images

You can list, tag, push and remove images using CLI commands like:

# List images
docker image ls 

# Tag an image
docker image tag my-app mydockerhubid/my-app:1.0

# Push image to registry   
docker image push mydockerhubid/my-app:1.0

# Remove image
docker image rm my-app

This allows you to maintain and distribute images properly.

Sharing Images via Docker Hub

Docker Hub is the default public registry for finding and sharing container images.

To share your own images, you can push them to Docker Hub repositories for others to access. For example:

  
# Tag image
docker image tag my-image mydockerhubusername/my-image:1.0

# Push to Docker Hub 
docker image push mydockerhubusername/my-image:1.0
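
Note that pushing requires you to be signed in to Docker Hub first:

docker login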

Now anyone can run your published image with:

docker run -it mydockerhubusername/my-image:1.0 

Docker Hub integrates seamlessly with Docker for building and sharing images.

Persisting Data with Volumes

By default, Docker containers are ephemeral - the data doesn't persist when the container is removed. For persisting data, we can mount volumes.

Let's look at how to add volume mounts to containers.

Adding Docker Volumes

You can add writable volumes when running containers using the -v flag:

docker run -d \
  -v mydata:/var/lib/mysql \
  mysql

This mounts the mydata volume from the host to the container location /var/lib/mysql.

Any data written to /var/lib/mysql inside the container will be persisted to the mydata volume on the host.
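
A related pattern during development is bind-mounting a host directory into the container, so edits on the host show up immediately inside it. A sketch serving the current directory with Nginx:

docker run -d \
  -p 8080:80 \
  -v "$(pwd)":/usr/share/nginx/html:ro \
  nginx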

Managing Volumes

You can also manage volumes outside the scope of containers:

# Create a volume
docker volume create my-vol

# List volumes
docker volume ls

# Remove volumes
docker volume rm my-vol

Volumes give containers access to durable storage while keeping the container layers lightweight.
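
You can also inspect a volume to see where Docker stores its data on the host:

# Show the volume's mount point and metadata
docker volume inspect my-vol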

Networking Containers

In order for containers to communicate with each other and the outside world, we need to connect them to networks.

Default Container Networks

By default, Docker provides three networks:

  • bridge - The default network all containers connect to. Containers on this network can communicate but remain isolated from external network access without additional configuration.
  • host - Adds containers directly to the host network stack. No isolation between host machine and containers on this network.
  • none - No networking access. Disables all incoming and outgoing networking for a container.

When you docker run a new container without specifying a network, it automatically connects to the default bridge network.
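
You can verify this and see which containers are attached:

# List the containers connected to the default bridge network
docker network inspect bridge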

Creating Custom Networks

You can create custom networks for containers to connect to:

# Create an overlay network
docker network create -d overlay my-multi-host-network

# Create a macvlan network
docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 pub_net

  • Overlay networks allow containers across multiple Docker hosts to communicate securely. This enables Swarm services to be linked together.
  • Macvlan networks assign containers direct access to host physical interfaces and IP addresses. Useful when containers need to directly talk to an external network.

Connecting containers to these custom networks is useful for multi-tier applications and microservices.

Multi-Container Apps with Docker Compose

Running multi-container apps with interconnected services can get complex quickly. Fortunately, Docker Compose makes this easy.

Docker Compose allows you to define and run multi-container Docker applications using a simple YAML file.

Defining Services in Compose

A sample docker-compose.yml:

version: '3'

services:
  wordpress:
    image: wordpress
    ports:
      - 8080:80
    depends_on:
      - db

  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret

This Compose file defines two services: a WordPress application and a MySQL database. The YAML configuration specifies the image to use, ports to expose, dependencies, environment variables and more.

Deploying Apps with Docker Compose

With the docker-compose.yml defined, you can deploy the full multi-service app stack with:

 docker-compose up -d 

This launches both the WordPress and MySQL containers and automatically links them together.
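
Once the stack is running, a few everyday Compose commands help you inspect and tear it down:

# Tail logs from all services
docker-compose logs -f

# Stop and remove the containers and networks (add -v to also remove volumes)
docker-compose down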

Compose is a quick way to build and manage complex multi-container apps with Docker.

Now that we know how to build, run and connect containers, let's look at some best practices for optimizing Docker workflows.

Docker Best Practices

When working with Docker in production, you should follow these best practices:

Image Optimization

  • Use small base images like Alpine Linux to reduce size
  • Leverage multi-stage builds to keep dev dependencies out of production images (see the sketch after this list)
  • Scan images for vulnerabilities using tools like Trivy or Snyk
  • Don't store secrets or keys in images - use secret management instead
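
As an illustration of multi-stage builds, here is a minimal sketch for a Node.js app (the dist/ layout and the yarn build script are assumptions; adapt them to your project):

# Build stage: full dependency install and compile
FROM node:14-alpine AS build
WORKDIR /app
COPY . .
RUN yarn install && yarn build

# Production stage: only runtime artifacts and production deps
FROM node:14-alpine
WORKDIR /app
COPY --from=build /app/package.json ./
RUN yarn install --production
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]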

Container Configuration

  • Set memory limits and CPU constraints on containers (see the example after this list)
  • Enforce container healthchecks and restart policies
  • Follow principle of least privilege - run containers with minimal necessary access
  • Treat containers as immutable infrastructure whenever possible
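
For example, resource limits and a restart policy can be set straight from the CLI (the values shown are illustrative):

docker run -d \
  --memory=512m \
  --cpus=1 \
  --restart=unless-stopped \
  --name capped-nginx nginx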

Logging and Monitoring

  • Collect and ship container logs to a central system
  • Monitor container resource usage, uptime and health metrics
  • Tag containers with useful metadata for filtering
  • Enable Docker debugging for troubleshooting

Image Tagging

  • Use semantic versioning and meaningful tags for images
  • Don't rely on the latest tag which can contain unexpected changes
  • Limit image tag retention to avoid storage saturation

Following these and other Docker best practices will optimize stability, efficiency and security of your applications.

Docker Security Considerations

Since containers share kernel resources, running unknown images or privileged containers introduces security risks. Follow these guidelines to enhance security:

  • Limit privileges given to containers via read-only volumes, dropping capabilities and applying AppArmor or SELinux policies (a sketch follows this list)
  • Scan images for known vulnerabilities before deploying to production
  • Sign images using Docker Content Trust to ensure integrity
  • Enable Docker Security Scanning to monitor for security issues in running containers
  • Restrict network traffic between containers and block connections to unused ports/protocols
  • Use secrets for sensitive data and store them securely outside containers
  • Continuously monitor and audit container runtime activity to detect threats
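
A sketch of a locked-down container combining several of these controls (alpine and the sleep command are placeholders for your actual workload):

docker run -d \
  --read-only \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --name locked-down \
  alpine sleep 1d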

Docker provides security features and integrations to help mitigate the expanded attack surface of containers. Be sure to follow security best practices around container deployments.

Debugging Containers

Despite best efforts, problems can always arise when running complex systems like Docker. Fortunately Docker provides useful debugging tools.

Retrieving Container Logs

The docker logs command fetches application output from running containers:

 docker logs -f my-container 

This allows you to easily retrieve logs for troubleshooting issues.

Executing Commands Inside Containers

To run commands within containers, use docker exec:

 docker exec -it my-container sh 

This starts an interactive shell session in the container namespace.

Inspecting Container Details

The docker inspect command gives low-level information about a container:

 docker inspect my-container 

This can reveal network settings, mount points, and other metadata to aid debugging.
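
docker inspect output is verbose JSON; Go templates let you extract a single field, such as the container's IP address:

# Print just the container's IP address on its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-container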

Leveraging these and other Docker troubleshooting techniques allows you to operate containers with confidence.

Wrapping Up Docker

In this extensive guide you learned how to:

  • Install Docker on Linux, macOS and Windows systems
  • Work with containers and images to build, run and share applications
  • Manage volumes for persisting data across containers
  • Network containers across multiple hosts
  • Use Docker Compose for defining and running multi-container apps
  • Optimize Docker for improved security, efficiency and ease of use

Docker empowers you to build and deploy applications consistently across environments. With the foundational concepts covered here, you are now ready to architect scalable containerized solutions.

Some recommended next steps are:

  • Learn how Kubernetes builds upon Docker to provide production-grade orchestration
  • Explore deploying Docker Swarm clusters for complex multi-host container coordination
  • Understand CI/CD integration and workflow triggers to automate image building/deployment
  • Check out advanced networking techniques like service meshes to connect microservices

Docker has dramatically changed how we develop and run modern software. This guide provided the key skills to start leveraging Docker for all your projects. The container revolution is just getting started!

Frequently Asked Questions About Docker

Here are some common FAQs about using Docker:

How do I access a running container?

Use docker exec -it your-container-name bash to start a Bash shell inside the container.

What happens when I remove a container?

The container is deleted but the image used to create it still exists on your system.

How can containers communicate?

Containers connected to the same Docker network can talk to each other.

How do I make configuration changes to a container?

Don't modify a running container; instead, rebuild the image with your changes and run a new container from it.

What is the difference between Docker Engine and Docker Compose?

Docker Engine is the underlying technology that runs containers. Docker Compose is a tool that defines multi-container app environments using a Compose YAML file.

When should I use Docker Swarm vs Kubernetes?

Swarm provides basic clustering while Kubernetes offers production-grade orchestration capabilities.

Let us know if you have any other Docker questions! Or check out our first article, Comprehensive Guide: How to Use Docker for Beginners.
