Mastering Advanced Docker Concepts


Docker has revolutionized how applications are developed, deployed, and managed, making containerization a mainstream practice in the software industry. Our previous blog post explored the basic Docker commands and concepts, laying the foundation for working with containers. While the basics of Docker are relatively straightforward, mastering its advanced concepts opens up a world of possibilities: optimizing resource utilization, implementing sophisticated networking and storage solutions, and streamlining your containerization workflows.

How to create a Docker Image?

To create a Docker image for a Node.js project, follow these steps. Open your project and create a Dockerfile in the root directory. Before that, make sure you have a Docker Hub account so you can publish your images online.

Make sure your Docker daemon is running while you follow along.

The following Dockerfile tells the Docker engine to perform these steps:

# Use the Ubuntu base image from Docker Hub
FROM ubuntu

# Update the package lists for the latest versions
RUN apt-get update

# Install curl, which is required to download the Node.js setup script
RUN apt-get install -y curl

# Download and run the script that adds the Node.js 18.x repository
RUN curl -sL https://deb.nodesource.com/setup_18.x | bash -

# Upgrade all packages to their latest versions
RUN apt-get upgrade -y

# Install Node.js
RUN apt-get install -y nodejs

# Copy the package.json file from the current directory into the container
COPY package.json package.json

# Copy the package-lock.json file from the current directory into the container
COPY package-lock.json package-lock.json

# Copy the main application file (index.js) into the container
COPY index.js index.js

# Install the project dependencies specified in package.json
RUN npm install

# Set the default command to run when the container starts: execute index.js with Node.js
ENTRYPOINT [ "node", "index.js" ]

This Dockerfile sets up a containerized environment with Node.js installed and copies the project files (package.json, package-lock.json, and index.js) into the container. The dependencies are installed with npm install at build time; when the container starts, it executes index.js with Node.js, effectively running the Node.js application within the container.

docker build -t your-dockerhub-username/image-name .

This command builds the Docker image from the Dockerfile in the current directory (the trailing . sets the build context). The -t flag tags the image with a name.

docker push your-dockerhub-username/image-name

Run the above command to push the Docker image to Docker Hub. If you aren't logged in yet, run docker login first.

In Docker Desktop (or by running docker images), you will see that a new image has been created.
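Before or after pushing, you can verify the image by running it locally. A quick sketch; the 3000:3000 port mapping here is an assumption based on a typical Express app, so match it to whatever port your index.js actually listens on:

docker run -p 3000:3000 your-dockerhub-username/image-name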

What are Docker Volumes?

As we know from our previous Docker blog, data inside Docker containers is not persistent by default. Containers are designed to be immutable and ephemeral, which means that any changes or data written to a container's filesystem are lost when the container is removed or recreated.

This non-persistent nature of container data can pose challenges, especially in scenarios where you need to store and maintain data across container restarts or deployments. To address this issue, Docker provides a feature called Volumes, which allows you to persist and share data between containers and the host system.

Volumes are directories (or mount points) that are designed to store persistent data, independent of the container's lifecycle. When a container is removed or recreated, the data stored in volumes remains intact, making it a convenient way to manage data in containerized environments.

Now let's set up MongoDB inside Docker and try to use it locally.

docker pull mongo

This pulls the official mongo image from Docker Hub.

docker volume create mongo-data

This command creates a named volume called mongo-data to store the MongoDB data files.
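You can confirm the volume was created, and see where Docker stores it on disk, with the standard volume commands:

docker volume ls
docker volume inspect mongo-data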

docker run -d --name mongo-container -p 27017:27017 -v mongo-data:/data/db mongo

Let's break down this command:

  • -d runs the container in detached mode (in the background).

  • --name mongo-container assigns a name to the container for easy reference.

  • -p 27017:27017 maps port 27017 inside the container to port 27017 on the host.

  • -v mongo-data:/data/db mounts the mongo-data volume to the /data/db directory inside the container, the default location for MongoDB data files.

  • mongo is the name of the Docker image to run.
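To confirm MongoDB is up, you can open a shell inside the running container. Recent mongo images ship the mongosh client (older ones use mongo instead):

docker exec -it mongo-container mongosh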

Now, to use this database from a project running on the host, connect with one of the following strings:

mongodb://host.docker.internal:27017 # Docker Desktop (Windows/macOS)
mongodb://{docker-hostIP}:27017/ # replace {docker-hostIP} with the Docker host's IP
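As a quick sanity check, here's a minimal sketch of connecting from Node.js on the host. It assumes the official mongodb driver is installed (npm i mongodb) and that you're on Docker Desktop, so host.docker.internal resolves; the testdb and greetings names are placeholders:

const { MongoClient } = require('mongodb');

const uri = 'mongodb://host.docker.internal:27017';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();              // connect to the MongoDB container
  const db = client.db('testdb');      // 'testdb' is a placeholder database name
  await db.collection('greetings').insertOne({ message: 'Hello from Docker!' });
  console.log(await db.collection('greetings').findOne()); // read it back
  await client.close();
}

main().catch(console.error);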

What is Docker Compose?

Docker Compose is a tool that helps you define and run multi-container Docker applications. It uses a YAML file to configure the application's services, networks, and volumes. Here's a simple example using a Node.js application:

  1. Create a new directory for your project, navigate to it, and initialize a Node.js project with Express:
mkdir node-compose-app
cd node-compose-app
npm init -y
npm i express
  2. Create an index.js file with a simple Express.js server:
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello, Docker Compose!');
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
  3. Create a docker-compose.yml file with the following content:
version: '3'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development

This docker-compose.yml file defines a single service called app:

  • build: . tells Docker Compose to build the Docker image from the current directory (a minimal Dockerfile sketch follows this list).

  • ports: - "3000:3000" maps the host's port 3000 to the container's port 3000.

  • volumes: - .:/app mounts the current directory (node-compose-app) to the /app directory inside the container.

  • volumes: - /app/node_modules mounts an anonymous volume over /app/node_modules so the packages installed in the image aren't hidden by the bind mount and persist across container rebuilds.

  • environment: - NODE_ENV=development sets the NODE_ENV environment variable to development.
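Since build: . expects a Dockerfile in the project directory, create one before running Compose. Here's a minimal sketch; the node:18-alpine base image is an assumption, and any recent Node.js image works:

# Use a lightweight Node.js base image
FROM node:18-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy the dependency manifests and install dependencies
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port the Express server listens on
EXPOSE 3000

# Start the server
CMD ["node", "index.js"]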

  4. Build and run the Docker Compose application:
docker-compose up

This command will build the Docker image (if it doesn't exist) and start the container defined in the docker-compose.yml file.
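A few related Compose commands are handy here:

docker-compose up -d    # start in the background (detached mode)
docker-compose logs -f  # follow the service logs
docker-compose down     # stop and remove the containers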

  5. Open your web browser and go to http://localhost:3000. You should see the "Hello, Docker Compose!" message from the Express.js server.

In this example, Docker Compose sets up a Node.js application with a single service. The Node.js application is built from the current directory, and the node_modules directory is mounted as a volume to persist the installed packages across container rebuilds.

Docker Compose makes it easy to define and manage containerized Node.js applications by handling the creation and configuration of containers, volumes, and environment variables. It also allows you to scale services, manage environment variables, and define dependencies between services.

You can extend this example to include additional services like a database (e.g., MongoDB or PostgreSQL) or a caching service (e.g., Redis) by defining them in the docker-compose.yml file and configuring the necessary networking and dependencies.
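For example, wiring in the MongoDB container from earlier as a second service might look like the sketch below; the service and volume names are illustrative. Inside the Compose network, service names double as hostnames, so the app can reach the database at mongodb://mongo:27017:

version: '3'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data: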

What are some Good Practices?

1. Use Official Images When Possible:

  • Security and Reliability: Official images are generally more secure and better maintained.

  • Up-to-date: They are regularly updated with security patches.

2. Minimize the Image Size:

  • Use Lightweight Base Images: Use images like alpine when possible.

  • Multi-stage Builds: Use multi-stage builds to reduce the final image size by separating build dependencies from runtime dependencies, as in the sketch below.
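For instance, a multi-stage build for a Node.js app might look like this sketch; it assumes package.json defines a build script (say, a TypeScript compile) that outputs to dist/:

# Stage 1: build the application with dev dependencies available
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
# Assumes a "build" script in package.json that outputs to dist/
RUN npm run build

# Stage 2: slim runtime image with only production dependencies
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]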

3. Keep Dockerfiles Simple and Clean:

  • Order Matters: Place the most frequently changing instructions at the bottom of the Dockerfile to leverage layer caching.

  • Reduce Layers: Combine multiple RUN commands to reduce the number of layers, as shown below.
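For example, the separate apt-get steps from the Dockerfile earlier in this post can be collapsed into a single cached layer (the rm -rf at the end is a common size optimization):

RUN apt-get update && apt-get install -y curl \
    && curl -sL https://deb.nodesource.com/setup_18.x | bash - \
    && apt-get install -y nodejs \
    && rm -rf /var/lib/apt/lists/*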

In contrast, here's a Dockerfile that breaks several of these guidelines:

# Bad Example
FROM node:latest

# Set the working directory
WORKDIR /app

# Copy everything including unnecessary files
COPY . .

# Install dependencies
RUN npm install

# Expose the port the app runs on
EXPOSE 3000

# Run as root user
USER root

# Command to run the application
CMD ["node", "server.js"]

4. Use .dockerignore:

  • Exclude Unnecessary Files: Create a .dockerignore file to exclude files and directories that aren't needed in the build context, similar to .gitignore.
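A typical .dockerignore for a Node.js project might look like this sketch; adjust it to your project's layout:

node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose.yml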

5. Optimize for Build Cache:

  • Leverage Caching: Structure your Dockerfile to maximize caching. For instance, place instructions that are less likely to change at the top.

# Good Example
# Use a lightweight base image
FROM node:14-alpine

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json to leverage caching
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define environment variables
ENV NODE_ENV=production

# Command to run the application
CMD ["node", "server.js"]

  • Lightweight Base Image: Using node:14-alpine keeps the image size small.

  • Leverage Caching: Copying only package*.json and running npm install before copying the rest of the application code allows Docker to cache the npm install step.

Please share this blog with your friends and anyone who wants to learn Docker concepts in simple language. I will see you next week; until then, keep learning and keep growing. Happy Coding 👨‍💻👨‍💻
