First Steps with Docker: Containers on Linux
Docker has revolutionized the way developers and operations teams build, ship, and run applications. By providing lightweight, portable containers, Docker simplifies application deployment and ensures consistent environments across development, staging, and production. This comprehensive guide will walk you through everything you need to know to get started with Docker on a Linux host.
Why Containers
Traditional virtualization relies on hypervisors and virtual machines (VMs), each with its own full operating system. Containers, in contrast:
- Share the host kernel, greatly reducing overhead.
- Start almost instantly compared to VM boot times.
- Provide isolation for processes, networks, and filesystems.
- Offer reproducible environments, eliminating “works on my machine” issues.
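Once Docker is installed, the shared kernel is easy to observe for yourself; in this sketch the public `alpine` image is just a convenient example:

```shell
# The kernel version reported on the host...
uname -r
# ...matches the one reported inside a container,
# because containers share the host kernel rather than booting their own
docker run --rm alpine uname -r
```

A VM running on the same host would instead report whatever kernel its guest OS ships.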
Prerequisites
- A Linux distribution with a modern kernel (3.10 or newer). Tested on Ubuntu, CentOS, Fedora, and Debian.
- Basic command-line proficiency and sudo/root privileges.
- Internet access for downloading packages and images. If you’re behind a corporate firewall or proxy, you may need to configure the Docker daemon’s proxy settings, or use a VPN such as OpenVPN or WireGuard, to reach Docker’s registries.
1. Installing Docker Engine
The Docker Engine consists of the Docker daemon (dockerd) and the CLI client (docker), which talks to the daemon; images themselves are distributed through a registry such as Docker Hub. Let’s install the Engine on a Debian/Ubuntu system:
- Update packages:
```shell
sudo apt-get update
```
- Install dependencies:
```shell
sudo apt-get install -y ca-certificates curl gnupg lsb-release
```
- Add Docker’s official GPG key and repository:
```shell
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
- Install Docker Engine:
```shell
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
```
- Verify installation: `sudo docker version` and `sudo docker run hello-world`.
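As an optional post-install step, you can add your user to the `docker` group so that every command doesn’t need `sudo`. Note that membership in this group is effectively root-equivalent access to the host, so only do this on machines where that is acceptable:

```shell
# Add the current user to the docker group
sudo usermod -aG docker "$USER"
# Log out and back in (or run `newgrp docker` in your shell)
# for the new group membership to take effect, then:
docker run hello-world
```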
2. Core Concepts
Container: A runnable instance of an image. It includes the application and its dependencies.
Image: A read-only template built up from a series of layers. Images are stored in registries.
Dockerfile: A text document that automates the building of an image by specifying instructions.
Registry: A storage and distribution system for Docker images, e.g., Docker Hub or private registries.
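The layered structure of images is easy to see with `docker history`; this sketch uses the public `alpine` image as an example:

```shell
# Pull a small public image from Docker Hub
docker pull alpine:3.19
# Show the layers it was built from, one line per Dockerfile instruction
docker history alpine:3.19
```

Each layer is cached and shared between images, which is why pulling a second image based on the same parent is much faster.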
Comparing Container Engines
| Feature | Docker | Podman | containerd |
|---|---|---|---|
| Daemon (rootless support) | dockerd (partial) | No central daemon (yes) | Embedded (no) |
| CLI | docker | podman | ctr |
| Ecosystem | Largest | Growing | Low-level |
3. Building Your First Image
Use a Dockerfile to describe how your image should be built. Here’s a simple Node.js example:
```dockerfile
# Use an official Node.js runtime as a parent image
FROM node:18-alpine

# Set working directory
WORKDIR /usr/src/app

# Copy package definition and install dependencies
COPY package.json ./
RUN npm install --production

# Copy application source
COPY . .

# Expose port and run
EXPOSE 3000
CMD ["node", "app.js"]
```
- Place the `Dockerfile` in your project root.
- Build the image: `docker build -t my-node-app:1.0 .`
- List available images: `docker images`
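Because `COPY . .` copies the whole build context, a `.dockerignore` file alongside the Dockerfile keeps the context small and prevents unwanted files from ending up in the image. A minimal sketch for a Node.js project might look like:

```
node_modules
npm-debug.log
.git
.dockerignore
Dockerfile
```

Excluding `node_modules` matters in particular: dependencies are installed inside the image by `RUN npm install`, so copying the host’s copy in would be redundant and can break native modules.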
4. Running and Managing Containers
- Start a container:
```shell
docker run -d --name web -p 8080:3000 my-node-app:1.0
```
`-d` runs the container detached; `-p` maps a host port to a container port.
- View logs: `docker logs web`
- Execute a shell inside the container: `docker exec -it web sh`
- Stop and remove: `docker stop web`, then `docker rm web`
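Between starting and stopping, `docker ps` and `docker inspect` are the main ways to see what is running; the container name here matches the example above:

```shell
# List running containers
docker ps
# Include stopped containers as well
docker ps -a
# Low-level details (IP address, mounts, state) as JSON
docker inspect web
```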
5. Networking
Docker sets up several networks by default: bridge, host, and none. You can also create custom bridge networks:
```shell
docker network create --driver bridge isolated_net
docker run -d --network=isolated_net --name db postgres:15
docker run -d --network=isolated_net --name app my-node-app:1.0
```
Containers on the same bridge network can communicate by name:
- From the `app` container: `ping db`
- Use an environment variable such as `DB_HOST=db` in your application.
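You can confirm which containers are attached to the custom network, and test name resolution between them, with the following sketch (container and network names as in the example above):

```shell
# Show the network's configuration and its attached containers
docker network inspect isolated_net
# Quick connectivity check from one container to another, by name
docker exec app ping -c 1 db
```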
6. Data Persistence with Volumes
To keep data beyond container lifecycles, use volumes:
- Create a named volume: `docker volume create app_data`
- Mount it:
```shell
docker run -d --name web -v app_data:/usr/src/app/data my-node-app:1.0
```
Inspect volumes with `docker volume ls` and `docker volume inspect app_data`.
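Named volumes are managed by Docker in its own storage area; a bind mount maps an explicit host directory instead, which is often handier in development. A sketch of the two side by side (paths and container names are illustrative):

```shell
# Named volume: Docker chooses and manages the storage location
docker run -d --name web1 -v app_data:/usr/src/app/data my-node-app:1.0
# Bind mount: a specific host directory is visible inside the container
docker run -d --name web2 -v "$(pwd)/data":/usr/src/app/data my-node-app:1.0
```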
7. Best Practices
- Minimize layers: Combine RUN commands where possible.
- Leverage official images from trusted sources.
- Avoid storing secrets in images; use environment variables or Docker secrets instead.
- Use multi-stage builds to reduce final image size.
- Regularly scan images for vulnerabilities using tools like Docker Scout or third-party scanners.
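The multi-stage build advice can be sketched for the Node.js example above. This assumes your project has an `npm run build` step that emits a `dist/` directory; adjust paths and commands to your own layout:

```dockerfile
# Build stage: full dependencies and compilation tools
FROM node:18-alpine AS build
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

# Runtime stage: only production dependencies and the built output
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install --production
COPY --from=build /usr/src/app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/app.js"]
```

Only the final stage ends up in the shipped image, so build-time tooling never inflates its size.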
8. Security Considerations
Although containers share the host kernel, you can harden Docker deployments:
- Run containers with least privilege: use the `--user` flag to avoid running as root inside the container.
- Enable SELinux or AppArmor profiles for confinement.
- Use Docker Bench for Security for automated security audits.
- Keep Docker Engine and images updated to patch vulnerabilities.
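Several of these hardening options can be combined on a single `docker run`. The image name matches the earlier example; the UID/GID pair is illustrative and should match a non-root user your image expects:

```shell
# --user: run as a non-root UID:GID inside the container
# --read-only: mount the container's root filesystem read-only
# --cap-drop ALL: drop all Linux capabilities
# --security-opt no-new-privileges: block privilege escalation via setuid binaries
docker run -d --name web \
  --user 1000:1000 \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -p 8080:3000 my-node-app:1.0
```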
9. Docker Compose: Orchestrating Multi-Container Apps
For multi-container setups, Docker Compose simplifies configuration with a docker-compose.yml:
```yaml
version: "3.8"
services:
  db:
    image: postgres:15
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: example
  web:
    build: .
    ports:
      - "8080:3000"
    depends_on:
      - db
volumes:
  db_data:
```
Bring up the stack:
```shell
docker-compose up -d
```
With the Compose v2 plugin, the equivalent command is `docker compose up -d`.
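Day-to-day Compose usage mostly comes down to a few more subcommands, all run from the directory containing `docker-compose.yml`:

```shell
# Show the state of the stack's services
docker-compose ps
# Follow logs from all services
docker-compose logs -f
# Stop and remove containers and networks; -v also removes named volumes
docker-compose down -v
```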
10. Next Steps
- Explore Docker Swarm or Kubernetes for advanced orchestration.
- Integrate with CI/CD pipelines (GitHub Actions, GitLab CI).
- Implement monitoring (Prometheus, Grafana) and logging (ELK stack).
- Experiment with rootless Docker for enhanced security.
By mastering these fundamentals, you’ll be well on your way to building robust, portable containerized applications on Linux with Docker.