
Docker for Linux Admins: From Basics to Production

A practical guide to Docker for Linux administrators — containers, images, Dockerfiles, networking, volumes, and production deployment tips.


Docker changed how we package and deploy software. For Linux administrators, it builds on concepts you already know — namespaces, cgroups, union filesystems — and wraps them in a workflow that makes containers practical at scale. This guide cuts through the hype and focuses on what you actually need to know as a Linux admin working with Docker in production.

We will cover containers from the ground up: what they really are under the hood, how to write efficient Dockerfiles, networking and storage patterns, security considerations, and production deployment practices.

Containers Are Not Virtual Machines

The most important mental model shift: containers are isolated processes, not lightweight VMs. A container shares the host's kernel. It uses Linux namespaces for isolation (PID, network, mount, user, IPC, UTS) and cgroups for resource limits. There is no hypervisor, no guest OS.

# A container is just a process with isolation

$ docker run -d nginx

$ ps aux | grep nginx  # visible on the host

This means containers start in milliseconds (no OS boot), use far less memory than VMs, and can run hundreds on a single host. But it also means kernel vulnerabilities affect all containers, and you cannot run a different OS kernel inside a container.

Images, Layers, and the Build Cache

A Docker image is a stack of read-only filesystem layers. Each instruction in a Dockerfile creates a new layer. When you change a line in your Dockerfile, Docker rebuilds from that point forward — everything above it uses the cache.

# Good: dependencies cached separately from code

COPY package.json package-lock.json ./

RUN npm ci --production

COPY . .  # only code changes invalidate from here

# Bad: any code change reinstalls all deps

COPY . .

RUN npm ci --production

Order your Dockerfile from least-frequently-changed to most-frequently-changed. Put OS packages and dependencies before application code. This can reduce build times from minutes to seconds.

Writing Production Dockerfiles

A production Dockerfile should be minimal, secure, and reproducible. Here are the key practices:

Use multi-stage builds

Compile in one stage, then copy just the artifact into a minimal runtime image. The final image does not need gcc, make, or other build tools, which can cut image size and attack surface by 10x or more.
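A minimal multi-stage sketch for a Go service (the module layout and binary name are placeholders):

```dockerfile
# Stage 1: build with the full toolchain
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: ship only the compiled binary
FROM alpine:3.19
COPY --from=build /app /app
USER nobody
ENTRYPOINT ["/app"]
```

The builder stage can be as heavy as it needs to be; only what you explicitly COPY --from reaches the final image.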

Pin base image versions

Never use FROM node:latest. Pin to a specific version like FROM node:20.11-alpine3.19. This ensures reproducible builds and prevents surprise breaking changes.

Run as non-root

Add a USER instruction to run your process as a non-root user. If a container escape vulnerability is exploited, the attacker lands as an unprivileged user instead of root.
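A common pattern, sketched here for an Alpine-based image (the user and group names are arbitrary; some official images, like node, already ship an unprivileged user you can reuse):

```dockerfile
FROM node:20.11-alpine3.19
# Create an unprivileged group and user
RUN addgroup -S app && adduser -S -G app app
WORKDIR /srv/app
# Make sure the files are owned by the runtime user, not root
COPY --chown=app:app . .
# Every instruction and the final process run as this user from here on
USER app
CMD ["node", "server.js"]
```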

Use .dockerignore

Exclude .git, node_modules, .env, and test files. This speeds up builds and prevents secrets from leaking into images.
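A starter .dockerignore along those lines (adjust the entries for your stack):

```
.git
node_modules
.env
*.log
test/
coverage/
```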

Networking: Bridge, Host, and Overlay

Docker creates a virtual bridge network (docker0) by default. Containers get their own IP address on this bridge and can communicate with each other by container name when on the same user-defined network.

Network Type | Isolation       | Use Case
bridge       | Container-level | Default; good for most single-host setups
host         | None            | Maximum performance (no NAT overhead)
overlay      | Cross-host      | Docker Swarm / multi-host clusters
none         | Complete        | Security-sensitive batch processing

# Create an isolated network for your app stack

$ docker network create myapp-net

$ docker run -d --network myapp-net --name api myapp-api

$ docker run -d --network myapp-net --name db postgres:16

# "api" can reach "db" by hostname: postgres://db:5432

Volumes and Persistent Storage

Containers are ephemeral — when removed, their filesystem is gone. For persistent data (databases, uploads, logs), use volumes. Docker offers three storage options:

# Named volume (Docker-managed, recommended for databases)

$ docker volume create pgdata

$ docker run -v pgdata:/var/lib/postgresql/data postgres:16

# Bind mount (host directory mapped into container)

$ docker run -v /opt/app/config:/etc/myapp:ro myapp

# tmpfs (in-memory, for sensitive temporary data)

$ docker run --tmpfs /tmp:size=100m myapp

Use named volumes for databases and stateful services. Use bind mounts for configuration files and development workflows. Use :ro (read-only) wherever possible to limit what the container can modify on the host.

Container Security Essentials

Running containers in production requires attention to security at every layer:

1. Scan images for vulnerabilities — use docker scout, Trivy, or Snyk in your CI pipeline.

2. Use minimal base images — Alpine, distroless, or scratch. Fewer packages = fewer vulnerabilities.

3. Drop capabilities — use --cap-drop ALL, then grant back only the specific Linux capabilities the process needs with --cap-add.

4. Set resource limits — use --memory and --cpus to prevent runaway containers from starving the host.

5. Use a read-only root filesystem — --read-only prevents writes everywhere except mounted volumes and tmpfs.

6. Never store secrets in images — use Docker secrets, environment variables from a vault, or mounted secret files.

7. Enable user namespace remapping — maps container root to an unprivileged host user.

Docker Compose for Multi-Container Applications

Most applications consist of multiple services. Docker Compose lets you define and run multi-container applications with a single YAML file:

# docker-compose.yml

services:

  api:

    build: .

    ports: ["8080:8080"]

    depends_on: [db, redis]

    environment:

      DATABASE_URL: postgres://db:5432/myapp

  db:

    image: postgres:16-alpine

    volumes: [pgdata:/var/lib/postgresql/data]

  redis:

    image: redis:7-alpine

volumes:

  pgdata:

$ docker compose up -d      # start all services

$ docker compose logs -f api # follow logs for one service

$ docker compose down -v     # stop and remove volumes

Essential Docker Commands for Admins

$ docker ps -a                  # all containers (including stopped)

$ docker stats                  # live resource usage

$ docker logs --tail 100 -f app # last 100 lines + follow

$ docker exec -it app /bin/sh   # shell into running container

$ docker inspect app            # full container metadata

$ docker system prune -af       # reclaim disk space

$ docker image ls --format 'table {{.Repository}} {{.Tag}} {{.Size}}'
