Running `docker run` with default settings gives your container root access, all Linux capabilities, and a writable filesystem. Here are 10 things you should change before shipping to production.
1. Never Run as Root
By default, containers run as root. If an attacker breaks out of your application, they have root inside the container — and potentially on the host if combined with other vulnerabilities.
```dockerfile
# ❌ Bad — runs as root by default
FROM node:20
COPY . /app
CMD ["node", "/app/server.js"]
```
```dockerfile
# ✅ Good — create and use a non-root user
FROM node:20
RUN groupadd -r appuser && useradd -r -g appuser appuser
COPY --chown=appuser:appuser . /app
USER appuser
CMD ["node", "/app/server.js"]
```

2. Use Minimal Base Images
`node:20` is 350 MB+ and includes compilers, shell utilities, and package managers that an attacker can use. Use slim or distroless images:
```dockerfile
# ✅ Slim — much smaller attack surface
FROM node:20-slim

# ✅ Even better — distroless has no shell at all
FROM gcr.io/distroless/nodejs20-debian12
```

Distroless images contain no shell, no package manager, and nothing beyond your application and its runtime. If an attacker gets code execution, there is nothing to work with.
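One practical consequence: with no `/bin/sh`, shell-form instructions will not work in a distroless final stage, so use exec (JSON) form. A minimal sketch, assuming a prebuilt `dist/` directory (the `nonroot` user is provided by distroless base images):

```dockerfile
# Distroless ships no /bin/sh, so only exec-form CMD works.
# The image's entrypoint is already the node binary, so CMD names the script.
FROM gcr.io/distroless/nodejs20-debian12
COPY --chown=nonroot:nonroot dist/ /app/
WORKDIR /app
USER nonroot
CMD ["server.js"]
```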
3. Drop All Capabilities
Linux capabilities grant fine-grained root powers. By default, Docker grants several. Drop them all and add back only what you need:
```shell
# Drop all capabilities, add back only what is needed
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp
```

```yaml
# docker-compose.yml
services:
  app:
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
```

4. Use Read-Only Filesystem
Make the container filesystem read-only. If your app needs to write temporary files, mount a specific tmpfs:
```shell
docker run --read-only --tmpfs /tmp:rw,noexec,nosuid myapp
```

```yaml
# docker-compose.yml
services:
  app:
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid
```

This prevents attackers from writing malware or scripts to disk, or modifying application code inside the container.
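With the root filesystem locked down, a misconfigured or missing tmpfs only surfaces when the app first tries to write. A small entrypoint-style sketch that fails fast instead (the function name and the `/tmp` path are illustrative, matching the mount above):

```shell
# Probe a directory for writability by creating and removing a marker file.
check_writable() {
  probe="$1/.writable-probe.$$"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    return 0
  fi
  return 1
}

# Entrypoint guard: verify the tmpfs scratch space before starting the app.
if check_writable /tmp; then
  echo "scratch dir /tmp is writable"
else
  echo "scratch dir /tmp is not writable; is the tmpfs mounted?" >&2
  exit 1
fi
```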
5. Scan Images for Vulnerabilities
Your base image ships with OS packages that have known CVEs. Scan before deploying:
```shell
# Using Trivy (free, open source)
trivy image myapp:latest

# Using Docker Scout (built into Docker Desktop)
docker scout cves myapp:latest

# Using Snyk
snyk container test myapp:latest
```

Integrate scanning into your CI pipeline so vulnerable images never reach production:
```yaml
# GitHub Actions example
- name: Scan image
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:latest
    severity: HIGH,CRITICAL
    exit-code: 1  # fail the build
```

6. Never Store Secrets in Images
Secrets baked into images are visible to anyone who can run `docker history` or read the image layers:
```dockerfile
# ❌ Terrible — secret is permanently in the image layer
ENV DATABASE_PASSWORD=supersecret123

# ❌ Also bad — visible in image history even if deleted later
COPY .env /app/.env
RUN rm /app/.env
```

Instead, pass secrets at runtime:
```shell
# Using environment variables
docker run -e DATABASE_PASSWORD="$(vault kv get -field=password secret/db)" myapp

# Using Docker secrets (Swarm)
echo "supersecret123" | docker secret create db_password -
```

For Kubernetes, use Secrets with encryption at rest, or better, an external secrets manager like HashiCorp Vault or AWS Secrets Manager.
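Many official images also support a `*_FILE` convention, where the application reads the secret from a mounted file (e.g. a Swarm secret at `/run/secrets/db_password`) rather than the environment, which also keeps it out of `docker inspect` output. A minimal sketch of the pattern, using a throwaway temp file in place of a real mounted secret:

```shell
# Read a secret from a file path; prints its contents if the file is readable.
read_secret_file() {
  [ -n "$1" ] && [ -f "$1" ] && cat "$1"
}

# Demo: a temp file stands in for a mounted secret like /run/secrets/db_password
tmp="$(mktemp)"
printf 's3cret' > "$tmp"
DATABASE_PASSWORD="$(read_secret_file "$tmp")"
echo "password loaded: ${#DATABASE_PASSWORD} chars"
rm -f "$tmp"
```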
7. Set Resource Limits
Without limits, a compromised container can consume all host resources (CPU, memory, disk I/O), affecting other containers:
```shell
docker run --memory=512m --cpus=1.0 --pids-limit=100 myapp
```

```yaml
# docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
        reservations:
          memory: 256M
```

The `--pids-limit` flag caps the number of processes in the container, which prevents fork bombs.
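To confirm a memory limit is actually applied, you can read it back from inside the container via the cgroup filesystem. A sketch that checks both the cgroup v2 and v1 locations (which file exists depends on the host):

```shell
# Print the memory limit the kernel is enforcing on this cgroup.
container_memory_limit() {
  if [ -f /sys/fs/cgroup/memory.max ]; then
    cat /sys/fs/cgroup/memory.max                      # cgroup v2: bytes, or "max"
  elif [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes    # cgroup v1: bytes
  else
    echo "unknown"
  fi
}

echo "memory limit: $(container_memory_limit)"
```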
8. Use Network Policies
By default, all containers on the same Docker network can talk to each other. Isolate them:
```yaml
# docker-compose.yml
services:
  app:
    networks:
      - frontend
  db:
    networks:
      - backend
  api:
    networks:
      - frontend
      - backend

networks:
  frontend:
  backend:
    internal: true  # no external access
```

The database should never be reachable from the frontend network. Only the API service bridges both.
9. Enable Logging and Monitoring
You cannot detect a breach if you are not logging:
```shell
# json-file with rotation keeps logs locally; use a driver like syslog,
# fluentd, or awslogs to ship them to a central system
docker run --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 myapp
```

Monitor for suspicious activity:
- Unexpected outbound network connections
- Processes spawning shells (`/bin/sh`, `/bin/bash`)
- File modifications in read-only filesystems
- Unusual resource consumption spikes
Tools like Falco can detect runtime threats by monitoring syscalls.
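As an illustration of what such detection looks like, a Falco rule flagging shells spawned in containers might be sketched roughly like this (the condition and output fields follow Falco's rule syntax; tune them to your workload):

```yaml
- rule: Shell spawned in container
  desc: A shell started inside a container, unexpected for most app workloads
  condition: spawned_process and container and proc.name in (sh, bash)
  output: Shell in container (user=%user.name container=%container.name cmdline=%proc.cmdline)
  priority: WARNING
```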
10. Sign and Verify Images
Image signing ensures that the image you deploy is the same one your CI pipeline built — not a tampered version:
```shell
# Sign with cosign (Sigstore)
cosign sign --key cosign.key myregistry.com/myapp:latest

# Verify before deploying
cosign verify --key cosign.pub myregistry.com/myapp:latest
```

In production, configure your container runtime to reject unsigned images.
Multi-Stage Builds: A Security Win
Multi-stage builds keep build tools out of your production image:
```dockerfile
# Build stage — has compilers, dev dependencies
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so they are not copied into the final image
RUN npm prune --omit=dev

# Production stage — minimal, no build tools
FROM node:20-slim
RUN groupadd -r app && useradd -r -g app app
WORKDIR /app
COPY --from=build --chown=app:app /app/dist ./dist
COPY --from=build --chown=app:app /app/node_modules ./node_modules
USER app
CMD ["node", "dist/server.js"]
```

Your production image has no compiler, no source code, and no dev dependencies.
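A related hygiene step: a `.dockerignore` file keeps secrets and build junk out of the build context entirely, so `COPY . .` in the build stage cannot pick them up. A typical starting point (entries are illustrative):

```
# .dockerignore
.git
.env
node_modules
npm-debug.log
```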
Conclusion
Container security is not a single setting — it is a combination of non-root users, minimal images, dropped capabilities, read-only filesystems, vulnerability scanning, proper secrets management, network isolation, resource limits, logging, and image signing. None of these are difficult. Most are a single line in your Dockerfile or compose file. Do them all.