Docker in 2026: Containers, BuildKit, and the Modern OCI Ecosystem
Containers are not new anymore. Docker turned thirteen this year, and the developers who once called it revolutionary now just call it Tuesday. Yet the fundamentals behind containers are more worth understanding than ever, because the tooling, the runtime stack, and the build system have all changed significantly since those early days. If you learned Docker in 2018 or even 2021, a lot of what you know is still valid — but some of it is outdated, and a few things are just wrong now.
This post is for practitioners who want an accurate picture of where things stand in 2026.
Why containers still matter
The short version: VMs gave you isolation at the cost of running a full OS per workload. Containers give you isolation by sharing the host kernel, which means faster startup, less RAM overhead, and a packaging model that travels cleanly from a developer laptop to a production cluster. That trade-off is still the right one for most server workloads. Serverless and PaaS have carved off the simpler end of the spectrum, but anything stateful, latency-sensitive, or complex enough to need a real runtime still ends up in a container.
The runtime landscape: Docker is now a UX layer
This surprises people who have not been paying attention. When you install Docker Desktop and run docker run, you are not actually using Docker as the runtime. Docker, Inc. refactored the engine years ago, and what runs your containers in production is almost certainly containerd — a CNCF-graduated project that handles image pulls, container lifecycle, and storage. Below containerd sits runc, the low-level OCI runtime that actually calls the Linux kernel APIs (namespaces, cgroups) to create the container process.
The Container Runtime Interface (CRI) is the API Kubernetes uses to talk to container runtimes. Kubernetes removed its built-in dockershim adapter in version 1.24, which means Docker Engine is no longer a valid CRI target. Most managed Kubernetes services (EKS, GKE, AKS) switched to containerd as the node runtime years ago. On-prem clusters often use CRI-O, another lightweight CRI runtime that Red Hat backs.
In practice: Docker Desktop is still what most Mac and Windows developers use locally. But the thing running containers on your production nodes is containerd or CRI-O, not the Docker daemon. Understanding that stack matters when you are debugging low-level container issues or reading Kubernetes node logs.
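If you want to verify this on your own infrastructure, a couple of commands help (assuming kubectl access to a cluster and a containerd node; crictl is a separate install from the cri-tools project):

kubectl get nodes -o wide                      # the CONTAINER-RUNTIME column names each node's runtime
sudo crictl ps                                 # list running containers through the CRI, no Docker involved
sudo ctr --namespace k8s.io containers list    # containerd's own low-level CLI; kubelet uses the k8s.io namespace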
Podman is worth knowing. It is a daemonless container engine — no background process, no root requirement by default. Podman is compatible with most Docker CLI commands and is the default on RHEL and Fedora systems. For teams running rootless container builds in CI or needing to avoid the Docker daemon entirely, Podman is a real option.
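A quick sketch of what that compatibility looks like in practice (nginx:alpine is just a convenient test image):

alias docker=podman                        # many teams get by with a shell alias
podman run --rm -d -p 8080:80 nginx:alpine
podman ps                                  # runs rootless, with no daemon in the background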
Docker Desktop in 2026
Docker Desktop remains the default for Mac and Windows developers, and for good reason — it handles the Linux VM, manages file sharing, integrates with WSL2 on Windows, and just works. On Apple Silicon it runs ARM64 Linux containers natively, which matters when you build images: a docker build on an M-series Mac produces an ARM64 image by default, not x86. If your production environment is x86, you need to be explicit about this.
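The minimal fix when you build on ARM but deploy to x86 is to pin the platform at build time (myimage is a placeholder):

docker build --platform linux/amd64 -t myimage:latest .   # builds an x86_64 image, under emulation on ARM hosts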
The licensing situation: Docker Desktop is free for personal use, education, and small businesses (under 250 employees and under $10M revenue). Larger commercial organizations need a paid subscription. This pushed some teams toward alternatives.
Rancher Desktop is the main free alternative. It uses containerd under the hood, ships with nerdctl (a Docker-compatible CLI for containerd), and runs on Mac, Windows, and Linux. If your organization wants to avoid the Docker Desktop license and you are comfortable with a slightly different CLI, Rancher Desktop works well.
BuildKit: the build system is different now
BuildKit has been the default builder since Docker Engine 23.0, released in early 2023. If you installed Docker in the last couple of years, you are already using it. But plenty of tutorials and Dockerfiles floating around the internet predate BuildKit, and they miss features that meaningfully change how you should build images.
The most immediately useful feature is cache mounts. Package managers like npm, pip, Maven, and Cargo download the same dependencies on every build when you use a naive Dockerfile. Cache mounts let you persist those download caches between builds on the same host, without baking them into the image layer. The --mount=type=cache syntax is clean:
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
That --mount=type=cache line tells BuildKit to mount a persistent cache directory at /root/.npm during the RUN step. The first build is the same speed. Subsequent builds that only change application code skip the full npm ci download and reuse what is already cached. On a project with heavy dependencies, this is the difference between a two-minute build and a fifteen-second one.
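The same pattern applies to the other package managers. A sketch for pip, assuming the build runs as root so the cache lives under /root/.cache/pip:

# syntax=docker/dockerfile:1
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY . .
CMD ["python", "server.py"]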
BuildKit also handles build secrets properly. The old pattern of passing API keys or tokens as build arguments was a security mistake — ARG values end up in the image layer history. BuildKit’s --secret flag mounts the secret as a tmpfs file that is available only during the specific RUN step and never written to any layer:
docker build --secret id=npmrc,src=$HOME/.npmrc -t myimage .
Inside the Dockerfile:
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm install
BuildKit also parallelizes independent build stages automatically. If you have a multi-stage Dockerfile where the test stage and the dependency download stage do not depend on each other, BuildKit runs them concurrently.
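As a sketch, here is what that looks like with two independent stages; the project layout (an npm run build step that emits dist/, an entrypoint at dist/server.js) is hypothetical:

# syntax=docker/dockerfile:1

# "deps" and "assets" never copy from each other, so BuildKit runs them in parallel
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev

FROM node:20-alpine AS assets
WORKDIR /app
COPY . .
RUN --mount=type=cache,target=/root/.npm \
    npm ci && npm run build

# the final stage waits on both, then copies only what it needs
FROM node:20-alpine
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=assets /app/dist ./dist
CMD ["node", "dist/server.js"]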
Multi-platform builds
docker buildx is the interface for multi-platform builds, and it is worth learning even if you only deploy to one architecture — because the architecture you build on locally might not be the one you deploy to.
The command is straightforward:
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myimage:latest \
  --push .
This produces a manifest list — a single image tag that points to the right architecture when pulled. An x86 server gets the amd64 image. An ARM Graviton node on EKS gets the arm64 image. A developer on an M4 MacBook gets the arm64 image locally.
Why this matters in practice: Apple Silicon is now dominant on developer machines, and ARM Graviton is increasingly common in production for cost reasons. If you build with --platform linux/amd64 only, ARM developers hit QEMU emulation on every docker run. If you build only for ARM, your x86 CI nodes or production servers fail to pull the image. Multi-platform builds solve both problems in one step.
Building for a non-native architecture uses QEMU emulation under the hood, which is slow for compilation-heavy workloads. The builds work; they just take longer. For the majority of web application Dockerfiles, it is fast enough.
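One setup detail: depending on your installation, the default builder may refuse a multi-platform build, because the classic docker driver cannot assemble a manifest list. The usual fix is a one-time container-backed builder (the name multiarch is arbitrary):

docker buildx create --name multiarch --driver docker-container --use
docker buildx inspect --bootstrap

# on a bare Linux host, QEMU binfmt handlers may also need registering once
docker run --privileged --rm tonistiigi/binfmt --install all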
Docker Compose v2
docker-compose (hyphenated, a separate Python binary) is deprecated. docker compose (space, no hyphen, built into the Docker CLI) is the current tool. If you are still using the old binary, migrate. The syntax is mostly the same, but the new one is actively maintained and has features the old one does not.
Compose Watch is the most useful addition for development workflows. Instead of mounting your entire source directory into the container (which can cause performance issues on Mac and Windows), you define a watch configuration that syncs specific paths:
services:
  api:
    build: .
    ports:
      - "3000:3000"
    develop:
      watch:
        - action: sync
          path: ./src
          target: /app/src
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
Run docker compose watch and changes to ./src are synced into the running container immediately. Combined with a process manager that watches for file changes inside the container (nodemon, watchexec, etc.), this gives you hot-reload without the overhead of a full volume mount.
The depends_on with condition: service_healthy is also something teams should be using. The old depends_on just waited for the container to start, not for the service inside it to be ready. Health checks fix the race condition where your application starts up before the database is actually accepting connections.
For environment management, the standard pattern is to keep base configuration in compose.yaml and use compose.override.yaml for local development overrides (volume mounts, debug ports, relaxed resource limits). Production deployments use their own override file or a separate Compose file entirely.
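A minimal sketch of the override side; compose.override.yaml is the name Compose merges automatically, while the production file name here is illustrative:

# compose.override.yaml, merged into compose.yaml by a plain docker compose up
services:
  api:
    volumes:
      - ./src:/app/src    # full source mount, acceptable locally
    ports:
      - "9229:9229"       # debugger port, never exposed in production

For production, pass the files explicitly so the local override is skipped: docker compose -f compose.yaml -f compose.prod.yaml up -d.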
OCI images and registries
The OCI (Open Container Initiative) image spec means Docker images are not Docker-proprietary. Any OCI-compliant registry can store them, and any OCI runtime can run them. AWS ECR, GitHub Container Registry (ghcr.io), Google Artifact Registry, and Docker Hub all speak the same protocol.
Image signing has become a real concern for supply chain security. Sigstore and Cosign are the tools the ecosystem has converged on. After pushing an image, you sign it:
cosign sign --key cosign.key ghcr.io/myorg/myimage:latest
Your deployment pipeline can verify the signature before pulling. This is not just theoretical — there have been enough supply chain attacks involving tampered images that signing is moving from optional to expected in security-conscious organizations.
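The verification side is one command, assuming the key pair from the signing step; it exits non-zero on a bad or missing signature, which is what lets a pipeline fail closed:

cosign verify --key cosign.pub ghcr.io/myorg/myimage:latest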
Security basics
A few things that should be in every production Dockerfile:
Do not run as root. Add a USER directive (the snippet below uses Alpine's BusyBox adduser and addgroup; Debian-based images use useradd and groupadd):
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
Use minimal base images. The Ubuntu base image contains hundreds of packages, most of which your application does not need, each of which is an attack surface. Alpine Linux images are much smaller. Distroless images (from Google) contain only the runtime and your application — no shell, no package manager. Chainguard images take this further with hardened, frequently-updated minimal images.
Scan your images. docker scout is built into newer versions of the Docker CLI:
docker scout cves myimage:latest
This scans against known vulnerability databases and shows you which packages have CVEs and how to fix them. Run this in CI. Do not wait until a security team flags an image you shipped six months ago.
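In CI you want the scan to fail the job, not just print a report. Recent docker scout versions support severity filtering and a non-zero exit code on findings; treat the exact flags as something to confirm against your installed version:

docker scout cves --exit-code --only-severity critical,high myimage:latest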
The combination of minimal base images and regular scanning catches most issues before they reach production.
When not to use containers
Not everything belongs in a container. Lambda functions are already isolated and managed — containerizing them adds complexity without benefit unless you need a custom runtime. A static site served from S3 or a CDN does not need a container. A short-lived script that runs on a developer machine and has no deployment requirements does not need a container.
The operational overhead of containers is real: you need a registry, a runtime, a way to manage secrets, health checks, log aggregation, and eventually an orchestrator. For a small internal tool that runs once a week, a virtualenv and a cron job are the right answer. Containers are a deployment and isolation primitive — reach for them when you have something to deploy and isolate, not by default.