When I first started learning Kubernetes, Docker was the container runtime. It felt like Kubernetes and Docker were best friends – Kubernetes was originally designed to work with Docker and only Docker. For years, having Docker as the default runtime was simply how things were.
In this post we’ll take a trip down memory lane, explore why that friendship took a brief pause (with the Dockershim deprecation), and celebrate how CRI-Dockerd brings Docker back into the Kubernetes world.
If you’ve been nostalgic for the “old days” of running `docker ps` on your cluster nodes, this one’s for you! 😄
For those of you studying for the CKA, the Certified Kubernetes Administrator exam, the time spent reading this post will be well worth it. Custom CRIs are now a hot topic in the updated curriculum!
Back in the Day: Docker + Kubernetes = ❤️
When Kubernetes launched (circa 2014), Docker was synonymous with containers. Naturally, Kubernetes used Docker Engine as its runtime – it was the industry standard.
If you set up a cluster with Docker installed, you could simply run `docker ps` on any node and see all the Kubernetes pods represented as Docker containers. This was super useful for learners: you could poke around your cluster with familiar Docker commands and see the Kubernetes components themselves running as containers.
I recall the thrill of recognising those container names and IDs. It made Kubernetes’ inner workings feel less like magic.
Under the hood, though, there was complexity. Kubernetes needed just a simple engine to run containers. Docker was the family wagon – image build pipeline, rich CLI, networking, volumes and more – fantastic for developers, but far more than Kubernetes required.
The Dockershim Era – and Why It Ended
Around 2016 the Kubernetes project introduced the Container Runtime Interface (CRI) so that the kubelet could speak to any runtime through a clean gRPC API¹. Docker, born before CRI, didn’t implement it, so Kubernetes shipped a temporary adapter known as Dockershim.
Dockershim was literally a shim - glue code inside kubelet to translate CRI calls to the Docker Engine API.
kubelet → Dockershim → dockerd → containerd → runc
Dockershim did its job but became technical debt. Meanwhile, Docker engineers were spinning out `containerd` (the core runtime inside Docker) as an independent, CRI‑compliant project and, in 2017, donated containerd to the CNCF².
By 2020, `containerd` had matured and most distributions had switched to it. Kubernetes v1.20 announced the deprecation of Dockershim and v1.24 removed it entirely³. Headlines screamed “Kubernetes drops Docker!” and panic ensued; in reality, only the shim was dropped, and clusters could still happily run images built with Docker.
Removing Dockershim was a good thing:
- Kubernetes shed vendor‑specific code and fully embraced CRI.
- Runtime teams (`containerd`, `CRI-O`, etc.) could innovate independently.
- Docker itself wasn’t the problem – the duct‑tape coupling was.
Another way to look at it: Kubernetes and Docker stopped being room‑mates but stayed good friends.
The Docker engine functionality lives on through `containerd`, and the images you build with Docker continue to run anywhere.
Docker’s Gift to the Community: containerd 🌟
`containerd` is an unsung hero here: a lightweight, high‑level container runtime daemon that handles pulling images, snapshotting layers and launching OCI containers with `runc` (a low‑level container runtime, also donated by Docker) – exactly what Kubernetes needed. Carved out of Docker and donated to the CNCF, it became the de facto runtime for kubeadm, GKE and many others.
You can think of `containerd` as Docker’s engine running inside your nodes, minus the rest of the family wagon. Day‑to‑day, the big change for ops teams was the tooling: you’d use `crictl`, `ctr` or `nerdctl` instead of `docker ps` for low‑level inspection.
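If you’re wondering what those day‑to‑day equivalents look like, here’s a rough translation as shell commands (a hedged sketch: `crictl` talks to whatever CRI socket the node is configured for, and `ctr`/`nerdctl` need the `k8s.io` containerd namespace to see Kubernetes‑managed containers):

```bash
# List running containers (roughly equivalent to `docker ps`)
sudo crictl ps

# List pod sandboxes as the kubelet sees them
sudo crictl pods

# Inspect containerd directly, in the namespace Kubernetes uses
sudo ctr --namespace k8s.io containers list

# nerdctl offers a Docker-like UX on top of containerd
sudo nerdctl --namespace k8s.io ps
```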
Many folks, myself included, missed the Docker UX, however...
Reunited via CRI‑Dockerd 🎉
Remember how I mentioned that Kubernetes and Docker were like old friends who’d been through everything together? Parting ways isn’t easy, but sometimes the road forks.
Like all good friends, though, sometimes you just need a reunion party to rekindle that friendship.
CRI‑Dockerd - Back in town, Dockershim reborn out‑of‑tree for those times when only a best friend will do!
Since v1.24, Mirantis and Docker have co‑maintained this small daemon, which implements the CRI and proxies calls to `dockerd`⁴.
How is this different from the old shim?
| Old Dockershim (in‑tree) | CRI‑Dockerd (out‑of‑tree) |
|---|---|
| Part of the kubelet binary | Separate service + socket |
| Tied to Kubernetes release cycle | Versioned independently |
| Harder to maintain | Maintained by Docker/Mirantis |
| Meant to be temporary | First‑class long‑term option |
CRI‑Dockerd provides the kubelet with a perfectly compliant CRI endpoint while you regain your beloved `docker ps` and `docker exec` workflows.
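Concretely, “providing a CRI endpoint” just means the kubelet gets pointed at cri-dockerd’s socket instead of containerd’s. A minimal sketch (the join address, token and hash are placeholders; the full walkthrough follows below):

```bash
# kubeadm: pass the CRI socket explicitly when initialising or joining nodes
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock
sudo kubeadm join 10.0.0.10:6443 --cri-socket unix:///var/run/cri-dockerd.sock \
    --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Or, for a hand-rolled kubelet, set the runtime endpoint flag:
# --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock
```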
Hands‑On Demo – Docker Runtime via CRI‑Dockerd
Tested on Ubuntu 22.04, Kubernetes v1.33, Docker via official installer
1. Install Docker and enable docker.socket
```
# Install Docker using the get.docker.com script
curl https://get.docker.com/ | sudo bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 20554  100 20554    0     0   164k      0 --:--:-- --:--:-- --:--:--  164k
# Executing docker install script, commit: bedc5d6b3e782a5e50d3d2a870f5e1f1b5a38d5c
+ sh -c 'apt-get -qq update >/dev/null'
+ sh -c 'DEBIAN_FRONTEND=noninteractive apt-get -y -qq install ca-certificates curl >/dev/null'
+ sh -c 'install -m 0755 -d /etc/apt/keyrings'
+ sh -c 'curl -fsSL "https://download.docker.com/linux/debian/gpg" -o /etc/apt/keyrings/docker.asc'
+ sh -c 'chmod a+r /etc/apt/keyrings/docker.asc'
+ sh -c 'echo "deb [arch=arm64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable" > /etc/apt/sources.list.d/docker.list'
+ sh -c 'apt-get -qq update >/dev/null'
+ sh -c 'DEBIAN_FRONTEND=noninteractive apt-get -y -qq install docker-ce docker-ce-cli containerd.io docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin docker-model-plugin >/dev/null'
Extracting templates from packages: 100%

================================================================================
To run Docker as a non-privileged user, consider setting up the Docker daemon in rootless mode for your user:
    dockerd-rootless-setuptool.sh install
Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.

To run the Docker daemon as a fully privileged service, but granting non-root users access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent to root access on the host. Refer to the 'Docker daemon attack surface' documentation for details: https://docs.docker.com/go/attack-surface/
================================================================================

# Enable docker.socket and verify it is running
systemctl enable --now docker.socket
systemctl status docker.socket
● docker.socket - Docker Socket for the API
     Loaded: loaded (/lib/systemd/system/docker.socket; enabled; preset: enabled)
     Active: active (listening) since Thu 2025-07-31 14:23:07 UTC; 3s ago
   Triggers: ● docker.service
     Listen: /run/docker.sock (Stream)
      Tasks: 0 (limit: 9394)
     Memory: 0B
        CPU: 383us
     CGroup: /system.slice/docker.socket
```
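Before moving on, a quick smoke test doesn’t hurt; `hello-world` simply confirms the daemon can pull and run an image:

```bash
# Confirm the daemon responds and note the engine/containerd versions
sudo docker version

# Pull and run a throwaway container as a smoke test
sudo docker run --rm hello-world
```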
2. Install CRI-Dockerd
```bash
# Fetch the latest release
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.18/cri-dockerd-0.3.18.amd64.tgz
sudo tar -C /usr/local/bin -xzvf cri-dockerd-0.3.18.amd64.tgz --strip-components=1 cri-dockerd/cri-dockerd

# Setup Systemd units
sudo wget -P /etc/systemd/system \
  https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
sudo wget -P /etc/systemd/system \
  https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket

# Fix binary path if needed
sudo sed -i 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service

# Reload systemd daemon
sudo systemctl daemon-reload
sudo systemctl enable --now cri-docker.service cri-docker.socket

# Check cri-docker service statuses
sudo systemctl status cri-docker.service cri-docker.socket
```
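You can optionally sanity‑check the new endpoint before Kubernetes is wired in. A small sketch, assuming `crictl` is available (it comes from the cri-tools package, which the kubeadm packages pull in during the next step):

```bash
# Confirm the socket exists
ls -l /var/run/cri-dockerd.sock

# Ask the runtime for its version over CRI
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
```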
A CRI socket now listens at `/var/run/cri-dockerd.sock`
3. Install the Kubernetes packages for kubeadm
```bash
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

# Optional, when running in a container where swap is enabled
echo KUBELET_EXTRA_ARGS=--fail-swap-on=false | sudo tee /etc/default/kubelet
```
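A quick check that the tooling landed and is pinned (exact versions will depend on the v1.33 repository’s latest patch release):

```bash
# Confirm the installed versions
kubeadm version -o short
kubelet --version
kubectl version --client

# Confirm the packages are held back from upgrades
apt-mark showhold
```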
4. Initialise Kubeadm with cri-dockerd
```
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.33.3
[preflight] Running pre-flight checks
	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.19.0.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [control-plane localhost] and IPs [172.19.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [control-plane localhost] and IPs [172.19.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 502.748208ms
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://172.19.0.2:6443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is healthy after 1.595322668s
[control-plane-check] kube-scheduler is healthy after 2.223175126s
[control-plane-check] kube-apiserver is healthy after 3.50681521s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node control-plane as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node control-plane as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 2a65ho.xy9pffiyo4uwcj0t
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.19.0.2:6443 --token 2a65ho.xy9pffiyo4uwcj0t \
	--discovery-token-ca-cert-hash sha256:418308d628d98be8806ebf91817af33e3b32f646b9e67664469e3716279e4272
```
Follow kubeadm’s post‑install hints (`export KUBECONFIG`, install a CNI, etc) – for example, the Flannel sketch below.
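Since we initialised with `--pod-network-cidr=10.244.0.0/16` (Flannel’s default), Flannel is a natural fit. A hedged sketch, assuming the upstream manifest URL below is still current:

```bash
# Set up kubectl access for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy Flannel as the pod network (matches the 10.244.0.0/16 CIDR used above)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Watch the node transition to Ready once the CNI is up
kubectl get nodes -w
```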
5. Marvel at docker ps
```
docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED              STATUS              PORTS     NAMES
8c0703bd65bc   738e99dbd732                 "/usr/local/bin/kube…"   About a minute ago   Up About a minute             k8s_kube-proxy_kube-proxy-kmfvh_kube-system_59f8bf0d-c580-4f8a-b15c-8d1c850c0211_0
c2103c8e205d   registry.k8s.io/pause:3.10   "/pause"                 About a minute ago   Up About a minute             k8s_POD_kube-proxy-kmfvh_kube-system_59f8bf0d-c580-4f8a-b15c-8d1c850c0211_0
ad529fceb2c9   c0425f3fe3fb                 "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes                  k8s_kube-apiserver_kube-apiserver-control-plane_kube-system_20de8320fcd112c1968a24fc55d743c2_0
30e32aac59f6   c03972dff86b                 "kube-scheduler --au…"   2 minutes ago        Up 2 minutes                  k8s_kube-scheduler_kube-scheduler-control-plane_kube-system_ed410428e1d867eb3a5afe794e5b4a7c_0
94b5da54a4f1   ef439b94d49d                 "kube-controller-man…"   2 minutes ago        Up 2 minutes                  k8s_kube-controller-manager_kube-controller-manager-control-plane_kube-system_ca364074bcb635cd9e7c00ce9832ff41_0
21b8b1f5ee6c   31747a36ce71                 "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                  k8s_etcd_etcd-control-plane_kube-system_7b9f3c689147788f58e3b000a94a751a_0
aa416d83cd0c   registry.k8s.io/pause:3.10   "/pause"                 2 minutes ago        Up 2 minutes                  k8s_POD_etcd-control-plane_kube-system_7b9f3c689147788f58e3b000a94a751a_0
7054388608d3   registry.k8s.io/pause:3.10   "/pause"                 2 minutes ago        Up 2 minutes                  k8s_POD_kube-scheduler-control-plane_kube-system_ed410428e1d867eb3a5afe794e5b4a7c_0
dd048d9a1756   registry.k8s.io/pause:3.10   "/pause"                 2 minutes ago        Up 2 minutes                  k8s_POD_kube-controller-manager-control-plane_kube-system_ca364074bcb635cd9e7c00ce9832ff41_0
34d1f82a3f10   registry.k8s.io/pause:3.10   "/pause"                 2 minutes ago        Up 2 minutes                  k8s_POD_kube-apiserver-control-plane_kube-system_20de8320fcd112c1968a24fc55d743c2_0
```
There they are – control‑plane pods happily running as Docker containers. Feel free to `docker logs` or `docker exec` (carefully!) for nostalgic debugging.
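For example, building on the output above (a hedged sketch: the name filters just grab the matching container IDs, and the upstream etcd image is assumed to ship `etcdctl`, as it normally does):

```bash
# Tail the API server logs straight from Docker
docker logs --tail 20 "$(docker ps -q --filter name=k8s_kube-apiserver)"

# Run a one-off, read-only command inside the etcd container
docker exec "$(docker ps -q --filter name=k8s_etcd)" etcdctl version
```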
Performance note: CRI‑Dockerd adds one small hop (kubelet → cri-dockerd → dockerd) versus `containerd`. Overhead is minimal for most setups, but at scale you may prefer `containerd` directly.
Final Thoughts … & an Exam Tip 😉
CRI‑Dockerd gives us the best of both worlds: Kubernetes’ clean CRI architecture and the familiar Docker experience. If you’re sitting the CKA after the recent curriculum update, make sure you understand the runtime landscape: `containerd` is the default, but Docker via CRI‑Dockerd is absolutely fair game!
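One quick way to confirm which runtime a node is using – handy in the exam and in real clusters alike – is the CONTAINER-RUNTIME column of `kubectl get nodes -o wide`:

```bash
# The CONTAINER-RUNTIME column reveals which runtime each kubelet talks to
kubectl get nodes -o wide

# On a cri-dockerd node you would expect something like:
#   NAME            ...   CONTAINER-RUNTIME
#   control-plane   ...   docker://<docker engine version>
# whereas a containerd-backed node reports containerd://<version>
```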
Technology moves fast, but good ideas have a habit of circling back. Docker was our first container love, and through CRI‑Dockerd it’s still with us - just wearing a smarter outfit.
Happy Containering! 🚀 – James Spurin