Custom Tools for Kubernetes - Idle
Welcome to a new series on Custom Tools I've built to make Kubernetes learning, teaching, and platform experimentation simpler and more predictable. Over years of teaching Kubernetes, I've found that the best learning resources use tiny, well-behaved containers that do exactly what you expect - no more, no less. This series spotlights some of the custom tools I've created for this purpose: why they exist, and how to use them effectively across beginner and expert scenarios.
This first post introduces idle: a purpose-built, ultra-minimal container whose only job is to stay running politely. It's ideal for demonstrating pod lifecycles, scheduling, disruption budgets, taints/tolerations, and anything else where you don't need the noise of a "real app."
Idle: the tiniest container that supports big Kubernetes ideas
When you're learning Kubernetes (or teaching it!), you often need a container that just stays alive. No shell. No noisy logs. No hidden processes. No bloat. Just a steady PID you can schedule, probe, evict, drain, and generally poke for scientific/learning purposes.
Meet idle - a purpose-built, ultra-minimal container whose only job is to… wait. It's perfect for demos, labs, and production-like experiments where application logic would only distract from the thing you're trying to learn or demonstrate. Think of it as a clean test double for "a container exists here."
Why a "sleep" image?
You could run busybox sleep infinity. You could even make use of Kubernetes' pause image... But:
- busybox/alpine: bring extra binaries, libc, package managers, shells, etc. These are great tools, but they're just not minimal.
- pause: is Kubernetes' infra/sandbox container that owns pod namespaces; it has a specific job in Kubernetes internals. It's not meant to be your app container. (If you're curious, I have a whole post on why pause is the pod's "hidden hero".)
idle is a tiny, scratch-based container compiled from minimalist C, with graceful SIGTERM/SIGINT handling and multi-arch images. It's designed specifically to do nothing well - letting you focus on lifecycle, scheduling, and platform behaviours, not app or container image quirks.
TL;DR quick start
Docker (instant)
docker run --rm spurin/idle:latest
You'll get a container that simply sits there until you docker stop it or the runtime sends it a signal. The binary exits cleanly on SIGTERM/SIGINT (i.e. press Ctrl-C).
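You can verify the graceful shutdown yourself: run it detached and time the stop. Because idle traps SIGTERM, docker stop should return almost instantly instead of waiting out Docker's default 10-second kill timeout (container name here is just for illustration):

# Start detached, then time how long the stop takes
docker run -d --name idle-demo spurin/idle:latest
time docker stop idle-demo
docker rm idle-demo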
Kubernetes (One-Liner)
kubectl run idle --image=spurin/idle:latest --restart=Never && kubectl get pod idle -w
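The same quick shutdown is visible in Kubernetes: when you're done, delete the pod and note how fast it disappears, rather than lingering towards the default 30-second grace period as a SIGTERM-ignoring container would:

kubectl delete pod idle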
Kubernetes (Deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idle
spec:
  replicas: 3
  selector:
    matchLabels:
      app: idle
  template:
    metadata:
      labels:
        app: idle
    spec:
      containers:
      - name: idle
        image: spurin/idle:latest
Save it as idle-deploy.yaml, then apply it with:
kubectl apply -f idle-deploy.yaml
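Then confirm the rollout and see where the replicas landed:

kubectl rollout status deployment/idle
kubectl get pods -l app=idle -o wide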
What makes idle special?
- Tiny & predictable - built FROM scratch, statically linked, and aggressively stripped. No shell; no surprises.
- Well-behaved lifecycle - traps SIGTERM/SIGINT and exits fast and cleanly, which is perfect for demonstrating termination and grace periods.
- Multi-arch - prebuilt for common platforms (arm64, amd64/v2, riscv64, ppc64le, s390x, and more), so your mixed clusters don't fall over.
- Purposely boring - ideal when the platform is the lesson.
Exam tip (CKA/CKAD): When you're practising node drains, PDBs, taints/tolerations, or topology spread, a deterministic container like spurin/idle avoids red herrings like slow shutdowns.
Hands-on labs (from zero to advanced)
Below are short, focused labs you can drop straight into your test cluster. Each one isolates a Kubernetes concept using idle.
PodDisruptionBudget + Node Drain
Demonstrates eviction safeguards without app noise. Save the following as idle-pdb.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idle-pdb-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: idle-pdb
  template:
    metadata:
      labels:
        app: idle-pdb
    spec:
      containers:
      - name: idle
        image: spurin/idle:latest
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: idle-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: idle-pdb
kubectl apply -f idle-pdb.yaml
kubectl drain <a-node> --ignore-daemonsets --delete-emptydir-data
# Watch that only 1 pod is evicted at a time; PDB keeps >=2 available.
kubectl uncordon <a-node>
Example:
% kubectl get nodes
NAME                    STATUS   ROLES           AGE     VERSION
desktop-control-plane   Ready    control-plane   7m30s   v1.33.2
desktop-worker          Ready    <none>          7m20s   v1.33.2
desktop-worker2         Ready    <none>          7m20s   v1.33.2

% kubectl apply -f idle-pdb.yaml
deployment.apps/idle-pdb-demo created
poddisruptionbudget.policy/idle-pdb created

% kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
idle-pdb-demo-5fd65667d9-285ld   1/1     Running   0          7s    10.244.1.14   desktop-worker    <none>           <none>
idle-pdb-demo-5fd65667d9-rnpc5   1/1     Running   0          7s    10.244.2.7    desktop-worker2   <none>           <none>
idle-pdb-demo-5fd65667d9-zq5xn   1/1     Running   0          7s    10.244.1.15   desktop-worker    <none>           <none>

% kubectl drain desktop-worker --ignore-daemonsets --delete-emptydir-data
node/desktop-worker cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-vwbxw, kube-system/kube-proxy-49cnj
evicting pod default/idle-pdb-demo-5fd65667d9-zq5xn
evicting pod default/idle-pdb-demo-5fd65667d9-285ld
error when evicting pods/"idle-pdb-demo-5fd65667d9-285ld" -n "default" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
pod/idle-pdb-demo-5fd65667d9-zq5xn evicted
evicting pod default/idle-pdb-demo-5fd65667d9-285ld
pod/idle-pdb-demo-5fd65667d9-285ld evicted
node/desktop-worker drained

% kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
idle-pdb-demo-5fd65667d9-mq26b   1/1     Running   0          14s   10.244.2.8    desktop-worker2   <none>           <none>
idle-pdb-demo-5fd65667d9-rnpc5   1/1     Running   0          39s   10.244.2.7    desktop-worker2   <none>           <none>
idle-pdb-demo-5fd65667d9-vkvkn   1/1     Running   0          8s    10.244.2.9    desktop-worker2   <none>           <none>

% kubectl uncordon desktop-worker
node/desktop-worker uncordoned
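While the drain runs in another terminal, it's also worth watching the budget itself; the ALLOWED DISRUPTIONS column should drop to 0 whenever only two pods remain:

kubectl get pdb idle-pdb -w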
TopologySpreadConstraints (Anti-Crowding your Pods)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idle-spread
  labels:
    app: idle-spread
spec:
  replicas: 3
  selector:
    matchLabels:
      app: idle-spread
  template:
    metadata:
      labels:
        app: idle-spread
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: idle-spread
      containers:
      - name: idle
        image: spurin/idle:latest
Save the manifest as idle-topology.yaml and apply it, then check the distribution with:
kubectl get pods -o wide -l app=idle-spread
You should see even placement across nodes.
Example:
# In my cluster, the control plane has a NoSchedule taint - removing it first:
% kubectl taint node desktop-control-plane node-role.kubernetes.io/control-plane:NoSchedule-
node/desktop-control-plane untainted

% kubectl apply -f idle-topology.yaml
deployment.apps/idle-spread created

% kubectl get pods -o wide -l app=idle-spread
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE                    NOMINATED NODE   READINESS GATES
idle-spread-c5747959d-cbdx5   1/1     Running   0          6s    10.244.1.17   desktop-worker          <none>           <none>
idle-spread-c5747959d-hnb6v   1/1     Running   0          6s    10.244.0.6    desktop-control-plane   <none>           <none>
idle-spread-c5747959d-w4gd7   1/1     Running   0          6s    10.244.2.11   desktop-worker2         <none>           <none>
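Since I removed the control-plane taint for this demo, it's worth putting it back afterwards so regular workloads don't land on the control plane:

kubectl taint node desktop-control-plane node-role.kubernetes.io/control-plane:NoSchedule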
Taints & Tolerations (Scheduling Controls)
# Label and taint a node:
kubectl label node <node> dedicated=training
kubectl taint node <node> dedicated=training:NoSchedule
# A pod that tolerates it:
apiVersion: v1
kind: Pod
metadata:
  name: idle-tolerate
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "training"
    effect: "NoSchedule"
  nodeSelector:
    dedicated: "training"
  containers:
  - name: idle
    image: spurin/idle:latest
Apply with:
kubectl apply -f idle-tolerate.yaml
You should see it scheduled on the tainted node - the toleration lets the pod in, and the nodeSelector ensures it lands there.
Example:
% kubectl label node/desktop-worker2 dedicated=training
node/desktop-worker2 labeled

% kubectl taint node desktop-worker2 dedicated=training:NoSchedule
node/desktop-worker2 tainted

% kubectl apply -f idle-tolerate.yaml
pod/idle-tolerate created

% kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE   IP            NODE              NOMINATED NODE   READINESS GATES
idle-tolerate   1/1     Running   0          7s    10.244.2.12   desktop-worker2   <none>           <none>
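To reset the node afterwards, remove the taint and the label (the trailing - means "remove" in both commands):

kubectl taint node desktop-worker2 dedicated=training:NoSchedule-
kubectl label node desktop-worker2 dedicated-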
Under the hood (for the curious)
- Minimalist C: a tight loop that sleeps and traps SIGTERM/SIGINT for graceful exit (see the sketch after this list).
- Multi-stage build: compile with musl/gcc, statically link, and copy only the binary into a scratch image.
- Multi-arch: published images cover common CPU targets so mixed clusters "just work."
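The real source lives in the repo (linked below), but the overall shape of such a program is roughly this - a minimal sketch, not the literal code:

#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Flag flipped by the handler; volatile sig_atomic_t is the
   async-signal-safe way to share state with a signal handler. */
static volatile sig_atomic_t stop = 0;

static void handle_signal(int sig)
{
    (void)sig;  /* which signal doesn't matter; both mean "exit" */
    stop = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handle_signal;

    /* SIGTERM: docker stop / kubelet termination; SIGINT: Ctrl-C */
    sigaction(SIGTERM, &sa, NULL);
    sigaction(SIGINT, &sa, NULL);

    /* pause() blocks in the kernel until a signal handler runs,
       so the process burns effectively zero CPU while idle. */
    while (!stop)
        pause();

    return 0;  /* clean exit = graceful termination */
}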
If you want to build it yourself, the repo includes a simple build script and Dockerfile you can study or adapt:
git clone https://github.com/spurin/idle
cd idle

# Build for your local arch:
docker build -t my/idle:dev .

# Run:
docker run --rm my/idle:dev
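The published images are already multi-arch, but if you want to experiment with that yourself, Docker's buildx can target several platforms in one invocation. A sketch, assuming buildx and QEMU emulation are set up and <your-registry> is a registry you can push to (multi-platform results generally need --push, since the local image store holds a single platform):

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t <your-registry>/idle:dev \
  --push .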
Classroom & lab patterns I love with idle
- Clean scheduling puzzles: topology spreads, affinities, node selectors - all signal, no noise.
- Disruptions on rails: practise PDBs and drains without side effects from app restarts.
- Image cache hygiene: create a DaemonSet and pre-warm a cluster with the image (see the example after this list).
- Lifecycle demos: termination grace, SIGTERM, restarts - all reproducible.
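For the image pre-warm pattern, a DaemonSet along these lines runs one idle pod per node, forcing every node to pull (and therefore cache) the image - the name and labels here are my own illustration:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: idle-prewarm
spec:
  selector:
    matchLabels:
      app: idle-prewarm
  template:
    metadata:
      labels:
        app: idle-prewarm
    spec:
      containers:
      - name: idle
        image: spurin/idle:latest

Delete the DaemonSet afterwards and the cached image stays on each node until kubelet's image garbage collection reclaims it.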
FAQ
Is this a replacement for Kubernetes' pause?
No. Pause is the pod infra container that owns namespaces; kubelet manages it. idle is for your containers/tests. (If "pause vs everything else" is new to you, start with my primer on the pause container.)
Does idle have a shell?
No - it uses scratch with a single binary. That's the point.
Where is the image?
Docker Hub: spurin/idle:latest. Source and examples: https://github.com/spurin/idle.
Further reading
- idle repository - features, YAML, and multi-arch notes: https://github.com/spurin/idle
- Understanding the Kubernetes Pause Container - how pods share namespaces, and why pause exists: https://diveinto.com/blog/kubernetes-pause-container
Closing thoughts
When the lesson is Kubernetes itself - scheduling, disruption, readiness, topology - a tiny, polite container is your best friend. idle gives you the cleanest possible baseline to teach and to learn, whether that's your first kubectl run or a deep-dive into eviction signals on a tainted node.
Happy not doing much - and learning while you do. 💤