Everything is fine. Docker images will still work on future Kubernetes versions. Read on if you’d like to know why this change won’t actually affect you.
Kubernetes recently announced that it’s deprecating Docker as a container runtime after v1.20. But this announcement is more of a viral headline than a real revolution.
It creates an impression of “What?! Docker is discontinued for Kubernetes?! But isn’t that what it’s built on!? Shocker!” But in reality, there’s nothing to it – especially if you don’t manage Kubernetes clusters on your own. Those who build Docker images can keep on doing so going forward, because Kubernetes will run those images as usual.
So, what’s the point of Kubernetes deprecating Docker? What does this change mean for end users and developers? Let’s start with the basics.
Docker and Kubernetes
Here’s how Kubernetes explained the matter on their blog:
“Inside of your Kubernetes cluster, there’s a thing called a container runtime that’s responsible for pulling and running your container images. Docker is a popular choice for that runtime (other common options include containerd and CRI-O), but Docker was not designed to be embedded inside Kubernetes, and that causes a problem.
You see, the thing we call “Docker” isn’t actually one thing — it’s an entire tech stack, and one part of it is a thing called “containerd,” which is a high-level container runtime by itself. Docker is cool and useful because it has a lot of UX enhancements that make it really easy for humans to interact with while we’re doing development work, but those UX enhancements aren’t necessary for Kubernetes, because it isn’t a human.
As a result of this human-friendly abstraction layer, your Kubernetes cluster has to use another tool called Dockershim to get at what it really needs, which is containerd. That’s not great, because it gives us another thing that has to be maintained and can possibly break.”
To help you understand it, here’s a visualization of what’s going to happen after Kubernetes v1.20.
Source: CAST AI
As you can see, this change won’t break anything and you’re safe – especially if you’re using a managed service like CAST AI.
Why change it now?
This change was a long time coming. It started back in 2015, when the development of containerd began. Containerd was designed to be used by Docker and Kubernetes, as well as any other container platform looking to abstract away syscalls or OS-specific functionality to run containers on Linux, Windows, Solaris, or other operating systems. For Kubernetes, this would remove the unnecessary Docker API layers sitting between kubelet and the actual container runtime.
To have a unified API for different container runtimes, the Kubernetes v1.5 release introduced the Container Runtime Interface (CRI). CRI is a plugin interface that enables kubelet (the agent that manages pods on each Kubernetes node) to use a wide variety of container runtimes without the need to recompile. Kubelet communicates with the container runtime (or a CRI shim for the runtime) over Unix sockets using the gRPC framework, where kubelet acts as the client and the CRI shim as the server.
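To make that client/server arrangement concrete, here’s a toy sketch of the transport pattern in Python: a “runtime” process serving on a Unix socket and a “kubelet” client querying it. Note that the real CRI speaks gRPC with protobuf-defined messages; the plain-text exchange and the `v1alpha2` version string below are only illustrative, not the actual protocol.

```python
import os
import socket
import tempfile
import threading

# Toy sketch of the kubelet <-> CRI shim pattern: a "runtime" serving on a
# Unix socket and a "kubelet" client querying it. The real CRI speaks gRPC
# with protobuf messages; this plain-text exchange only illustrates the
# transport pattern, not the actual protocol.
SOCKET_PATH = os.path.join(tempfile.mkdtemp(), "cri.sock")

server_sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server_sock.bind(SOCKET_PATH)
server_sock.listen(1)

def runtime_server():
    # The CRI shim side: answer a single version request, then exit.
    conn, _ = server_sock.accept()
    with conn:
        if conn.recv(1024) == b"Version":
            conn.sendall(b"v1alpha2")

thread = threading.Thread(target=runtime_server)
thread.start()

# The kubelet side: connect to the runtime's socket and ask for its version.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    client.connect(SOCKET_PATH)
    client.sendall(b"Version")
    response = client.recv(1024).decode()

thread.join()
server_sock.close()
print(response)  # v1alpha2
```

Because the interface is just a socket endpoint, kubelet doesn’t care which runtime is on the other end – which is exactly what lets containerd or CRI-O slot in where Docker used to be.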
All of this looks fine and dandy – apart from one problem: the Docker API is incompatible with the CRI. That’s why Kubernetes needed the Dockershim adapter in the first place, and why Docker support now needs to go.
So, what’s next?
Starting with v1.20, kubelet will print a deprecation warning when Docker is used as the runtime. Docker runtime support is planned for removal in the Kubernetes v1.22 release (late 2021). This means that Kubernetes clusters will have to be configured to use one of the other CRI-compliant container runtimes, like containerd or CRI-O. By using CAST AI managed Kubernetes clusters, you won’t need to worry about any of this maintenance under the hood.
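On a self-managed node, the migration mostly comes down to pointing kubelet at the new runtime’s CRI socket. As a rough sketch for a kubeadm-provisioned node moving to containerd (the file path and socket path below reflect common v1.20-era defaults – check your distribution’s documentation before relying on them):

```
# /var/lib/kubelet/kubeadm-flags.env (kubeadm-based node; paths may vary)
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

After updating the flags and restarting kubelet, the node talks to containerd directly, with no Dockershim in between.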
Remember that the image that Docker produces isn’t a Docker-specific image. It’s an OCI (Open Container Initiative) image. Both containerd and CRI-O know how to pull those images and run them. This means that none of your current development flows that include building Docker images for running on Kubernetes need to change.
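You can see this runtime-neutrality in the image format itself: an OCI image manifest is just JSON with well-known media types, which any OCI-compliant runtime can interpret. Here’s a minimal sketch of one (the digests and sizes are made-up placeholders, not a real image):

```python
import json

# A minimal OCI image manifest, following the shape defined by the OCI Image
# Format specification. Nothing in it is Docker-specific: containerd, CRI-O,
# and Docker all consume the same structure. The digests and sizes below are
# made-up placeholders.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:aaaa...",  # placeholder digest
        "size": 7023,
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:bbbb...",  # placeholder digest
            "size": 32654,
        }
    ],
}

# It round-trips through plain JSON, like any runtime-agnostic format.
parsed = json.loads(json.dumps(manifest))
print(parsed["mediaType"])  # application/vnd.oci.image.manifest.v1+json
```

The `mediaType` values are what tell a runtime how to interpret each blob, so swapping Docker for containerd or CRI-O changes nothing about the images themselves.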
One thing to note: if you rely on the underlying Docker socket (/var/run/docker.sock) as part of a workflow within your cluster today, moving to a different runtime will break your ability to use it. This pattern is often called Docker-in-Docker. Fortunately, there are plenty of options for this specific use case, including kaniko, img, and buildah.
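For instance, a kaniko build runs as an ordinary pod and needs no Docker socket mount at all. A rough sketch of such a pod (the pod name, Git repository, and registry destination are placeholders you’d replace with your own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build                # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - "--dockerfile=Dockerfile"
        - "--context=git://github.com/example/repo.git"    # placeholder repo
        - "--destination=registry.example.com/app:latest"  # placeholder registry
```

Since kaniko builds the image entirely in userspace inside the pod, it works the same whether the node runs Docker, containerd, or CRI-O.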
In reality, this change only creates overhead for those who manage or administer clusters – as we do at CAST AI. But we accept the challenge and see it as no big deal. There are several interesting options in the container runtime space, some of which even let you switch container runtimes on a running cluster. We’ve started investigating which one would be most beneficial for our users. Keep an eye on this blog to hear more about what we discover!