An Introduction to CNI
One key component of any Kubernetes deployment is networking. This lesson introduces the Container Network Interface (CNI), which is what Kubernetes uses to enable networking.
Hi, my name is Olive Power and I’m a cloud native architect here at VMware, and in this KubeAcademy lesson we’re going to give an introduction to CNI. What we’re going to cover in this session is, first, a high-level look at networking in Kubernetes. Then we’re going to look at CNIs: what they are and why we need them. And then we’re going to finish out with some popular networking solutions. So while technologies like Kubernetes facilitate running containers en masse across multiple machines, it’s still the case that these containers need to be networked. The first thing to point out when talking about networking in Kubernetes is that the container is not the first-class construct. In Kubernetes, containers exist in the form of pods. A pod is one or more containers that are always co-located and co-scheduled, and run in a shared context in terms of Linux namespaces, cgroups, etc.
So as far as networking in Kubernetes is concerned, the pod is the network endpoint. And therefore networking in Kubernetes becomes all about connectivity between pods, whether those pods are located on the same node in your cluster or on different nodes. So networking functionality in Kubernetes broadly addresses the following topics: cross-node pod-to-pod communication, service discovery, service exposure for external access, network security, and high availability. And while some of those points are covered in other KubeAcademy lessons, today we’re just going to talk a bit more about cross-node pod-to-pod communication. Kubernetes implements a network model with the following connectivity rules: any pod in the cluster must be able to communicate with any other pod without any network address translation. And the same for a node: any node must be able to communicate with any pod in the cluster, again without any network address translation.
And how is this networking model implemented in Kubernetes? Well, Kubernetes itself doesn’t really care. Networking is complex and can be implemented in many ways, and so abstracting this functionality away from the Kubernetes platform itself allows these networking solutions to evolve separately from Kubernetes. But in the same vein, Kubernetes needs to be able to consume these networking solutions, and it does this via CNI. So let’s look a bit at what CNI is. CNI stands for Container Network Interface, and it’s a specification for configuring network interfaces in Linux containers. It is concerned mainly with connecting containers to networks (the ADD operation) and disconnecting them (the DEL operation). So it provides a specification for this. Why do we need something like CNI? Well, networking can be highly environment-specific and, as I mentioned before, it’s complex. There can be lots of different problems and lots of different use cases, which give rise to lots of different projects seeking to solve those networking challenges.
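To make that a bit more concrete, here is a sketch of a CNI network configuration file, the JSON document the spec defines for describing a network. It uses the reference `bridge` and `host-local` plugins from the CNI project; the network name `mynet` and the subnet are made-up values for illustration:

```json
{
  "cniVersion": "1.0.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}
```

On a typical node such files live in a directory like `/etc/cni/net.d/`, and the container runtime invokes the named plugin binary (here `bridge`) with this configuration whenever a pod is created or deleted on that node.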
So there’s potential overlap and duplication of work between those different projects. In that respect, it makes sense to implement a standard and have a common interface that these different networking solutions and projects can adhere to and consume. That makes a very strong case for having something like CNI, and it’s why it’s proved so successful for networking in Kubernetes. If we take a little bit of a closer look at CNI, we see that the CNI project, which is a CNCF project by the way, has three parts. There’s the specification itself for connecting container runtimes and the networking solution. There’s a code library (libcni) that helps you build a CNI plugin. And then there’s a command-line tool (cnitool) that helps you run your CNI plugins.
So the CNI project has everything you need to build and run your container network plugin, and that’s what a lot of folks have done. There are a lot of CNI plugins out there, and probably more coming all the time, because there are lots of different use cases and lots of different networking problems that these plugins are looking to address, and they implement their solutions in different ways. It’s worth knowing that you can have more than one CNI plugin running in your Kubernetes cluster, and they can be doing different things. There are a lot of plugins out there that are fairly popular and have widespread use, and I’ve listed some of them here: NSX-T, Calico, Flannel, Weave, Cilium, and Canal. They’ve all got widespread use in Kubernetes clusters running in production today.
So if we look at that from a diagrammatic point of view, Kubernetes doesn’t really care how networking is implemented in the cluster. It is only concerned with making sure that all pods can communicate with each other. It is CNI that abstracts away that networking functionality through a common standard that network plugins can implement to fulfill the networking requirements of your Kubernetes cluster. And in that diagram, I’m pointing out that you could have more than one CNI plugin running, and they could be doing two very different things for you within your Kubernetes cluster in terms of networking.
One could actually be doing the routing: issuing IP addresses and routing traffic to the correct pod, to the correct endpoint. And the other one could be implementing something around security, like network security policies that prevent certain traffic from reaching certain destinations. So in summary, we talked a little bit about networking in Kubernetes and how it is not really fulfilled by Kubernetes itself, but abstracted away using a specification and network plugins. CNI defines the specification for networking, and it is the network plugins that actually implement the solution. Thank you for listening today, and I hope you join us on another KubeAcademy video soon. Goodbye.
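As an illustration of that security side, here is a sketch of a Kubernetes NetworkPolicy of the kind enforced by plugins such as Calico or Cilium. The namespace, labels, and port are hypothetical values chosen for the example:

```yaml
# Allow ingress to pods labeled app: db only from pods labeled app: api;
# all other ingress traffic to those pods is denied by this policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: demo            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: db                # the policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api       # only traffic from api pods is allowed
      ports:
        - protocol: TCP
          port: 5432         # e.g. a database port
```

Note that the NetworkPolicy API only declares intent; without a CNI plugin that implements policy enforcement, a NetworkPolicy object has no effect.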