KubeAcademy by VMware
CNI Providers

In this lesson, we will look at several of the most popular CNI providers and some of the unique qualities each one presents.

Eric Smalling

Docker Captain and Senior Developer Advocate at Snyk

Eric is a Docker Captain and Senior Developer Advocate at Snyk.io where he helps developers secure the applications, containers and Kubernetes platforms they build and deploy to.


Welcome back to the KubeAcademy course Networking in Kubernetes. I'm Eric Smalling, a Staff Field Engineer at VMware. In this lesson, we'll discuss the differences between a few of the more popular open-source Kubernetes Container Network Interface, or CNI, provider plugins. In the time we have, I'll be focusing on these two provider plugins: Calico, an extremely flexible and possibly the most popular plugin deployed today, and Cilium, a unique implementation that uses some advanced kernel features.

I'll also touch on a few of the other plugins and the aspects that make them unique at the end. By the end of this lesson, you'll understand how the CNI abstraction layer allows for a wide variety of networking implementations while isolating your applications from their details. You'll also get a better understanding of these plugins and the differences between them, and be better able to investigate which CNI plugin, whether these or others, is the best fit for your application and network teams' requirements.

As we discussed in our previous lessons, the CNI provides a standard way for networking implementation details to be abstracted away from the core Kubernetes code base. The CNI specification is actually very simple. We have ADD, which adds a container to the network, and DEL, which deletes a container from the network. We also have CHECK, to verify that things are working as expected, and a way to get a version report. The full spec is available here, and I'll link that below the video. These, along with the arguments passed, are the entirety of the contract between CNI and the Kubernetes kubelet. The two main operations, ADD and DEL, are probably the most interesting. When a pod's created, ADD gets called, and when it's removed, the DEL command gets called, pretty straightforward. The plugin's job is to do the configuration steps necessary for these ADD and DEL tasks using whatever strategies or technologies it's designed to use, which allows the kubelet to remain completely decoupled from that implementation. At a very high level though, here are the basics of how the kubelet uses any CNI plugin to add and remove pods.
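Before we walk through that flow, it may help to see the contract in concrete form. Per the CNI spec, the runtime invokes the plugin binary with the operation and pod details in CNI_* environment variables and the network configuration as JSON on stdin; the plugin name, paths, and values in this sketch are made up for illustration.

```shell
# Hypothetical ADD invocation of a CNI plugin binary (names and values illustrative).
CNI_COMMAND=ADD \
CNI_CONTAINERID=abc123 \
CNI_NETNS=/var/run/netns/pod-ns \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/example-plugin <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "example-plugin",
  "ipam": { "type": "host-local", "subnet": "192.168.0.0/24" }
}
EOF
# The plugin prints a JSON result to stdout describing the interfaces it
# created and the IP it assigned, which is how the pod's IP gets reported back.
```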

When the kubelet decides to create a new pod on a given node, one of the first things it does is create a network namespace for the new pod. As you might remember from the pod networking lesson, all containers in a pod share a network namespace. Next, the kubelet calls ADD on the CNI plugin. The CNI plugin then determines any specific configuration needed for the new pod by querying the Kubernetes API. The CNI plugin will then get an IP address from either its own IPAM or possibly an external IPAM provider. And finally, it will create a veth adapter in the pod network namespace and its pair in the root namespace. It'll set the IP address and any needed routing settings, and it'll also do any other configuration for the specific routing method used, as needed. Finally, the CNI plugin will respond, letting the kubelet know that it's done and what it created; it'll also return the IP address assigned to the pod.
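If you're curious what those steps look like at the Linux level, here's a minimal sketch using plain iproute2 commands, roughly what a simple plugin does on ADD. All names and addresses are invented, and real plugins do considerably more bookkeeping.

```shell
# Rough manual equivalent of a CNI ADD (illustrative names and IPs).
ip netns add pod-ns                                  # pod's network namespace (the runtime usually creates this)
ip link add veth-host type veth peer name veth-pod   # create the veth pair
ip link set veth-pod netns pod-ns                    # move one end into the pod namespace
ip netns exec pod-ns ip addr add 192.168.88.2/32 dev veth-pod   # assign the IPAM-provided address
ip netns exec pod-ns ip link set veth-pod up
ip link set veth-host up
ip netns exec pod-ns ip route add default dev veth-pod          # point the pod's default route at the veth
```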

The kubelet can then continue with whatever container creation tasks it has to do in that network namespace. So as you can see, the CNI plugin allows the kubelet to simply create the pod and hand off all the networking complexity. I won't go into the deletion flow, but it's pretty much the reverse process: stop the containers, call the CNI plugin to delete the pod from the network, and then delete the namespace. As we look into each of the providers in this lesson, we'll see what options they offer and examine the potential benefits, costs, or restrictions unique to each of them. Before we dive into the CNI plugins themselves, let's cover some common terminology that you'll see used in most plugins. iptables is a tool that configures the IP packet filter rules of the Linux kernel firewall. Most CNIs that support Kubernetes network policy enforcement do so by automated manipulation of iptables across the cluster hosts. It also was used historically by kube-proxy for implementing service routing and load balancing, but there are scalability issues in very large clusters, and that implementation, while still available as an option, is not usually recommended anymore.
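If you want to see that manipulation on a live node, the rule set is plain iptables and can be dumped read-only; on a Calico node, for example, you'll find chains prefixed with cali-.

```shell
# Inspect the rules a CNI plugin has programmed on this host (read-only).
iptables-save | wc -l            # total lines; busy clusters can reach thousands
iptables-save | grep -c -- '-A'  # just the rule-append entries
iptables -L -n -v | head -n 20   # human-readable view of the filter table
```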

IP Virtual Server, or IPVS, is basically a load balancer that's built into the Linux kernel. As of Kubernetes version 1.11, kube-proxy can use this for implementing service types like ClusterIP or NodePort. Nearly all CNI plugins offer some form of an overlay network option, which is basically an abstraction on top of the underlying, or underlay, network. Conceptually similar to the way virtual machines abstract the actual hardware from the operating systems and processes running on them, an overlay network is a software-defined construct that allows the pods to communicate with each other simply, without burdening them with the specifics of the underlying network and its topology. In a Kubernetes cluster we aren't usually talking about a completely virtualized network model; rather, these plugins are often using the overlay technologies to do simple packet encapsulation.
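If you want to try IPVS mode yourself, it's selected through kube-proxy's configuration file; here's a minimal sketch of the relevant KubeProxyConfiguration fields (how that config is managed varies by distribution):

```yaml
# Fragment of a kube-proxy config enabling IPVS mode (sketch).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; IPVS offers several scheduling algorithms
```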

There are many overlay implementations in use, and plugins often give you a choice to best fit your networking preferences or requirements. The plugins we'll talk about today can use some or all of the implementations listed on this slide. Probably the most commonly offered option is VXLAN, a tunneling protocol built into the Linux kernel that can run on top of just about any underlying network topology with minimal requirements, as long as the underlay MTU is adequately sized and multicast is allowed, unless the CNI plugin handles the routing details for it.
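Outside of Kubernetes, a VXLAN tunnel device can be created with stock iproute2, which is essentially what these plugins automate; an illustrative example:

```shell
# Create a VXLAN interface by hand (all values illustrative).
ip link add vxlan0 type vxlan id 42 dev eth0 dstport 4789   # VNI 42 over eth0, standard VXLAN UDP port
ip addr add 10.244.0.1/24 dev vxlan0
ip link set vxlan0 up
# Note the MTU caveat above: the outer headers add roughly 50 bytes,
# so the underlay MTU must leave headroom for the encapsulation.
```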

So as you can see, Kubernetes networking implementations are largely built on top of well-known, battle-tested technologies that have been used in Linux servers for many years. Now that we've gone through the basics, let's take a look at our first CNI plugin, Calico. Project Calico focuses on high performance and scalability in very dynamic environments. To tackle this combination of problems, they took the approach of using a routing fabric instead of defaulting to overlay networks, as many other CNI solutions do. Calico also implements its own IPAM, several routing modes, and a robust network policy capability, including and extending beyond the Kubernetes network policy API. The plugin uses Linux-native tools to facilitate traffic routing and enforce network policy. It also hosts a BGP routing daemon for distributing routes to other nodes and to the external network. Calico can operate in one of three routing modes. Native mode does not encapsulate packets to and from pods. As such, it's a highly performant routing method, and it makes troubleshooting simple, as analyzing network traffic does not involve looking inside a packet for another packet.

IP-in-IP is a simple form of encapsulation that wraps the packet with an outer IP header that represents the host source and destination rather than the pods'. Therefore, when a host receives an IP-in-IP packet, it examines the internal IP header to determine the target pod. This is Calico's default routing method. While this routing method incurs a little more overhead than native routing, it does work in most environments without modification, especially in environments that cross multiple subnets. IP-in-IP mode also offers a kind of hybrid configuration called cross-subnet. With this setting enabled, native routing is used for all intra-subnet communication, and IP-in-IP encapsulation is used when crossing subnet boundaries.
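The routing mode is set per IP pool. Here's a sketch of an IPPool resource selecting the cross-subnet hybrid; the CIDR is just an example:

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16    # example pod CIDR
  ipipMode: CrossSubnet   # native routing within a subnet, IP-in-IP across boundaries
  vxlanMode: Never        # set this instead of ipipMode to use VXLAN encapsulation
  natOutgoing: true       # SNAT pod traffic leaving the pool
```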

And as we mentioned before, VXLAN is a feature-rich form of encapsulation that you can use as well; it enables the creation of virtual layer 2 networks. As you can see, you will incur more overhead, since each packet encapsulates another complete L2 packet plus some VXLAN header metadata, but the requirements on the underlay network are less restrictive. Let's take a high-level look at the components that make up a Calico installation and how they interact. Calico's main components are the calico-node pods, the calico-kube-controllers process, an optional caching process named Typha, and a data store, which we'll discuss in a moment. The calico-node pod runs on every host; although I've not drawn one on the control plane node to simplify the diagram, there would be one there. It's responsible for route programming and sharing, and it accomplishes these tasks via a pair of tools named Felix and BIRD.

The calico-kube controller is responsible for recognizing changes in Kubernetes objects that impact routing. The controller actually maintains multiple controllers inside of it, watching for changes in things like network policies, pods, namespaces, and other networking-related objects. Based on the changes seen by the controllers, Calico can update its data store, which eventually will be seen and enforced in each calico-node. The Calico data store is used to store Calico configuration like routing policy and other information. Calico supports two data store modes: Kubernetes, as I've illustrated here, or its own etcd cluster. In almost all cases, it's preferable to use the Kubernetes data store instead of a separate etcd. In this model, all persistent data used by Calico is stored through the Kubernetes API server as custom resource definitions, or CRDs.

The default and recommended mode is the former, as it eliminates the complexities of managing and securing a separate etcd cluster. In large clusters, the Kubernetes API could start to be impacted by the high number of Felix processes querying this data store, so a caching process called Typha was introduced to offset that load. If for some reason you do choose to use a separate etcd-based data store, do not use the one the Kubernetes cluster uses. No process outside of core Kubernetes should have access to the cluster's etcd, for security and performance reasons. Now that we've gone over the high-level mechanics of how Calico works, let's install it and take a look at a couple of examples of traffic routing between nodes in my lab.

Okay. First thing we're going to do is start up a Kubernetes cluster. Now, I'm using a little script that starts up a Kind cluster; Kind stands for Kubernetes in Docker. It's a desktop tool for running a Kubernetes cluster using Docker containers as the nodes. It's really great for workstation testing and a fast way to start a quick, simple Kubernetes cluster. I've got it configured to start up without a CNI provider, which is not the default, but it's easy to configure. This example and all of the setup files will be available in a GitHub repository that we'll link below the video, so you can try this out yourself. Okay, and through the magic of video editing, that didn't take any time at all, but let's take a look at what we've got running. I'm going to start a tmux session so we can see a few things at the same time.
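For reference, disabling Kind's default CNI is a one-line setting in the cluster config; here's a sketch matching my lab setup (the node count and pod subnet are examples):

```yaml
# kind-config.yaml -- Kind cluster with the built-in CNI turned off (sketch)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true       # we'll install Calico ourselves
  podSubnet: "192.168.0.0/16"   # match the CIDR the Calico manifest expects
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
```

You'd create the cluster with something like kind create cluster --name calico --config kind-config.yaml.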

At the very top we're seeing a kubectl get nodes, so you can see the four nodes that make up my cluster. You also can see that they're not ready in their status, so let's try to figure out why that is. Let's jump into kubectl get, oh sorry, describe, and describe one of the nodes. So we'll pull out the one I named calico-worker. And let's scroll up to the conditions, there we go. The Ready type is false: kubelet not ready. And if we look to the far right of this, we'll see that the CNI plugin is not initialized. That means one of two things: either you don't have a CNI plugin, which is our case, or something's wrong and it's not coming up correctly. So let's deploy Calico, because we need to do that to demonstrate Calico.

So Calico has installation instructions for many different environments. Since this is a small-scale, on-premise-type cluster, we're going to follow the instructions they call on-premise. Now, like most CNI installations, there's a single YAML manifest that you'll get and want to run, so I've downloaded their manifest into this calico.yaml file. Let's take a look at a couple of things in here before I go ahead and apply it. The first one is right here on lines 10 and 11: we see that Typha is disabled. Now, we talked about Typha; that's the caching service in front of the kube API that Felix would use so as to not put a lot of load on the kube API. Well, since we have a small cluster, we don't need Typha; it's added complexity and added processes running that we just don't really need.

So we're not going to deploy Typha in this case. And the next thing I'm going to show you: now, there are 3,000-plus lines of CRD definitions, RBAC, and all sorts of things in here. We're not going to go through all that, but I did want to show you that the calico-node, which runs on every node in this cluster, is just a DaemonSet. It's a DaemonSet named calico-node that runs in the kube-system namespace, as you see right here with its labels, and it's just a standard DaemonSet. So if you're familiar with Kubernetes deployments, you can feel right at home here; you can see there's no magic going on. Additionally, if we keep scrolling down, let's see, we get to 36, yeah, right here: here are the controllers. This is just a simple Deployment that goes into the same kube-system namespace and deploys the controllers. So again, no magic here, all just standard Kubernetes deployment stuff, and you should feel right at home playing with this. So let's go ahead and we'll do a kubectl apply -f calico.yaml.
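Stripped to its skeleton, the structure we just scrolled through looks roughly like this; it's heavily abbreviated, with the container specs, volumes, and RBAC omitted, and the image tags shown only as examples:

```yaml
# Abbreviated shape of the two workloads in the Calico manifest (sketch).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      hostNetwork: true                  # calico-node programs the host's networking
      containers:
      - name: calico-node
        image: calico/node:v3.16.0       # example tag
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  template:
    metadata:
      labels:
        k8s-app: calico-kube-controllers
    spec:
      containers:
      - name: calico-kube-controllers
        image: calico/kube-controllers:v3.16.0   # example tag
```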

Now, the first thing we're going to see is that it created all that stuff. Let's take a look at the pods that are starting up in that kube-system namespace. So there's a bunch of Calico-named stuff, and there's the CoreDNS part of Kubernetes, and they're all in various states of starting. And you can see now, as things spin up, all of our stuff is ready. We're going to start seeing, on this panel, the route tables on each of the three worker nodes, and you're seeing things getting added as the plugin starts to get settled in. So if we do another get down here, we see all but one of the pods are up and running. There we go, everything is up and running now. As you saw, all of our nodes now say they are in a ready state, and the route tables got some things added to them.

There are some interesting things here, but before we look into it, I'm going to start some workload on here. So I've got a simple application; I'm not going to get into what's in it. It's just a web server that we use for some diagnostic things. We're not really going to use it; we just want to have a pod running that can do something. It's called kuard. And we can see the pods that are running in the default namespace; there are four of them, and all four are running now. And if you noticed, our route tables changed as they came up. So let's take a look at these route tables. Let's take a look at the first pod on the list. This is running on the calico-worker2 node, and its IP address is 192.168.88.2.

If we look at worker3, the next pod is on that node; its pod IP is 192.168.173.65. There's another one on worker3 with the same first three octets, followed by .66. And finally, the fourth one is at 192.168.9.130. What's important to note here is that every node is going to have a separate subnet for the pods that run on it. That's why these two have the same first three octets and these other ones don't. Now, the actual pod IPs, or sorry, node IPs are not in the same range; those are 172.20.0.2, .3, .4, and .5, in that order. But what's interesting is if we take a look at this pod that's running on calico-worker2: let's say we wanted to get to it from calico-worker3, for instance. Well, a process, whether it's a container or otherwise, running on worker3 wants to get to 192.168.88.2.

It's going to send traffic to that address. The node is going to look through the route table, and it's going to see 192.168.88.0/26, there's the CIDR, the network range that fits that address. So it's going to say, "Oh, I route traffic for that via 172.20.0.5," and it's going to send it through this tunnel adapter. So what that's going to do is an IP-in-IP wrap: it's going to put this packet inside another, send it off to 172.20.0.5, which is worker2. Worker2 is going to get it, unwrap it, and it's going to say, "Oh, this is addressed to 192.168.88.2, which is statically routed in my route table to this crazily named adapter." Now, adapters, remember we talked about this in pod networking: you have a veth pair, and every pod network namespace gets one of those halves of the pair.

Well, that's what this is. This is the virtual adapter that's in the network namespace for this pod, so the traffic is going to route right into that and right to the process that's waiting and listening in there. And that's as complicated as it is; it's really not that bad. You can take a look at any pod IP, and on any worker in the cluster there will be a route to it, whether it's a direct route to the adapter, if the pod is running on that node, or via the node that it's running on. And that's because the subnet for every node is unique. That's all there is to routing in Calico.
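If you want to poke at this yourself, those route tables are just the nodes' ordinary Linux routing tables. Here's what to run, with illustrative entries based on this demo's addressing (yours will differ):

```shell
# On any node (for Kind, e.g. via 'docker exec' into the node container):
ip route
# Expect entries along these lines (illustrative):
#   192.168.88.0/26 via 172.20.0.5 dev tunl0 proto bird onlink   # remote node's pod block, via IP-in-IP
#   192.168.9.130 dev cali1a2b3c4d5e6 scope link                 # local pod, straight to its veth
# 'proto bird' marks routes distributed by Calico's BGP daemon.
```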

As we mentioned in the network policy lessons, Calico not only implements the Kubernetes policy APIs but also adds many of its own additional policies. You get things like policy ordering, or priority, and global policy scoping options. As you remember, Kubernetes-native policy is allow policy, with implicit deny for anything that doesn't match. Well, Calico adds more granular actions: you can explicitly allow or deny, and you can log, which doesn't stop traffic but will log any accesses that match the policy. There's also a pass action that is more applicable to their enterprise paid product; you can look into that on your own if you want. You can also apply rules to service accounts, as well as VMs or host interfaces, so not just pods like the default Kubernetes policies allow.
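As a sketch of what those extensions look like in practice, here's a hypothetical GlobalNetworkPolicy using ordering and the Log action; the selectors and order value are invented for illustration:

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: log-then-deny-untrusted   # hypothetical policy name
spec:
  order: 100                      # lower order values are evaluated first
  selector: app == 'sensitive'    # applies cluster-wide, not per-namespace
  types:
  - Ingress
  ingress:
  - action: Log                   # record matching traffic without stopping it...
    source:
      selector: trusted != 'true'
  - action: Deny                  # ...then explicitly deny it
    source:
      selector: trusted != 'true'
```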

Some more topics you might want to look into on your own include Calico's public cloud interoperability. Calico's modular approach has led it to be used by a number of public cloud providers for network policy enforcement in their managed Kubernetes offerings; the public cloud lesson later in this course will provide more details. Calico recently added functionality to provide built-in encryption for pod-to-pod communications using WireGuard. This allows your pods to securely communicate without having to implement your own TLS solutions or rely on sidecar implementations like Envoy, which can complicate your deployments. The Calico project also just released a data plane option based on eBPF, the extended Berkeley Packet Filter, which is built into more recent versions of the Linux kernel. As of this recording, Calico 3.16 was just released with eBPF generally available; up until now it's been in tech preview. The early performance numbers are showing it may considerably increase network throughput, reduce CPU requirements, and could take over service functionality from kube-proxy.
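At the time of this recording, enabling the WireGuard encryption is roughly a one-field change to Calico's Felix configuration, something like this sketch (nodes need WireGuard support in their kernel):

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  name: default
spec:
  wireguardEnabled: true   # transparent pod-to-pod encryption between nodes
```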

We'll actually talk a lot more about eBPF in the next section, as the Cilium CNI is really based on it. So as you can see, Calico is a very feature-rich plugin; we've only scratched the surface here of its capabilities. Project Calico has a robust open-source community that produces a lot of content about its different features. One such resource is Josh Rosso's blog and video on Calico routing modes, which I'll add to the links below this video. He goes into more depth on the routing modes and shows you packet tracing, so you can see how packets are represented on the network in each of them. Next, let's talk about the Cilium CNI plugin. Like Calico, Cilium provides both native and overlay networking options, an IPAM provider, and network policy enforcement, although the way it implements them is very interesting.

Cilium is unique in that the project's goal is to deliver a software-defined network that considers the application layer, or L7, as the entry point for traffic. Focusing on this higher abstraction layer allows them to provide API-aware security that's based on Kubernetes object identity rather than simply network packets. The project also aims to deliver superior performance and high scalability.

They're accomplishing these goals by leveraging new Linux kernel technologies such as eBPF, which stands for extended Berkeley Packet Filter. BPF is a bytecode language that allows you to run code in the Linux kernel in a safe, sandboxed environment. The extended version, eBPF, is available in the Linux kernel starting with version 3.18. With eBPF-based tooling in place, they can manipulate the packet, or the payload within, as they are operating at the application layer. Quoting their documentation: the Linux kernel supports a set of BPF hooks in the networking stack that can be used to run BPF programs. The Cilium datapath uses these hooks to load BPF programs that, when used together, create higher-level networking constructs. By using this technology, Cilium is rethinking how network policy is done, moving away from iptables and into the more flexible eBPF-based systems. One benefit of this flexibility is that you gain some interesting capabilities, like L7-aware policies, instead of just the usual L3 and L4 that you get with iptables-based solutions.

This allows you to implement web application firewall-style, API-level security right in your Kubernetes cluster and keep the configuration in source control alongside your deployment manifests. For example, let's say you have a web service that has multiple endpoints, with requests coming in from end users, automated monitoring systems, and management reporting tools. With simple L3/L4 policy enforcement, unless you host the endpoints on different pods or ports, you'd have to have some form of external L7-aware firewall, or else all of these different users would have access to endpoints that they should not have, which could lead to sensitive information being exposed or possible abuse, such as a denial of service, be it intentional or accidental.

Using the Cilium policy configurations, you can restrict access by HTTP verb and path as appropriate per persona. As of today, Cilium L7 policy allows for enforcement on ingress and egress for HTTP and [inaudible 00:23:28] protocols. Cilium also aims to decrease latency and improve performance by reducing the steps network packets traverse compared to a typical iptables enforcement configuration. If you remember from the pod networking lesson, when a container's application requests a TCP or UDP connection, that request goes through the pod's network namespace stack, which will create packets that will usually be sent through a virtual adapter pair into the host machine's root namespace, and then on to their destination. Digging into this a little deeper, the process is actually making a connect call to the TCP stack, which will create the network packets. These packets traverse the veth pair, and at this point the host iptables rules determine whether to send that packet on towards the destination, or to drop or refuse it.

On a policy violation, the application will either time out or receive the connection-refused response propagated back up the chain to the container. Using the same example with Cilium, the connect call from the application is immediately caught by a hook that Cilium's BPF has registered with the kernel. If the BPF code finds it to be in violation, a connection refusal is immediately returned, without the need for the network stack to even create a packet. Otherwise, it performs any necessary rewriting and sends it off towards the destination. Comparing this to traditional IP address filtering approaches, you can quickly see how this solution can scale so well.

Now let's talk about Cilium load balancing. This is a simplified view of the Linux stack as it relates to networking. Traffic flows from the hardware layer to the device driver via interrupts and buffers, and then up to the network stack. Most container networking solutions leverage the IPVS kernel-level load balancing functionality, which basically wraps this layer, and depending on runtime situations and changes to the packet, it might get called a number of times.

Cilium instead uses something called eXpress Data Path, or XDP, mode to run BPF code right in the network driver, which allows it to execute very close to the hardware with access to the DMA buffer, short-circuiting the call flow and allowing load balancing and denial-of-service mitigation at a much, much lower point in the stack. In conference talks, Thomas Graf, one of Cilium's creators, has quoted impressive performance gains, including an example from Facebook where they're seeing 10x throughput increases over IPVS in L3/L4 load balancing. By running in this XDP mode, the packet processing happens right in the driver, not higher up in the network stack. This also makes distributed denial-of-service mitigation highly performant, by rejecting packets at that low level.

We've talked a lot about how Cilium uses BPF for policy enforcement, along with XDP for load balancing, but not actually how it routes traffic around the cluster. Like many CNI plugins, Cilium implements both native and overlay routing protocols and uses either VXLAN or GENEVE encapsulation formats, although their documentation states that any such format supported by Linux can be enabled. Native mode is similar to Calico's native mode, requiring the network to be able to route the IP addresses of all containers on all hosts in the cluster. Overlay mode works on any underlay network, and its only requirement is IP connectivity between the hosts.
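When installing via Helm, that routing choice is a chart value; around the Cilium versions current at this recording, it looked roughly like the following (the flag names have shifted between releases, so check the docs for your version):

```shell
# Overlay routing with VXLAN (the default) via the Helm chart (sketch):
helm install cilium cilium/cilium --namespace kube-system --set tunnel=vxlan
#   --set tunnel=geneve     # GENEVE encapsulation instead
#   --set tunnel=disabled   # native routing, no encapsulation
```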

Cilium has agents that run on each node in the cluster, which watch for events to learn when objects that impact networking change. It reacts to those events by configuring the eBPF programs that control network access to and from the affected containers. There's also a Cilium operator that deals with cluster-wide, as opposed to node-specific, duties and updates the data store appropriately. The cilium-agents will pick up those configurations and make the necessary changes on the nodes. Cilium uses a data store to hold and propagate state between agents, and it offers both Kubernetes CRDs and key-value store options, including etcd or Consul. Like Calico, the CRD solution is the default.

Okay, just like my other demo, let's start with a clean Kind cluster; this one is only going to have one worker. While that's installing, let's take a look at Cilium's documentation page. They have several introduction and install guides; we're using the getting-started installation, and we're going to do a sandbox environment here using Kind. Now, I'm not going to walk through everything here, I'll implement these steps, but basically we're going to create the cluster and then use a Helm chart to deploy. There we go, let's create it. One of the steps there is to load the image into the Kind nodes, which kind load does; this is pre-populating the Docker image caches on both nodes. And then finally this helm command, which gets copied from their website, begins the installation of Cilium. And there we go, so we now have that going. Let's take a look at the kube-system pods, and we'll watch these as they all come up and begin running.
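Condensed, the steps I just ran mirror the Kind guide in Cilium's docs at the time of recording; the version number below is an example, so pull current commands from their site:

```shell
# Create the Kind cluster (their guide supplies the cluster config).
kind create cluster --config=kind-config.yaml

# Add Cilium's Helm repo and pre-load the image into the Kind nodes.
helm repo add cilium https://helm.cilium.io/
docker pull cilium/cilium:v1.9.0            # example version
kind load docker-image cilium/cilium:v1.9.0

# Install the chart into kube-system (abbreviated; copy the full
# command, with its --set values, from the documentation page).
helm install cilium cilium/cilium --namespace kube-system \
  --set image.pullPolicy=IfNotPresent
```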

You can see the operator has been deployed and the nodes are getting ready to run; there's an init pod also running. I'll shrink the font a little there, and we'll just wait for these to finish and all be in a ready state. That's done. So we have the Cilium agents running on both nodes. We also have a couple of operators; again, these run on whatever nodes are available. Since I only have the two, the worker and the control plane, it picked one of each. If you had a bigger cluster, you still would only have a couple of these running; that's just for HA purposes.

So I'm going to do a demo that's actually written by the Cilium folks. If you go to their website under documentation, they have a getting-started guide section with network policy security tutorials; the first one there is the identity-aware HTTP policy. I'm going to give an abbreviated version of this; you can follow along and do it yourself and see some more of the details. But basically this is a fun demo that I love, and it's got a Star Wars theme, which I also love. You're going to see that we'll have a deathstar pair of pods deployed, as well as a tiefighter and an xwing, and we'll talk about some connectivity needs that these pods should and shouldn't have. So let's jump over to our demo, and I'll shrink my font a little here. The first thing we're going to look at is the actual deployments themselves. We have the deathstar service and deployment, which you can see has two replicas, and we have a tiefighter, which is labeled with org empire and class tiefighter, as well as an xwing, which is in the alliance org.

Okay, so let's go ahead and apply this. We'll do a get just to see if they're up. Still creating... there they are, all running now. The deathstar really should only allow pods that are in the org empire to connect and request landing. Right now, however, there aren't any policy rules being enforced, so anybody's able to do so. So if I do a kubectl exec to jump into the xwing pod, and inside of that we run a curl -s -XPOST against the request-landing API: ship landed. Okay, xwings aren't supposed to be able to do that. We'll also do this for the tiefighter. Okay, that's fine, but that xwing should not be allowed to do that. So let's figure out how we would restrict this. First, we're going to look at a simple L3/L4 policy. This is going to specify that for the deathstar, we only want ingress from pods that match the selector org empire, and only on port 80 TCP. So let's apply it.
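For reference, the L3/L4 rule from their tutorial looks approximately like this:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy: only empire ships may reach the deathstar, on TCP 80"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire        # identity-based: only pods labeled org=empire...
    toPorts:
    - ports:
      - port: "80"         # ...and only to TCP port 80
        protocol: TCP
```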

Okay, there we go. Now let's try this again. Let's check the tiefighter first: still works, that's good. But let's try that xwing... there we go, it's timing out. The Cilium BPF policy has been written to not even respond to that, actually. That's good, and that works. Okay, so we have that policy in place, but there are other APIs that the deathstar has that really should not be exposed to anybody. For instance, if we go back up here to the tiefighter, let's let him request a PUT method on the exhaust-port. Well, that's not good. If we look, we can see that the deathstar has had a restart; the real Death Star didn't get a restart.

Nobody should be able to PUT anything to that exhaust-port outside of the maintenance crews. So let's take a look at a policy that can restrict based on both method and path. Here's an L7-aware policy that extends the prior policy we looked at. It's still only allowing the empire to access on port 80, but we're also saying nobody should be able to do anything except POST to that request-landing API; that's the only public API we want to expose. So we'll go ahead and apply this. Here we go, now let's try this again. So the tiefighter: access denied. So we're getting an actual restriction even though the tiefighter is in the right org; it's that the path and method are not allowed.
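Again for reference, the L7 version from their tutorial extends that same rule with an HTTP section, approximately:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"            # replaces the earlier L3/L4 rule
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:                          # L7-aware: only this verb and path pass
        - method: "POST"
          path: "/v1/request-landing"
```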

So let's just make sure everything's good: the xwing can't access it at all; it's not even given TCP access. Let's also make sure that we haven't broken the request-landing rules. For the xwing, same thing; again, it's being restricted at the L3/L4 layer. But if we go back up to the tiefighter, let's make sure it can request landing, and it can. So we now have a good set of rules set up, and all of this was done in BPF; no iptables rules were written, and everything is being done right as the sockets are being opened and requests are coming in. Now, again, as I said, if we go back and take a look at their full write-up for this demonstration, it might be worth going through it. They've got some other entries that are interesting around looking into the actual endpoint lists and seeing which policies are being enforced or disabled. Again, we just don't have time to really dig into it that much here; there are nice diagrams and everything else. So check out their documentation; it's a really well-done project page.

In addition to all the topics we've covered so far, Cilium offers these additional features that you might want to look into. They offer transparent encryption, similar to the WireGuard solution Calico has, and DNS-based policy rules, so you can restrict traffic based on hostname, not just IP ranges. They offer some unique Envoy acceleration benefits. They also offer a multi-cluster service routing feature to allow you to route traffic between multiple Kubernetes clusters. Additionally, they have a web UI tool called Hubble, which gives you a nice visualization and troubleshooting tool for looking at your cluster.
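As a taste of the DNS-based rules, here's a hedged sketch of an egress policy keyed on a hostname rather than an IP range; the FQDN and workload labels are invented:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-api-egress     # hypothetical
spec:
  endpointSelector:
    matchLabels:
      app: my-client         # hypothetical workload label
  egress:
  - toFQDNs:
    - matchName: "api.example.com"   # allow by hostname, not IP range
  - toEndpoints:             # DNS must stay reachable for FQDN rules to resolve
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"   # let Cilium observe DNS lookups to learn the IPs
```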

If you'd like to dig further into Cilium and eBPF, here are a few videos I recommend. In TGIK episode 103, Josh Rosso digs into Cilium and demonstrates some of the advanced features, as well as the Hubble UI. Linked at the bottom of their homepage, Thomas Graf's DockerCon 2017 presentation does a great job explaining eBPF and how it relates to containers. The CNCF published a webinar from [inaudible 00:36:00] that goes deeply into use cases, and about 21 minutes in, Dan Wendlandt, co-founder of Isovalent, goes even deeper into eBPF. Links to these, as well as other topics, will be posted below this video. We've looked at a couple of the popular open-source CNI plugins here, but there are dozens more out there. Here are a few more of the ones you may commonly hear about. Antrea leverages Open vSwitch, or OVS, a high-performance programmable virtual switch kernel module that's been around for over 10 years. Because of the maturity of OVS, there are already many monitoring and diagnostic tools that network operators are familiar with. OVS also has some unique performance benefits, including hardware offload on certain vendors' NICs.

One of the other interesting benefits of using OVS is that it's fully supported on Windows, which makes Antrea a compelling choice if you're venturing into the world of Windows-native Kubernetes. Among other popular providers, Weave Net offers simple configuration and built-in data plane encryption capabilities, and it's one of the few that support multicast traffic. One of the oldest container networking projects, Flannel, has been around for about as long as Kubernetes has. It's a simple implementation that sets up a single flat overlay network spanning all hosts, with pods attached to it. Often you'll see it combined with the network policy functionality of Calico; that combination used to be called Canal, although that name is now deprecated. Most managed Kubernetes providers in the public clouds use their own CNI provider, and many allow for another CNI's network policy to be used alongside, in a fashion similar to the Canal approach I just mentioned. We have a separate lesson in this course that goes into public cloud networking, so check that out for details.

So, we've covered what the CNI API is and how the kubelet uses it to provision and deprovision pods on the network. We also looked at how Calico and Cilium provide network connectivity for Kubernetes, took a look under the covers at how they work and some of the interesting design differences between them, and touched on some of the other common plugins out there. Armed with this information, you should now be better prepared to choose between the various plugins, leverage the unique features each one provides, and troubleshoot any issues you run into when using them. Obviously, there's a lot more to these plugins than we had time to talk about here, so I highly recommend checking out some of the deeper dives in the conference talks mentioned. You can find links to all of these, as well as the CNI projects discussed, below this video. Thanks for watching, and we look forward to seeing you in the next lesson.