KubeAcademy by VMware
Networking Considerations

Kubernetes has many layered networks, from container to pod to node to service-level networks. All of these networking requirements have to be considered to build an effective stack. This lesson covers how Kubernetes implements this via its Container Network Interface, or CNI.

Boskey Savla

Senior Technical Marketing Architect at VMware

Boskey Savla is a Technical Marketing Manager at VMware focusing on cloud native applications; she has 15 years of experience in systems and operations.

Hi, and welcome to lesson two. In this lesson, we are going to talk about some of the considerations when building out a networking stack that is compatible with Kubernetes and makes it work well. As we discussed in lesson one, Kubernetes talks to various infrastructure services through a set of plugins, and for networking, that interface is called the Container Network Interface, or CNI. In the upstream community project, you'll see multiple networking vendors and open source projects contributing compatible implementations of their technologies against the CNI interface. This is how Kubernetes understands how to work with AWS ELB, VMware NSX, or Google Cloud networking to make things happen within that infrastructure.

Let's take a look at a Kubernetes cluster when it comes to networking. On the left-hand side, I have a standard Kubernetes cluster. It has three nodes, and each node has a set of pods running on it. Pods are essentially one or more containers grouped together. The overall idea I'm trying to convey here is that there will be nested virtual networks within your stack. You will have, let's say, a container network: these hexagonal boxes are containers, and a pod may have multiple containers within it. These container networks uplink to a pod network. That pod network might uplink to the virtual machine interfaces' network. And eventually the virtual machine network uplinks to either your on-prem backend networks or to the cloud provider network.

So networking within Kubernetes can get pretty nested pretty quickly. From a services perspective, you will need switching and routing at each layer. And at the same time, if you want an application or a container sitting within this tiny pod over here to accept external traffic, you will need a way to implement that through an ingress or a load balancing service. When a developer says to a Kubernetes cluster, "I need a load balancing service," what they're effectively saying is, "I need to provision a load balancer within my infrastructure that is going to route traffic across these different networking layers and reach my container within a pod appropriately." So that's the idea.

If you don't have this integrated stack, let's say you don't have ingress or load balancing in your Kubernetes environment, then the experience is broken overall in terms of what development teams can do. And the way all these different networking layers are implemented, whether it's switching from the container network into the pod network, whether it's a layer 2 capability versus a layer 3 capability, all the way up to layer 7, all of these implementations are defined by the Container Network Interface, or CNI. The CNI plugin essentially runs on each of the hosts that are part of your Kubernetes cluster, and depending on which plugin you select, it has different capabilities. You pick a plugin of your choice depending on your infrastructure, and the plugin understands how to talk to the Container Network Interface within your Kubernetes cluster and the nodes within it to make everything work.

There are different types of CNI plugins available, from open source to vendor-supported to proprietary, et cetera, and here are a few examples. Each Container Network Interface project may not do the entire stack of things that you want it to do. Some of them, for example, only do layer 2 and don't do layer 4 through layer 7. So it's pretty critical to figure out what kind of networking plugin you're picking depending on your needs. Ideally, you want your networking interface to give you the least amount of operational overhead in terms of implementing, deploying, managing, and upgrading it, and at the same time give you a full stack of networking services all the way from layer 2 to layer 7 and beyond.

So, for example, Calico here provides native and overlay options. It also supports BGP routing, IPAM, network policy enforcement, and things like that. Cilium, on the other hand, supports native and overlay networking options for containers. It also supports network policy, but it doesn't have BGP, et cetera. So there are quite different capabilities between the two, and you can pick and choose whatever you want. Antrea, on the other hand, creates overlay networks and configures and enforces network policies. It gets better network performance by leveraging Open vSwitch rather than iptables for switching. So again, there are tons of projects; I would recommend you look at the different projects within the Kubernetes ecosystem and weigh their capabilities against what you need. It could be that you don't need everything, but in case you do, you need to figure out which particular project you want to go with.

To select a CNI, some of the factors you'll have to weigh, apart from layer 2 through layer 7 capabilities, that is, routing, switching, load balancing, et cetera, north-south and east-west, are the security policies Kubernetes helps implement, like network policies. These network policies define, for example, whether pod A can talk to pod B or not, similar to what you would have in the virtual machine world in terms of firewalls, ACLs, et cetera. Kubernetes has an equivalent, and it implements it through the CNI, apart from just networking capabilities. So you'll have to figure out what kind of CNI you want depending on all these different criteria. A lot of people may not need, for example, egress policies, but might need network policies; they may not be using a service mesh. A minimal sketch of a network policy follows.
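
To make that concrete, here is a minimal sketch of a NetworkPolicy, assuming hypothetical pod labels app: pod-a and app: pod-b in the default namespace; whether it is actually enforced depends on your CNI plugin supporting network policy.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-pod-a-to-pod-b   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: pod-b               # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: pod-a       # only pods carrying this label may connect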

So again, it depends on what kind of requirements your development team has, and you can pick an appropriate CNI. That was all about Container Network Interfaces and plugins. Now, Kubernetes has one more construct when it comes to networking, which is Services, and we are going to talk a little bit about them. A Kubernetes Service is basically a way of exposing a pod, or an application sitting within a container, to traffic that is east-west, for example, one pod talking to another pod, or north-south, meaning external traffic talking to a container or an application running within a container in a pod. A Service is an abstraction. Let's say you deploy your application, which is called backend, and you deploy maybe two pods for it. For both these pods to talk to each other, you will need to define Services for them.

Then each of those Services will talk to each other, and that's how traffic will flow. On the other hand, if you want an application, let's say backend, to have external traffic flowing in, you define an external Service of type LoadBalancer, which exposes the app running within the pod with a public-facing IP address to external traffic. And this is a very important idea, because this is how services get defined and how the concept of microservices gets implemented. A Service is defined not just by IP addresses. In traditional virtual machine land, when we would say, "We need to load balance an application," what would typically happen is we would create a load balancing service that would front external traffic and then pass that traffic to the backend virtual machines, depending on what was configured.

And typically those configurations were defined by IP addresses or host names to route traffic to. What Kubernetes does instead is implement label selection, which makes traffic routing super powerful. Let's say you deploy an application and you don't really know what the pod IP is. You don't have to know. All you have to define in a Service is, "Route anything that comes over here to a pod that has a label matching my app." Kubernetes works with this concept of label matching, looking pods up by their labels to route traffic. So even if you don't know the IP address of a particular backend pod, it doesn't really matter, because the Service is going to look up your application based on the label that the pod has, as in the sketch below.
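
As a sketch of that label matching (all names here are illustrative), a Deployment stamps the same label onto every replica it creates, and the Services we define below look pods up by that label, never by IP:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend                     # the label the Services will match on
    spec:
      containers:
        - name: app
          image: example.com/backend:1.0 # placeholder image
          ports:
            - containerPort: 8080        # port the app listens on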

And that's how a Kubernetes Service is created. Now, there are three different types of Kubernetes Service: one is called the ClusterIP type, one is called the NodePort type, and one is the LoadBalancer type. The difference between the three is pretty standard. With a NodePort, the application running in your pod only has a private IP address by itself; it won't have a public-facing IP, so it needs some way to expose itself. What happens with the NodePort type is that the Service picks up a port on the host. So if the pod is running on host A within the cluster, the Service binds a port on that node to that node's IP, and traffic hitting that node port reaches your app. That's the NodePort concept, and a sketch follows.
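
Here is a minimal NodePort sketch against the hypothetical backend pods above. One caveat to the explanation: by default Kubernetes allocates the node port from a high range (30000-32767) on every node, rather than reusing the app's own port directly:

apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport
spec:
  type: NodePort
  selector:
    app: backend          # same label lookup as before
  ports:
    - port: 80            # Service port inside the cluster
      targetPort: 8080    # container port
      nodePort: 30080     # exposed on every node's IP; must fall in 30000-32767 by default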

A NodePort is not very dynamic, though: if your pod moves to a different host, the IP you connect to will change. So again, it's mainly for testing purposes, or for when you quickly want to expose your application to test, verify, and validate. Then there is the ClusterIP type of Service. This is more commonly used. ClusterIP won't give your Service an external-facing IP address, but it will create an IP address that the cluster itself understands, one that IP addresses within the cluster can reach. You set aside a pool of IP addresses for the cluster to use, and when a Service is needed, one of the IP addresses from that pool is used by that Service. Your routing can then happen through that ClusterIP address.

Again, ClusterIP won't expose anything externally, because the IP address is internal-only. You won't be able to route external traffic through a ClusterIP-based Service type, but you can do east-west traffic routing with it. And this is the most common pattern: when somebody is deploying an application, you'll see that they will most likely create a Service of type ClusterIP so that services within the cluster can start talking to each other, as in the sketch below.
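
A ClusterIP sketch for the same hypothetical backend pods; type: ClusterIP is the default, and other pods in the cluster can reach it through cluster DNS by the Service name rather than by the virtual IP:

apiVersion: v1
kind: Service
metadata:
  name: backend           # reachable in-cluster at http://backend:80
spec:
  type: ClusterIP         # the default type; internal virtual IP only
  selector:
    app: backend          # routes to the pods labeled earlier
  ports:
    - port: 80
      targetPort: 8080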

Then the third type of Service is the LoadBalancer type, which basically gets you an external-facing IP address so that external traffic can enter the cluster. So those are the three different Service types.
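
And a LoadBalancer sketch, assuming the cluster runs on infrastructure (a cloud provider or an integrated stack like NSX) that can actually provision the external load balancer:

apiVersion: v1
kind: Service
metadata:
  name: backend-public
spec:
  type: LoadBalancer      # the infrastructure provisions an external IP for this
  selector:
    app: backend
  ports:
    - port: 80            # public-facing port
      targetPort: 8080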

Another concept in Kubernetes is ingress. Let's say you have an application with multiple services running, and you want to use a single URL and route traffic based on that URL to different pods within the Kubernetes cluster; that's when you use ingress. The Ingress resource operates one layer higher than the Service. An ingress is typically fronted by a Service of type LoadBalancer, because it's meant for routing and manipulating external traffic coming into the Kubernetes cluster. You can think of it as a service for services. However, an ingress in itself is not a Service. It is a collection of rules that direct external inbound connections to a set of Services within the Kubernetes cluster. An ingress can be configured to give Services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.

So for example, say you have abc.com as your URL, and within the cluster you're hosting multiple services. If traffic is coming to sales.abc.com, you want that traffic to go to a specific pod. If traffic comes to abc.com/video, you want it to go to a different video service and, correspondingly, the pods behind the video service. So let's say traffic comes in and hits the URL abc.com/video. The ingress rule knows that if it is a /video type of incoming URL, it redirects that traffic to the video service, and the video service in turn directs the traffic to the appropriate pod that has the video service label. This is how you can do a lot of cool stuff with ingress: load-balance traffic in different ways, terminate SSL, reroute, things like that. A sketch of such rules follows.
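
Here is a sketch of those rules as an Ingress resource; the Service names sales and video are hypothetical, and an ingress controller has to be installed for the rules to take effect (more on that next):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: abc-routing
spec:
  rules:
    - host: sales.abc.com            # host-based rule
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sales          # hypothetical Service for sales traffic
                port:
                  number: 80
    - host: abc.com
      http:
        paths:
          - path: /video             # path-based rule from the example
            pathType: Prefix
            backend:
              service:
                name: video          # hypothetical Service fronting the video pods
                port:
                  number: 80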

One thing to note: if you want to use ingress, you will need additional tooling in your Kubernetes cluster. So again, this is something you have to consider when you are selecting your CNI. You want your stack to support proxy services so that a proxy pod can run and route traffic coming through an ingress appropriately. Right now there are multiple technologies that support ingress. A common one is Envoy, which can run as a sidecar proxy next to each of the pods running within the cluster, and you can manage traffic on Envoy using something like Istio. There are other projects that help manipulate traffic for an ingress gateway, and you should check out the projects in the Kubernetes ecosystem to see which ones you like. All right, that's it for Kubernetes networking. We'll see you in the next lesson. Thank you.
