KubeAcademy by VMware
Service Networking

In this lesson, we'll cover what services are, why we need them and how they work. We also explain the different service types and common service problems.

Lee Xie

Senior Education Engineer at VMware

Works on creating and delivering Kubernetes based education.


Hi, my name is Lee Xie, Senior Education Engineer at VMware. And in today's lesson, we're going to be talking about Kubernetes networking with respect to services. I'd like to start this lesson by talking about the need for a service, and the motivation behind it. So in our pod networking lesson, we learned that every single pod in the cluster gets assigned its own unique IP address, and that unique IP address is actually routable from anywhere in the cluster, okay? This means that any pod can already talk to any other pod out of the box, without any additional setup or configuration. So that begs the question: why do we now additionally need a service to facilitate the connection between the pods?

Well, here's the explanation. If you guys remember, pods are actually ephemeral, right? Meaning that they were born to die. It's not a question of if, it's a question of when, right? And they could be terminated for any number of reasons, whether they're getting evicted because a node is under maintenance, or if the scheduler thinks that they could run better somewhere else on a different node, okay? It doesn't matter what those reasons are, but once a pod is terminated and restarted on a different node, the new pod is not guaranteed to have the same IP address, or the same DNS name.

Meaning that, if you have existing connections directly from pod to pod, suddenly those connections will be broken, and this is where services come in, okay? Unlike pods, services have DNS names and IP addresses that exist for the entire lifetime of the service. So basically, unless you explicitly delete the service, those IP addresses and DNS names will be reachable.

All right, so let's talk about services and how they work, okay? And we can do that by tracing through the service manifest that we have here on the right side, okay? We start with apiVersion equals v1, and because service is a part of the core object group, we can actually just specify v1 here without appending a group name, okay? And then below that, we have kind equals Service. This is the object type, okay, just like pods or deployments that you may have learned about already. A service is just another object, and here we say kind equals Service. Then, you have metadata, and under metadata, you have name, right? Which is the unique identifier for the object, right?

And below that, you also have labels, which are key-value pairs describing the service, right? And you can have as many labels as you want; there's no limit to the number of labels that you can add to any object in Kubernetes. Below that, we have something called spec, which stands for specification. And under the spec, we have the selector and another pair of labels here. Okay, I want you guys to pay attention to this section, because this is the key mechanism that the service uses to select the pods that it's going to be running in front of, and be a load balancer for, okay?

So, the way it works is basically, the service selects for labels. It doesn't select for pod names, because those are ephemeral, but it actually selects for labels, okay? Meaning that, in this case, if you look on the left, pod one and pod three will both be selected by this service. Okay? So basically, because pod one and pod three have both labels, app equals blog and tier equals backend, they fit the criteria, and they both get selected, and their endpoints will be added to the service. Okay?

Now, you may be thinking, "Well, what about pod two? What would I need to do to make it so that pod two's endpoint is also added to the service? How do I do that?" Well, in order to do that, basically, you can do one of two things, right? First thing you can do is, on pod two itself, switch the label from app equals email to app equals blog. So, that's one thing you could do. And as soon as you do that, all three pods will take traffic when traffic is sent to the service.
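If you wanted to try that first approach on a live cluster, the relabel is a one-liner. The pod name here is just a placeholder for whatever pod two is actually called:

```
# Overwrite the existing app label so the service's selector now matches this pod.
kubectl label pod pod-two app=blog --overwrite
```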

The other thing you could do, if you don't want to change the labels on the pod, or the object itself, is you could come to this service YAML, right under the selector, and instead of selecting for both app equals blog and tier equals backend, you could remove app equals blog and only select tier equals backend. And then, all the pods fit that criteria. Okay? Now, if you look below that, you have a section called ports, and under ports you have protocol, right? And the protocol can either be TCP or UDP. You can choose that. And then, you have port and target port.

What's the difference between these two? Well, everything is in relation to the service. Okay? So basically, when port is being specified here, it's in relation to the service, and port here is going to be the ingress port for the service. So what port does this service take traffic on? Okay. And then, target port is actually the egress for the service. What port does this service send traffic to? Right? Or, provide endpoints for? So, that's going to be the listening port on your pods. Okay?

Now, some people will ask the question, "Hey, I've seen YAMLs and manifests for services before where there was no target port at all. What does that mean?" Well, in that case, if you do not define a target port, the target port will be set to the same thing as the ingress port. So sometimes services run with the same ingress port and egress port. In that case, you can shorthand it and skip writing target port; if the ingress port here were 80, the target port would just be set to 80 as well. All right, so let's talk about what a service is, since we've already covered why we need a service and how services work.
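Putting those pieces together, the manifest being traced on the slide presumably looks something like this. It's reconstructed from the narration, so treat the exact name and values as illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: blog-backend        # illustrative name, not stated in the narration
  labels:
    app: blog
    tier: backend
spec:
  selector:                 # pods carrying BOTH labels get selected
    app: blog
    tier: backend
  ports:
    - protocol: TCP
      port: 8080            # ingress port: where the service takes traffic
      targetPort: 80        # egress port: where the selected pods listen
```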

The term service is actually quite overloaded, but we're going to break it down specifically in terms of Kubernetes, and what a service means in Kubernetes. So, the first thing is, a service is a Kubernetes resource, right? We saw that when we were looking at the manifest. It's another object, of kind Service. And the basic functionality it provides is layer four load balancing for your pods, right? For all the pods that are selected by a service, it round-robins the traffic to the different pods.

The second thing a service actually does is that it provides service discovery and application discovery for your pods through the internal DNS of the cluster. Okay? Another good definition of a service is basically: a service is an abstraction which defines a logical set of pods and a policy by which to access them, okay? And this pattern is sometimes referred to as a micro-service.

And again, as we saw how it works, right? The set of pods targeted by a service is basically determined by a selector. Services also come in different flavors, in that they have different types. The most common types of services are cluster IP, node port, and load balancer. And as we go through the lesson, we're going to look into each of them individually. Okay? But for now, at a high level, just think of cluster IP as a way to expose your services inside the cluster. Meaning, they're only accessible by pods within the cluster.

And the node port service type is for exposing services external to the cluster, so that services can be accessed externally. And a load balancer allows Kubernetes to configure and create external load balancers outside of Kubernetes, for example, on GKE, or EKS, or Azure clouds, right? Basically, it'll create a load balancer there and map that load balancer to the Kubernetes service. All right, the first service type we're going to talk about is cluster IP. And cluster IP is the default type of service in Kubernetes.

And by that, I mean that this is the type that's going to get assigned if you leave the type empty, or you don't define a service type in your service manifest. Okay? A good example of this is, when we were talking about how services work, right? This manifest right here, it did not have a type defined, right? So by default, it became type cluster IP.

Now, the way that this cluster IP type is implemented is that it's going to allocate one available IP address to the particular service, from a service CIDR range that's already been defined in the cluster. And by default, that service range is going to come from 10.96.0.0/16. So this is why in this picture here, on the right side, we see the cluster IP defined as 10.96.0.4, and this is going to be a virtual IP address that gets created and attached to the service. This IP, again, is going to be accessible from anywhere within the cluster. However, it's not going to be externally accessible.

All right? This is really useful for things like backend databases, or any other backend servers, as well as anything that needs to be reached from an application that's within the cluster. Right? Now, Kubernetes clusters actually have two built-in cluster IP type services.

The first one is going to be called the Kubernetes service. And basically, this is an internal representation of the Kubernetes API, and it's meant for use by the applications running inside your cluster. By default, if you are using the default CIDR block range for your services, the IP address for that Kubernetes service is going to be 10.96.0.1.

The second service that's exposed by default is DNS. Okay? And that service always runs by default as part of any Kubernetes cluster, and is responsible for basically providing DNS names for each of your objects inside Kubernetes. And by default, the IP address is going to be 10.96.0.10 for that service.
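If you want to see these two built-in services on your own cluster, something like the following should work. The DNS service is named kube-dns in the kube-system namespace on most distributions, even when CoreDNS is what's actually running behind it:

```
kubectl get service kubernetes
kubectl get service kube-dns -n kube-system
```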

Okay, let's do a demo of the cluster IP service type, so you guys get an idea of how things work on a real system. So, the first thing I did here was alias k equals kubectl. That's just going to save me some typing; you guys can feel free to type that on your own systems to avoid having to type out kubectl every time. Okay? So the first thing I'm going to do is create the deployment for three nginx pods. Okay, it's a deployment with three replicas. And I could do that with just a single directive via the command line, right? So go ahead and hit enter here.

And you guys can go ahead and ignore the warning about kubectl run being deprecated. It's still going to create the deployment for you, so we can ignore that. Now, let's verify that the deployment got created properly. So we do kubectl get pods. And now we have our three replicas of nginx that we just created. You see they were just created 20 seconds ago. Now that we have the pods available, we'll go ahead and set up a service to target these three pods. Okay, we could do that with the expose directive: kubectl expose deployment nginx, and I set port 8080 for the ingress port and port 80 for the egress port. And you notice, since I didn't specify a type here, by default it's going to be a cluster IP. Okay? We can verify that by typing out k get service nginx.
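The exact flags aren't visible in the recording, but the commands in this part of the demo are roughly the following. On newer kubectl versions you'd use kubectl create deployment rather than kubectl run, which is the command that triggered the deprecation warning mentioned above:

```
alias k=kubectl

# Create a deployment with three nginx replicas.
kubectl create deployment nginx --image=nginx --replicas=3
kubectl get pods

# Expose the deployment behind a service: ingress port 8080,
# egress (target) port 80. No type is specified, so it defaults to ClusterIP.
kubectl expose deployment nginx --port=8080 --target-port=80
kubectl get service nginx
```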

Right, there it is, cluster IP. Here's the IP address. Okay, now that we have the cluster IP, right, and the service, let's show how this service targets these three pods, okay? The easiest way to do that is by typing kubectl get endpoints, because endpoints are what's going to point to the different pods. So, let me go ahead and do that. And you can see that all the pods are listed here, we have three endpoints: that one, that one, and that one. Okay? And that's for this service, nginx.

Now, let's see what happens when we delete some of the pods from this deployment. Okay? And one of the ways I can do that is using the scale command, right? If I do a scale down, and I say replicas now equals one, then effectively it will delete two of the three pods. Okay, let's just make sure that actually happened. Okay, get pods. And the reason I did this was to show that when you delete the pod, the service is going to dynamically remove those endpoints from the service. Okay? So now, let me go ahead and do kubectl get endpoints for nginx one more time.

Right? Now, you see that I'm down to just a single pod, a single endpoint. Okay. Let's go ahead and bring it back. Now we have the two pods back on, right, we already have two new pods basically, and there should be a total of three pods now. Get the endpoints again, and you should see one, two, three again. Now, you might be saying to yourself, "Whoa! These IP addresses, right? Like, how do I know that they match up with the pods?"

Well, first of all, you'll notice that the IP addresses have changed, right? So instead of 1.27, now it's 1.28. And here, it's changed too: it's 2.19 here now. Right? So that's one of the things we've talked about: if we had everything configured for pods to talk directly to pods, you'll notice that the IP addresses are going to change when you delete and add pods back, right? So this is how the services help.

Now, how do we verify that these IP addresses match up with the pods? Well, there's another option you can invoke on the get command, basically dash o wide. And that lets you not only see the list of pods and their status, but it also lists the IP addresses of the pods. Okay, so now we can verify that the 1.28 matches the 1.28 right here, the 2.19 matches the 2.19 right here on the endpoints, and then the 0.27 actually matches the 0.27 on the other endpoint.
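For reference, the scale-down, scale-up sequence from this part of the demo probably looks like this:

```
# Scale down to one replica, then back up to three,
# watching the service's endpoints track the live pods.
kubectl scale deployment nginx --replicas=1
kubectl get endpoints nginx

kubectl scale deployment nginx --replicas=3
kubectl get endpoints nginx

# -o wide adds each pod's IP (and node) to the listing,
# so you can match pod IPs against the endpoint list.
kubectl get pods -o wide
```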

All right. Now, let's show that this service can actually be accessed by some other pod in the system, right? So that we know it's reachable. So what can we do to do that? First, we want to get the IP address again, because I don't remember it. I'll get the IP address, and I'm gonna save it in my buffer. Okay. There's the IP address. And now, I'm going to run a curl container, or a container that basically has a shell with the curl command inside of it, just for testing this functionality. Let me get the command to run a BusyBox container with a curl command.

Now I'm in the root of this container. Okay, and then inside... This is just another container, or pod, that's on the same cluster, right? So from here, I should be able to curl the nginx service on its ingress port 8080. Okay, so let's do that with the DNS name first, and verify that works. Perfect, it works. Now, I'm going to go ahead and grab that IP address again. Okay. And I'm going to curl http://, and I'm going to call it by IP address to verify the IP address works as well, on 8080, and perfect, we get the same result.
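The test pod and the two curl calls would look something like this. The image name is an assumption; any image that ships a shell and curl works:

```
# Run a throwaway interactive pod that has curl in it
# (image is illustrative, not necessarily the one used in the demo).
kubectl run curlpod --rm -it --image=curlimages/curl -- sh

# From inside the pod: hit the service on its ingress port,
# first by DNS name, then by its cluster IP.
curl http://nginx:8080
curl http://10.96.0.4:8080   # substitute your service's actual cluster IP
```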

Alright, so now we've verified that from another pod, you can reach this particular service, right? And then that service will pass the traffic to one of the endpoints, right? I'm going to exit. And then, the final thing I want to do as part of this demo is show the built-in services that I talked about, right? There are two built-in services. One is the Kubernetes service, and the other one is the DNS service, right? To see those built-in services on the system, you can basically just type the command kubectl get service kubernetes. And you should see the Kubernetes built-in service. Again, it's a cluster IP service that's internal only to the cluster, and a representation of the API server for the different applications inside the cluster, if you need to query the API server.

And then secondly, we also have this built-in service for DNS. And it's always going to be on 10.96.0.10 for the DNS one. And we can verify that here. And see, okay, just remember, it's always going to be a dot 10. Obviously, you can customize your CIDR block here for your service cluster IP addresses. But it's always gonna be dot 10 for the DNS, and then dot one for the Kubernetes API service. So, whenever an application or pod that's already in the cluster is trying to access a service IP, this is handled by either iptables or IPVS under the hood, okay?

So when a service is defined, right, there's another object defined called Endpoints; we saw that. And this object keeps a list of all the different endpoints from those pods that are targeted by the service. And, when we did the kubectl get endpoints, we saw a list of all the different accessible endpoints, like live endpoints, that are available for the service. And the list of these healthy endpoints is maintained by the endpoint controller. It's one of the controllers under the controller manager in Kubernetes clusters, okay.

Now, once we have a list of potential endpoints for any given service, we can implement the rest of the service connectivity. Okay, so this is how it works. When one pod tries to access a service by its cluster IP, the initial handshake is intercepted by either iptables or IPVS, and the destination address is changed to one of the pod IP addresses identified in the potential endpoints for your service. The handshake then proceeds as normal, with the pod communicating with one of the healthy endpoints defined by the service for the duration of that TCP flow.
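In iptables mode, the rules kube-proxy writes have roughly this shape. The chain names, addresses, and probabilities below are illustrative, sketched for a hypothetical three-endpoint service:

```
# Traffic to the service's cluster IP jumps to a per-service chain.
-A KUBE-SERVICES -d 10.96.0.4/32 -p tcp --dport 8080 -j KUBE-SVC-NGINX

# The per-service chain picks one endpoint chain at random.
-A KUBE-SVC-NGINX -m statistic --mode random --probability 0.333 -j KUBE-SEP-POD1
-A KUBE-SVC-NGINX -m statistic --mode random --probability 0.500 -j KUBE-SEP-POD2
-A KUBE-SVC-NGINX -j KUBE-SEP-POD3

# Each endpoint chain DNATs the packet to a real pod IP and target port.
-A KUBE-SEP-POD1 -p tcp -j DNAT --to-destination 192.168.1.28:80
```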

Okay? So even though we've talked about how services front, or load balance, pods, the actual connection is still going to be made directly from pod to pod, right? Because it's just going to the service and getting the information about which pod it should actually connect to, right? So, when a new connection from the same pod is sent to the same service, it may be that a new endpoint from the list is chosen, right? There's no guarantee that the traffic destined for a service will terminate on the same endpoint over time. Okay, so that's something you've got to keep in mind when you're designing applications, or when you're connecting applications.

So, since the implementation of cluster IP is typically handled by manipulating the destination address of the initial packet, normally, service IP addresses are not pingable, right? This is intentional, as a service isn't a process running somewhere. It's a logical construct that lets us send traffic to a well-known IP or hostname for the service, and assume that traffic will terminate on one of the endpoints that implement that service, or are targeted by the service. Okay, you want to think of it that way.

All right. So, this brings us to our next type of service, the node port. If we create a service of type node port, we can see that the service will also allocate a cluster IP. The difference is that, additionally, all nodes will also bind the configured node port. This is usually done via kube-proxy manipulating iptables on the nodes. By default, the range of ports that can be used for node port services is from 30000 to 32767. Okay? And that's configurable. This means that, even if the service only exposes a pod on a single node, you can access that pod externally by addressing any node IP, colon, node port.
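A node port version of the earlier manifest might look like the sketch below. The explicit nodePort field is optional; if you omit it, one is picked for you from the range above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: blog
    tier: backend
  ports:
    - protocol: TCP
      port: 8080        # ingress port on the cluster IP
      targetPort: 80    # listening port on the pods
      nodePort: 30000   # port bound on every node (illustrative)
```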

The service will be exposed externally on all nodes at the defined node port. Let's take a look at how this works. Okay, let's walk through the traffic flow of the node port option. We start with our actor, and let's assume he wants to connect to an application in the cluster. He can use either of the two node ports, right? He accesses either node A or node B. And to do that, he puts in the node IP address of node A or node B, and appends 30000. Okay?

So, let's assume that this actor wanted to go through node B. He would put the IP address of node B, and then 30000. And when he gets to node B, he reaches kube-proxy, or iptables, on node B. And then the cluster IP is going to round-robin a choice between pod B and pod C, which are replicas. This is where the decision point happens. Now, if pod C is chosen, then he's already on the node; he goes directly to pod C's IP address, okay? If pod B is chosen, then he has to jump over to node A, and then, therefore, get to pod B.

Either way, this infrastructure is set up in place for you, so that it doesn't matter which one is selected. Okay. So the big point to get across here is that it doesn't matter if the actor goes to node A or node B; it's still up to the cluster IP round robin to determine whether he ends up on pod B or pod C. So now let's do a demo for service type node port and see how things work on a real system.

So the first thing I want to do is change the current service type from cluster IP to node port. We are continuing from our cluster IP demo, so the deployment is still in the system, as well as the service that's currently targeting the pods from the deployment. Okay, but that service is currently a cluster IP. So the first thing I want to do is modify that service. I could do a k edit service, and the name of the service is nginx, type that in. And then here, you see that it's currently type cluster IP. And we want to change that to node port, so it's as simple as changing that variable to node port, and then saving the file. And this should take effect in real time.
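For reference, the same change could also be made non-interactively with a patch; both of these are standard kubectl:

```
kubectl edit service nginx   # change spec.type from ClusterIP to NodePort

# or, without opening an editor:
kubectl patch service nginx -p '{"spec":{"type":"NodePort"}}'
```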

Okay, so the next thing I'm going to do is look at the service and validate that it's been switched to a node port. I can do that by doing a kubectl get service nginx. And now it's a node port. And I still have my cluster IP, but additionally, now I have this node port that's been exposed on all of our nodes. Okay, hence the term node port. Now, the next thing I want to do is confirm I can connect externally to this node port. And because I'm running this cluster on Google Kubernetes Engine, I actually have to open up a firewall for that, okay. And to do that, I basically type in the following command, gcloud compute firewall-rules create, and I can open this port on all the nodes, which is our node port right here.

30607, go ahead and type it in, and make sure I don't reuse the previous rule, so I can use it here. Okay, so it's going to create that firewall rule for me. Okay, so once that firewall rule is created, the second thing I need to do is get all the IP addresses from those nodes. Okay, I can do that by typing kubectl get nodes -o wide. Now we have all the different nodes in the system. Okay, so here are the external IP addresses.
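The gcloud invocation isn't fully audible in the recording, but it's something along these lines; the rule name is arbitrary:

```
# Open the node port (30607 in this demo) on the nodes' external IPs.
gcloud compute firewall-rules create nginx-nodeport --allow tcp:30607

# List the nodes along with their external IP addresses.
kubectl get nodes -o wide
```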

Now, before I go verify that these external endpoints are connectable and everything is set up correctly for node ports, I want to actually delete one of my pods. And the reason I want to delete one of the pods is to show you guys that even if a node does not host any of the pods that the service is trying to connect to, or that the node ports are connected to, it'll still take care of routing traffic to a different node that has the pod, as we saw in that traffic flow, right.

So, let's go ahead and decrease the number of pods in our deployment from three to two, with our handy scale command, right? If I do k get pods -o wide, we'll actually see two pods instead of three. And it looks like the pods that are running are only on the nodes that end with 8196 and 3d40; so 3d40 has one, and 8196 still has a pod running on it. Now, you want to pay attention to this node, 6ljm, right, this IP address right here. Because even though this node won't actually have a pod, you'll still be able to use that node IP, combined with the node port, to access the application. Okay, we're going to do that now.

So I'm going to go through each one. So let's start with this one, the dot 185, right? I copy that, and I'm going to move over to my browser. Okay, I'm going to put that address in, and then I'm going to put the node port, which I remember is 30607, which I opened on my firewall, and boom, we hit the nginx application. Cool. Now, let's try the IP address from the node that actually doesn't have it, right? Does not have the pod. Right.

So we'll verify that. "Hey, look." I change the IP address here to that middle node. And right, just to make sure I'm not caching anything, I can refresh the page, right? I'm still seeing it; we're able to get to it, right? Now, I can go back and use the final node's IP address, copy that into my clipboard, and go back to my browser. And look, all three addresses work. It doesn't matter which node I hit, right? Once I hit the node, it's going to go to the service, and the service will round robin between these two pods right here. Okay. Cool.

So, we verified the node port, and we see that that works. So as you can see, the node port method is sort of cumbersome, right? You're not actually going to be giving these node ports to end users and expect them to connect to the applications on the cluster this way. This is not the normal way; this could be used for dev, testing, and validation, right. But the proper way to actually provide access to customers is going to be sort of like the swivel chair approach, right? You actually end up having to create an external IP on a hardware load balancer or software load balancer, like MetalLB, or F5, etc., right, and then adding all these node ports, combined with the node IP addresses, as endpoints. Okay, so in this example, you would log into your F5 load balancer, right, whether it's a hardware appliance or software load balancer, that's fine. And then you create an external IP address for this nginx application.

And inside that external IP, you can pop in different endpoints, right. And for as many nodes as you have, you'll have that many endpoints. So in this case, we have three nodes; you plug in all three endpoints along with the node port. And then you provide the external IP here to the customer. Okay, so that's how it would work. All right, let's talk about service type load balancer. This is the final type we're actually going to be talking about in this lesson.

So this one is actually a special service type, in that it's not implemented by Kubernetes itself. Okay. Typically in this model, the load balancer will be provisioned using an IaaS API, and then the traffic will be directed to all the nodes in the cluster at the node port. Okay. This means that service type load balancer just adds one more abstraction layer on top of service type node port, just like node port added one abstraction layer on top of cluster IP. This also means that, instead of manually having to create those external IP addresses for nginx, like we had to do with the node port example, and adding all the endpoints to that external IP in your software or hardware load balancer, we can actually leverage the Kubernetes service type load balancer to make this happen automatically.

Of course, this actually requires a little bit more configuration. If you're using one of the cloud providers, it's already baked in for you, like if you're using GKE for example, or EKS, or Azure Kubernetes Service; it's already there for you. Okay. But if you are setting up your own on-prem Kubernetes cluster, there may be additional configuration that you have to take care of, okay. It basically requires an integration with the external infrastructure API to provision the load balancer on that system, whether it's F5 or MetalLB.
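The manifest change itself is small. A load balancer version of the demo service might look like this sketch, reusing the illustrative selector from earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer   # also allocates a cluster IP and a node port
  selector:
    app: blog
    tier: backend
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
```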

Since we're going to be working on a GKE cluster for the demo, you're actually going to be able to see this in action. All right, let's take a look at the traffic flow for a load balancer. Now, in this case, our actor no longer goes directly to the node ports, right, even though he can in this situation, because again, if you do specify type load balancer, you're actually still getting a cluster IP and still getting a node port; it's just that you're getting this additional load balancer step on top. But our actor is actually going to go directly to the load balancer, right? Which is going to be on top: either your F5 load balancer, or it's going to be an Elastic Load Balancer from Amazon, or a GKE load balancer, right.

So there's basically this external IP address, and this external IP address will be able to connect to all the node ports on your cluster. Okay, in this case, we have two node ports: node A IP colon 30000, or node B IP colon 30000. And the rest of it will work just like how node ports work; you're just getting this additional step on top, where instead of the user, or your end user, having to specify a node port and a node IP address, right, which he might not even have access to, and which might be cumbersome, he gets a direct, external IP address. Now, let's demo the load balancer service type.

So, let's get started. I want to look at the Google Cloud Platform UI. And basically, here is where I have my service nginx, and it's of type node port. Okay, so I just want to show you the UI before we get started. If I click on that, I can see that basically these are the two pods that are taking traffic, based on where we left off. And then here's our cluster IP, which is only accessible within the cluster. And then finally, we set up a node port for it so that we're able to access it externally, as long as we can access the different nodes, right; we tried three nodes and they all worked.

We have the node port, 30607. All right. So now, we're going to take it one step further, right? By going back to our terminal, right. And then, what I want to do first is basically switch our node port type service to a load balancer type service. Okay, and we do that by k edit service nginx. It should pull up. And again, right now the type is node port, based on where we left off. So, I'm going to go ahead and quickly switch that over to load balancer, and then save the file, and see what happens.

Again, it's instantly reflected and applied to the server as soon as I save that file. So the first thing I want to do is, at the command line, I want to look at the service; I can do k get service and give it the name. Next, I want to make sure that it got switched to load balancer. Great, now we have type load balancer. And we have a cluster IP still, and our node ports, like we said before. But now, we have this additional thing, for external IP. And currently, it says pending.

So, on the backend, what it's doing is it's going to Google Kubernetes Engine and setting up a load balancer on the backend, which is why it may say pending; it may take a minute or so. So let's go ahead and try that again. Okay, cool. Now we have an actual external IP available. Now, because this external IP is exposed externally, we can actually now just use this IP with the 8080 port, as opposed to having to specify the node port each time. Okay. The difference here is that, when we connect to this IP address, this external IP that's been created on the GKE load balancer, and we specify 8080, effectively, what we're doing is we're actually going to the load balancer, and the load balancer will randomly select... not randomly select, I'm sorry.

Round robin, select one of the endpoints. And the endpoints are actually, at this point, the actual nodes, okay, the different IPs of the nodes. If you do k get nodes -o wide, you actually see the different nodes, right? So these three are backing this external IP. Okay, so let's test that out real quick. So the first thing I want to do is go back to my browser. Well first, I want to copy this external IP address. Okay. And then I want to go back to my browser. Yes, and let's pull up a new page, add that in, and then colon 8080. And cool.

Now, we can access our application just like we did via node port, except now we have a friendly, static, externally accessible IP address. And we have a friendly port, like 8080, as opposed to something random like 30607. Right. Now, let's go back to our Google Cloud Platform Kubernetes Engine UI, and see what it did on this side, on, you know, the GKE side of things. Okay? So now under Services & Ingress, we actually have the type as external load balancer, so it recognized it. And if I dig into it, it says, "Here's my load balancer. Here's the endpoint," right? That's the external IP, and it maps to our different nodes in the cluster. Okay.

So now, what I could do further to look at this is actually go over to our networking section, go to network services, and go to load balancing in our GCP platform. Cool, now we have a brand new load balancer that was created, right, and the target pool is three nodes; see how these match up. And this is the endpoint of the load balancer, the external IP, and what's backing it are these three nodes that are part of the cluster. Okay. So normally, what you would do is, you would have to go into a software load balancer, create a new load balancer, and it gives you your IP, but then you're plugging in all of these nodes yourself manually, after you created the node port, okay. With something like GKE, or Amazon Web Services with EKS, or Azure Kubernetes Service...

This happens automatically when you specify a load balancer via the Kubernetes API. Okay, so now we've created this, and this is what's backing it. And just to verify that these are the actual nodes, we can just remember these last characters: 8196, 3d40, 6ljm. Right? And if I go back, we can verify that these are the nodes backing it. Okay.

So that's pretty much load balancers. I mean, it's just one extra layer on top of node port. Okay, so what happens when your application is not reachable? How do you diagnose these problems? What are the common troubleshooting things to look at? Okay, what are the symptoms? The first thing you should look at is the endpoints, right? That command I showed you guys back in the cluster IP demo: k get ep, and give it the name of the service. Okay, that's the first thing you should look at.

If your service is not working and denying any kind of connection, make sure it has healthy endpoints, right? Maybe the pod is going through a recycle, or the pod has run out of CPU or RAM, etc.; all these reasons could exist, so that's the first place to start. Okay? Now, once you've looked at the different endpoints, and maybe you don't see the endpoints showing up, the second thing to look at is the label selector. Maybe your labels are not correct, and maybe that's why the pods aren't being targeted, because you're searching for a label that isn't on the pods, okay?

So you've got to make sure those things are in sync. That's the main functionality, or the correlation, between services and pods. Okay. And then the third thing is, maybe your container image changed, right. And if your container image changes, and it's maybe listening on a different port than the one that's configured on the service, whether it's the target port or the main port, that could cause a connection problem, right. So if your target port is 3306, for a database for example, but then maybe you're using a Postgres database that's listening on some other port, or a MongoDB database, for example, right, you've just got to make sure that the target port is in sync with whatever port your pods are listening on, okay, the target pods.

And then the fourth thing is, potentially too many services defined in the cluster. This could cause some confusion as to which service is targeting which pod, and just having so many services could add unneeded complexity, and problems when you're diagnosing issues as well. And then another one you want to look for is the DNS for the pods being misconfigured. There may be times when services talk directly to pods via their DNS names instead of IP addresses, so you want to make sure that the DNS for the pod is configured properly on the system. Okay.
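A quick checklist of commands for the first few items might look like the following; the service name and labels are carried over from the earlier demo:

```
# 1. Does the service have healthy endpoints?
kubectl get endpoints nginx

# 2. Do the pod labels actually match the service's selector?
kubectl describe service nginx | grep -i selector
kubectl get pods --show-labels

# 3. Is targetPort in sync with the port the containers listen on?
kubectl describe service nginx | grep -i targetport
```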

And then, because kube-proxy is the main backbone of how all the services work, especially with node ports and cluster IP, right, you have to make sure that kube-proxy is running on the nodes. If it's failing on the nodes, it could potentially cause an outage, or an inability to connect to some of the pods that are running on those nodes. So just to wrap up, guys: in this video, we covered what services are, right? We covered why we need them, and then how they work; not necessarily in that order, but we covered all those things.

And furthermore, we also explained that there are different types of services, and the three most common ones, right: cluster IP, node port, and load balancer. So, this concludes our lesson on services, and please check KubeAcademy for other Kubernetes networking videos.
