Services are how we expose components of our applications in Kubernetes. In this lesson, we’ll review the common patterns used to expose applications.
Tim Carr is a Staff Kubernetes Architect at VMware who joined the company through the Heptio acquisition.
Hi, I’m Timmy Carr. I’m a Cloud Native Architect with VMware, and today we’re going to talk about services in our Kubernetes clusters. The Service object provides a logical way to target pods in the cluster, and internally it really does three things.
Number one, it provides an IP address and port so that we can make a network connection to a targeted pod. Number two, it provides a DNS record for that IP address in the cluster, for service discovery. So if we wanted a middleware tier to have application changes made to it, but still always keep the same DNS entry so that our other applications can target it, no problem. The service will do that.
Finally, it populates endpoints, and those endpoints are always pods that have an appropriate label attached to them. Just like in this diagram here, where the Deployment controls the ReplicaSets, which control pods, and all of that magic is done by labels and label selectors, the service does the exact same thing to populate its endpoint list. It’s using labels, and most importantly targeting via a label selector in our cluster, to understand which pods it should forward traffic to. And it’s doing that load balancing on our behalf.
On the right side of this diagram, you’re going to see that service targeting the pods with the appropriate label, app: kuard. The other thing you see is the port. Now typically speaking, when defining a Service manifest in Kubernetes, you only have to define a port, and if that port is the same one your application is listening on in the back end, great: you just define port, and the targetPort defaults to the same value.
But let’s say you wanted to expose an application that happens to be running on port 8080 to your end user on some different port. Well, you can specify 8080 as the targetPort, and the user-facing port as something else. I’m just doing that here to show that example.
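A sketch of the kind of manifest being described; the name, label, and port numbers here are assumed from the lesson, not taken from an actual file:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  selector:
    app: kuard          # forward traffic to pods carrying this label
  ports:
    - port: 9000        # the port clients use to reach the Service
      targetPort: 8080  # the port the application listens on in the pod
```

With no `type` specified, Kubernetes defaults this Service to ClusterIP, which is the case covered next.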
Let’s look at a basic manifest. Let me pop over to the terminal, and you’ll see that our basic manifest gives us a Service of type ClusterIP if no type is specified. In this case, none is specified, so let’s look to see what that looks like in our cluster. We’ll do k get svc first; that’s kubectl get svc, I’m just shortening it in my world. You’ll see that I created the service called kuard.
Now, I mentioned that we also get a DNS entry on the internal cluster network, right? Using the internal DNS resolution. In my bottom window, I actually have an Ubuntu pod running on this cluster, and I’ve installed dnsutils in Ubuntu here so I can dig kuard.default.svc.cluster.local, where default is the namespace, since this service is in the default namespace. And that gives us the exact same IP address that we were looking at in the CLUSTER-IP column up here for kuard.
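The name being resolved follows the standard in-cluster pattern, service.namespace.svc.cluster.local. A small sketch of how that FQDN is assembled, using the names from the lesson; note that cluster.local is only the default cluster domain and may differ per cluster:

```shell
# Build the cluster-internal DNS name for a Service.
# "kuard" and "default" are the service and namespace from the lesson.
SERVICE=kuard
NAMESPACE=default
CLUSTER_DOMAIN=cluster.local
echo "${SERVICE}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
# → kuard.default.svc.cluster.local
```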
Now, exiting out of that terminal below, I want to talk about how we can expose this to the outside world. My cluster network, which those ClusterIPs are on, is not routable from the external world, and that’s true of most cluster networks. To get to it as a developer, I could test using the kubectl port-forward command with svc/kuard, and once again, that service is exposed on port 9000, so I will forward port 9000 on my local machine using this command. And if I pop over to my browser and pull up localhost:9000, you can see that the kuard service has been exposed.
In fact, if I jump back to my terminal, you can see the port-forward command down here has handled that connection on my behalf. Now, that’s no way to actually expose an application to the outside world. We need something better, and we have other options with Services for doing that.
The next one up is probably NodePort, and really the only difference in the NodePort service is that I’ve actually specified a type here in my config: type: NodePort, and nothing else changes. So if we run k get svc once again, you’ll see that my NodePort is 31324. And all a NodePort is doing is saying that if you hit port 31324 on any node of my cluster, that node is going to take care of jumping you back onto the cluster network and getting into the appropriate pod.
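A sketch of what that NodePort manifest might look like, reusing the assumed names and ports from earlier; if nodePort is omitted, Kubernetes auto-assigns one from the 30000–32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  type: NodePort        # the only change from the ClusterIP manifest
  selector:
    app: kuard
  ports:
    - port: 9000        # port on the ClusterIP inside the cluster
      targetPort: 8080  # assumed container port, per the earlier example
      nodePort: 31324   # opened on every node; auto-assigned if omitted
```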
Let’s have a look real quick. So, from my browser, if I go to one of my worker nodes’ IP addresses (I know that 192.168.10.26, for example, is one of my worker nodes) at 192.168.10.26:31324, there you go. We’re in good shape there, but I said any, and it really is any: the service is also reachable on .25, and on .24. As you can see, port 31324 is reserved on every one of those nodes.
Yeah, so that’s also kind of problematic, because no one really wants to type those ports, but this is a good way to expose applications for folks that are running on-prem and don’t have a load balancing solution available to them. You can also look into an Ingress exposed by a NodePort; that’s a good option there. Check out the Ingress video that we have for you.
As for another way we might expose this in our environment: if we jump over to our terminal again, I happen to have the ability to leverage a load balancer in this cluster. I’m using a project called MetalLB, which simply provides a load balancer implementation where I can specify the pool of IP addresses it should hand out. And you can see that, for the first time, I have something in the EXTERNAL-IP column, which is pretty interesting.
In my cluster, if we cat service-lb, you’ll see this is done simply by specifying type: LoadBalancer; if you’re running MetalLB, that’s the way it works. Run k get svc again, and you’ll see that if we go to this external IP at port 9000, we should get our application. Let’s jump over to the browser and do that real quick. So if we go to 192.168.10.220, port 9000, that actually gives us our application. And boom, we’re running.
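A sketch of that service-lb manifest as described; only the type line differs from the ClusterIP version, and the external IP (192.168.10.220 here) is assigned by MetalLB from its configured pool rather than declared in the manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  type: LoadBalancer    # MetalLB (or a cloud provider) assigns the external IP
  selector:
    app: kuard
  ports:
    - port: 9000        # exposed on the external IP as well
      targetPort: 8080  # assumed container port, per the earlier example
```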
Okay, so we’ve covered three different types of services: ClusterIP, NodePort, and LoadBalancer. All three of these are very commonly used. There are other types as well; I encourage you to jump into the Kubernetes documentation, or maybe catch another video on KubeAcademy, to have a look at those. I’d like to thank you for your time today, and we’ll see you next time.
Have questions about the material in this lesson?
We’ve got answers!
Post your questions in the Kubernetes community Slack. Questions about this lesson are best suited for the #kubernetes-users channel.
Not yet a part of the Kubernetes Slack community? Join the discussion here.
Have feedback about this course or lesson? We want to hear it!
Send your thoughts to KubeAcademy@VMware.com.