KubeAcademy by VMware
Deploying Your Application

In this lesson, you’ll learn how to create Kubernetes Deployment manifests that can be used to deploy your app to various environments.

Rich Lander

Independent consultant and Platform Field Engineer

I am an independent consultant and Platform Field Engineer. I was an early adopter of Docker and Kubernetes and have spent the past few years helping enterprises adopt cloud native technology.


Hi everyone. This is the fourth lesson in the KubeAcademy series, Building Apps for Kubernetes. In this lesson, we'll be covering the subject of deploying your application. My name is Rich Lander, and I'm a senior Kubernetes architect at VMware. In the last lesson, we covered how to spin up a local Kubernetes cluster and create a pod on that cluster. In this lesson, we're going to build on that by using Deployment and Service manifests to run those resources in Kubernetes. We're also going to look at how you can customize those manifests for deployment to different environments.

Before we get started, let's touch on what Deployment and Service resources are in Kubernetes. To get to that, we need to go back to what a pod is. In lesson three, we deployed a pod. As you know, a pod is a collection of containers with a single IP on the pod network. There are several other Kubernetes resources that build upon the pod resource and add features, and one of those is a ReplicaSet. It is a convenient way to deploy multiple identical pods at once. However, ReplicaSets are rarely used directly. It is more common to use a Deployment resource, which manages ReplicaSets and layers rolling-update functionality on top. The Service simply provides a stable network address for your application, including a virtual IP and a local DNS entry within the cluster. The pods for an application may come and go due to errors, infrastructure failures, or upgrades, but the network address provided by the Service remains the same. The Service load balances traffic across all the pods, and it will automatically update the list of pods it routes traffic to as you add or remove pods from the Deployment.

So let's get this up and running. Here in the top left pane, I have a watch on the pods in the default namespace of my kind cluster, and in the bottom left pane, I have a watch on the services in the default namespace. Currently, there's one pod, the crash cart pod, which I'm going to use to illustrate some concepts in a minute. And down here in the services, we just have the Kubernetes service, which is for the Kubernetes API. Let's first look at our Deployment manifest. This looks a lot like the pod manifest that you've already seen, but it adds a couple of things. One is the replicas field, which instructs the Deployment to create two replicas of the pod. Then there's a selector, which tells this Deployment to manage any pod with the label app: kubeacademy. And down here under the template, we add that label, app: kubeacademy, to every pod that gets created. So let's create this Deployment.
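Here's a minimal sketch of what that Deployment manifest might look like, reconstructed from the walkthrough; the resource name, label value, and container image are assumptions for illustration.

```yaml
# deployment.yaml -- a sketch of the manifest described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeacademy
spec:
  replicas: 2                # run two identical pods
  selector:
    matchLabels:
      app: kubeacademy       # manage any pod carrying this label
  template:
    metadata:
      labels:
        app: kubeacademy     # added to every pod this Deployment creates
    spec:
      containers:
      - name: kubeacademy
        image: kubeacademy/building-apps:latest   # hypothetical image
        ports:
        - containerPort: 8000   # the app listens on 8000, as shown later
```

Creating it with kubectl apply -f deployment.yaml is what produces the two pods in the watch pane.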

And there we go. We've created that Deployment, and we can see two pods being created for it. Perfect. So now let's look at our Service. Under the spec for this Service, we can see that there's also a selector here, and for the Service, this means to route traffic to any pod in the namespace that has the label app: kubeacademy. And we have one port definition here. It tells this Service to expose TCP port 80 and then route traffic to port 8000 on the pod. So let's apply that Service.
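A sketch of that Service manifest, under the same assumptions about names:

```yaml
# service.yaml -- a sketch of the Service described above.
apiVersion: v1
kind: Service
metadata:
  name: kubeacademy
spec:
  selector:
    app: kubeacademy   # route traffic to any pod in the namespace with this label
  ports:
  - protocol: TCP
    port: 80           # the port the Service exposes
    targetPort: 8000   # the port the pod is actually listening on
```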

And, there we go, here's our service. Now there are a couple of things I want to illustrate here. We're going to use this crash cart pod to send some curl requests, and we'll do that using the exec command, specifying our crash cart pod, and we're going to curl the service IP. And that gets us back a response, "Building Apps for K8s app says hi." That's the expected response from this application. We can do that as many times as we like, and we're going to get the same response. Now let's do it against one of the pod IPs. So we'll do a get pods and give it wide output, which shows us the pod IPs over here. So we'll do the curl again, but this time we'll target a pod IP. However, this won't work; we get a connection refused. And that's because the pod is listening on port 8000, not port 80.
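The commands look roughly like this; the pod name and the IPs are placeholders standing in for what the watch panes show:

```sh
# Curl the service IP from inside the crash cart pod.
kubectl exec crash-cart -- curl -s 10.96.47.88
# -> "Building Apps for K8s app says hi."

# List the pods with wide output to see their IPs.
kubectl get pods -o wide

# Curling a pod IP directly on port 80 fails -- the pod isn't listening there.
kubectl exec crash-cart -- curl -s 10.244.1.5
# -> curl: (7) Failed to connect ... Connection refused
```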

So, if we do the same curl request to port 8000, there we go, we get the expected response. That illustrates that the service accepts requests on port 80 but forwards them to port 8000 on the pod. Now there's one other thing I'd like to illustrate. If we go back to the pod IP and do a ping, this will work just fine; we get a response. However, if we do this against the service IP, it won't work, and that's because, as you remember, we defined the service to expose a TCP port, while ping uses ICMP, a different protocol. I just wanted to point that out because it's important to understand when you're working with pods and services.
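Continuing with the same placeholder names and IPs:

```sh
# Curl the pod IP on the port the container actually listens on.
kubectl exec crash-cart -- curl -s 10.244.1.5:8000
# -> "Building Apps for K8s app says hi."

# ICMP works against a pod IP...
kubectl exec crash-cart -- ping -c 3 10.244.1.5

# ...but not against the service IP: the Service only forwards TCP port 80.
kubectl exec crash-cart -- ping -c 3 10.96.47.88   # no replies
```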

All right. So now we're going to look at Kustomize, and we're going to overlay the base manifests that we just used for a different environment. First of all, though, let's clean up behind ourselves: we're going to delete our service, and we're going to delete our deployment. Before we jump into the Kustomize demo, I just want to bring you quickly to their website, kustomize.io, and show you how they describe the tool. It says that "Kustomize introduces a template-free way to customize application configuration." And personally, I love this template-free method. You can take a working, useful manifest and just overlay the changes to the fields you need in order to deploy to different environments.
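The cleanup is a couple of deletes; the resource names assume the manifests sketched above:

```sh
kubectl delete service kubeacademy
kubectl delete deployment kubeacademy
```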

So, now that those are cleaned up, let's look at the configuration files that we're using for Kustomize. We have a base directory and an overlays directory. The base directory contains a deployment YAML and a service YAML. These are identical to what we just used; they're just in a base directory for Kustomize. Then we have a kustomization YAML, which tells Kustomize what to do. If we have a look at that file, kustomization.yaml, all it does is tell Kustomize that the resources we're using are defined in the deployment YAML and service YAML right next to it in the base directory. In the overlays, we have one overlay defined, which is our production overlay, and there's a kustomization file again, plus a replica count overlay. So let's have a look at what they look like.
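Reconstructed from the walkthrough, the layout and the base kustomization look something like this; the file names are assumptions:

```yaml
# Directory layout:
#
#   base/
#     deployment.yaml
#     service.yaml
#     kustomization.yaml
#   overlays/
#     production/
#       kustomization.yaml
#       replica_count.yaml
#
# base/kustomization.yaml -- simply lists the base manifests as resources.
resources:
- deployment.yaml
- service.yaml
```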

So this is our kustomization file for the overlay. When we apply the production overlay, this tells Kustomize to prepend prod- to the name of every resource. This tells Kustomize to add an additional label, tier: prod, to every resource. This just points to the base manifests that it builds on. And this defines a single patch; you can apply as many patches as you like, but we're just going to do one, and it's called replica count. What that replica count patch looks like is this: it says, for the Deployment with this name, overlay this into the spec, setting replicas to five. So the way Kustomize is used is with kustomize build; let's run it on the base. What Kustomize will do is generate the manifests, which include the service and the deployment manifests. But this looks just like what we already deployed.
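A sketch of those two overlay files, using the classic Kustomize field names from around the time of this lesson (newer versions prefer resources: and patches:); the Deployment name is assumed to match the base:

```yaml
# overlays/production/kustomization.yaml
namePrefix: prod-            # prepend "prod-" to every resource name
commonLabels:
  tier: prod                 # add this label to every resource
bases:
- ../../base                 # the base manifests to build on
patchesStrategicMerge:
- replica_count.yaml         # the single patch described above
---
# overlays/production/replica_count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeacademy          # must match the Deployment name in the base
spec:
  replicas: 5                # override the base's two replicas
```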

So how about if we build the production overlay? Now we get a service with prod- prepended to the name, and the same down here. We also get the additional tier: prod label, and we get the replicas: 5 overlaid. So this is where Kustomize is really, really useful. And now we can take this kustomize build output and pipe it into kubectl apply, as sketched below.
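The build and apply commands look roughly like this, assuming the directory layout above:

```sh
# Render the base manifests to stdout.
kustomize build base

# Render the production overlay: names gain the prod- prefix, the tier: prod
# label appears on every resource, and the Deployment gets five replicas.
kustomize build overlays/production

# Pipe the rendered manifests straight into kubectl.
kustomize build overlays/production | kubectl apply -f -
```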

And instead of passing it a file name, we just end the command with a dash, and kubectl will read standard input and use that for the resource creation. So here we have our service prepended with prod-, and we have five replicas, all prepended with prod-, and this illustrates how useful Kustomize is in taking base resource manifests and then patching them with overlays for different environments. It's super handy.

All right, cool. So in this lesson, we've looked at how to deploy an application using a Kubernetes Deployment resource. This will normally be a good fit for any stateless workload. However, if you need to run a database or a similar stateful workload, you should look at the StatefulSet resource. So we're going to quickly jump over to the Kubernetes docs; this is kubernetes.io, in the Concepts section of the docs. In the left pane, if we look under Services, Load Balancing, and Networking, you can visit this page to learn more about the Service resource that we deployed.

Under Workloads, in the Controllers section, we have Deployments and StatefulSets. The StatefulSet is a good choice if your app requires stable, persistent storage, or if you need a stable network endpoint for something like a database master. So please visit these docs if that's interesting and read all about it there. If you'd like to learn more about Deployments and Services, I highly recommend the KubeAcademy course Kubernetes in Depth, which explores these and other Kubernetes subjects in greater detail. So, thanks for watching. I hope you found this video helpful, and we'll see you in lesson five.
