KubeAcademy by VMware
Managing Application Rollouts with Kubernetes Deployments

Deployments control pods, but they are also used to roll out new versions of your application's code. This lesson covers the process of rolling out new code in Kubernetes.

Tim Carr

Product Line Manager at VMware

Tim Carr is a Product Line Manager at VMware who joined the company through the Heptio acquisition.

Hi, I'm Timmy Carr. I'm a cloud-native architect with VMware and I'm here to talk with you about how to roll out new versions of your application, leveraging the native Kubernetes deployment mechanism. We've talked about how deployments control replica sets, which control pods. Now we're going to get into a little bit more about how those deployments can actually interact with those replica sets to help with that application rollout functionality.

And to do so we're going to leverage kuard, the Kubernetes Up and Running demo application. Love this app. It's part of the Kubernetes Up and Running book. And the reason why it's awesome is that it has a lot of capabilities to help you test Kubernetes clusters, meaning you can specify the memory that this thing will use. You can specify all kinds of interesting things via flags, and I highly recommend it.

When looking at this, you'll notice that we're using the image gcr.io/kuar-demo/kuard-amd64:blue.

And there are container images out there that are both blue and green. So in our work today, we're going to move between blue and green, so let's hop over and start playing with this kuard application. In our demo environment, you can see that I have a couple of different things set up here. I have a window in the lower left-hand corner that's just watching replica sets in my environment; we're primarily concerned with replica sets today. I'm going to show you what our deployment YAML looks like in the top screen here. This deployment YAML itself has a pod template that specifies the container being pulled from gcr.io, as discussed before. The other thing to consider here is the number of replicas of that container that we're going to run in our environment: it's 20. I specified 20 because I want to show you the rollout capability that the deployment has in Kubernetes. So let's just get this going. kubectl apply -f deployment.yaml will get that deployed into our environment.
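For reference, here's a minimal sketch of what that deployment manifest might look like. The exact object name, labels, and port are my assumptions; the image tag follows the kuar-demo convention mentioned above.

```yaml
# deployment.yaml -- illustrative sketch of the kuard deployment described above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard
spec:
  replicas: 20                    # lots of replicas, so the rollout is easy to watch
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
      - name: kuard
        image: gcr.io/kuar-demo/kuard-amd64:blue   # the "blue" version of the demo app
        ports:
        - containerPort: 8080
```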

Here we go. So you can see that in the lower left-hand corner, we have 20 replicas of this pod up and running in our Kubernetes cluster, and that pod itself is running the blue version of the container. So if we edit that deployment, and I've got this deployment spec open, you'll see that the container is pulling the blue tag. I'd like to show you just what this thing looks like before we go and make any fancy changes. So let's do a kubectl port-forward on the kuard deployment.

All this kubectl port-forward command is really doing is opening up a connection, leveraging the kubectl process, to the API server and tunneling traffic back to my localhost, forwarding traffic to localhost on port 8080. So to view this website, all we have to do is pop open a web browser and go to localhost:8080. In fact, if you jump back to the terminal here, you'll see that the kubectl command itself has taken care of handling my connection. So the only thing to understand here as being critical is that we're running version 0.10.0-blue. Let's change that to something else right now, and we'll do that with the edit command I showed you before: kubectl edit deployment kuard.
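A sketch of those two commands, assuming the deployment is named kuard as in the manifest above:

```sh
# Forward local port 8080 to port 8080 on a pod managed by the kuard deployment
kubectl port-forward deployment/kuard 8080:8080
# ...then browse to http://localhost:8080 to see which version is serving.

# Open the live deployment object in an editor to change the image tag
kubectl edit deployment kuard
```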

The important thing to understand here is the one thing that wasn't specified as part of my deployment specification: a strategy. The strategy here defaults to a rolling update. There are a couple of different ways to roll changes into your environment, but making a change to a pod spec within a deployment is going to trigger a new replica set creation by the deployment controller. And I'll show you that right now. If we change this image tag to green (not green1, that would break everything), and you have a look down here, you'll notice right now that we have a new replica set created, and you can see that the pods from one are draining as the pods from the other are spinning up. This is unique to that rolling update capability.
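In the spec, that defaulted strategy looks roughly like this; the percentages shown are the Kubernetes defaults, and I'm assuming the demo cluster hasn't overridden them.

```yaml
# Excerpt from the deployment spec -- RollingUpdate is the default strategy
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # how many pods may be down during the rollout
      maxSurge: 25%         # how many extra pods may be created above the desired count
```

As an aside, an alternative to kubectl edit for triggering the same rollout would be kubectl set image deployment/kuard kuard=gcr.io/kuar-demo/kuard-amd64:green, assuming the container is named kuard as in the sketch above.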

And this gives us the ability to seamlessly roll out a new version of our application that might be exposed behind something like a Kubernetes service. We'll talk about Kubernetes services in the future, but we can leverage this to make a rollout seamless. Let's say that I'm doing this in my environment and I realize I just rolled out a bit of code that I didn't want to roll out. Maybe that's bad, right? So one of the things that we might actually consider here is: how do we roll back? Well, the deployment gives us that mechanism as well. So we can do kubectl rollout undo deployment kuard.
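A sketch of that rollback, with an optional watch on the replica sets so you can see the transition the way the lower-left window does here:

```sh
# Roll the kuard deployment back to its previous revision
kubectl rollout undo deployment kuard

# Optionally watch the replica sets drain and fill as the rollback happens
kubectl get replicasets --watch
```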

And this is actually going to roll back. You can see, if you watch right down here, it's actually rolling back to the previous version of the replica set. This mechanism is smart: it will roll back and forth between the last two revisions, the last rollout that you did. So if I were to undo it again, you would actually be able to see that, hey, we're rolling back the other way. So that shows us how to roll back a change. Now I've done a lot of editing, so let's just check our browser to see what version is running here. So in the background I'm doing that port-forward command again, and we're just going to refresh, and you can see that we are right back to the blue version of this code.
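If you lose track of which revision you're on after undoing back and forth, the rollout subcommands can help; a quick sketch:

```sh
# List the revisions the deployment knows about
kubectl rollout history deployment kuard

# Wait for the current rollout (or rollback) to finish
kubectl rollout status deployment kuard
```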

Yup. Okay. Back to the terminal. The one thing that I do want to call out is that if we were to, say, edit this deployment in line, editing either the container image specification or even the number of replicas, while our normal operating procedure was to maintain all of this in a YAML file somewhere, that could be problematic, and you'll see that my replica set is actually updating. This is something that you need to consider: where should changes happen in your environment? And often our answer is to maintain this in some version control system somewhere, maybe GitHub or something along those lines. But either way, the idea is we want to avoid what's going to happen right now, which is a change I made in the live system with kubectl edit getting overwritten by a kubectl apply, and likely that apply has been run by Jenkins or some sort of automation.
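One way to spot that kind of drift before the automation applies the old file is kubectl diff; a sketch, assuming the same deployment.yaml as above:

```sh
# Show what would change if deployment.yaml were applied right now,
# e.g. an in-cluster kubectl edit being overwritten by the file's values
kubectl diff -f deployment.yaml
```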

And so it's important for us to understand that the kubectl apply -f of the deployment that's coming is ultimately just going to change our system in a way that we didn't consider. So please be mindful of where you're changing your system from, and also look towards leveraging these deployments to roll out new versions of your code. It protects you and it also helps with automating that process for you. Thank you very much for your time, and we'll see you in another episode.
