KubeAcademy by VMware
Kubernetes as an Evolution of Infrastructure Modernization

In concluding the course, we look at how Kubernetes is the natural evolution of infrastructure, and how the roles of virtual infrastructure admins once again converge, this time into the role of a Kubernetes system administrator.

Boskey Savla

Senior Technical Marketing Architect at VMware

Boskey Savla is a Technical Marketing Manager at VMware focusing on cloud native applications. She has 15 years of experience in systems and operations.

Now let's take a look at Kubernetes and how it's going to help us further. Why is Kubernetes the solution to a lot of these problems? There are three key things that Kubernetes does very well. One, it helps you maintain desired state: let's say you have a containerized application and you want it to run in a specific manner. You can define what that end state is, and Kubernetes will automate your infrastructure to make sure that end state is maintained. Two, it distributes containerized workloads effectively across your infrastructure. And three, it decouples the application's requirements from the infrastructure's requirements.
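
As a minimal sketch of what "desired state" looks like in practice (the manifest below is illustrative; the name and image are placeholders, not from the lesson), a Deployment declares how many replicas should exist, and Kubernetes continuously works to keep that many running:

```yaml
# Hypothetical example: a Deployment declaring desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # placeholder name
spec:
  replicas: 3               # desired state: keep three pods running at all times
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: registry.example.com/demo-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

If a pod dies, the control loop notices the count has dropped below three and starts a replacement automatically.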

I think the first two are pretty self-explanatory, but where Kubernetes really shines is the third aspect: decoupling the application from the infrastructure. I'll get into that in a bit. Now, if you look at any application that gets deployed, there are certain key elements that need to be taken care of when it is provisioned. One, you need a form factor in which your application binaries can be shipped. Two, you need environment configurations. For example, if your application needs to talk to a service or a database at runtime, you need to know what those environment configurations are, and you need the ability to pass them to the appropriate applications. Three, you need to be able to define resources like networking, routing, ingress, and storage capabilities for that application.
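
To make the environment-configuration piece concrete, here is a hedged sketch of one common way this is done in Kubernetes: a ConfigMap whose keys are injected into a container as environment variables. All names and values are hypothetical placeholders.

```yaml
# Hypothetical example: passing environment configuration via a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-app-config                      # placeholder name
data:
  DATABASE_HOST: db.internal.example.com     # placeholder value
  DATABASE_PORT: "5432"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app-pod
spec:
  containers:
  - name: demo-app
    image: registry.example.com/demo-app:1.0
    envFrom:
    - configMapRef:
        name: demo-app-config                # every key becomes an env variable
```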

You also need replication and state management capabilities, and you need things like disaster recovery policies in place so that you can recover the application in case something goes wrong. You need load balancing and security properties so that you can automatically scale the application, load balance it effectively, and apply security policies for it. And then lastly, you need to operate the platform itself: maintain patches and things like that. These are common tasks that every application life cycle goes through and that any application needs in order to run effectively. Now, traditionally, what we have done is handle this using different infrastructure APIs. Let's say you're running your application on Amazon versus vSphere versus Google. You talk to that particular infrastructure's API to provision, along with configuration management tools, so that you can automate a lot of this process.

And sometimes this is not enough; you need additional tools in order to, let's say, upgrade your application. So, in effect, you need to work with different systems with different APIs to maintain your application. Now, what Kubernetes does is abstract away these requirements from your underlying infrastructure: it becomes the single front-facing API that you define your application requirements against, and Kubernetes then translates those requirements to the corresponding infrastructure. So instead of dealing with three different APIs, where one is the infrastructure API, one is maybe your configuration management system, and one is something else, you just talk to Kubernetes, and Kubernetes in turn translates those requirements into whatever the native APIs might be for that particular infrastructure.

So, for example, application binaries in Kubernetes are defined using Deployments and Pods. Pods are essentially a logical grouping of containers where your application code resides. If you want to define environment configurations for your application, you can do that using Secrets and ConfigMaps. If you need persistent volumes, you can do so by defining persistent volume claims in Kubernetes. If you want your application to have a certain number of pods replicated, you can define that, and Kubernetes will make sure that that many pods are constantly running; even if one of those pods dies, or one of the containers dies, it will spin that container back up, because it has to match the end state. If you want a load balancing service, again, that is a Service type within Kubernetes: you say, whatever this application is, I want it to have a load balancer, and Kubernetes will automatically figure out where the pods are deployed.
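
Here is a hedged sketch of two of those objects side by side: a PersistentVolumeClaim requesting storage, and a Service of type LoadBalancer exposing the application. The names and sizes are hypothetical placeholders.

```yaml
# Hypothetical example: persistent storage plus a load-balanced Service.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-app-data          # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # placeholder size
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app-lb
spec:
  type: LoadBalancer           # Kubernetes asks the infrastructure for an external LB
  selector:
    app: demo-app              # traffic is routed to pods carrying this label
  ports:
  - port: 80
    targetPort: 8080
```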

It will figure out, based on labels and other selectors, which traffic needs to be routed to which particular pod for that Service. It will let you define network policies so that you can secure your applications running in pods and containers, et cetera. And because Kubernetes abstracts the entire concept of the application away from the infrastructure, you can upgrade your application pretty easily: you can just deploy a new set of pods with the newer version, and when you're satisfied with how those new pods have been deployed, you can simply switch your Service to point to the new pods and upgrade your system without having to carve out a maintenance window.
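
A hedged sketch of that selector-based cutover (a blue/green-style switch; the labels and names are hypothetical): the Service below initially routes to pods labeled version: v1, and re-applying it with version: v2 shifts traffic to the new pods with no maintenance window.

```yaml
# Hypothetical example: cutting traffic over by changing a label selector.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
    version: v1      # change to "v2" and re-apply to route to the new pods
  ports:
  - port: 80
    targetPort: 8080
```

The old v1 pods keep running until you scale them down, which makes rollback a one-line change back to version: v1.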

So, Kubernetes is going to help us define a lot of these application capabilities, and once you define these different application requirements through these different Kubernetes objects, Kubernetes is then going to help you maintain and manage those states effectively by working with the different infrastructure APIs. So for example, when an application is deployed, you create a Deployment in Kubernetes and say, here is my application's container image, run it and make sure there are five replicas. Kubernetes is going to talk to the corresponding infrastructure API and make sure it pulls and runs those five pods. It will make sure that the pod networking is in place so that the pods can talk to each other. If you defined a Service element, it will go ahead and create a load balancing service within your infrastructure and bind those pods to that particular Service. All of this is completely automated in Kubernetes.

And if one of those pods dies, it will spin up a new one, making sure that your service is always available. So I think this is what Kubernetes brings to the table. And, again, if we go back to our four pillars in terms of how it helps our deployment cycles: the interaction between siloed teams is considerably reduced now, because even when we defined DevOps practices, we still had to define them based on the infrastructure we were running on. What are my configuration elements? What do I need to pass to that particular application? How do I upgrade that application? So there was interaction needed between the teams supporting the platform and the teams developing those applications. But with the advent of Kubernetes as a way of defining those requirements, application teams can just define them and provide them to the Kubernetes API, and Kubernetes automates that entire process.

Again, like I mentioned on my earlier slide, the maintenance window is heavily reduced because of the Service element within Kubernetes, which uses label selectors to route traffic rather than IP addresses hard-coded into VMs or containers. Label selectors help route traffic to the appropriate version; if you need to upgrade to a new version, all you have to do is deploy it and point your load balancing Service to the new pods. It also helps you reduce the time to fix bugs, because, again, this whole process is pretty dynamic. And it will let you automatically scale your applications: you can define automatic pod autoscalers where you say, okay, here's how much traffic I want each pod to handle, and if it exceeds such and such value, scale my pods, and it will do that automatically.
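
A hedged sketch of that autoscaling idea using a HorizontalPodAutoscaler: the target name and the 70% CPU threshold are hypothetical placeholders standing in for "such and such value".

```yaml
# Hypothetical example: scaling pods out when load exceeds a threshold.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app               # placeholder: the Deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # placeholder threshold: scale out above 70% CPU
```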

So overall, with Kubernetes coming into play, with the help of virtualization and containers, the entire time between an application being developed and being deployed can be reduced from months to a matter of weeks. This is why it is the ultimate optimization platform and why it is becoming so popular. And to summarize how our roles have evolved through the various stages of this entire evolution: people doing Linux system admin jobs moved into virtualization, then moved into DevOps practices, and now there's room to become a Kubernetes platform admin, so that you can help manage and maintain the platforms Kubernetes needs in order to automate things effectively.
