KubeAcademy by VMware
Introduction to Operational Considerations

In this lesson, you'll gain an overview of the course and why it's important to look at building an integrated Kubernetes stack.

Boskey Savla

Senior Technical Marketing Architect at VMware

Boskey Savla is a Technical Marketing Manager at VMware focusing on cloud native applications. She has 15 years of experience in systems and operations.


Hi, and welcome to this lesson. Today, I'm going to talk about some of the key operational considerations to keep in mind when building a Kubernetes stack. When we think about why Kubernetes is so popular, one of its key strengths is that it helps development teams deploy containerized workloads across any infrastructure, whether hybrid, private, public, et cetera. More specifically, it lets development teams define application needs such as: how many copies of my workload do I want to run? What kind of load balancing do I need? How do I want to scale, and do I need automated scaling for my workloads? Kubernetes supports a ton of different functions that application developers can define, and it goes ahead and does the magic in the backend.

It doesn't really matter whether the backend infrastructure is a public cloud provider like AWS, Google Cloud, or Azure, or an on-prem environment like VMware vSphere. For example, let's say a developer wrote some code and said: okay, here's my application, I've already containerized it, and here's my container image. Can you please run this on your cluster, and while you're at it, make sure there are three copies of the application constantly running? Also, please create a load balancing service, and mount persistent storage so the application can process data. It's pretty simple for dev teams to define all of this with the kubectl CLI, by talking to the Kubernetes API directly, or through YAML files, whatever that team's comfort level is. But the important thing to keep in mind here is that the Kubernetes API then talks to your underlying infrastructure and makes all of this magic happen.
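
To make that concrete, here is a minimal sketch of what such a request could look like as Kubernetes manifests. The names, image, and sizes are hypothetical placeholders, not anything from this lesson:

```yaml
# Deployment: run three copies of the containerized app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 3                   # "three copies constantly running"
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data
---
# PersistentVolumeClaim: the persistent storage the app mounts to process data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi             # hypothetical size
```

Applying a file like this with kubectl apply -f app.yaml is all it takes from the developer's side; everything below that line is Kubernetes and its plugins.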

It doesn't really matter where that infrastructure is or what that infrastructure is. Once you express your needs through the common, standard abstractions Kubernetes defines, Kubernetes will implement them in the backend. Now, the way Kubernetes does this automation, this magic, in the backend is through plugins. How does Kubernetes know what to do? For example, if I tell Kubernetes to give me a load balanced service for my application, in an AWS environment that could mean an Elastic Load Balancer. In a Google Cloud environment, it could be a Google Cloud load balancer. For vSphere, it could be an NSX load balancer. So depending on the infrastructure, the same request can mean many different things. How does Kubernetes know what to provision, and how to talk to that provider to make all of this happen? To do this, Kubernetes has the concept of plugins.
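
To underline that point: the Service spec a developer writes is identical everywhere, and only the plugin behind it changes. A minimal sketch, with hypothetical names and ports:

```yaml
# The same spec yields an Elastic Load Balancer on AWS, a Google Cloud
# load balancer on GKE, or an NSX load balancer on vSphere; the active
# plugin for your environment decides what actually gets provisioned.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb         # hypothetical name
spec:
  type: LoadBalancer      # the provider-agnostic request
  selector:
    app: my-app           # routes traffic to the Deployment's pods
  ports:
    - port: 80
      targetPort: 8080    # hypothetical container port
```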

So it has three key plugins: a cloud provider plugin, a network provider plugin, and a storage provider plugin. The exact names may differ from what's written on the screen here, but this gives you an idea of what they're called. What tends to happen in the upstream Kubernetes project is that all these different vendors come together to integrate their infrastructure with Kubernetes through these plugins. For example, Google Cloud will create a network provider plugin for Kubernetes, so that when a pod gets created or a developer requests a load balancing service, Kubernetes understands how to work with Google Cloud to create the corresponding load balancing service. Similarly, if somebody requires a persistent disk when running a cluster on top of vSphere, Kubernetes understands how to talk to the VMware vSphere storage provider, which asks the corresponding vCenter to create the persistent disk.
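
In current Kubernetes versions, the storage integration is usually delivered as a CSI driver, and operators expose it to developers through a StorageClass. Here is a hedged sketch for a vSphere-backed cluster; the provisioner string assumes the vSphere CSI driver is installed, and the class name is hypothetical:

```yaml
# StorageClass that routes PersistentVolumeClaims to the vSphere CSI
# driver, so a claim like the earlier example becomes a vCenter-managed disk.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-fast                    # hypothetical name
provisioner: csi.vsphere.vmware.com     # vSphere CSI driver's provisioner ID
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```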

So with the help of these plugins, Kubernetes understands how to talk to a particular infrastructure and make what is being asked for happen. A lot of these vendors work within the Kubernetes project itself; if you look at the different SIGs (Special Interest Groups) that make up the overall Kubernetes project, you'll realize that many of these vendors come together to make all of this work. Now, while this is awesome and gives so much flexibility from a developer's point of view, from an operator's point of view it can also mean that if a developer has access to a Kubernetes API, they can effectively change the infrastructure Kubernetes is running on to whatever they like. As a developer, I could ask Kubernetes to give me a one-terabyte disk, or act on whatever crazy idea I have at that point in time.
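
One guardrail that speaks directly to that example, sketched here with a hypothetical namespace and limits rather than anything from this lesson, is a namespace-scoped ResourceQuota:

```yaml
# Developers in the team-a namespace can still create PVCs and
# LoadBalancer Services on their own; they just can't request
# unbounded amounts of infrastructure.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.storage: 100Gi        # total storage all PVCs may request
    persistentvolumeclaims: "5"    # max number of PVCs
    services.loadbalancers: "2"    # max LoadBalancer Services
```

With this in place, the one-terabyte request above would simply be rejected at admission time, while everyday requests keep working.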

Now, not every request like that is crazy, but when you think about how containers work and how Kubernetes operates on and leverages the underlying infrastructure, security can be a key concern for a lot of operators running, or at least managing, a Kubernetes cluster. From a developer experience perspective, developers love this. They love the flexibility to build different objects for their application to run, and they love that experience. The key to maintaining a great operational stack is to balance the two. As an operator, you don't want to restrict everything so completely that a developer cannot create a load balancing service or implement certain security features. At the same time, you don't want to open up the cluster in a way where anybody can come in, access a dashboard, or get into your network. The idea behind maintaining the right stack is to balance these two concerns.
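
One standard way to strike that balance, and to be clear this is an illustrative sketch with hypothetical names rather than this lesson's demo, is Kubernetes RBAC: developers get full self-service inside their own namespace and nothing cluster-wide:

```yaml
# Role: lets developers manage typical app objects in team-a only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-developer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "persistentvolumeclaims", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# RoleBinding: attaches the Role to a developer group from your
# identity provider (the group name is hypothetical).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-developer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-developer
  apiGroup: rbac.authorization.k8s.io
```

Binding the Role to a group keeps the day-to-day developer experience untouched while keeping cluster-level resources off limits.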

At the same time, if a developer asks for a load balancing service and doesn't get it from that particular stack, it's really annoying, and the entire idea of running on Kubernetes then feels backward. So, to summarize from an operational consideration perspective, you should think about: based on my infrastructure, what will I need to plug in to enable storage, networking, and the cloud provider I'm running on? And how am I going to secure my cluster as I start giving developers API access to Kubernetes? From a developer's perspective, they really need that entire stack to be integrated, in such a way that when they talk to the Kubernetes API, it responds the way they expect it to. So finding the right balance between control and flexibility is the key to operating a Kubernetes stack. Thank you, and we'll meet in the next lesson.
