KubeAcademy by VMware
Security Considerations

Because Kubernetes molds the infrastructure it runs on to match the declared application state, it is important to have effective security policies that keep the API secure and put guardrails in place. This lesson covers some of the basic security aspects to consider when building a Kubernetes stack.

Boskey Savla

Senior Technical Marketing Architect at VMware

Boskey Savla is a Technical Marketing Manager at VMware focusing on cloud native applications. She has 15 years of experience in systems and operations.


Hi, and welcome back to lesson four. In this lesson, we are going to talk about security considerations while operating a Kubernetes stack. Now, the Cloud Native Computing Foundation does an annual survey of users who are utilizing technologies like containers, Kubernetes, and the ecosystem projects like Prometheus, Grafana, et cetera, that are part of the CNCF. And they asked these users a very specific question: if you are adopting containers for your workloads, or Kubernetes in general, what are some of the top challenges you face when moving to a containerized or Kubernetes-based deployment? And the top challenges are networking, security, and storage. We already covered storage and networking, so we are now going to take a look at security. In general, the notion most folks have about containers stems from the fact that containers are based on a shared operating system model.

There is no defined security perimeter separating one container from another container running on the same host, and hence the notion that they are less secure compared to virtual machines. So we're going to take a look at how we can secure our entire Kubernetes stack better. Now, when you think about securing your Kubernetes stack, there are multiple layers to it, and each layer plays an important role. So you want to take a holistic approach to security and look at all the different layers of the stack, all the way from the infrastructure that hosts your Kubernetes cluster, and hence your workloads, to the cluster, to the containers, to the code running within those containers.

What we are going to do next is take a look at each layer that makes up the Kubernetes stack, and look at methods of securing that particular layer. Let's start with infrastructure. Infrastructure is essentially the cloud or on-prem environment that is going to host your Kubernetes cluster, whether that's bare-metal servers or virtual machines, hosted somewhere or deployed on-prem. One of the things to consider for a Kubernetes cluster is access to the underlying nodes of that cluster. Because your development teams are mostly going to work with the Kubernetes API to create and deploy workloads, access to the underlying nodes should be restricted to your operations teams only. And that access should be tightly controlled, making sure common practices don't allow direct SSH access or the like to these nodes, so that there's no reason for anybody to log into them.

Now, at the API layer, the Kubernetes API is a very powerful asset, because through it you can directly manipulate your infrastructure stack in some sense. So access to the API should be restricted and limited to the development teams that need it. Kubernetes also has the etcd server, which is the data store, or database, where Kubernetes stores its cluster state. Let's say a development team defines a state that it expects a workload to be in; that configuration, or expected state, is stored in the etcd server. So controlling and managing access to your etcd server is pretty critical, and access should be limited to the control plane of the Kubernetes cluster itself.

It's also recommended to use TLS to access etcd. Another general practice is to encrypt the etcd data at rest: since etcd holds the state of the entire cluster, its data should be encrypted separately at rest. These are some of the common practices to think about when securing the infrastructure stack hosting the Kubernetes cluster. Now, when it comes to the cluster itself, there are different things to think about. One is definitely access to the Kubernetes cluster API. It is a general good practice to integrate your Kubernetes API access with your LDAP or other backend identity system, so that you can easily define the users that have access to that API. Each Kubernetes cluster also has its own RBAC (role-based access control) concept.

So, for example, depending on the different roles defined in Kubernetes, somebody could have super-user access to the cluster, or to all the resources within that cluster, and can read, write, and edit everything. So when giving end users API access, you should define what their role within the Kubernetes RBAC system is and who should have access to which role. The next thing to think about is network policies. By default, the pods within Kubernetes can talk to each other, and if you don't want that to happen, you can define network policies to disallow that east-west traffic. And then finally, TLS is pretty critical: you can implement mutual TLS so that the services within a cluster talk to each other securely. We'll get into the details of network policies and pod security policies later on, but at the Kubernetes cluster layer, these are some of the things you should be thinking about by default, and how you can implement them within the cluster.
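The RBAC concept described above is expressed through Role and RoleBinding objects. As a minimal sketch (the namespace, role name, and user are made-up examples), a read-only role for pods might look like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo            # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
- kind: User
  name: jane                 # hypothetical user from your identity backend
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Binding users to narrowly scoped roles like this, instead of cluster-admin, is how you limit who can read, write, and edit what.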

Now, when it comes to the pod itself, or the container running within the Kubernetes cluster, you've got to think about a couple of things. One is that the container runtime has this notion of running privileged containers. Let's say you have containers running within a pod, or within different pods, on a node of that Kubernetes cluster. A privileged container can access the underlying host's resources. For example, if the node is a Linux host, a container running with privileged access on that node can easily read the host's /etc/passwd file, or listen on the node's eth0 network interface. So within the Kubernetes cluster, you can define whether containers are allowed to run with privileged access or not. Another thing to think about when it comes to containers is the way container images are built, which is through layers.
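One way to express this at the pod level is through the securityContext fields in the pod spec. A minimal sketch, with a hypothetical pod and image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                      # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/demo-app:1.0   # hypothetical image
    securityContext:
      privileged: false               # no host-level access
      allowPrivilegeEscalation: false # block setuid-style escalation
      runAsNonRoot: true              # refuse to start as UID 0
      readOnlyRootFilesystem: true    # container cannot modify its own filesystem
```

Cluster-wide guardrails can then reject pods that try to set `privileged: true` in the first place.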

Typically, development teams tend to utilize a publicly available container base image, and then they layer their custom code on top of that base image using a Dockerfile or something like that. So the final container image may have your development team's code, but the base image may come from a publicly available container image, and it could have vulnerabilities, possible areas of attack, that you want to think about. So when a container is running within your cluster, or even before it starts running, it's a good idea to scan that image, sign that image, and make sure the base OSes the development teams are working with are something that you, as an organization, define or provide. So vulnerability scanning, and consistently giving development teams curated base OS images for their container images, is another way of securing your container images.

Another thing is updating and patching your Kubernetes cluster and the container OS. If you're using a base OS image for your containers, it's a good idea to keep patching your containers to the latest update available for that particular operating system. Now, when it comes to the application code, the workload running within the container, the default assumption should be: unless a workload is public-facing, you're not going to use a load-balancer-type service to expose it, and you're always going to use TLS to communicate with any service. Any pods that don't need to be reached directly by external traffic should only have a ClusterIP-based service type, so that they can talk to each other, again over TLS or mutual TLS, but cannot be accessed directly from outside.
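As a sketch of that last point (the service name, label, and ports are made up), a ClusterIP service exposes a workload only inside the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app-internal   # hypothetical service name
spec:
  type: ClusterIP           # reachable only from within the cluster
  selector:
    app: demo-app           # hypothetical pod label
  ports:
  - port: 443               # port the service exposes in-cluster
    targetPort: 8443        # port the pod actually listens on
```

Because there is no NodePort or LoadBalancer involved, external traffic has no direct path to these pods.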

Now, most languages provide static code analysis tooling, a way to analyze a snippet of code for potentially unsafe coding practices. This is another recommended way of figuring out whether there are existing bugs that would open an avenue of attack within the code itself. And then, of course, you should limit the range of ports available for that container to work with, disable any external traffic by default, and implement network policies to stop a pod from being accessed either externally or by another pod running within the same Kubernetes cluster.

Now, we talked about privileged mode, et cetera, so we're going to get into some of the details. There is a way to implement network policy within Kubernetes, and what a policy does is stop two pods from communicating with each other. By default, pods running within a cluster are non-isolated. If you have multiple pods deployed on a Kubernetes cluster, it's likely that a few of them will be co-located on a single host, or single node, within that cluster. So if you want to stop two pods, in two different namespaces or even within the same namespace, from talking to each other, you can create network policies. These policies are implemented by the network plugin you use, so this goes back to network plugin selection: if you want to implement network policy, you need a CNI that supports network policy enforcement. The way these policies work is that they figure out which pods can talk to each other depending on the labels of a given pod.

You don't define a policy by pod name; the policy selects pods by the app name or any other label associated with the pod, so only pods matching the selected labels are affected by a network policy. Network policies are additive and do not conflict; the order of evaluation does not affect policy results, and policy rules can be applied in the ingress and egress directions. So this is how network policies are implemented in the backend, and some of the things to think about. Now, here's an example of how a network policy can be constructed. It's again an API kind: you define a NetworkPolicy, give it a name, and define the specification for that policy. For example, this policy should be applied to all the pods that match the label demo-app, and it should apply to both ingress and egress rules.
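A sketch of the policy just described (the policy name, labels, ports, and CIDR are all hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: demo-app-policy          # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: demo-app              # applies to pods carrying this label
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/24        # example CIDR block
    - namespaceSelector:
        matchLabels:
          team: frontend         # hypothetical namespace label
    - podSelector:
        matchLabels:
          app: demo-client       # hypothetical peer pod label
    ports:
    - protocol: TCP
      port: 8443
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: demo-backend      # hypothetical backend pod label
```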

We are allowing traffic to a particular pod from ingress, whether it's from a CIDR block, a particular namespace, or a particular pod within that namespace. That is an example of how a network policy can be implemented. Another thing we talked about was container image scanning. Container image scanning can help identify the different vulnerabilities present in whatever stack the container is made of. Even if a development team downloads a base image from the internet and uses it to build the final container image, you can still figure out whether that container image has any vulnerabilities before it gets deployed into a pod in a Kubernetes cluster. There are different ways to implement this, but most likely you're going to use a repository to store your container images.

And what tends to happen is, there are projects where, once the container images have been stored in those repositories, or container registries, you can scan those images and see what vulnerabilities are present in a particular container image. You can even define policies through which you can say: if the container image has more than ten critical vulnerabilities, or a CVSS score above nine, or whatever threshold you define, don't let that container be deployed on any clusters. So you can implement different policies based on the number of vulnerabilities found within that container image.

Some of the best practices to think about when you're building the Dockerfile for a container image: avoid installing package managers like yum or apt-get; remove any networking tools that are present, like ssh or curl; remove any shells and use compiled languages instead; and so on. But most importantly, restrict the libraries deployed within that container to just what is needed. This way, you lower the overall attack surface for that particular container image. So these were some of the policies and things to think about when implementing security on your Kubernetes stack. Thank you for watching this video, and join me in the next lesson. Thank you.
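Those Dockerfile practices can be sketched with a multi-stage build for a compiled language (the Go package path and image tags are made up); the final image contains only the binary, with no package manager, shell, or networking tools:

```dockerfile
# Build stage: full toolchain, discarded from the final image
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# Static binary so it can run on an empty base image
RUN CGO_ENABLED=0 go build -o /app ./cmd/server   # hypothetical package path

# Final stage: empty base image, nothing but the binary
FROM scratch
COPY --from=build /app /app
USER 65534                  # run as a non-root UID
ENTRYPOINT ["/app"]
```

Since `scratch` contains no shell or package manager at all, an attacker who compromises the process has almost no tooling to work with inside the container.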
