Namespaces provide a logical boundary for applications running in Kubernetes, but that boundary comes with caveats. We'll explore them in this lesson.
Hi, I’m Timmy Carr and I’m a cloud native architect with VMware. In this episode, we’re going to talk about namespaces in Kubernetes.
A namespace gives us the ability to logically divide a physical Kubernetes cluster, and we can apply policy to the objects within a namespace with things like resource quotas, RBAC, and network policy. Kubernetes comes with several namespaces by default, and to find out what namespaces exist in your cluster today, you simply run kubectl get ns (ns is short for namespace). Out of the box, each cluster is going to come with a default namespace and some kube- prefixed namespaces.
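As a sketch, listing the namespaces on a freshly provisioned cluster might look something like this (the exact set of kube- namespaces and the ages will vary by distribution and cluster):

```shell
# List all namespaces in the current cluster.
kubectl get ns

# Typical output on a fresh cluster (names and ages will vary):
# NAME              STATUS   AGE
# default           Active   5d
# kube-node-lease   Active   5d
# kube-public       Active   5d
# kube-system       Active   5d
```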
The one that I really want you to be aware of here is kube-system. If you were to look at the pods in kube-system by doing kubectl get pods -n kube-system, you're going to see some things that look really familiar to anyone who knows Kubernetes architecture: the API server, the controller manager, kube-proxy running on all of our nodes, the scheduler, and some other add-on bits. Here I'm running Calico, and also CoreDNS to provide DNS for my cluster, but all of these run in kube-system. You're typically not going to run anything in kube-system that's not related to actually running Kubernetes itself.
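A quick way to see those control-plane components is to list the pods in kube-system; the pod names below are illustrative, since the suffixes depend on your node names and CNI choice:

```shell
# Control-plane and add-on pods all live in the kube-system namespace.
kubectl get pods -n kube-system

# Illustrative output (names and suffixes depend on your cluster):
# NAME                                      READY   STATUS
# kube-apiserver-control-plane-1            1/1     Running
# kube-controller-manager-control-plane-1   1/1     Running
# kube-scheduler-control-plane-1            1/1     Running
# kube-proxy-abcde                          1/1     Running
# calico-node-fghij                         1/1     Running
# coredns-558bd4d5db-klmno                  1/1     Running
```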
Some of the other namespaces in my cluster are related to other applications, or groups of applications. I have one for MetalLB, metallb-system; MetalLB provides load balancing for my Kubernetes cluster. I have one for Heptio Sonobuoy, which scans my cluster to make sure it's passing conformance tests; that's a single application. And then I have a monitoring namespace, and I bring it up because I'm running the Prometheus Operator there, and the applications in the monitoring namespace are Prometheus, Alertmanager, and Grafana. So you can see that in some cases in my cluster I'm running a single application per namespace, but in other cases I'm running applications that are likely going to be lifecycled or managed together, like in the monitoring namespace.
Not all objects within Kubernetes are actually capable of being placed in a namespace, and Kubernetes gives us a nice command to find out which ones are: kubectl api-resources. I love this command because, first of all, it shows us what can be namespaced, and it also shows us every object that's available to us in the API. So this is like bonus bingo right here. You can see that I've added Calico to my cluster, and in doing so we've added the capability to manage the BGPPeer object. I've added monitoring, and the Prometheus Operator has added the concept of a ServiceMonitor to my cluster's API. All kinds of things can be added to your API server, and you need to have a look when you're stepping into a new cluster. This is one of those commands that I love for that.
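The command looks like this; --namespaced is a real kubectl api-resources flag that filters the list to just the namespaced or just the cluster-scoped resource types:

```shell
# List every resource type the API server knows about,
# including any CRDs added by operators like Calico or Prometheus.
kubectl api-resources

# Show only resource types that live inside a namespace...
kubectl api-resources --namespaced=true

# ...or only cluster-scoped resource types.
kubectl api-resources --namespaced=false
```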
So I mentioned that not all objects within Kubernetes are capable of being namespaced. Pods, of course, can be namespaced; we put pods in one namespace and other pods in another. But nodes, for example, cannot be namespaced. The same goes for persistent volumes: persistent volumes themselves are not namespaced, but the persistent volume claims made against them are. It's important to understand where the boundary sits: it's typically cluster-administrative things versus things that run on the cluster. A node is a cluster-administrative thing; a persistent volume is a cluster-administrative thing. A persistent volume claim is a logical thing taking advantage of that physical thing, and a pod is a logical thing running on top of it. So I love this command for telling us exactly what exists in our cluster and whether it's capable of being namespaced.
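For instance, you can check the scope of the specific resources mentioned above directly; the NAMESPACED column answers the question, and the output below is abbreviated and illustrative:

```shell
# Filter api-resources down to the four resource types discussed here.
kubectl api-resources | grep -E '^(pods|nodes|persistentvolumes|persistentvolumeclaims) '

# Illustrative output (SHORTNAMES / NAMESPACED columns):
# nodes                    no    v1   false   Node
# persistentvolumeclaims   pvc   v1   true    PersistentVolumeClaim
# persistentvolumes        pv    v1   false   PersistentVolume
# pods                     po    v1   true    Pod
```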
So let's run a few things on this cluster and talk about how this logical separation actually works. In this setup, I'm going to deploy a couple of manifests that I've already set up under my namespace directory. I've created two namespaces, a prod namespace and a dev namespace, that you can see here. I've created a couple of services to expose applications in those namespaces to the cluster, I've deployed some applications that are linked to those services via labels and label selectors, and finally I've got some pods deployed to let us look at what sorts of boundaries exist in our cluster.
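The manifests themselves aren't shown in the lesson, but a minimal sketch of the namespace half might look like this (the file name is an assumption for illustration; the deployments and services would follow the same pattern with a namespace field in their metadata):

```shell
# Write a minimal manifest defining the two namespaces used in this demo.
cat > namespaces.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
EOF

# Apply it; kubectl reports "namespace/dev created" and
# "namespace/prod created" on success.
kubectl apply -f namespaces.yaml
```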
And that's exactly what I want to talk about next. Objects are placed in namespaces, and we can view what's in a namespace with kubectl get all -n followed by the namespace name. In dev, you can see I have a deployment, replica sets, a service, and some pods running. The same goes for my prod namespace, though its contents are a little bit different. The reason I bring that up is that from an administrative perspective, these appear to be separate. However, I mentioned that I have a BusyBox pod running; with kubectl get pods -n dev you can see it here. No network policy exists yet, so nothing is blocking that BusyBox pod from reaching out to my production namespace and querying it.
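Viewing the objects in one namespace looks like this; the nginx-based names in the sample output are assumptions for illustration:

```shell
# "get all" shows the common workload objects scoped to one namespace.
kubectl get all -n dev

# Illustrative output (names and IPs will differ):
# pod/nginx-dev-6b474476c4-x2x9k          1/1   Running
# service/nginx-dev                       ClusterIP   10.96.142.7   80/TCP
# deployment.apps/nginx-dev               1/1     1     1
# replicaset.apps/nginx-dev-6b474476c4    1       1     1
```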
And I'm going to show you that right now. I'm going to exec into the BusyBox pod in my dev namespace and then try to fetch a webpage from the production pod, using the service I've deployed in the prod namespace. And guess what? It works. So it's really important to understand that while we have placed these objects in namespaced buckets, without coming back and applying meaningful policy at the namespace level, using something like a network policy, these objects are still going to be able to talk to one another. That can be a security concern and something you definitely need to think about from an administrative perspective in a Kubernetes cluster.
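A sketch of that cross-namespace test, assuming a pod named busybox in dev and a service named nginx-prod in prod (both names are assumptions, not from the lesson), followed by a minimal deny-all ingress NetworkPolicy of the kind being described:

```shell
# From a pod in dev, fetch a page from a service in prod. Cross-namespace
# DNS uses the form <service>.<namespace>.svc.cluster.local.
kubectl exec -n dev busybox -- wget -qO- http://nginx-prod.prod.svc.cluster.local

# With no NetworkPolicy in place, this succeeds and prints the page.
# A minimal policy to block all ingress into prod might look like this.
# Note: it is only enforced if your CNI (e.g. Calico) supports NetworkPolicy.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
```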
Finally, let's look at deleting objects in our cluster. If we do kubectl get ns, you can delete all of the objects that exist in a namespace by deleting the namespace itself. For example, if I do kubectl get all -n dev, you'll see all of those deployments, replica sets, and pods. If I then simply do kubectl delete namespace dev, that's going to get rid of every one of those objects. Now, keep in mind this will not get rid of associated objects that were not part of the namespace, specifically things like your nodes or your persistent volumes. However, it will get rid of the pods, services, and deployments running in that namespace. It takes a little while for Kubernetes to work through finalization, but shortly all of those objects will be deleted from the cluster.
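As a sketch, the cleanup step looks like this:

```shell
# Deleting the namespace cascades to every namespaced object inside it
# (pods, services, deployments, replica sets, and so on).
kubectl delete namespace dev
# namespace "dev" deleted

# Cluster-scoped objects such as nodes and persistent volumes survive:
kubectl get nodes
kubectl get pv
```

The command blocks until the namespace's finalizers have run, which is the delay mentioned above.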
So while that finishes, I'd like to thank you for your time, and I'd like to encourage you to catch another video here on KubeAcademy.
Have questions about the material in this lesson?
We’ve got answers!
Post your questions in the Kubernetes community Slack. Questions about this lesson are best suited for the #kubernetes-users channel.
Not yet a part of the Kubernetes Slack community? Join the discussion here.
Have feedback about this course or lesson? We want to hear it!
Send your thoughts to KubeAcademy@VMware.com.