KubeAcademy by VMware
Running Kubernetes Locally

In this lesson, you'll learn how to start a local Kubernetes cluster with kind and run your app on it.

Eric Smalling

Docker Captain and Senior Developer Advocate at Snyk

Eric is a Docker Captain and Senior Developer Advocate at Snyk.io where he helps developers secure the applications, containers and Kubernetes platforms they build and deploy to.


Hello. This is the third lesson in the KubeAcademy series Building Apps for Kubernetes. I'm Eric Smalling, staff field engineer at VMware, and in this lesson we'll be going over running Kubernetes locally. In the previous two lessons of this course, we covered how to set up your workstation and how to build a container image. In this lesson, we'll walk through how to stand up a local Kubernetes cluster and run a container there using the image we created in the last lesson. But first, we should discuss why you would bother running Kubernetes in development. In general, it boils down to testing your app in an environment that approximates what production will look like. In more specific terms, there are Kubernetes mechanisms like liveness and readiness probes that you'll probably want your app to leverage, and a Kubernetes cluster is the natural place to test those. As you'll see in this lesson, it's pretty straightforward to stand up a local cluster using kind and run your app on it.

Before we move on to the demo, it's worth pointing out that kind is not the only way to run Kubernetes locally, and may not even be the best for your situation. Minikube spins up virtual machines on your local workstation and uses those as Kubernetes nodes. If you're on a Mac or Windows, you can use Docker Desktop to run Kubernetes; it's built in. But kind is the method we'll use today. So let's go ahead and start up our cluster. In my terminal, I'm going to type "kind create cluster". Now, this can take a minute, so I'm going to skip forward so you don't have to sit here and watch it.
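The cluster-creation command just described, as you would type it (this assumes kind is installed and a local Docker daemon is running, per the workstation-setup lesson):

```shell
# Create a local Kubernetes cluster; with no --name flag, kind names it "kind".
kind create cluster
```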

There we go, kind is finished; let's take a look at what it's done. kind has another command called "get clusters", so we'll run that, and we see a single cluster has been created, named kind. Now, I could have passed an argument to the create command to give it a name; I just took the default, which is kind. Let's also run our kubectl command to communicate with the cluster, starting with "kubectl cluster-info". And there we go, we're getting responses: kubectl tells me that a master is running at that host and port, and that kube-dns is also running at that URL. Now, you may be wondering how kubectl knows where to talk. It's the tool we use for communicating with the Kubernetes API server on the master, and we didn't tell it where that was. Well, that's because kind set up what's called a context for us, which you can see with "kubectl config get-contexts".
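The inspection commands from this step, in order (they assume the kind cluster from the previous step is up):

```shell
# List the clusters kind knows about; a default install shows one named "kind".
kind get clusters

# Verify kubectl can reach the cluster's API server.
kubectl cluster-info

# Show the kubeconfig contexts; the active one is marked with an asterisk.
kubectl config get-contexts
```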

There we go. I have three contexts personally on my machine. The third one has an asterisk; that's the one that's active. kind-kind is the name kind gave it, and that's what we're communicating with when we run kubectl commands. For instance, if I run "kubectl get nodes", we see kind-control-plane; that's the one node kind has set up. Another interesting command is "kubectl get all -A", which shows all of the pods and everything else kind has spun up to get our cluster running. Now that we have a Kubernetes cluster running, let's create a pod. But what's a pod? Well, in a prior lesson we started a container, which encapsulates a process: our Go application. The container runtime (we used Docker) can run multiple containers alongside each other.
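The two node- and resource-listing commands just mentioned, as typed against the running cluster:

```shell
# List the nodes; a default kind cluster has a single one, kind-control-plane.
kubectl get nodes

# List the common workload resources across all namespaces (-A),
# including the system pods kind started to run the cluster itself.
kubectl get all -A
```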

By default, Docker presents a unique network namespace and file system to each container. In Kubernetes, a pod is a collection of one or more such containers that are deployed together on the same host. All containers running in the same pod share a network namespace and, optionally, file system volumes. Many pods have just one container, and may or may not have a volume depending on the container's needs. As you can see illustrated here, pods can take many forms. The important things to remember are that all containers in a pod are guaranteed to run on the same node (machine), and that the pod is the smallest deployable workload type that Kubernetes manages. We'll dive deeper into pods in other lessons, but for now let's write a basic pod manifest and submit it to the Kubernetes API server.

Okay, let's take a look at what we call a pod manifest. Now, Kubernetes manifests are the files that define the state you want applied to the Kubernetes cluster. In this case, this is a yaml file that is declaring a pod that I'd like to have Kubernetes run. Manifest files generally start with an API version and a Kind, this just basically tells Kubernetes that I'm targeting the V1 API version for this object, and that this object is a pod. Also, most objects will have metadata, and the metadata on this is... We're giving it a name, building apps pod, and we're giving a name value paired label, app KubeAcademy. So when this pod gets created, it'll be named this and it'll have a label called app whose value is KubeAcademy.

And then finally we have our specification, or spec, in the YAML file. In this spec we simply have a containers block with a single container. If you're not familiar with YAML, the hyphen denotes a list item, and since there's only one of them, it's a list of one. When I said you could have multiple containers, this is how you would do it: you'd add another hyphen and another entry for each additional container. In this case we have one; it has a name, building-apps-container, and it uses the container image lander2k2/building-apps:0.1.

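Putting the pieces just described together, the manifest would look roughly like this (the object and label names follow what was spoken in the lesson; the exact file on screen may differ slightly):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: building-apps-pod
  labels:
    app: kubeacademy
spec:
  containers:
  # A list of one; add another "- name: ..." entry for each extra container.
  - name: building-apps-container
    image: lander2k2/building-apps:0.1
```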
When we give this to Kubernetes, it should go out to Docker Hub, pull that image, run it, and give it the names you see here in this file. So let's do that. To deploy a pod we use kubectl (kubectl is going to be your best friend; you'll use it all the time) with "apply -f" and then the file name. What we're saying here is: we're asking kubectl to apply the contents of the manifest file given after the -f argument. We run that and see the pod was created. Now, this happened really fast for me, because I have things cached locally; it might take a little longer when you run it. To check its status, let's do a "kubectl get pods". And we see the building-apps-pod, the name we gave it in the manifest. It's ready and running, it hasn't been restarted, and it's 14 seconds old.
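The deploy-and-check sequence as typed (the file name pod.yaml is an assumption; use whatever you named your manifest):

```shell
# Ask the API server to create the objects declared in the manifest file.
kubectl apply -f pod.yaml

# Check the pod's status: READY, STATUS, RESTARTS, and AGE columns.
kubectl get pods
```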

So now we have a pod running in our local Kubernetes cluster. While this is a good start, it's not too useful yet, and it certainly doesn't look anything like a production application. In the next lesson, we'll demo how to write the manifest that can be used to run your app in different environments. Let's quickly go over how to clean up what we just created.

So we have this pod running, and we want to clean it up so it's not continuing to run. First run "kubectl get pod" to see that it's running, and then "kubectl delete pod" followed by the name of the pod to delete. Another way to do it is "kubectl delete -f pod.yaml", which reads the pod manifest and deletes any objects it declares. Either one works. I honestly would probably use "delete -f", because it's a little less typing. Once that's done, you can go on to the next class.
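Both cleanup paths just described, side by side (again, pod.yaml is an assumed file name):

```shell
# Confirm the pod is still running before deleting it.
kubectl get pod

# Option 1: delete the pod by name.
kubectl delete pod building-apps-pod

# Option 2: delete whatever objects the manifest declares (less typing).
kubectl delete -f pod.yaml
```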

If you want to delete your kind cluster, because you're not going to use it right away, that's also pretty simple. If we run our "kind get clusters" command, we see we have the cluster named kind. Because that's the default name, it's very easy to delete: we just run "kind delete cluster". If I had named it something different, say erics-kind-cluster, I could run "kind delete cluster --name erics-kind-cluster", and it would delete that specific cluster. Now that you've deployed a kind cluster and your first pod, you're ready to move on to our next lesson, where you'll learn about deploying your applications in Kubernetes. Thanks for watching, and see you in lesson four.
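The teardown commands, for the default-named cluster and for a custom-named one (erics-kind-cluster is a hypothetical name used only for illustration):

```shell
# Delete the default cluster (named "kind").
kind delete cluster

# Delete a cluster that was created with a custom name.
kind delete cluster --name erics-kind-cluster
```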
