KubeAcademy by VMware
Setting up the vSphere Cloud Provider

Before cloud providers can be used to interact with our infrastructure, they must be properly configured. This lesson walks through setting up the Kubernetes Cloud Provider on vSphere Infrastructure.

Eric Shanks

Senior Technical Marketing Architect

Eric Shanks is a Senior Technical Marketing Architect at VMware where he helps customers build solutions on Kubernetes.


Hi, I'm Eric Shanks, a senior field engineer with VMware. In this video, I'll go over how to set up the cloud provider for your Kubernetes cluster so we can get additional functionality from our underlying infrastructure.

Now, remember that a Kubernetes cloud provider is a way for the Kubernetes control plane to communicate with the infrastructure resources it sits on, in this case vSphere, so that it can take actions on its behalf. Before we can do this, we have to assign permissions, which can be seen here. For this video, I've already created the roles and privileges and applied them to the appropriate entities in my vSphere lab. If you're doing this at home, please check the latest documentation and procedures and apply them correctly.

Next, there's a special configuration we need to apply to each of our Kubernetes virtual machines so that our persistent volumes will know which disks go with which nodes. Go into Edit Settings, then VM Options. Under Advanced, click Edit Configuration, find disk.EnableUUID, and set its value to TRUE. You'll need to do this for each of your Kubernetes VMs.
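If you'd rather script this than click through the UI, here's a minimal sketch using the govc CLI, assuming it's installed and pointed at your vCenter; the VM names are hypothetical stand-ins for your own node names.

```sh
# Sketch: set disk.EnableUUID on each Kubernetes node VM (names are hypothetical).
# Power each VM off before changing its advanced settings.
for vm in k8s-cp-01 k8s-worker-01 k8s-worker-02; do
  govc vm.change -vm "$vm" -e disk.enableUUID=TRUE
done
```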

All right, so we've set up some permissions and some configs already, but we still need a mechanism for the Kubernetes control plane to know how to talk to the vSphere infrastructure. To do this, we'll set up a vSphere.conf file, which has all the configurations we need to pass to our Kubernetes control plane so that it can make those calls on our behalf. Let's take a look at each of the sections of the vSphere.conf file.

The first section is the Global section. Global contains our username, our password, what port on the vCenter we're connecting to, and whether or not we're going to accept insecure certificates.
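As a sketch, that section looks something like this; the username, password, and other values are placeholders for your own environment:

```ini
[Global]
user = "k8s-svc@vsphere.local"
password = "ReplaceMe!"
port = "443"
insecure-flag = "1"
```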

The next section is the VirtualCenter section. Essentially, it contains the name of the vCenter we're connecting to, as well as the data centers within that vCenter where we're going to store our Kubernetes cluster. In this case, you can see I've got a mapping here, datacenter = "HollowLab", and that's what I've got in my lab.
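A sketch of that section, with a hypothetical vCenter hostname and the HollowLab data center from my lab:

```ini
[VirtualCenter "vcenter.hollow.local"]
datacenter = "HollowLab"
```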

Next, in the Workspace section, we set up the server, which is the VirtualCenter from the section above; they should match. Then we've got the data center and a resource pool, so if our Kubernetes cluster lives within a resource pool, we'll put that here as well.

Below that, we move on to the folder. If we look at the VMs and Templates view in the vSphere client, you'll see that I've got a Kubernetes folder where my Kubernetes nodes live, and that's what goes in the config file.

Then the last thing in the workspace section is the datastore, and this is the default datastore where our persistent volumes will be stored when we create them in Kubernetes.
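Putting those workspace settings together, a sketch of the section looks like this; the server, datastore, and resource pool names are hypothetical, while the Kubernetes folder matches my lab:

```ini
[Workspace]
server = "vcenter.hollow.local"
datacenter = "HollowLab"
default-datastore = "vsanDatastore"
resourcepool-path = "HollowCluster/Resources/Kubernetes"
folder = "Kubernetes"
```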

Below that, we have the Disk section. The Disk section tells the vSphere infrastructure what type of SCSI controller we'll be using for our Kubernetes persistent volume disks when we assign them. I'm using a SCSI controller type of pvscsi.
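In the config file, that's a single setting:

```ini
[Disk]
scsicontrollertype = pvscsi
```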

The last section is the Labels section, where we specify two important labels called region and zone. These labels are used to ensure high availability of our workloads, so it's important to get them configured properly. They must match up with the tag categories in your vSphere environment, as you can see here. I've followed the Amazon scheme, where a region represents a large geographic location, such as the Midwest or Chicago, and a zone represents a fault domain, such as a server rack or a data center within that region. But your zones and regions can map to anything you want.
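A sketch of that section; the values are the names of your vSphere tag categories, and k8s-region and k8s-zone here are hypothetical category names:

```ini
[Labels]
region = k8s-region
zone = k8s-zone
```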

Now, within each of those tag categories, I've got multiple tags. For region, I have a single tag named Chicago, and it's applied at my data center object in vSphere. All resources below this object will be part of that region in the Kubernetes world. My zones are assigned as either AZ1 or AZ2 and are set on individual ESXi hosts. You could also set these at the cluster level, but in my small lab, I've placed them on the ESXi hosts, with two hosts in AZ1 and one host in AZ2.
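If you want to script the tagging, here's a minimal sketch with govc; the category names and inventory paths are hypothetical, while the Chicago, AZ1, and AZ2 tags match my lab.

```sh
# Sketch: create tag categories and tags, then attach them to inventory objects
# (category names and inventory paths below are hypothetical).
govc tags.category.create k8s-region
govc tags.category.create k8s-zone
govc tags.create -c k8s-region Chicago
govc tags.create -c k8s-zone AZ1
govc tags.create -c k8s-zone AZ2
govc tags.attach Chicago /HollowLab
govc tags.attach AZ1 /HollowLab/host/HollowCluster/esxi-01.hollow.local
govc tags.attach AZ1 /HollowLab/host/HollowCluster/esxi-02.hollow.local
govc tags.attach AZ2 /HollowLab/host/HollowCluster/esxi-03.hollow.local
```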

Great, so we've got our vSphere.conf file all straightened out. The next step is to tell the Kubernetes components that need to know about this vSphere infrastructure how to access and read that vSphere.conf file. We'll do this through the bootstrap process, using kubeadm. There are three components in the Kubernetes control plane that need to know about this file: the API server, the controller manager, and of course, the kubelet. Each of those needs to be passed two configurations. The first is the cloud-provider, which in this case is vsphere, and the second is the cloud-config, which is the location of the vSphere.conf file we just created.

Okay, so I've switched over to a console view and SSH'd into the first control plane node in my Kubernetes cluster. I've changed to the /etc/kubernetes directory, and here you can see my vSphere.conf file, which has all the configurations we talked about earlier. But now we need a way to tell those components where this vSphere.conf file is. To do that, we'll look at the kubeadm.conf file, which is going to be passed to the kubeadm utility when we bootstrap our cluster.

So if we take a look at the kubeadm.conf file, you'll see that I've got different sections in here. The first one is under the API server, where you can see we've got cloud-config and cloud-provider arguments in the extraArgs section. Now, since the API server runs as a pod, we have to mount this file so that it can be read by the API server.

So under that, you'll see we have extraVolumes, where we're mounting that file. Likewise, we do the exact same thing for the controller manager: under controllerManager, extraArgs, you can see we have those same configs, and we're also mounting the file so that it can be read by the container.
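As a sketch, the relevant part of that kubeadm config looks something like this; the apiVersion may differ depending on your kubeadm release:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: vsphere
    cloud-config: /etc/kubernetes/vsphere.conf
  extraVolumes:
  - name: cloud-config
    hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    pathType: File
controllerManager:
  extraArgs:
    cloud-provider: vsphere
    cloud-config: /etc/kubernetes/vsphere.conf
  extraVolumes:
  - name: cloud-config
    hostPath: /etc/kubernetes/vsphere.conf
    mountPath: /etc/kubernetes/vsphere.conf
    pathType: File
```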

Now, after this, we have to tell the kubelet where this file is. The kubelet runs as a systemd service, so we'll have to look in a different location. Here, we'll go into /etc/systemd/system, the kubelet service, and then my configuration file. You can see I'm passing those same arguments along at startup so the kubelet knows where this file is and can read it.
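One way to pass these flags is through the KUBELET_EXTRA_ARGS variable that kubeadm's systemd drop-in reads. A minimal sketch, assuming a Debian-based system (RPM-based systems typically use /etc/sysconfig/kubelet instead):

```sh
# /etc/default/kubelet -- sourced by kubeadm's systemd drop-in at kubelet startup
KUBELET_EXTRA_ARGS="--cloud-provider=vsphere --cloud-config=/etc/kubernetes/vsphere.conf"
```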

All right, so we can quit this screen and we should be ready to bootstrap our cluster. We'll clear this, then run the kubeadm init command, passing it the kubeadm.conf file, which tells Kubernetes how to be bootstrapped and where that vSphere.conf file lives. We'll also upload the certs so that we don't have to copy our certificates to the other control plane nodes manually. The bootstrap process starts, and after a few minutes, we've got a successfully bootstrapped control plane node.
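For reference, that command looks something like this; the config file path matches my lab:

```sh
# Bootstrap the first control plane node; --upload-certs stores the control plane
# certificates in the cluster so additional control plane nodes can retrieve them.
sudo kubeadm init --config /etc/kubernetes/kubeadm.conf --upload-certs
```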

So in this video, we covered how to set up the vSphere cloud provider for your Kubernetes cluster. We set up permissions, configured the disk.EnableUUID setting on our VMs, built the vSphere.conf file, and applied those settings to the API server, the controller manager, and the kubelet. And then lastly, we ran kubeadm init to bootstrap the first control plane node in our cluster. Thank you for watching.
