KubeAcademy by VMware
Setting up the AWS Cloud Provider

In this lesson, you'll learn how to set up the AWS cloud provider with your Kubernetes cluster so that you can take advantage of AWS infrastructure such as Elastic Block Store (EBS) volumes and Elastic Load Balancers (ELBs).

Eric Shanks

Senior Technical Marketing Architect

Eric Shanks is a Senior Technical Marketing Architect at VMware where he helps customers build solutions on Kubernetes.


Hi, I'm Eric Shanks, a Senior Field Engineer with VMware, and in this video we'll cover how to set up your Kubernetes cloud provider for use with Amazon Web Services. Remember that a cloud provider, in the Kubernetes world, is a way for us to tell the Kubernetes control plane how to interact with the underlying infrastructure, which in this case is the AWS public cloud. Now, this video doesn't really cover any of the Kubernetes architecture; it's only about setting up the cloud provider. But I wanted to give you an overview of what exactly is being deployed in the public cloud so you have a frame of reference for what we're doing. Here I've got a virtual private cloud set up in AWS in my lab. I've already got a load balancer configured, and that load balancer is pointing to some EC2 instances that I'm using for my control plane nodes.

I've also got three worker nodes, and these are all split across three different AWS availability zones. That'll be the cluster we're working with for this video. To be honest with you, most of the setup of the AWS cloud provider for Kubernetes comes down to setting up prerequisites within the AWS infrastructure: permissions and things like that.

So the first one I want to talk through is IAM roles. If you're not familiar with IAM roles, think of them as a set of permissions, like those we'd assign to a user. In the AWS world, we have the AWS control plane API, which is where we send our commands, and it spins up resources behind the scenes like VPCs and instances. Traditionally, we would have permissions assigned to a user, and the user would then have a certain set of permissions to go and do things with the AWS control plane.

But AWS gives us another mechanism as well, where the permissions are assigned not to a user but to a computer, a server, an EC2 instance. In that case, we can assign permissions to use the AWS control plane to spin up additional resources or read information, and those permissions can be attached to an EC2 instance instead of a user. This lets us grant our control plane instances the ability to spin up resources on demand for Kubernetes, rather than assigning those permissions to a specific user.
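As a quick sketch of how a role gets attached to an instance, assuming the role and its instance profile already exist (the profile name and instance ID below are hypothetical placeholders):

# Attach an existing instance profile (which wraps the IAM role) to an EC2 instance.
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=k8s-control-plane-profile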

Okay, now that we've discussed what an IAM role is, we need to discuss where those IAM roles need to be applied. I'm not going to go through every single permission; for that, you'll have to consult the documentation. But I did want to give you an overview of what those IAM roles look like.

First, we're going to have to apply an IAM role to our control plane nodes. The control plane is going to be reading and writing to the AWS API to spin up resources for our Kubernetes cluster, for instance Elastic Load Balancers or EBS volumes. So the control plane nodes need an IAM role with a policy attached that grants permissions for things like Auto Scaling, read and write access for EC2 instances, Elastic Load Balancing, and also Identity and Access Management and the Key Management Service, so that we can assign security to our nodes. The worker nodes need EC2 read access to the API as well, plus Elastic Container Registry permissions if you're going to store your images in ECR.
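To make that a little more concrete, here's a heavily trimmed sketch of what a control plane policy might look like and how it could be attached with the AWS CLI. This is not the full permission list (consult the documentation for that), and the role and policy names are hypothetical:

# control-plane-policy.json -- abbreviated illustration only.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:Describe*",
        "ec2:Describe*",
        "ec2:CreateVolume",
        "elasticloadbalancing:*",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    }
  ]
}

# Attach the policy document to the (pre-existing, hypothetical) control plane role.
aws iam put-role-policy \
  --role-name k8s-control-plane \
  --policy-name k8s-cloud-provider \
  --policy-document file://control-plane-policy.json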

The next prerequisite I want to cover is your host names and DNS. A big gotcha with the AWS cloud provider is that you'll have issues when you start to stand up your Kubernetes cluster if the host name and the DNS name don't match exactly. For example, here I've got a shell terminal where I'm outputting the host name of my server, and also querying the instance metadata, which shows the DNS name of my server. You can see that they have the same prefix, but the suffix is missing on the host name. Kubernetes will have a problem with this when it tries to stand up the cloud provider, so we need to make sure those match. What we can do is update our host names so that they match the DNS name, and once they match, we can continue on to the next step. You'll need to do this on all of the Kubernetes nodes in your cluster.
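Here's roughly what that check and fix look like on a node. The metadata endpoint is the standard EC2 one; hostnamectl assumes a systemd-based distribution:

# Compare the local hostname with the private DNS name from instance metadata.
hostname
curl -s http://169.254.169.254/latest/meta-data/local-hostname

# If the suffix is missing, set the hostname to the full private DNS name.
sudo hostnamectl set-hostname \
  "$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"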

The next prerequisite is tags. AWS needs certain tags when it's provisioning resources on behalf of the Kubernetes cluster, for instance Elastic Load Balancers. When Kubernetes tells the AWS API to spin up a load balancer, it needs to know which subnets that load balancer should be attached to, and it uses tags to identify them. There are two sets of items you need to tag before setting up your Kubernetes cluster: the Kubernetes EC2 instances themselves (every instance needs the tag) and the subnets where those EC2 instances live. The key for that tag is kubernetes.io/cluster/<cluster name>, and the value can be set to owned.

Here are a couple of screenshots I've taken of my AWS environment, just to show how those tags look. On the left-hand side, you can see my EC2 instances listed with a tag of kubernetes.io/cluster/<my cluster name> and a value of owned. I can have additional tags in there, but I at least have to have this one. The same goes for the subnets, on the right side, where I've got the same tag listed.
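If you'd rather tag from the command line than the console, a sketch with the AWS CLI would look like this (the instance ID, subnet ID, and cluster name are placeholders):

# Apply the cluster tag to an instance and a subnet in one call.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 subnet-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=owned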

Let's hop over to the lab, and we'll SSH into what's going to be our first control plane node. The first thing I'm going to check is that my host name is the full host name and not just the prefix. I can list that here, and everything looks good. The second step is to update the configuration that I'm going to apply when I first bootstrap my Kubernetes cluster, and I do that by editing this kubeadm.conf file. In there, there are two pieces that need to be updated to say we're going to be using the AWS cloud provider.

The first one is under the API server config: you can see I have extraArgs with the cloud provider set to aws. And I have that same configuration under the controller manager, so again extraArgs with the cloud provider listed as aws. Now, there's one last place where we need to set that cloud provider, and that's the kubelet. The kubelet runs as a systemd service, so for that we need to update a different configuration file. We'll go look under /etc/systemd/system/kubelet.service.d at my configuration file, and in there you can see I've got an additional --cloud-provider=aws flag.
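For reference, the relevant pieces of those files look roughly like this. Treat it as a sketch: the kubeadm API version and the drop-in file name are assumptions, so match them to the kubeadm version you're running.

# kubeadm.conf (excerpt)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws

And for the kubelet:

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (excerpt; file name varies)
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws"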

That's really it. Most of the work is setting up the prerequisites in your AWS environment; after that, you just have to set the flag that the cloud provider is going to be AWS for your API server, your controller manager, and the kubelet. From there, let's bootstrap the cluster. On our first control plane node, we're going to run the kubeadm init command, pass it the configuration from our kubeadm.conf file, and upload the certs so they can be downloaded on the other nodes. I'll let this run.
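That step boils down to a single command, assuming the config file name above (the --upload-certs flag is available in recent kubeadm releases; older versions called it --experimental-upload-certs):

# Bootstrap the first control plane node and upload the certificates
# so additional control plane nodes can retrieve them when they join.
sudo kubeadm init --config kubeadm.conf --upload-certs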

And at the end of this, I've got my first control plane node up, and you can see that I've got a join statement I can use on my other nodes. At this point, we can go over to the other node and run this join statement, and after it's been joined, we can export the kubeconfig file that's been stored in /etc/kubernetes. Once we've set that, we can run our get nodes, and you can see I'm well on my way to having a full cluster. I've got two nodes up now that are both master servers, and I can join my worker nodes next.
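The join and verification steps look roughly like this; every angle-bracketed value comes from your own kubeadm init output rather than anything shown here:

# On each additional control plane node, run the join command printed by init.
sudo kubeadm join <load-balancer-endpoint>:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>

# Point kubectl at the admin kubeconfig and check cluster membership.
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes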

I hope after watching this video, you've learned how you can set up your Kubernetes cluster for use with the AWS cloud provider. Thank you for watching.
