KubeAcademy by VMware
Leveraging the AWS Cloud Provider

This lesson will demonstrate how to use a properly configured AWS Cloud Provider with your Kubernetes cluster. You'll learn how to use Elastic Load Balancers (ELBs) for ingress access and Elastic Block Storage (EBS) backed persistent volumes.

Eric Shanks

Senior Technical Marketing Architect

Eric Shanks is a Senior Technical Marketing Architect at VMware where he helps customers build solutions on Kubernetes.


Hi. I'm Eric Shanks, a Senior Field Engineer with VMware. And in this video, I'll show you how to use a properly configured Kubernetes cloud provider with your AWS infrastructure. There are a couple of capabilities that I want to demonstrate with the Kubernetes cloud provider for AWS. The first of which is using Elastic Block Storage, or EBS, volumes for PersistentVolumes in Kubernetes. Before we do that, let's take a look at my cluster just so we can get a lay of the land.

So if I do a kubectl get nodes, we can see that I've got six nodes here deployed in AWS. Three of them are master nodes. Three of them are worker nodes. Now, before I can use the Elastic Block Storage, one of the things I'm going to do is deploy a StorageClass so that we can request those PersistentVolumes. Now let's take a look at the manifest I've created for that. You can see here, I've got a manifest of kind StorageClass, and the provisioner here is kubernetes.io/aws-ebs. And that's going to tell the Kubernetes cloud provider that we need Elastic Block Storage for these PersistentVolumes. Okay. So let's clear our screen and we'll go ahead and apply our StorageClass. And it's been created. So we'll do a little double check here, make sure everything looks good. And our StorageClass has been created, with the name of ebs.
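A minimal sketch of the StorageClass described above might look like this. Only the kubernetes.io/aws-ebs provisioner is stated in the lesson; the metadata name and the gp2 volume type are assumptions for illustration:

```yaml
# StorageClass backed by the in-tree AWS EBS provisioner.
# The name "ebs" and the gp2 volume type are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```

Applying this with kubectl apply -f lets subsequent PersistentVolumeClaims reference it by name.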

So the next step is for us to go and deploy something that actually requests a PersistentVolume. So to do that, I've got another manifest here, and we'll cat this out. This is my application that's going to have a backend PersistentVolume. Scroll up here, you can see that I am requesting some storage of two gigs and the storage class name is ebs-volume, and this is going to be for my SQL database for my application. All right. So let's go ahead and clear our screen again, and let's apply our database manifest that's using a PersistentVolume. All right. It created a PersistentVolumeClaim. We've got a deployment created and a service for the application.
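The claim in that manifest could be sketched along these lines. The claim name, access mode, and storageClassName value are assumptions inferred from the narration; only the two-gig request is stated directly:

```yaml
# PersistentVolumeClaim for the MySQL backend, requesting 2Gi of
# EBS-backed storage through the StorageClass created earlier.
# Names here are assumptions for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs
  resources:
    requests:
      storage: 2Gi
```

Because the StorageClass uses dynamic provisioning, applying this claim is what triggers the cloud provider to create and attach the EBS volume seen later in the AWS console.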

So let's do a quick check here and do a kubectl get pvc, and you can see that I've got my PersistentVolumeClaim created. It's bound. Its name is mysql-volume, the capacity is two gigs, and it's using our EBS storage class. So let's take a look at our Pods. The Pod's still creating while downloading the image, but in another second here we can see that our Pod's running. Okay. So what's happening in the background? Let's take a look over at the AWS GUI just to see what's going on. Okay. Here, you can see I've got my six nodes in my cluster. And if I look down at volumes, then you can see I've got a Kubernetes dynamic PVC volume that's been created. It's an Elastic Block Storage device, and you can see in the attachment information that it's attached to worker two.

So let's go over to the running instances and look at worker two. And if we scroll down, you can see that I've got another Elastic Block Storage device here for the PersistentVolumeClaim that we created. So that takes care of our storage through the cloud provider. Let's take a look at the other capability I want to demonstrate, which is using load balancers as part of our services. So before we get into that, let's take a look at what load balancers are already created in my environment. So we'll go ahead over here to the Elastic Load Balancers, and you'll see that I've got one load balancer that's already been created. That's the load balancer for my Kubernetes control plane. Now, I can have additional load balancers that are load balancing the services that are running on my Kubernetes cluster, and that's going to be done dynamically. So we'll check back here in a little bit.

All right, back to our command line. So I've got another application that I'm going to deploy, and let's take a look at that manifest now. So this is the application piece of my app. Down here in the service that I'm going to be deploying, you can see that I've got a service that's looking for the backend Pods. I've got a port specified of port 80, and a target port of 5000. And you'll notice that the spec type is LoadBalancer.
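The service portion of that manifest might look like the sketch below. The app name (hollowapp) appears later in the lesson; the selector labels are assumptions:

```yaml
# Service of type LoadBalancer exposing the app on port 80 and
# forwarding to the Pods' container port 5000. With the AWS cloud
# provider configured, this triggers creation of an ELB.
# Selector labels are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: hollowapp
spec:
  type: LoadBalancer
  selector:
    app: hollowapp
  ports:
    - port: 80
      targetPort: 5000
```

The NodePort shown in the next step is allocated automatically by Kubernetes when a LoadBalancer service is created; the ELB forwards to that port on the worker nodes.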

So let's go ahead and apply my app. And this app is going to talk to that backend database that we created earlier with our PersistentVolume. So the deployment's been created, and our service has been created here. Let's do a quick check, and you can see that a service has been created, my hollowapp service. It's of type LoadBalancer. It's gotten a ClusterIP. It's also got an external IP, which you can see here is an external... It's a DNS name. And you can see the ports that were created. So it's using port 80, but it's also using NodePort 30364. That's going to be important. So let's take a look over in the AWS GUI to see what's changed. So let's take a look at the GUI again and see what's going on with our AWS control plane. You can see here, I've got a new load balancer that's been created.

You can see down in the port configuration, I've got port 80 forwarding to that port 30364, which was created by our service in our Kubernetes cluster. If we look at the DNS name here, we can pull that up, throw that into our browser, and you can see that our application is working. So all of the security groups needed for this application, the load balancer, and the plumbing to take that load balancer and connect it to our service that lives in Kubernetes was all done through a manifest in the Kubernetes control plane. Now, I'm not saying this is the best way to be using external services with our Kubernetes cluster. For instance, you might want to have a load balancer front-ending an ingress controller, because load balancers cost money in AWS. So a typical pattern would be to deploy a load balancer, connect that to an ingress controller, and then have your services live behind that proxy.
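The ingress-controller pattern mentioned above could be sketched with a single Ingress rule: one LoadBalancer service fronts the ingress controller, and individual apps share that one ELB. The hostname, ingress class, and names here are all hypothetical:

```yaml
# One Ingress rule routing a hypothetical hostname to the hollowapp
# Service, so many apps can sit behind a single ELB that fronts the
# ingress controller. Host, class, and names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hollowapp
spec:
  ingressClassName: nginx
  rules:
    - host: hollowapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hollowapp
                port:
                  number: 80
```

With this pattern, only the ingress controller's service is of type LoadBalancer; application services can stay as ClusterIP, avoiding one ELB per service.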

So in this video, we demonstrated two capabilities of the cloud provider for AWS: the first, using dynamic Elastic Block Storage devices when we need PersistentVolumes, and the second, having a way to dynamically provision Elastic Load Balancers and connect them to our Kubernetes services. Thank you for watching.
