KubeAcademy by VMware
Evolution of Platforms

In this lesson we look at how infrastructure platforms evolved. We take a brief look at how compute virtualization changed the way developers, operators, and IT worked, and what impact it had on those roles.

Boskey Savla

Senior Technical Marketing Architect at VMware

Boskey Savla is a Technical Marketing Manager at VMware focusing on Cloud Native Applications. She has 15 years of experience in systems and operations.


Now before we get into Kubernetes, I think we need to look at the evolution that has happened in terms of how applications are written and how applications are deployed. And Kubernetes is really the latest and the greatest platform that is helping this overall process.

If you look at the technologies that came before Kubernetes, whether it was virtualization, then the advent of containers, and now Kubernetes, the way to look at it is that these platforms represent a constant evolution and optimization of a particular cycle.

The cycle from the time an application is coded or written to the time it is deployed. Every platform, whether it's virtualization, containers, Kubernetes, et cetera, has been about optimizing this process.

To look at this, let's go back in time to before virtualization. Before virtualization, if you look at the overall process in which code was written and deployed, it started with a developer wanting to code something. Even before they started writing code, they needed a host that matched the operating system and the language they were writing their application in.

So even before writing a piece of code, they would have to determine what operating system they were going to use, what programming language they were going to use, and then request hardware, because there was no virtualization. There was a one-to-one mapping of physical hardware to OS.

So right off the bat, the process of even starting to write application code needed a lot of time. Now, this process was not just applicable to the development teams; it was also applicable to the teams that were actually going to test the application. The QE or QA teams needed to do similar things. They would file a ticket with IT to provision hardware that could then be installed with the right OS the application was written for, and only then would they start testing their apps.

Now finally, once the application was developed, it would be handed over to the IT or operations team, which would then plan how to deploy that application. Again, from an IT team's perspective, they would have to figure out the capacity in terms of storage, the networking needs, et cetera.

So they would have to talk to a networking team to carve out VLANs and make sure there was enough routing to divert traffic to the appropriate application. They would have to talk to the storage team to provision storage that they could attach to the physical host they were going to use to deploy that application.

And so overall, the process between the time an application was written and the time it was deployed was pretty long. If you look at this entire process from four key pillars: number one, how siloed the teams were. And when you have so many teams working together, the chances of error grow with the number of teams involved.

The time it takes to carve out maintenance windows, because if you have an application with so many dependencies, with so many different teams trying to work around the platform to make sure the app lands correctly, you need to carve out maintenance windows. And even if there is an error, or if you want to scale something, or if you want to update anything, you still need a maintenance window of considerable length in order to deploy, upgrade, or patch anything.

The time to fix bugs and issues is also pretty high, because it's pretty much the same process. If a bug is found, it goes all the way back to the dev teams: starting from scratch with the code, updating it, requesting a new host or having a host re-imaged with that application, provisioning it, and handing it over to the QE teams.

Then the QE teams finally hand it over to the IT team, et cetera. So the time to fix bugs and issues is roughly the same as the time it takes to deploy a brand new application. And then there's the complexity to scale: imagine you have a single physical host and you want to scale your application, but that host doesn't have the capacity. You need to start provisioning completely new hardware, a completely new host, in order to scale out.

You also need to talk to the network teams again to make sure there's a large enough pool of IPs and that routing is enabled, and then you need equivalent storage capacity. So if we look at it from these four specific pillars, the overall time it took for an application to go from being written to being deployed was maybe two to six months, or at times even a year, depending on how large that application was.

Now, with the advent of compute virtualization, wherein a single physical host could now host more than one operating system, things got a lot simpler. From an application developer's perspective, the developer is no longer necessarily waiting for an IT team to deploy a physical host with the right operating system before they can start working on their piece of code.

Because of virtualization, development teams can now actually start writing code right on their laptops with whatever operating system they wish. So compute virtualization simplified the overall process, at least from the point of view of application developers, and the same applied to the people testing that app. They didn't really have to talk to IT to deploy multiple physical hosts; they could test with a single host running multiple virtual machines.

Now, when it came to actually deploying that app into a production system, rather than provisioning a server per application, IT teams would provision a big server with a really chunky amount of hardware and then deploy the application as a virtual machine. So the overall time it took to bring an app from being written to production decreased considerably.

As a result, there was some optimization of the overall process. But if you go back and look at our four pillars, the interaction between teams was still pretty siloed, because even though we were using virtual machines, you still needed somebody to provision storage and somebody to provision networking, now maybe even a virtual network, so that the appropriate virtual machines had enough IPs, could be routed, et cetera.

So the interaction between siloed teams still remained. However, the overall process was simplified because we were dealing with virtual machines as the unit of interaction, versus just a piece of code that had to be deployed on a physical server somewhere. The complexity to scale was also reduced, because scaling now meant duplicating a virtual machine rather than bringing in a whole new physical host.

This also helped reduce the time it takes to fix bugs and issues, and overall, our maintenance windows were reduced as a result of compute virtualization. This probably shrank the overall cycle from maybe six months down to a couple of months, and from a maintenance window perspective, down to weeks.

But if you look at it, there is still room for improvement. The process, even with compute virtualization, is not very dynamic. You still have to work with a lot of factors, a lot of teams, and a lot of hoops in order to deploy applications.
