Cloud Native Part 2: How We Got Here
In this lesson, we extend the discussion from Part 1 and examine the reasons why organizations are moving toward Cloud Native and Kubernetes.
Hi, this is Jonathan Smith, Director of Field Engineering and Education for Cloud Native Applications. In Part 2, I'll extend the discussion around some of the forces that have shaped Cloud Native application architectures. So let's get started. First, a brief reminder: when we talk about Cloud Native, there are a lot of technologies involved, some directly, such as Kubernetes, and some more indirectly, and we talked about how all of those have shaped and influenced Cloud Native. We talked about DevOps, thinking about infrastructure as code rather than managing individual servers, so that we can manage fleets of servers. We talked about continuous integration and delivery, where the focus is really on being able to deploy to production at any time, so that we can address customer challenges and customer requirements.
Microservices, of course, are an application architectural pattern that allows us to break our applications apart into smaller components, so that we can manage those components individually for scale, for speed of delivery, and so on. And finally, we talked about containers: how we're packaging up our applications and their dependencies into easily deployable components. The common theme across all of these is really speed. We often talk about Kubernetes, and really about Cloud Native specifically, but even though most customers ask for Kubernetes, what they really want is speed from a delivery perspective.
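To make those ideas concrete, here is a minimal sketch of a Kubernetes Deployment manifest for a single microservice. The service name, labels, and image name are hypothetical placeholders; the point is that the containerized application and its desired state are described as code, rather than managed by hand:

```yaml
# A minimal Kubernetes Deployment for one microservice.
# The service name, labels, and image are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service          # one small, independently managed component
spec:
  replicas: 3                   # run three identical copies for scale
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        # The container image packages the application together with its
        # dependencies into a single deployable unit.
        image: registry.example.com/orders:1.0.0
        ports:
        - containerPort: 8080
```

Because the manifest declares a desired state, scaling this component independently of the rest of the application is just a change to `replicas`.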
And so, what I thought I would do is try to answer the question of why. Why is speed so important? One of the models I've seen that's really good is this one by Gregor Hohpe, which provides a way of looking at how most organizations have viewed introducing change. Typically, of course, there's an as-is state: we have a problem, or an existing state of where we are today.
We have a desired state, where we want to be as an organization, a product, or an individual application, and in order to address that change, we introduce it through what's known as a project. A project has a discrete start and a discrete end, and as we introduce that change over a period of time, we reach the final state and celebrate, because we're able to go back to our normal lives.
The challenge with this model, of course, is that it depends heavily on completely understanding the problem and being able to address that problem through the course of introducing change via a project. And we tend to fall into a trap of guessing: either the time it takes to complete the project is very long, or we're a little bit off about our desired state. We may end up building something, from a technology perspective, that no longer satisfies the original requirements; maybe we were simply wrong, and so on. The reason this tends to happen more and more in the last few years, versus many years ago, is that we are shifting away from the model on the left, where we had known problems and were really just moving into the digital space.
To the model on the right, where, now that we're in the digital space, we're dealing with a constant rate of change. We're still working out our business models; we're still working through many of the customer challenges. And of course, there are a lot of competitors that are able to take advantage of these technology advancements, and they're moving very quickly. So when you're in the right-hand model, you're really dealing with innovation. You're dealing with an unknown state. You constantly need to learn and improve, so you need to be in a learn-fast mode, and that's a completely different way of thinking about how to introduce change and project work. Many of the models that worked before, from a technology perspective, to address the problems on the left no longer work to address the problems on the right.
Again, you can refer to the left-hand side as economies of scale: known problem, introduce change, scale up, complete the project, and move on. On the right-hand side, we're really thinking about economies of speed. It's much more important to move quickly, to adapt, to learn, and even to fail fast than it is to know the solution to the problem up front. And if we look at some of the changes that have occurred in our infrastructure and application architectures over time, we can see how that pattern has emerged in the way technology has evolved.
If you look far enough back, of course, we started with centralized management and development platforms like mainframes. We moved to slightly more distributed architectures with client-server, where we could distribute our front end to the customers themselves so they could take advantage of it. Then we started struggling again with managing those distributed architectures, so many organizations looked to middleware vendors to provide products and solutions that made the centralized management and development of those applications easier. But over time, that centralized management again created enough constraints that developers started feeling challenged in meeting their customers' needs. This inflection point is probably right around when most organizations shifted from economies of scale to economies of speed, though they may not have realized it.
They had gone through the initial digital transformation. They had digitized many of their products and moved into a more digital world, but they didn't realize that they now needed to build applications in a completely different way to satisfy this economies-of-speed model. Microservices, again, allow you to break your application apart into smaller components, but the challenge is that those components can be difficult to manage. And so I think where we are today with Cloud Native is that we're really looking for the value of distributed application architectures, so that we can move fast, so that we can deal with change, and so that we can get those changes into the hands of our customers quickly.
But we also want the value of centralized management platforms, and I think that's where Kubernetes and Cloud Native play a significant role. So again, looking back: economies of scale solved problems that were discrete, with a known problem and a known solution. With economies of speed, we're dealing with problems where we're still learning and still trying to understand how to satisfy those customer needs, so we need to be able to deal with change much faster than before. And that's why the technologies you see today are so focused on providing the infrastructure and application development speed that makes it much easier to deal with change.
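As a sketch of what that centralized, declarative management looks like in practice (the names and image tags below are hypothetical), introducing change on a platform like Kubernetes is an edit to the desired state rather than a one-time project: you change the manifest and re-apply it, and the platform reconciles the running system across the fleet.

```yaml
# Centralized, declarative management of a distributed component.
# The names and image tags are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service
spec:
  replicas: 5                   # scale up by editing this number
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        # Shipping a new version is a one-line change to the image tag,
        # followed by `kubectl apply -f catalog.yaml`; Kubernetes then
        # rolls the change out across all replicas.
        image: registry.example.com/catalog:2.3.1
```

This is the economies-of-speed loop in miniature: small, frequent, reversible changes to a declared desired state, instead of a project with a discrete start and end.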
Thank you. I hope you enjoyed this one, and I look forward to further discussions.
Have questions about the material in this lesson?
We’ve got answers!
Post your questions in the Kubernetes community Slack. Questions about this lesson are best suited for the #kubernetes-novice channel.
Not yet a part of the Kubernetes Slack community? Join the discussion here.
Have feedback about this course or lesson? We want to hear it!
Send your thoughts to KubeAcademy@VMware.com.