The cloud promises virtually unlimited compute power at a moment’s notice. Hoarding compute capacity in the cloud is therefore an anti-pattern. A case in point is running workloads on virtual machines. Each virtual machine comes with a fixed, pre-baked amount of compute and memory, yet the workloads running on it rarely use all of that capacity all the time. Since a virtual machine must be sized for the peak demands of the workload inside it, resources sit underutilized most of the time; in effect, the workload is hoarding them. Driven in part by the rise of containerization and microservices, momentum has gathered to break free of this anti-pattern.

Cloud Native Computing amalgamates the philosophies of containerization and microservices with the technology that helps realize them. Three broad areas merit additional detail here.

Microservices is an architectural style that favors breaking a large software application into constituent parts that can be designed, implemented, deployed, and operated independently. These parts communicate with each other by publishing events or over REST / RPC endpoints.
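As a minimal sketch of one such constituent service, here is a hypothetical "inventory" microservice exposing a single REST endpoint using only the Python standard library. The service name, route, and stock data are invented for illustration; a real deployment would use a proper web framework and datastore.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class InventoryHandler(BaseHTTPRequestHandler):
    """Hypothetical inventory microservice: GET /<sku> returns stock level."""

    STOCK = {"sku-123": 7}  # stands in for the service's own datastore

    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": self.STOCK.get(sku, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        # Silence the default per-request logging for this sketch.
        pass

def serve():
    # Bind to an ephemeral port and serve in a background thread,
    # so another "service" in the same process can call it over HTTP.
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve()
    port = server.server_address[1]
    # A second service communicating with the first over REST:
    with urlopen(f"http://127.0.0.1:{port}/sku-123") as resp:
        print(resp.read().decode())
    server.shutdown()
```

The point of the sketch is the boundary: the caller knows only the HTTP contract, so the inventory service can be redeployed, rewritten, or scaled without touching its consumers.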

In any given application, not all features see the same demand: some parts are used far more heavily than others. Breaking the application into its constituent microservices lets the heavily used parts scale independently of those that receive less traffic. This can lead to much better utilization of the underlying hardware resources.

In many applications there are short sequences of business logic that need to run in response to a change in the outside world. That change can be packaged as an event. Instead of dedicating an entire virtual machine with fixed capacity to running these short sequences, it would be ideal to allocate computing resources just in time. This is the idea behind Serverless computing.

As we embrace the cloud as a utility provider, this pattern makes tremendous sense: it promises a virtually unlimited number of executions at low cost and eliminates almost all worries around capacity. This is as granular as it gets, consuming compute power only when it’s needed.
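A serverless function is just such a short sequence of business logic invoked once per event. The sketch below mimics the handler-plus-event shape common to FaaS platforms such as AWS Lambda; the `order_placed` event type and its fields are assumptions made for illustration, not any platform’s actual schema.

```python
import json

def handler(event, context=None):
    """Short-lived business logic, invoked once per incoming event.

    The event shape ("order_placed" with an "items" list) is hypothetical;
    real platforms define their own event schemas and context objects.
    """
    if event.get("type") != "order_placed":
        return {"status": 400, "body": "unsupported event"}
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"status": 200, "body": json.dumps({"order_total": total})}

if __name__ == "__main__":
    event = {
        "type": "order_placed",
        "items": [{"price": 9.99, "qty": 2}, {"price": 4.50, "qty": 1}],
    }
    print(handler(event))
```

No process, virtual machine, or container is reserved between invocations; the platform allocates compute only for the milliseconds the handler actually runs.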

Containers are a way of packaging and deploying microservices along with their dependencies to run on virtual or physical hardware. They can be sized far more granularly than a virtual machine, and they can be brought up or taken down in seconds.

Container orchestration frameworks such as Kubernetes let you specify deployment architecture and scaling rules declaratively in JSON or YAML; the framework then schedules the containers onto the underlying hardware.
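A minimal sketch of such a declarative specification, as a Kubernetes Deployment in YAML. The service name, label, and image (`inventory-service`, `example.com/inventory:1.0`) are hypothetical placeholders; the field structure follows the `apps/v1` Deployment API.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-service          # hypothetical service name
spec:
  replicas: 3                      # scale this service independently of others
  selector:
    matchLabels:
      app: inventory-service
  template:
    metadata:
      labels:
        app: inventory-service
    spec:
      containers:
        - name: inventory-service
          image: example.com/inventory:1.0   # placeholder image
          resources:
            requests:              # sized granularly, not as a whole VM
              cpu: "250m"
              memory: "128Mi"
```

The scheduler reads this specification and places three copies of the container wherever capacity is available, restarting or rescheduling them as nodes come and go.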

The Cloud Native Computing movement is sometimes termed Cloud 2.0. If Cloud 1.0 was about adopting the modern public cloud, the next step is to embrace it in the spirit in which it was designed. The Cloud Native Computing Foundation (CNCF) is the organization dedicated to realizing the Cloud Native vision, and The New Stack has a trove of articles and blogs that help keep pace with the changes taking place in this ecosystem.