The cloud promises virtually unlimited compute power at a moment's notice. It is therefore an anti-pattern for workloads to hoard compute capacity in the cloud. A case in point is the use of virtual machines to run workloads. Each virtual machine comes with a certain pre-baked capacity in terms of compute and memory, yet the workload running on it rarely utilizes all of that capacity all the time. Because the size of a virtual machine is dictated by the peak utilization demands of the workload running inside it, the available resources sit idle much of the time; we can go as far as to say the workloads are hoarding resources. Driven in part by the rise of containerization and microservices, momentum has gathered to break free of this anti-pattern.
Microservices is an architectural style that favors breaking a large software application down into constituent parts that can be designed, implemented, deployed, and operated independently. These constituent parts communicate with each other by exchanging events or by calling REST / RPC endpoints.
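To make the idea concrete, here is a minimal sketch of one such constituent part: a hypothetical "inventory" microservice that exposes a single REST endpoint using only the Python standard library. The service name, path, and payload are illustrative assumptions, not anything prescribed by the text.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """Hypothetical microservice exposing one REST endpoint: GET /stock."""

    def do_GET(self):
        if self.path == "/stock":
            # Respond with a JSON document, as a REST peer would expect.
            body = json.dumps({"item": "widget", "count": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        # Suppress per-request logging to keep output quiet.
        pass

def serve(port: int = 0) -> HTTPServer:
    """Start the service on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because the service owns its endpoint and data format, it can be deployed and scaled independently of any other part of the application that calls it.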
In many applications there are short sequences of business logic that need to run in response to a change that takes place in the world. Such a change can be packaged as an event. Instead of dedicating an entire virtual machine with fixed capacity to run these short sequences, it would be ideal if the computing resources were allocated just in time. This is the idea behind Serverless computing.
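A serverless function is typically just such a short sequence of logic, written as a handler that the platform invokes with the event as input. The sketch below assumes an AWS Lambda-style signature and an invented event shape (an object-storage upload carrying an `object_key` field); both are illustrative, not taken from the text.

```python
# Hedged sketch of a serverless handler. The (event, context) signature
# follows the AWS Lambda convention; the event fields are assumptions.
def handler(event, context=None):
    """Run a short piece of business logic in response to one event."""
    # e.g. an object-storage upload event carrying the uploaded file's key
    key = event.get("object_key", "unknown")
    # ... business logic for this one change in the world would go here ...
    return {"status": "processed", "object_key": key}
```

The platform allocates compute only for the duration of each invocation, so no capacity is held while no events are arriving.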
Containers are a way of packaging Microservices along with their dependencies and deploying them to run on virtual or physical hardware. They can be sized much more granularly than a virtual machine, and they can be brought up and taken down in seconds.
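As an illustration of packaging a service with its dependencies, here is a minimal Dockerfile. It assumes a Python microservice whose dependencies are listed in a `requirements.txt` and whose entry point is an `app.py`; all of these names are hypothetical.

```dockerfile
# Minimal sketch: package a hypothetical Python microservice as a container image.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code and declare how to start the service.
COPY . .
CMD ["python", "app.py"]
```

The resulting image bundles the runtime, the libraries, and the code, so the same artifact runs identically on any host with a container runtime.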
Container orchestration frameworks such as Kubernetes provide ways to specify the deployment architecture and scaling rules using JSON/YAML. The containers are then scheduled to run on the available hardware.
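A deployment specification of this kind might look like the following Kubernetes manifest: a sketch of a Deployment that asks for three replicas of a container and declares per-container resource requests and limits. The service name and image URL are assumptions for illustration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory            # hypothetical service name
spec:
  replicas: 3                # scaling rule: run three copies
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
        - name: inventory
          image: example.com/inventory:1.0   # hypothetical image
          resources:
            requests:                        # granular sizing per container
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
```

Given this declaration, the scheduler places the containers on whichever nodes have spare capacity, rather than reserving a whole virtual machine per workload.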