The key to implementing successful cloud-scale solutions is delivering services that can operate effectively, efficiently and, when required, independently.  Containers provide a level of process and service isolation that has historically been neither possible nor practical.

Containers provide isolated, governed resources for applications to utilise and are far more compute efficient than conventional virtualisation.  With virtualisation (such as Hyper-V), each guest runs its own full operating system on the host infrastructure, which provides effective isolation but relatively inefficient use of compute resources, compounded by resource-governance challenges such as having to restart guests when compute allocations are scaled up or down.  The key benefit of virtualisation is that almost any workload can be delivered this way.

When workloads are deployed as containers (for example in Docker), resources are isolated from the perspective of the container but shared from the perspective of the host, resulting in much more efficient use of resources.  Resource governance is also intrinsic to the environment, although this means workloads must be container aware and capable, which can be a challenge for incumbent applications.  The result is that compute is both allocated and consumed efficiently, so more can be achieved with a given quota of compute.
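
As an illustration of this kind of container-level resource governance, the sketch below uses the Docker SDK for Python.  It is a minimal example, not a definitive implementation: the docker-py library, a local Docker daemon and the nginx:alpine image are all assumptions introduced here for the purpose of the illustration.  It starts a container with explicit CPU and memory limits and then adjusts those limits in place, without restarting the workload.

import docker

# Minimal sketch using the Docker SDK for Python (docker-py).  It assumes a
# local Docker daemon is running and that the nginx:alpine image is
# available (or can be pulled); both are illustrative choices.
client = docker.from_env()

# Start a container with explicit resource governance: the limits apply to
# this container only, while the host kernel remains shared.
web = client.containers.run(
    "nginx:alpine",
    detach=True,
    mem_limit="256m",        # hard memory cap for this container
    nano_cpus=500_000_000,   # roughly half of one CPU
)

# Allocations can be adjusted in place, without restarting the workload,
# in contrast to resizing a virtual machine guest.
web.update(cpu_quota=100_000, cpu_period=100_000)  # raise to one full CPU

# Clean up the illustrative container.
web.stop()
web.remove()

Under the hood, Docker enforces these limits with operating-system features such as Linux cgroups, which is what makes the governance intrinsic to the environment rather than something each guest has to provide for itself.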