Before containers and the cloud, applications were hosted on physical servers housed within an organization's own data center. Back then, a server would typically run one application at a time, as there was no clear way to define resource boundaries. As you might expect, this approach became extremely expensive and difficult to manage over time, especially when a typical enterprise runs north of 450 applications.
Next came virtualization.
Virtualization platforms like VMware isolate parts of a server so organizations can spin up what is known as a virtual machine (VM). A VM is treated the same way a physical server would be; it's simply an abstraction of the underlying hardware. Virtualization was the first step towards better resource utilization: although each VM still needs its own operating system (OS), organizations could now deploy multiple workloads on a single machine.
Today, the modern way to deploy new workloads is through containers. Containers share much of the same logic as virtualization in that they abstract away the underlying infrastructure, but containers go one step further: rather than virtualizing the hardware, they virtualize at the OS level, so multiple containers share the host machine's OS kernel.
As explained by Docker, one of the leading forces behind containers, “containers are a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another”. Unlike VMs, containers don't require you to install a full OS for each workload on a single server, making them lightweight and extremely portable. They're not tied to a single machine and can easily be deployed on any type of cloud.
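To make that packaging concrete, here's a minimal sketch of a Dockerfile, the recipe Docker uses to build a container image. The base image tag, file names (requirements.txt, app.py), and port below are hypothetical placeholders for illustration, not taken from any particular project.

```dockerfile
# Start from a slim base image; the OS kernel itself is not
# packaged here, it is shared with the host at runtime.
FROM python:3.12-slim

# Work inside /app within the image.
WORKDIR /app

# Install dependencies first so this layer is cached
# when only the application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image.
COPY app.py .

# Document the port the app listens on (hypothetical).
EXPOSE 8000

# The command to run when a container starts from this image.
CMD ["python", "app.py"]
```

Building the image (docker build -t my-app .) and then running it (docker run -p 8000:8000 my-app) produces the same behavior on a laptop, an on-premises server, or any cloud that can run containers, which is exactly the portability described above.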