Virtualization vs. Containerization: What is the Difference?

If you want to run an application, there are two ways of doing it: on a physical computer, or on an abstraction of a computer. The two most common forms of abstraction are virtual machines (VMs) and containers. But what’s the difference between these two forms of abstraction? 

To answer this question, let’s take a look at VMs and containers in more detail.

Server Virtualization

Server administrators have long had to deal with the fact that most servers are chronically underutilized. That's because processors become more powerful every year, and the cost of resources such as RAM and disk storage continues to fall dramatically. As a result, many servers have the potential to run more than one application – but server admins are loath to do this for many reasons, including security, reliability, and scalability.

The solution, popularized by VMware, is server virtualization. This enables a single physical server, or virtualization host, to run multiple virtual machines, or VMs. Each VM has its own operating system (and these operating systems can be different), onto which an application can be installed. 

Since VMs are designed to be isolated from each other and from their virtualization host, security issues in one application should not be able to affect another application running in a separate VM. Equally, if one application crashes and requires a server reboot, its VM can be rebooted without affecting the running of any other VMs. Unfortunately, it is occasionally possible for this isolation to break down — a phenomenon known as VM escape.

When it comes to scalability, virtualization can help because VMs are portable. For example, two applications might be running in VMs on a single virtualization host, and one of them comes to need so many more resources that the two VMs can no longer co-exist on the same host.

Without virtualization the job of moving one of the applications to a new server would be a serious administrative task. But a VM exists as a computer file, so this file can easily be copied or moved over a network (or even via storage media) to a new virtualization host. And, in fact, features such as VMware’s vMotion and Microsoft’s Live Migration even allow VMs to be moved to new hosts while they are running (a process known as live migration), ensuring that there is no interruption to the services they provide. 

This has important implications for disaster recovery too. That’s because if a disaster strikes, virtual machines can be moved to a secondary site and, crucially, this secondary site does not need to mirror the primary site. Essentially all that is needed is a sufficient number of virtualization hosts at the secondary site.

In order to become a virtualization host, a physical server needs to run a piece of software called a hypervisor (sometimes known as a virtual machine monitor), which acts as a resource broker between the physical virtualization host and the VMs. These can be "bare metal" hypervisors (also known as Type 1 hypervisors), such as VMware's ESXi or Microsoft's Hyper-V, which run directly on the physical server; or hosted hypervisors (also known as Type 2 hypervisors), such as Oracle's VirtualBox and VMware Workstation, which run as applications on top of a host operating system. (Hyper-V is sometimes mistaken for a Type 2 hypervisor because it is enabled from within Windows Server or Windows 10, but once enabled it runs directly on the hardware, beneath the host OS.)

The first hypervisors were developed by IBM in the 1960s, and today popular hypervisors include Hyper-V, ESXi, KVM, and Nutanix AHV.

Also read: Best Server Virtualization Software of 2021

Containers

In contrast to a bare-metal virtualization host, a container host runs a full operating system of its own, as well as a container engine (which plays a role loosely analogous to that of a hypervisor).

That’s because containers are not self-contained abstractions of computers in the way that VMs are. Instead, a container consists of a single application (or microservice), plus any supporting files it needs to run. It then makes use of the container host’s operating system kernel, binaries, and libraries in order to function; these shared files are exposed to the container as read-only. Other containers running on the container host share the host’s kernel, binaries, and libraries in the same way.
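This kernel sharing is easy to see in practice. As an illustrative sketch (assuming Docker is installed on a Linux host), a container reports the host's kernel version, because it has no kernel of its own:

```shell
# On the container host, check the running kernel version:
uname -r                           # e.g. 5.15.0-91-generic (varies by host)

# Inside a minimal Alpine container, run the same command.
# It prints the same kernel version, because the container
# shares the host's kernel rather than booting its own:
docker run --rm alpine uname -r
```

The same is not true of a VM, which boots its own kernel and would report whatever operating system was installed inside it.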

Because containers are far “lighter” than VMs, and far quicker to start up, they are ideal for running microservices, which can be called into existence when demand for them scales up and then taken down when demand subsides. They can also be easily moved between public and private clouds and traditional data centers.

By far the most popular container environment is Docker. Other notable container environments include rkt, Apache Mesos, LXC, containerd, Hyper-V Containers, and Windows Server Containers.

Dedicated operating systems such as Red Hat’s Fedora CoreOS have been built specifically for running containerized workloads securely and at scale.

Also read: The Growing Value of a Microservice Architecture

VMs vs. Containers

Since containers share their host’s operating system rather than having their own (as VMs do), there are some important differences between containers and VMs:

  • Containers are far smaller or “lighter” than VMs, often consisting of a few megabytes rather than gigabytes, and require far fewer hardware resources. That means a single physical server can host far more containers than VMs. 
  • Containers can be started in seconds or even milliseconds. By contrast, VMs need to go through an entire boot process to start up. 
  • Since containers all share their host’s operating system, all containerized applications on a given host have to run on the same operating system. By contrast, VMs running on a virtualization host can all run different operating systems (for example, Linux, Unix, and Windows). 
  • When using containers, only the container host’s operating system needs to be patched and updated. With VMs, each VM’s operating system has to be patched and updated.
  • If a container manages to crash the container host’s operating system, then all containers running on that host will fail. 
  • A security vulnerability in a container host’s OS kernel will affect all the containers that it is hosting. 
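The size and startup differences in the list above can be demonstrated with a couple of Docker commands (a sketch, assuming Docker is installed; exact sizes and timings vary by system):

```shell
# A minimal container image is only a few megabytes --
# the alpine base image, for instance, is under 10 MB:
docker images alpine

# And a container starts in well under a second, since there
# is no operating system to boot:
time docker run --rm alpine true
```

Booting even a stripped-down VM, by contrast, typically takes tens of seconds and consumes gigabytes of disk.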

How are VMs and Containers Used?

VMs are ideally suited to traditional resource-heavy, monolithic applications, especially as preparation for moving these applications to the cloud. 

Containers are more suited to hosting microservices used in web services, and in cases where scalability is important. When containers are used in this way, they are usually managed by a container orchestration system, which automates application deployment, scaling, and management. These systems are often based around Kubernetes, an open source system originally designed by Google but now maintained by the Cloud Native Computing Foundation.
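As a hedged illustration of what orchestration looks like in practice (assuming a Kubernetes cluster and kubectl are already configured, and using a hypothetical deployment named `web` with an `app=web` label):

```shell
# Ask Kubernetes to run three replicas of a containerized service;
# the scheduler starts or stops containers (pods) as needed:
kubectl scale deployment web --replicas=3

# List the resulting pods to confirm three copies are running:
kubectl get pods -l app=web
```

If a pod crashes or a node fails, the orchestrator automatically replaces it to maintain the requested replica count.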

Containers are also very useful for software developers, because they can build applications in containers on a laptop and be confident that they will work the same way in containers in a production environment. 
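That "build once, run anywhere" workflow can be sketched with Docker (the image name `myapp` and the registry address are hypothetical placeholders):

```shell
# On the developer's laptop: build an image from the app's Dockerfile
# in the current directory:
docker build -t registry.example.com/myapp:1.0 .

# Push the image to a shared registry...
docker push registry.example.com/myapp:1.0

# ...and run exactly the same image, byte for byte, in production:
docker run -d registry.example.com/myapp:1.0
```

Because the image bundles the application together with its libraries and dependencies, "it works on my machine" problems are largely eliminated.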

One more thing worth mentioning is that the benefits of containers and VMs can be enjoyed simultaneously. That’s because containers can run in VMs, allowing organizations to make use of existing virtualization infrastructure, such as VM management systems, to manage their containers as well. 

Read next: Transforming Networks: From Virtualization to Cloudification
