Container Instances

Virtual machines offer a way to run multiple, isolated applications on a single piece of hardware. However, virtual machines are relatively inefficient in that every single instance contains a full copy of the operating system.

Containers wrap and isolate individual applications and their dependencies but share the same underlying operating system as the other containers running on the host – as we can see in the following diagram:

Figure 7.4 – Containers versus virtual machines

This provides several advantages, including speed and the way containers are defined. Azure uses Docker as the container engine, and Docker images are built from code – a Dockerfile – which enables easier, repeatable deployments.
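As a minimal sketch, a hypothetical image for a small static website could be defined and built as follows; the registry, image name, and ./site content folder are illustrative assumptions only, and the build expects that folder to exist alongside the Dockerfile:

# A minimal, hypothetical Dockerfile written to the current folder;
# in practice, this file lives in source control alongside the application.
cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY ./site /usr/share/nginx/html
EXPOSE 80
EOF

# Build the image from that definition – the same file produces the same
# image on any machine – and push it to a (hypothetical) Azure Container
# Registry, assuming 'az acr login --name myregistry' has already been run.
docker build -t myregistry.azurecr.io/hello-web:v1 .
docker push myregistry.azurecr.io/hello-web:v1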

Because containers are also lightweight, they are much faster to provision and start up, enabling applications based on them to react quickly to demands for resources.
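To illustrate how quickly a container can be up and running, the following Azure CLI sketch starts Microsoft's public hello-world sample image in Azure Container Instances; the resource group name, container name, and DNS label are hypothetical:

# Create a resource group to hold the container instance (hypothetical names).
az group create --name rg-containers-demo --location westeurope

# Run a single container in Azure Container Instances; it is typically
# serving traffic in far less time than a virtual machine takes to provision.
az container create \
  --resource-group rg-containers-demo \
  --name aci-hello \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label aci-hello-demo \
  --ports 80

# Confirm the provisioning state and the public FQDN of the container.
az container show \
  --resource-group rg-containers-demo \
  --name aci-hello \
  --query "{state:provisioningState, fqdn:ipAddress.fqdn}" \
  --output table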

Containers are ideal for a range of scenarios. Many legacy applications can be containerized relatively quickly, making them a great option when migrating to the cloud.

Containers’ lightweight and resource-efficient nature also lends itself to microservice architectures whereby applications are broken into smaller services that can scale out with more instances in response to demand.

We cover containers in more detail later in this chapter, in the Architecting for containerization and Kubernetes section.

What to watch out for

Not all applications can be containerized, and containerization removes some controls that would otherwise be available on a standard virtual machine.

As the number of images and containers increases in an application, it can become challenging to maintain and manage them; in these cases, an orchestration layer may be required, which we will cover next.

Azure Kubernetes Service (AKS)

Microservice-based applications often require specific capabilities to be effective, such as automated provisioning and deployment, resource allocation, monitoring and responding to container health events, load balancing, traffic routing, and more.

Kubernetes is an open source platform that provides these capabilities, which are collectively referred to as orchestration.
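As a brief, hedged example of what this looks like in practice, the following kubectl commands use Microsoft's public aks-helloworld sample image and hypothetical names; they describe the desired state – replicas, a load-balanced endpoint, and autoscaling limits – and leave the scheduling, health monitoring, and traffic routing to the orchestrator:

# Run three replicas of the sample application; Kubernetes spreads them
# across nodes and replaces replicas that fail or whose node goes down.
kubectl create deployment hello-web \
  --image=mcr.microsoft.com/azuredocs/aks-helloworld:v1 \
  --replicas=3

# Expose the deployment behind a load balancer that routes traffic
# only to healthy replicas.
kubectl expose deployment hello-web --type=LoadBalancer --port=80 --target-port=80

# Scale between 3 and 10 replicas automatically, based on CPU usage.
kubectl autoscale deployment hello-web --min=3 --max=10 --cpu-percent=70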

Azure Kubernetes Service (AKS) is the ideal choice for microservice-based applications that need to respond dynamically to events such as individual node outages, or to scale resources automatically in response to demand. Because AKS is a managed service, much of the complexity of creating and managing the cluster is taken care of for you.
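A minimal sketch of creating such a cluster with the Azure CLI might look like the following; the resource group, cluster name, and autoscaler limits are hypothetical:

# Create a managed AKS cluster with the cluster autoscaler enabled, so the
# number of nodes grows and shrinks with demand (hypothetical names and limits).
az aks create \
  --resource-group rg-containers-demo \
  --name aks-demo \
  --node-count 2 \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 5 \
  --generate-ssh-keys

# Download credentials so that kubectl can manage the new cluster.
az aks get-credentials --resource-group rg-containers-demo --name aks-demo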

The following diagram shows a high-level overview of a typical AKS cluster, which is described in more detail in the Azure Kubernetes Service section later in this chapter:

Figure 7.5 – AKS cluster

AKS is also platform-independent – any application built to run on Kubernetes can easily be migrated from one cluster to another, regardless of whether that cluster is hosted in Azure, on-premises, or with another cloud vendor.
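For example, assuming an application's manifests are kept in a ./k8s/ folder (a hypothetical path), pointing kubectl at a different cluster is often all that is needed to redeploy it:

# List the clusters kubectl already knows about and switch to the AKS one
# (context names are hypothetical and come from each cluster's kubeconfig).
kubectl config get-contexts
kubectl config use-context aks-demo

# Apply exactly the same manifests that were used on-premises or elsewhere.
kubectl apply -f ./k8s/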

As already stated, we cover containers and AKS in more detail later in this chapter, in the Architecting for containerization and Kubernetes section.