Architecting for containerization and Kubernetes
This section will look in more detail at Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes offering. To understand what AKS is, we need to take a small step back and look at containerization and Kubernetes itself.
Containerization
As we briefly mentioned earlier, containerization is a form of virtualization: you can run multiple containers on the same hardware, much as you can run multiple virtual machines. Unlike virtual machines, however, containers share the underlying OS of the host. This makes them far more efficient and dense – because you avoid the memory overhead of running a separate copy of the OS for each workload, you can run many more containers than virtual machines on the same hardware, as we can see in the following diagram:

Figure 7.10 – Containers versus virtual machines
In addition to this efficiency, containers are portable: because they are self-contained and isolated, they can easily be moved from one host to another. A container includes everything it needs to run – the application code, runtime, system tools, libraries, and settings.
To run containers, you need a container runtime on the host – the most common is Docker, and the container capabilities in Azure use the Docker runtime.
A container is a running instance of an image; the image defines what that instance contains. Images themselves are defined in code – for Docker images, this definition file is called a Dockerfile.
The Dockerfile uses a specific syntax that defines what base image you wish to use – either a vanilla OS or an existing image with other tools and components on it – followed by your unique configuration options, which may include additional software to install, networking, file shares, and so on. An example Dockerfile might look like this:
# Start from the official slim Node.js base image
FROM node:current-slim
# Set the working directory inside the image
WORKDIR /usr/src/app
# Copy the dependency manifest and install the dependencies it lists
COPY package.json .
RUN npm install
# Copy in the rest of the application source code
COPY . .
# Document the listening port and define the container's start command
EXPOSE 8080
CMD [ "npm", "start" ]
In this example, we start from an image called node:current-slim, set a working directory, copy in the package.json file, and run npm install to install the dependencies it lists. We then copy in the rest of the application code, declare that the application listens on port 8080, and set npm start as the command to run when the container starts.
This Dockerfile creates a new image, but notice how it is based on an existing image. By extending existing images, you can more easily build your containers with consistent patterns.
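To turn a Dockerfile like this into an image and test it locally, you would use the standard Docker CLI. The following is a minimal sketch, assuming the Dockerfile sits in the current directory and using a hypothetical image name of myapp:

# Build the image from the Dockerfile in the current directory and tag it
docker build -t myapp:v1 .
# Run a container from the image, mapping host port 8080 to the container's port 8080
docker run -d -p 8080:8080 --name myapp-test myapp:v1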
The images we build, or use as a source, are held in a container registry. Docker has its own public container registry (Docker Hub), but you can create your own private registry with the Azure Container Registry service in Azure.
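As a sketch of how this might look with the Azure CLI – the resource group, registry name, and image tag used here are placeholders – you could create a registry and push the image we built earlier like this:

# Create an Azure Container Registry (names here are hypothetical)
az acr create --resource-group myResourceGroup --name myregistry --sku Basic
# Log in to the registry, then tag and push the local image to it
az acr login --name myregistry
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1

Once pushed, the image can be pulled from myregistry.azurecr.io by any of the Azure container services described next.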
Once we have created our new image and stored it in a container registry, we can deploy that image as a running container. Containers in Azure can be run using Azure Container Instances (ACI), a containerized web app, or an AKS cluster.
Web apps for containers
Web apps for containers are a great choice if your development team is already used to Azure Web Apps for running monolithic or N-tier apps and you want to start moving toward a containerized platform. Web Apps works best when you only need one or a few long-running instances, or when you would benefit from a shared or free App Service plan.
An example use case might be an existing .NET app that you wish to containerize but that hasn't been built as a microservice.
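As a rough sketch with the Azure CLI – the plan name, app name, and image reference are hypothetical – deploying the image from our registry to a containerized web app could look like this:

# Create a Linux App Service plan to host the containerized web app
az appservice plan create --resource-group myResourceGroup --name myPlan --is-linux --sku B1
# Create the web app, pointing it at the image in our registry
az webapp create --resource-group myResourceGroup --plan myPlan --name mycontainerwebapp --deployment-container-image-name myregistry.azurecr.io/myapp:v1

Note that, depending on your Azure CLI version, the image parameter may be named differently, so check the current documentation before relying on this exact syntax.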
Azure Container Instances
ACI is a fully managed environment for running containers, and you are only billed for the time your containers run. As such, it suits short-lived microservices, although, like web apps for containers, you should only consider this option if you are running a small number of services.
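As a minimal sketch, again using placeholder names and the image pushed to our registry earlier, an ACI deployment from the Azure CLI might look like this:

# Run the image as a single container instance with a public IP and DNS label
az container create --resource-group myResourceGroup --name myapp-aci --image myregistry.azurecr.io/myapp:v1 --ports 8080 --dns-name-label myapp-demo --registry-login-server myregistry.azurecr.io --registry-username <acr-username> --registry-password <acr-password>

The registry username and password placeholders would come from your own Azure Container Registry credentials or, preferably, a managed identity.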
Web apps for containers and ACI are great for simple services or when you are starting the containerization journey. Once your applications begin to fully embrace microservices and containerized patterns, you will need better control and management; for these scenarios, you should consider using AKS.
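For reference, and purely as a sketch (the cluster name and node count are placeholders – we cover AKS properly in the rest of this section), creating a basic AKS cluster and connecting to it with kubectl looks like this:

# Create a small AKS cluster with two worker nodes
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys
# Download credentials so that kubectl can talk to the cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes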