Azure Kubernetes Service – Designing Compute Solutions
We looked at containerization, specifically with Docker (Azure’s default container engine), in the previous section. Now that we have container images registered in Azure Container Registry, we can use those images to spin up instances – that is, running containers.
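As a minimal sketch of that workflow, the following commands authenticate against a registry and run an image from it locally; the registry name `myregistry` and image `myapp:v1` are illustrative, not real resources:

```bash
# Authenticate Docker against the registry
# ("myregistry" is a hypothetical ACR name)
az acr login --name myregistry

# Pull and run the image as a detached container,
# mapping port 8080 on the host to port 80 in the container
docker run -d -p 8080:80 myregistry.azurecr.io/myapp:v1
```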
Two questions may spring to mind – the first is, what now? Or perhaps more broadly, why bother? In theory, we could achieve some of what a container gives us with a simple virtual machine image.
Of course, one reason for containerization is portability – that is, we can run those images on any platform that runs Docker. However, the other main reason is density: because containers share the host OS, we can run many more instances on the same underlying hardware.
This fact, in turn, allows us to create software using a pattern known as microservices.
Traditionally, a software service may have been built as a monolith – that is, one big code base that runs on a server. The problem with this pattern is that it can be quite hard to scale – if you need more power, you can only go so far by adding more RAM and CPU.
The first answer to this issue was to build applications that could be duplicated across multiple servers and then have requests load balanced between them – and in fact, this is still a pervasive pattern.
As software started to be developed in a more modular fashion, those individual modules would be broken up and run as separate services, each being responsible for a particular aspect of the system. For example, we might split off a product ordering component as an individual service that gets called by other parts of the system, and this service could run on its own server.
While we can quickly achieve this by running each service on its own virtual machine, every VM carries the memory overhead of a full OS. As we break our system into more and more individual services, that overhead multiplies, and we soon become very inefficient from a resource usage point of view.
And here is where containers come in. Because they offer isolation without running a full OS each time, we can run our processes far more efficiently – that is, we can run far more on the same hardware than we could on standard virtual machines.
By this point, you might be asking: how do we manage all this? What controls the spinning up of new containers or shutting them down? The answer is orchestration. Container orchestrators monitor containers, adding instances in response to usage thresholds and replacing running containers that become unhealthy for any reason. Kubernetes is an orchestration service for managing containers.
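As a concrete sketch of that threshold-driven behavior, Kubernetes can autoscale a workload with a single command. This assumes a deployment named `myapp` already exists in the cluster (a hypothetical name used only for illustration):

```bash
# Keep between 2 and 10 replicas of the (hypothetical) myapp
# deployment, targeting 70% average CPU usage across its pods
kubectl autoscale deployment myapp --cpu-percent=70 --min=2 --max=10

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa myapp
```

Behind this command, Kubernetes creates a HorizontalPodAutoscaler object that continuously compares observed CPU usage against the target and adjusts the replica count accordingly.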
A Kubernetes cluster consists of worker machines, called nodes, that run containerized applications, and every cluster has at least one worker node. The worker node(s) host pods that are the application’s components, and a control plane or cluster master manages the worker nodes and the pods in the cluster. We can see a logical overview of a typical Kubernetes cluster, with all its components, in the following diagram:

Figure 7.11 – Kubernetes control plane and components
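Once connected to a cluster, you can inspect these components yourself with kubectl. A couple of illustrative commands, assuming your kubeconfig already points at a cluster:

```bash
# List the worker nodes in the cluster
kubectl get nodes -o wide

# List pods across all namespaces, including the system pods
# in kube-system that support the cluster's operation
kubectl get pods --all-namespaces
```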
AKS is Microsoft’s implementation of a managed Kubernetes cluster. When you create an AKS cluster, a cluster master is automatically created and configured; there is no cost for the cluster master, only the nodes that are part of the AKS cluster.
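As a rough sketch, creating a basic two-node AKS cluster with the Azure CLI might look like the following; the resource group name, cluster name, and region are all illustrative:

```bash
# Create a resource group to hold the cluster
az group create --name myResourceGroup --location eastus

# Create a two-node AKS cluster; the cluster master is
# provisioned and managed by Azure at no extra cost
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys
```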
The cluster master includes the following Kubernetes components:
- kube-apiserver: The API server exposes the underlying Kubernetes APIs and provides the interaction point for management tools such as kubectl, which is used to manage the cluster (see the sketch after this list).
- etcd: A highly available key-value store that records the state of your cluster.
- kube-scheduler: Determines which nodes can run your workloads and starts them as you create or scale applications.
- kube-controller-manager: Manages a set of smaller controllers that perform pod and node operations.
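To work against the managed cluster master, you merge the cluster’s access credentials into your local kubeconfig and then use kubectl as with any other cluster. Continuing with the hypothetical names from the earlier sketch:

```bash
# Merge the cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# kubectl now talks to the managed kube-apiserver
kubectl cluster-info
```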
You define the number and size of the nodes, and the Azure platform configures secure communication between the cluster master and nodes.
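For instance, adjusting the node count after creation is a single CLI call, again using the hypothetical cluster from the earlier sketch:

```bash
# Scale the cluster's default node pool to three nodes
az aks scale \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3
```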