Public IP addresses – Network Connectivity and Security

A public IP address is a discrete component that can be created and attached to various services, such as VMs. A public IP is dedicated to a resource until you unassign it – in other words, you cannot use the same public IP across multiple resources at the same time.

Public IP addresses can be either static or dynamic. With a static IP, the assigned address stays the same from the moment the resource is created until it is deleted. A dynamic address can change in specific scenarios. For example, if you create a public IP address for a VM as a dynamic address, the address is released when you stop (deallocate) the VM, and a different address may be assigned when you start the VM up again. With a static address, the IP is assigned when you attach it to the VM and stays the same until you manually remove it.

Static addresses are useful if access to the service is controlled by a firewall device that can only be configured with IP addresses, or if the service relies on DNS name resolution, because changing the IP would mean the DNS record would also need updating. You also need a static address if you use TLS/SSL certificates linked to IP addresses.

Private IP addresses

Private IP addresses can be assigned to various Azure components, such as VMs, network load balancers, or application gateways. The devices are connected to a VNET, and the IP range you wish to use for your resources is defined at the VNET level.

When creating VNETs, you assign an IP range; the default is 10.0.0.0/16 – which provides 65,536 possible IP addresses. VNETs can contain multiple ranges if you wish; however, you need to be careful that those ranges do not overlap with each other or with any other networks you may later connect to.

When assigning IP ranges, you denote the range using CIDR notation – a forward slash (/) followed by a number that defines the length of the network prefix, which in turn determines the number of addresses within that range. The following are just some example ranges: a /8 provides 16,777,216 addresses, a /16 provides 65,536, a /24 provides 256, and a /29 provides just 8.

Tip

CIDR notation is a more compact way to state an IP address and its range, based on a subnet mask. The number after the slash (/) is the count of leading 1 bits in the network mask. The complete range of addresses can be found here: https://bretthargreaves.com/ip-cheatsheet/.

For more in-depth details of CIDR, see https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing.
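
As a simple illustration (not taken from the Azure documentation), Python's built-in ipaddress module can be used to see how the prefix length maps to the number of addresses in a range:

import ipaddress

# Each bit removed from the prefix doubles the number of addresses in the range.
for prefix in (8, 16, 24, 29):
    network = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"10.0.0.0/{prefix}: {network.num_addresses} addresses")

# Output:
# 10.0.0.0/8: 16777216 addresses
# 10.0.0.0/16: 65536 addresses
# 10.0.0.0/24: 256 addresses
# 10.0.0.0/29: 8 addresses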

Subnets are then created within the VNET, and each subnet must also be assigned an IP range that is within the range defined at the VNET level, as we can see in the following example diagram:

Figure 8.2 – Subnets within VNETs

For every subnet you create, Azure reserves five IPs for internal use – for smaller subnets, this has a significant impact on the number of available addresses. The reservations within a given range are as follows:

  • x.x.x.0 – the network address
  • x.x.x.1 – reserved by Azure for the default gateway
  • x.x.x.2 and x.x.x.3 – reserved by Azure to map the Azure DNS IPs into the VNET
  • x.x.x.255 – the network broadcast address

With these reservations in mind, the minimum size of a subnet in Azure is a /29 network with eight IPs, of which only three are usable. The largest allowable range is /8, giving 16,777,216 IPs, of which 16,777,211 are usable.
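
The following short Python sketch (an illustration only) shows how the usable count follows from the size of the range minus Azure's five reserved addresses:

import ipaddress

AZURE_RESERVED = 5  # network address, default gateway, two DNS addresses, broadcast

def usable_azure_ips(cidr: str) -> int:
    # Total addresses in the range minus the five addresses Azure reserves per subnet.
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED

print(usable_azure_ips("10.0.0.0/29"))  # 3
print(usable_azure_ips("10.0.0.0/8"))   # 16777211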

Private ranges in Azure can be used purely for services within your Azure subscriptions. If you don’t connect the VNETs or require communications between them, you can have more than one VNET with the same ranges.

If you plan to allow services within one VNET to communicate with another VNET, you must consider more carefully the ranges you assign to ensure they do not overlap. This is especially crucial if you use VNETs to extend your private corporate network into Azure, as creating ranges that overlap can cause routing and addressing problems.

As with public IPs, private IPs can also be static or dynamic. With dynamic addressing, Azure assigns the next available IP within the given range. For example, if you are using a 10.0.0.0 network, and 10.0.0.3–10.0.0.20 are already used, your new resource will be assigned 10.0.0.21.
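
Conceptually, dynamic assignment behaves like the following sketch, which simply walks the range and picks the first free address; this is purely illustrative and is not how Azure's allocator is actually implemented:

import ipaddress

def next_available(cidr: str, in_use: set) -> str:
    # Return the first host address in the range that is not already in use.
    for host in ipaddress.ip_network(cidr).hosts():
        if str(host) not in in_use:
            return str(host)
    raise RuntimeError("No free addresses left in the range")

used = {f"10.0.0.{i}" for i in range(1, 21)}  # 10.0.0.1-10.0.0.20 reserved or in use
print(next_available("10.0.0.0/24", used))    # 10.0.0.21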

Understanding IP addressing and DNS in Azure – Network Connectivity and Security

When building services in Azure, you will sometimes choose to use internal IP addresses, external IP addresses, or both. Internal IP addresses can only communicate internally and require a VNET. Many services can also use public IP addresses, which allow you to communicate with the service from the internet.

Before we delve into public and internal IP addresses, we need to understand the basics of IP addressing in general, and especially the use of subnets and subnet masks.

Understanding subnets and subnet masks

When devices are connected to a TCP/IP-based network, they are provided with an IP address in the notation xxx.xxx.xxx.xxx. Generally, all devices that are on the same local network can communicate with each other without any additional settings.

When devices on different networks need to communicate, they must do so via a router or gateway. Devices use a subnet mask to differentiate between addresses on the local network and those on a remote network.

The network mask breaks down an IP address into a device or host address component and a network component. It does this by laying a binary mask over the IP address with the host address to the right.

255 in binary is 11111111 and 0 in binary is 00000000. The mask states how many of the bits in an address belong to the network, with a 1 denoting a network bit and a 0 denoting a host bit.

Thus, 255.0.0.0 becomes 11111111.00000000.00000000.00000000; therefore, in the address 10.0.0.1, 10 is the network and 0.0.0.1 is the host address. Similarly, with a mask of 255.255.0.0 and an address of 10.0.0.1, 10.0 becomes the network and 0.1 the host. The following diagram shows this concept more clearly:

Figure 8.1 – Example subnet mask
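
The same split can be demonstrated with a few lines of Python (shown purely as an illustration of the figure above):

import ipaddress

# 10.0.0.1 with a 255.255.0.0 mask: the first 16 bits identify the network,
# the remaining 16 bits identify the host.
interface = ipaddress.ip_interface("10.0.0.1/255.255.0.0")
print(interface.network)            # 10.0.0.0/16
print(bin(int(interface.netmask)))  # 0b11111111111111110000000000000000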

Splitting an address space into multiple networks is known as subnetting, and subnets can be broken down into even smaller subnets until there are no host bits left to divide.

When configuring IP settings for devices, you often supply an IP address, a subnet mask, and the address of the router on the local network that will connect you to other networks.

Sometimes, when denoting an IP address range, the subnet mask and range are written in a shorthand form known as CIDR notation. We will cover CIDR notation examples in the Private IP addresses sub-section.

This is a relatively simplified overview of network addressing and subnetting, and although the AZ-304 exam will not explicitly ask you questions on this, it does help to better understand the next set of topics.

Understanding Azure networking options – Network Connectivity and Security

In the previous chapter, we examined the different options when building compute services, from the different types of Virtual Machines (VMs) to web apps and containerization.

All solution components need to be able to communicate effectively and safely; therefore, in this chapter, we will discuss the options we have for controlling traffic flow with route tables and load balancing components, securing traffic with different firewall options, and managing IP addressing and name resolution.

With this in mind, we will cover the following topics:

  • Understanding Azure networking options
  • Understanding IP addressing and DNS in Azure
  • Implementing network security
  • Connectivity
  • Load balancing and advanced traffic routing

Technical requirements

This chapter will use the Azure portal (https://portal.azure.com) and you need an Azure subscription for the examples.

Understanding Azure networking options

Services in Azure need to communicate, and this communication is performed over a virtual network, or VNET.

There are essentially two types of networking in Azure – private VNETs and the Azure backbone. The Azure backbone is a fully managed service. The underlying details are never exposed to you – although the ranges used by many services are available, grouped by region, for download in a JSON file. The Azure backbone is generally used when non-VNET-connected services communicate with each other; for example, when storage accounts replicate data or when Azure functions communicate with SQL and Cosmos DB, Azure handles all aspects of these communications. This can cause issues when you need more control, especially if you want to limit access to your services at the network level, that is, by implementing firewall rules.

Important Note

The address ranges of services in Azure change continually as the services grow within any particular region, and can be downloaded from this link: https://www.microsoft.com/en-us/download/details.aspx?id=56519.
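
If you need to work with these published ranges programmatically, something like the following sketch can help. It assumes you have already downloaded the JSON file (saved here under the illustrative name azure-ip-ranges.json) and that it follows the usual layout of a top-level values list with addressPrefixes under properties – check the file you download, as Microsoft controls the format and it may change:

import json

# Load the downloaded "Azure IP Ranges and Service Tags" file.
with open("azure-ip-ranges.json") as f:
    service_tags = json.load(f)

# Print the address prefixes for a single service tag, for example Storage in East US.
for tag in service_tags["values"]:
    if tag["name"] == "Storage.EastUS":
        for prefix in tag["properties"]["addressPrefixes"]:
            print(prefix)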

Some services can either be integrated with, or built on top of, a VNET. VMs are the most common example of this, and to build a VM, you must use a VNET. Other services can also be optionally integrated with VNETs in different ways. For example, VMs can communicate with an Azure SQL database using a service endpoint, enabling you to limit access and ensure traffic is kept private and off the public network. We look at service endpoints and other ways to secure internal communications later in this chapter, in the Implementing network security section.

The first subject we need to look at when dealing with VNETs and connectivity is that of addressing and the Domain Name System (DNS).

Deployments and YAML – Designing Compute Solutions

The pods that make up an application are defined in a deployment, which is described within a YAML manifest. The manifest defines everything you need to state: how many copies, or replicas, of a pod to run, what resources each pod requires, the container image to use, and any other information necessary for your service.
A typical YAML file may look like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi

In this example, taken from the docs.microsoft.com site, we see a deployment using the nginx container image, requesting a minimum of 250m (millicores) of CPU and 64Mi (mebibytes) of RAM, and allowing a maximum of 500m and 256Mi.

Tip

A mebibyte (Mi) is equal to 1,024 kibibytes (just under 1.05 MB), whereas a millicore (m) is one-thousandth of a CPU core.

Once we have our pods and applications defined within a YAML file, we can use that file to tell our AKS cluster to deploy and run our application. This can be performed by running the deployment commands against the AKS APIs or via DevOps pipelines.
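
The most common tool for this is kubectl (for example, kubectl apply -f deployment.yaml). As a small illustration of driving the same API from code – a sketch only, assuming the official Kubernetes Python client is installed and that your kubeconfig has been populated with the cluster's credentials (for example, via az aks get-credentials) – you could do the following:

from kubernetes import client, config, utils

# Load the cluster credentials from the local kubeconfig file.
config.load_kube_config()

# Create the objects described in the manifest, including the deployment.
utils.create_from_yaml(client.ApiClient(), "deployment.yaml")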

Kubernetes is a powerful tool for building resilient and dynamic applications that use microservices, and container images make those applications efficient and portable; however, Kubernetes clusters are complex.

AKS abstracts much of the complexity of using and managing a Kubernetes cluster. Still, your development and support teams need to be fully conversant with the unique capabilities and configuration options available.

Summary

This chapter looked at the different compute options available to us in Azure and examined the strengths and weaknesses of each. With any solution, the choice of technology depends on your requirements and the skills of the teams building it.

We then looked at how to design update management processes to ensure any VMs we use as part of our solution are kept up to date with the latest security patches.

Finally, we looked at how we can use containerization in our solutions, and specifically how Azure Kubernetes Service provides a flexible and dynamic approach to running microservices.

In the next chapter, we will look at the different networking options in Azure, including load balancing for resilience and performance.

Exam scenario

The solutions to the exam scenarios can be found at the end of this book.

Mega Corp is planning a new multi-service solution to help the business manage expenses. The application development team has decided to break the solution into different services that communicate with each other.

End users will upload expense claims as a Word document to the system, and these documents must flow through to different approvers.

The HR department also wants to amend some of the workflows themselves as they can change often.

The application will have a web frontend, and the application developers are used to building .NET websites. However, they would like to start moving to a more containerized approach.

Suggest some compute components that would be suited to this solution.

Nodes and node pools – Designing Compute Solutions

An AKS cluster has one or more nodes, which are virtual machines running the Kubernetes node components and container runtime:

  • kubelet is the Kubernetes agent that responds to requests from the cluster master and runs the requested containers.
  • kube-proxy manages virtual networking.
  • The container runtime is the Docker engine that runs your containers.

The following diagram shows these components and their relation to Azure:

Figure 7.12 – AKS nodes

When you define your AKS nodes, you choose the SKU of the VM you want, which in turn determines the number of CPUs, the amount of RAM, and the type of disk. You can also run GPU-powered VMs, which are great for mathematical and AI-related workloads.

You can also set up the maximum and the minimum number of nodes to run in your cluster, and AKS will automatically add and remove nodes within those limits.

AKS nodes are built with either Ubuntu Linux or Windows Server 2019, and because the cluster is managed, you cannot change this. If you need a different OS or container runtime, you must build your own Kubernetes cluster using the appropriate engine.

When you define your node sizes, you need to be aware that Azure automatically reserves an amount of CPU and RAM to ensure each node performs as expected – these reservations are 60 millicores of CPU and 20% of RAM, up to a maximum of 4 GB. So, if your VMs have 7 GB of RAM, the reservation will be 1.4 GB, but for any VM with 20 GB of RAM and above, the reservation will be 4 GB.

This means that the actual RAM and CPU amounts available to your nodes will always be slightly less than the size would otherwise indicate.
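
As a rough worked example of the RAM rule described above (this uses the simplified 20%-capped-at-4 GB figure from the text; the exact formula AKS applies is tiered and documented by Microsoft):

def reserved_ram_gb(node_ram_gb: float) -> float:
    # Simplified rule from the text: 20% of RAM is reserved, capped at 4 GB.
    return min(node_ram_gb * 0.20, 4.0)

print(reserved_ram_gb(7))   # 1.4
print(reserved_ram_gb(20))  # 4.0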

When you have more than one node of the same configuration, you group them into a node pool, and the first node is created within the default node pool. When you upgrade or scale an AKS cluster, the action will be performed against either the default node pool or a specific node pool of your choosing.

Pods

A node runs your applications within pods. Typically, a pod has a one-to-one mapping to a container, that is, a running instance. However, in advanced scenarios, you can run multiple containers within a single pod.

At the pod level, you define the amount of resources to assign to your particular services, such as the amount of RAM and CPU. When a pod is required to run, the Kubernetes scheduler attempts to place it on a node with enough available resources to match what you have defined.

Azure Kubernetes Service – Designing Compute Solutions

We looked at containerization, and specifically at Docker (Azure's default container engine), in the previous section. We now have container images registered in Azure Container Registry, and from there, we can use those images to spin up instances, or running containers.

Two questions may spring to mind – the first is, what now? Or perhaps more broadly, why bother? In theory, we could achieve some of what a container provides with a simple virtual machine image.

Of course, one reason for containerization is that of portability – that is, we can run those images on any platform that runs Docker. However, the other main reason is it now allows us to run many more of those instances on the same underlying hardware because we can have greater density through the shared OS.

This fact, in turn, allows us to create software using a pattern known as microservices.

Traditionally, a software service may have been built as a monolith – that is, the software is just one big code base that runs on a server. The problem with this pattern is that it can be quite hard to scale – if you need more power, you can only go so far by adding more RAM and CPU.

The first answer to this issue was to build applications that could be duplicated across multiple servers and then have requests load balanced between them – and in fact, this is still a pervasive pattern.

As software started to be developed in a more modular fashion, those individual modules would be broken up and run as separate services, each being responsible for a particular aspect of the system. For example, we might split off a product ordering component as an individual service that gets called by other parts of the system, and this service could run on its own server.

While we can quickly achieve this by running each service on its own virtual server, the additional memory overhead of every OS instance means that, as we break our system into more and more individual services, we soon become very inefficient from a resource usage point of view.

And here is where containers come in. Because they offer isolation without running a full OS each time, we can run our processes far more efficiently – that is, we can run far more on the same hardware than we could on standard virtual machines.

By this point, you might now be asking how do we manage all this? What controls the spinning up of new containers or shutting them down? And the answer is orchestration. Container orchestrators monitor containers and add additional instances in response to usage thresholds or even for resiliency if a running container becomes unhealthy for any reason. Kubernetes is an orchestration service for managing containers.

A Kubernetes cluster consists of worker machines, called nodes, that run containerized applications, and every cluster has at least one worker node. The worker node(s) host pods that are the application’s components, and a control plane or cluster master manages the worker nodes and the pods in the cluster. We can see a logical overview of a typical Kubernetes cluster, with all its components, in the following diagram:

Figure 7.11 – Kubernetes control plane and components

AKS is Microsoft’s implementation of a managed Kubernetes cluster. When you create an AKS cluster, a cluster master is automatically created and configured; there is no cost for the cluster master, only the nodes that are part of the AKS cluster.

The cluster master includes the following Kubernetes components:

  • kube-apiserver: The API server exposes the Kubernetes management services and provides access for management tools such as the kubectl command, which is used to manage the service.
  • etcd: A highly available key-value store that records the state of your cluster.
  • kube-scheduler: Determines which nodes will run your workloads.
  • kube-controller-manager: Manages a set of smaller controllers that perform pod and node operations.

You define the number and size of the nodes, and the Azure platform configures secure communication between the cluster master and the nodes.

Architecting for containerization and Kubernetes – Designing Compute Solutions

This section will look in more detail at AKS, Microsoft’s implementation of Kubernetes. To understand what AKS is, we need to take a small step back and understand containerization and Kubernetes itself.

Containerization

As we briefly mentioned earlier, containerization is a form of virtualization in that you can run multiple containers upon the same hardware, much like virtual machines. Unlike virtual machines, however, containers share the underlying OS of the host. This provides much greater efficiency and density: you can run many more containers than virtual machines on the same hardware because you avoid the memory overhead of running multiple copies of the OS – as we can see in the following diagram:

Figure 7.10 – Containers versus virtual machines

In addition to this efficiency, containers are portable. They can easily be moved from one host to another, and this is because containers are self-contained and isolated. A container includes everything it needs to run, including the application code, runtime, system tools, libraries, and settings.

To run containers, you need a container host – the most common is Docker, and in fact, container capabilities in Azure use the Docker runtime.

A container is a running instance, and what that instance contains is defined in an image. Images can be defined in code; in Docker images, this is called a Dockerfile.

The Dockerfile uses a specific syntax that defines what base image you wish to use – that is, either a vanilla OS or an existing image with other tools and components on it – followed by your unique configuration options, which may include additional software to install, networking, file shares, and so on. An example Dockerfile might look like this:

# Start from an official Node.js base image
FROM node:current-slim
# Set the working directory inside the image
WORKDIR /usr/src/app
# Copy the dependency manifest and install the dependencies it lists
COPY package.json .
RUN npm install
# The application listens on port 8080
EXPOSE 8080
# Command to run when a container starts from this image
CMD [ "npm", "start" ]
# Copy the application source into the working directory
COPY . .

In this example, we start with an image called node:current-slim, set a working directory, copy the package.json file into it, and run npm install to install the application's dependencies. We then expose the application over port 8080, set npm start as the command to run when the container starts, and finally copy the application code into the working directory.

This Dockerfile can create a new image but notice how it is based on an existing image. By extending existing images, you can more easily build your containers with consistent patterns.

The images we build, or use as a source, are held in a container registry. Docker has its public container registry, but you can create your private registry with the Azure Container Registry service in Azure.

Once we have created our new image and stored it in a container registry, we can deploy that image as a running container. Containers in Azure can be run using Azure Container Instances (ACI), a containerized web app, or an AKS cluster.

Web apps for containers

Web apps for containers are a great choice if your development team is already used to using Azure Web Apps to run monolithic or N-tier apps and you want to start moving toward a containerized platform. Web Apps works best when you only need one or a few long-running instances or when you would benefit from a shared or free App Service plan.

An example use case might be when you have an existing .NET app that you wish to containerize that hasn’t been built as a microservice.

Azure Container Instances

ACI is a fully managed environment for containers, and you are only billed for the time your containers run. As such, it suits short-lived microservices, although, like web apps for containers, you should only consider this option if you are running a few services.

Web apps for containers and ACI are great for simple services or when you are starting the containerization journey. Once your applications begin to fully embrace microservices and containerized patterns, you will need better control and management; for these scenarios, you should consider using AKS.

Automating virtual machine management – Designing Compute Solutions-2

For this example, you will need a Windows VM set up in your subscription:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the search bar, type and select Virtual Machines and select the virtual machine you wish to apply Update Management to.
  3. On the left-hand menu, click Guest + host updates under Operations.
  4. Click the Go to Update Management button.
  5. Complete the following details:
    a) Log Analytics workspace Location: The location of your VM, for example, East US
    b) Log Analytics workspace: Create default workspace
    c) Automation account subscription: Your subscription
    d) Automation account: Create a default account
  6. Click Enable.

The process can take around 15 minutes. Once it has completed, go back to the VM view and again select Guest + host updates under Operations, followed by Go to Update Management.

You will see a view similar to the following screenshot:

Figure 7.8 – Update Management blade

You can get to the same view, but for all the VMs you wish to manage, by searching for Automation Accounts in the portal and selecting the automation account that was created, then clicking Update management.

If you want to add more VMs, click the + Add Azure VMs button to see a list of VMs in your subscription and enable the agent on multiple machines simultaneously – as we see in the following screenshot:

Figure 7.9 – Adding more virtual machines for Update Management

The final step is to schedule the installation of patches:

  1. Navigate to the Azure portal by opening https://portal.azure.com.
  2. Type Automation into the search bar and select Automation Accounts.
  3. Select the automation account.
  4. Click Update Management.
  5. Click Schedule deployment and complete the details as follows:
    a) Name: Patch Tuesday
    b) Operating System: Windows
    c) Maintenance Window (minutes): 120
    d) Reboot options: Reboot if required
  6. Under Groups to update, click Click to Configure.
  7. Select your subscription and Select All under Resource Groups.
  8. Click Add, then OK.
  9. Click Schedule Settings.
  10. Set the following details:
    a) Start date: First Tuesday of the month
    b) Recurrence: Recurring
    c) Recur Every: 14 days
  11. Click OK.
  12. Click Create.

Through the Update Management feature, you can control how and when your virtual machines are patched, and which updates to include or exclude. You can also set multiple schedules and group servers by resource group, location, or tag.

In the preceding example, we selected all VMs in our subscription, but as you saw, we had the option to choose machines based on location, subscription, resource group, or tags.

In this way, you can create separate groups for a variety of purposes. For example, we mentioned earlier that a common practice would be to test patches before applying them to production servers. We can accommodate this by grouping non-production servers into a separate subscription, resource group, or simply tagging them. You can then create one patch group for your test machines, followed by another for production machines a week later – after you’ve had time to confirm the patches have not adversely affected workloads.

As part of any solution design that utilizes VMs, provision must be made to ensure they are always running healthily and securely, and Update Management is a critical part of this. As we have seen, Azure makes the task of managing OS updates straightforward to set up.

Next, we will investigate another form of compute that is becoming increasingly popular – containerization and Kubernetes.

Automating virtual machine management – Designing Compute Solutions-1

What to watch out for

Power Automate is only for simpler workflows and is not suitable when deeper or more advanced integration is required.

In this section, we have briefly looked at the many different compute technologies available in Azure. PaaS options are fully managed by the platform, allowing architects and developers to focus on the solution rather than management. However, when traditional IaaS compute options such as virtual machines are required, you must manage security and OS patches yourself. Next, we will look at the native tooling that Azure provides to make this management easier.

Automating virtual machine management

Virtual machines are part of the IaaS family of components. One of the defining features of VMs in Azure is that you are responsible for keeping the OS up to date with the latest security patches.

In an on-premises environment, this could be achieved by manually configuring individual servers to apply updates as they become available; however, many organizations require more control – for example, the ability to have patches verified and approved before mass rollout to production systems, to control when updates happen, and to control reboots when required.

Typically, this could be achieved using Windows Server Update Services (WSUS) and Configuration Manager, part of the Microsoft Endpoint Manager suite of products. However, these services require additional management and setup, which can be time-consuming.

As with most services, Azure helps simplify managing VM updates with a native Update Management service. Update Management uses several other Azure components, including the following:

  • Log Analytics: Along with the Log Analytics agent, reports on the current status of patching for a VM
  • PowerShell Desired State Configuration (DSC): Required for Linux patching
  • Automation Hybrid Runbooks / Automation Account: Used to perform updates

Automation accounts and Log Analytics workspaces are not supported together in all regions, and therefore you must plan accordingly when setting up Update Management. For example, if your Log Analytics workspace is in East US, your automation account must be created in East US 2.

See the following link for more details on region pairings: https://docs.microsoft.com/en-gb/azure/automation/how-to/region-mappings.

When setting up Update Management, you can either create the Log Analytics workspaces and automation accounts yourself or let the Azure portal make them for you. In the following example, we will select an Azure VM and have the portal set up Update Management.

What to watch out for – Designing Compute Solutions

When running on a consumption plan, Azure Functions is best suited to short-lived tasks – for tasks that run longer than 10 minutes, you should consider alternatives or running them on an App Service plan.

You should also consider how often a function will be executed, because you pay per execution on a consumption plan. If it is continuously triggered, your costs could increase beyond that of a standard web app. Again, consider alternative approaches or the use of an App Service plan.

Finally, consumption-based apps cannot integrate with VNets. Again, if this is required, running them on an App Service plan can provide this functionality.

Logic Apps

Azure Logic Apps is another serverless option – when creating logic apps, you do not need to be concerned with how much RAM or CPU to provision; instead, you pay per execution or trigger.

Important note

Consumption versus fixed price: Many serverless components, including Logic Apps and Functions, can be run on isolated environments – or, in the case of Logic Apps, an Integration Service Environment (ISE) – whereby you pay for provisioned resources in the same way as a virtual machine.

Logic Apps shares many concepts with Azure Functions; you can define triggers, actions, flow logic, and connectors for communicating with other services. Whereas you define this in code with Functions, Logic Apps provides a drag-and-drop interface that allows you to build workflows quickly.

Logic Apps has hundreds of pre-built connectors that allow you to interface with a wide range of systems – not just in Azure but also externally. By combining these connectors with if-then-else style logic flows and either scheduled or action-based triggers, you can develop complex workflows without writing a single line of code.

The following screenshot shows a typical workflow built purely in the Azure portal:

Figure 7.7 – Logic Apps example

With their extensibility features, you can also create your custom logic and connectors for integrating with your services.

Finally, although the solution can be built entirely in the Azure portal, you can also create workflows using traditional development tools such as Visual Studio or Visual Studio Code. This is because solutions are defined as ARM templates – which enables developers to define workflows and store them in code repositories. You can then automate deployments through DevOps pipelines.

What to watch out for

Logic Apps provides a quick and relatively simple mechanism for creating business workflows. When you need to build more complex business logic or create custom connectors, you need to balance the difficulty of doing this versus using an alternative approach such as Azure Functions. Logic Apps still requires a level of developer experience and is not suitable if business users may need to develop and amend the workflows.

Power Automate

Power Automate, previously called Flow, is also a GUI-driven workflow creation tool that allows you to build automated business processes. Like Logic Apps, using Power Automate, you can define triggers and logic flow connected to other services, such as email, storage, or apps, through built-in connectors.

The most significant difference between Power Automate and Logic Apps is that Power Automate workflows can only be built via the drag-and-drop interface – you cannot edit or store the underlying code.

Therefore, the primary use case for Power Automate is for office workers and business analysts to create simple workflows that can use only the built-in connectors.