VNET peering – Network Connectivity and Security

Any two VNETs can be connected using peering, and there are two types of peering available:

  • VNET peering, which connects two VNETs in the same region
  • Global VNET peering, which connects two VNETs in different regions

You can connect two VNETs that are in different subscriptions. However, you must ensure that the address spaces in each VNET do not overlap. So, if VNET 1 and VNET 2 both use the address range of 10.0.0.0/16, the peering will fail.

Peerings between VNETs are also non-transitive – this means that if you have three VNETs – VNET 1, VNET 2, and VNET 3 – and you create a peering between VNET 1 and VNET 2 and VNET 2 and VNET 3, devices in VNET 1 will not be able to access a resource in VNET 3 – in other words, you cannot traverse the two peers. Instead, you would have to explicitly connect VNET 1 to VNET 3 as well, as we can see in the following diagram:

Figure 8.11 – Peerings are not transitive
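
If you prefer to script peering, the following is a minimal Azure CLI sketch; the resource group and VNET names (myRG, VNET1, VNET2) are hypothetical, and a peering must be created in each direction before traffic can flow:

# Peer VNET1 to VNET2 (names and resource group are examples)
az network vnet peering create --resource-group myRG \
  --name vnet1-to-vnet2 --vnet-name VNET1 \
  --remote-vnet VNET2 --allow-vnet-access

# Create the reverse peering so traffic can flow both ways
az network vnet peering create --resource-group myRG \
  --name vnet2-to-vnet1 --vnet-name VNET2 \
  --remote-vnet VNET1 --allow-vnet-access

The same command creates a global VNET peering when the two VNETs are in different regions.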

Peerings between VNETs are not the only type of network you may need to connect; the other common scenario is connecting on-premises networks into Azure. For this, we can use a VPN gateway.

VPN gateways

When you need to connect an on-premises network to Azure, you can use a VPN gateway. A VPN gateway uses a gateway device on your corporate network and a gateway device in Azure. The two are then connected with a VPN that uses the public network to create an encrypted route between your two gateways. In other words, you use the internet but your traffic is encrypted and, therefore, secure.

You can use two types of VPN – a Point to Site (P2S) VPN, used by individual clients to connect directly to a remote gateway, and a Site to Site (S2S) VPN, used to connect networks.

When creating a VPN connection, you can choose between a policy-based VPN or a route-based VPN.

Policy-based VPNs

Policy-based VPNs are generally used for connections using legacy VPN gateways, as they are not as flexible as route-based VPNs. Policy-based VPNs use the IKEv1 protocol and static routing, defining the source and destination network ranges in the policy rather than in a routing table.

Route-based VPNs

Route-based VPNs are the preferred choice and should be used unless legacy requirements prevent it. Route-based VPNs use IKEv2 and support dynamic routing protocols whereby routing tables direct traffic based on discovery.

Important Note

Internet Key Exchange (IKE) v1 and v2 are VPN encryption protocols that ensure traffic is encrypted between two points by authenticating both the client and the server and then agreeing on an actual encryption method. IKEv2 is the successor to IKEv1. It is faster and provides greater functionality.

When creating a VPN, you have different sizes available, and the choice of size, or SKU, is dependent on your requirements. The following table shows the current differences:

The Basic VPN SKU is only recommended for dev/test use and not for production. Also, Basic does not support IKEv2 or RADIUS authentication. This may impact you depending on the clients using the VPN. For example, Mac computers do not support IKEv1 and so cannot use a Basic VPN for a P2S connection.

When creating a VPN connection, you need several services and components set up.
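
As a rough sketch of those components with the Azure CLI – all names here are hypothetical, and the VNET must already contain a subnet named GatewaySubnet – the gateway itself can be created as follows (provisioning can take 30–45 minutes, hence --no-wait):

# Public IP for the gateway's public endpoint
az network public-ip create --resource-group myRG --name vpn-gw-ip

# Route-based VPN gateway on the existing VNET (requires a GatewaySubnet)
az network vnet-gateway create --resource-group myRG --name vpn-gw \
  --vnet VNET1 --gateway-type Vpn --vpn-type RouteBased \
  --sku VpnGw1 --public-ip-addresses vpn-gw-ip --no-wait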

Private endpoint connections – Network Connectivity and Security

We have said that service endpoints assign an internal IP to a service, which is then used to direct the flow of traffic to it. However, the actual IP is hidden and therefore cannot be referenced by you.

There are times when you need to access a service such as SQL or a storage account via a private IP – either for direct connectivity from an on-premises network or when you have strict firewall policies between your users and your solution.

For these scenarios, private endpoint connections can be used to assign private IP addresses to certain Azure services. Private endpoints are very similar to service endpoints, except that you have visibility of the underlying IP address, so they can be used across VPNs and ExpressRoute.

However, private endpoints rely on DNS to function correctly. As most services use host headers (that is, an FQDN) to determine your individual backend service, connecting via the IP itself does not work. Instead, you must set up a DNS record that resolves your service name to the internal IP.

For example, if you create a private endpoint for your storage account called mystorage that uses an IP address of 10.0.0.10, to access the service securely, you must create a DNS record so that mystorage.blob.core.windows.net resolves to 10.0.0.10.

This can be performed either by creating DNS records in your own DNS service or by forwarding the request to an Azure private DNS zone and having the internal Azure DNS service resolve it for you.
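
To illustrate the mystorage example, here is a hedged Azure CLI sketch; the resource group, subnet, and link names are assumptions:

# Private endpoint for the storage account's blob service
az network private-endpoint create --resource-group myRG \
  --name mystorage-pe --vnet-name VNET1 --subnet backend \
  --private-connection-resource-id $(az storage account show \
    --resource-group myRG --name mystorage --query id --output tsv) \
  --group-id blob --connection-name mystorage-conn

# Private DNS zone so mystorage.blob.core.windows.net resolves internally
az network private-dns zone create --resource-group myRG \
  --name privatelink.blob.core.windows.net

# Link the zone to the VNET so resources in it use the private records
az network private-dns link vnet create --resource-group myRG \
  --zone-name privatelink.blob.core.windows.net --name dns-link \
  --virtual-network VNET1 --registration-enabled false

A DNS zone group (or a manually created A record for mystorage pointing at the endpoint's IP) then completes the resolution.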

Azure private endpoints support more services than service endpoints and are, therefore, the only option in some circumstances. In addition to the services supported by service endpoints, private endpoints also support the following:

  • Azure Automation
  • Azure IoT Hub
  • Azure Kubernetes Service – Kubernetes API
  • Azure Search
  • Azure App Configuration
  • Azure Backup
  • Azure Relay
  • Azure Event Grid
  • Azure Machine Learning
  • SignalR
  • Azure Monitor
  • Azure File Sync

Using a combination of NSGs, ASGs, Azure Firewall, service endpoints, and private endpoints, you have the tools to secure your workloads internally and externally. Next, we will examine how we can extend the actual VNETs by exploring the different options for connecting into them or connecting different VNETs.

Connectivity

A simple, standalone solution may only require a single VNET; especially if your service is an externally facing application for clients, you may not need anything more complicated.

However, for enterprise applications that contain many different services, or for hybrid scenarios where you need to connect securely to Azure from an on-premises network, you must consider the other options for providing connectivity.

We will start by looking at connecting two VNETs.

Previously, we separated services within different subnets. However, each of those subnets was in the same VNET. Because of this, connectivity between the devices was automatic – other than defining NSG rules, connectivity just happened.

More complex solutions may be built across multiple VNETs, and these VNETs may or may not be in the same region. By default, communication between VNETs is not enabled; therefore, you must set this up if required. The simplest way to achieve this connectivity is with VNET peering.

Service endpoints – Network Connectivity and Security

Many services are exposed via a public address or URL. For example, Blob Storage is accessed via <accountname>.blob.core.windows.net. Even if your application is running on a VM connected to a VNET, communication to the default endpoint will be the public address, and full access to all IPs, internal and external, is allowed.

For public-facing systems, this may be desirable; however, if you need the backend service to be protected from the outside and only accessible internally, you can use a service endpoint.

Service endpoints provide direct and secure access from one Azure service to another over the Azure backbone. Internally, the service is given a private IP address, which is used instead of the default public IP address. Traffic from the source is then allowed, and external traffic is blocked, as we see in the following example:

Figure 8.8 – Protecting access with service endpoints

Although using service endpoints enables private IP addresses on the service, this address is not exposed or manageable by you. One effect of this is that although Azure-hosted services can connect to the service, on-premises systems cannot access it over a VPN or ExpressRoute. For these scenarios, an alternative solution called a private endpoint can be used, which we will cover in the next sub-section; alternatively, you can use ExpressRoute with Microsoft peering and a NAT IP address.

Important Note

When you set up an ExpressRoute into Azure, you have the option of using Microsoft peering or private peering. With Microsoft peering, all connectivity to the Office 365 platform and Azure goes over the ExpressRoute, whereas private peering sends only traffic destined for internal IP ranges over the ExpressRoute, with public services accessed via public endpoints. The most common form of connectivity is private peering; Microsoft peering is only recommended for specific scenarios. See https://docs.microsoft.com/en-us/microsoft-365/enterprise/azure-expressroute?view=o365-worldwide for more details.

To use service endpoints, the endpoint for the service in question must be enabled on the subnet, and the service you wish to lock down must have the public network option turned off and the source subnet added as an allowed source.

Important Note

Service endpoints ignore NSGs – therefore, any rules you have in place and attached to the secure subnet are effectively ignored. This only affects the point-to-point connection between the subnet and the service endpoint. All other NSG rules still hold.

At the time of writing, the following Azure services support service endpoints:

  • Azure Storage
  • Azure Key Vault
  • Azure SQL Database
  • Azure Synapse Analytics
  • Azure PostgreSQL Server
  • Azure MySQL Server
  • Azure MariaDB
  • Azure Cosmos DB
  • Azure Service Bus
  • Azure Event Hubs
  • Azure App Service
  • Azure Cognitive Services
  • Azure Container Registry

To enable service endpoints on a subnet, in the Azure portal, go to the properties of the VNET you wish to use, select the Subnets blade on the left-hand menu, then select your subnet. The subnet configuration window appears with the option to choose one or more services, as we can see in the following screenshot. Once you have made changes, click Save:

Figure 8.9 – Enabling service endpoints on a subnet

Once enabled, you can then restrict access to your backend service. In the following example, we will limit access to a storage account from a subnet:

  1. Go to the Azure portal at https://portal.azure.com.
  2. In the search bar, search for and select Storage accounts.
  3. Select the storage account you wish to restrict access to.
  4. On the left-hand menu, click the Networking option.
  5. Change the Allow access from option from All networks to Selected networks.
  6. Click + Add existing virtual network.
  7. Select the VNET and subnet you want to restrict access to.
  8. Click Save.

The following screenshot shows an example of a secure storage account:

Figure 8.10 – Restricting VNET access

Once set up, any access except from the defined VNET will be denied, and any traffic from services on the VNET to the storage account will now travel directly over the Azure backbone.
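
The same configuration can be scripted with the Azure CLI – a minimal sketch, assuming a resource group myRG, a VNET named VNET1 with a subnet named backend, and a storage account named mystorage:

# Enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update --resource-group myRG \
  --vnet-name VNET1 --name backend \
  --service-endpoints Microsoft.Storage

# Allow that subnet on the storage account's firewall
az storage account network-rule add --resource-group myRG \
  --account-name mystorage --vnet-name VNET1 --subnet backend

# Deny all other network access by default
az storage account update --resource-group myRG --name mystorage \
  --default-action Deny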

You may have noticed another option in the Networking tab – Private endpoint connections.

Application Security Groups – Network Connectivity and Security

An ASG is another way of grouping resources together, rather than just allowing all traffic to all resources on your VNET. For example, you may want to define a single NSG that applies to all subnets; however, you may have a mixture of services, such as database servers and web servers, across those subnets.

You can define an ASG and attach your web servers to that ASG, and another ASG that groups your database servers. In your NSG, you then set the HTTPS inbound rule to use the ASG as the destination rather than the whole subnet, VNET, or individual IPs. In this configuration, even though you have a common NSG, you can still uniquely allow access to specific server groups.

The following diagram shows an example of this type of configuration:

Figure 8.6 – Example architecture using NSGs and ASGs

In the preceding example, App1 and App2 are part of the ASGApps ASG, and Db1 and Db2 are part of the ASGDb ASG.

The NSG ruleset would then be as follows, with the ASGs used as the sources and destinations rather than IP ranges:

  Priority  Name         Source   Destination  Port  Action
  100       Allow-HTTPS  Any      ASGApps      443   Allow
  110       Allow-SQL    ASGApps  ASGDb        1433  Allow

With the preceding in place, HTTPS inbound would only be allowed to App1 and App2, and port 1433 would only be allowed from App1 and App2.
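
In Azure CLI terms, such a ruleset might be built as follows – a sketch assuming a resource group myRG and an NSG named myNSG; each VM's NIC must also be associated with the relevant ASG:

# Create the two ASGs
az network asg create --resource-group myRG --name ASGApps
az network asg create --resource-group myRG --name ASGDb

# Allow HTTPS in, but only to members of ASGApps
az network nsg rule create --resource-group myRG --nsg-name myNSG \
  --name AllowHttpsToApps --priority 100 --direction Inbound \
  --access Allow --protocol Tcp --destination-port-ranges 443 \
  --destination-asgs ASGApps

# Allow SQL traffic only from the app servers to the database servers
az network nsg rule create --resource-group myRG --nsg-name myNSG \
  --name AllowSqlFromApps --priority 110 --direction Inbound \
  --access Allow --protocol Tcp --destination-port-ranges 1433 \
  --source-asgs ASGApps --destination-asgs ASGDb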

ASGs and NSGs are great for discrete services; however, there are some rules that you may always want to apply – for example, blocking all outbound access to certain services such as FTP. In this scenario, a better option might be to create a central firewall that all your services route through.

Azure Firewall

Whereas individual NSGs and ASGs form part of your security strategy, building multiple network security layers, especially in enterprise systems, is even better.

Azure Firewall is a cloud-based, fully managed network security appliance that would typically be placed at the edge of your network. This means that you would not usually have one firewall per solution or even per subscription. Instead, you would have one per region and have all other devices, even those in different subscriptions, route through it, as in the following example:

Figure 8.7 – Azure Firewall in a hub/spoke model

Azure Firewall offers some of the functionality you can achieve from NSGs, such as network traffic filtering based on port and IP or service tags. Over and above these basic services, Azure Firewall also offers the following:

  • High availability and scalability: As a managed offering, you don’t need to worry about building multiple VMs with load balancers or how much your peak traffic might be. Azure Firewall will automatically scale as required, is fully resilient, and supports availability zones.
  • FQDN tags and FQDN filters: As well as IP addressing and service tags, Azure Firewall also allows you to define FQDNs. FQDN tags are similar to service tags but support a more comprehensive range of services, such as Windows Update.
  • Outgoing SNAT and inbound DNAT support: If you use public IP address ranges for private networks, Azure Firewall can perform Source Network Address Translation (SNAT) on your outgoing requests. Incoming traffic can be translated using Destination Network Address Translation (DNAT).
  • Threat intelligence: Azure Firewall can automatically block incoming traffic originating from IP addresses known to be malicious. These addresses and domains come from Microsoft’s threat intelligence feed.
  • Multiple IPs: Up to 250 IP addresses can be associated with your firewall, which helps with SNAT and DNAT.
  • Monitoring: Azure Firewall is fully integrated with Azure Monitor for data analysis and alerting.
  • Forced tunneling: You can route all internet-bound traffic to another device, such as an on-premises edge firewall.

Azure Firewall provides an additional and centralized security boundary to your systems, ensuring an extra layer of safety.
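
As a hedged sketch of what deploying a firewall and one rule looks like with the Azure CLI – the azure-firewall extension is required, all names are hypothetical, and the supporting steps (an AzureFirewallSubnet, a public IP, and the firewall's IP configuration) are omitted for brevity:

# The firewall commands live in a CLI extension
az extension add --name azure-firewall

# Create the firewall in the hub resource group
az network firewall create --resource-group hubRG --name hub-fw

# Example application rule: allow Windows Update traffic via its FQDN tag
az network firewall application-rule create --resource-group hubRG \
  --firewall-name hub-fw --collection-name AllowWindowsUpdate \
  --name wu --action Allow --priority 100 \
  --source-addresses 10.0.0.0/16 --protocols Http=80 Https=443 \
  --fqdn-tags WindowsUpdate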

So far, we have looked at securing access into and between services that use VNETs, such as VMs. Some services don't use VNETs directly but instead have their own firewall options. These firewall options often include the ability to block access to the service based on IPs or VNETs, and when the VNET option is selected, it uses a feature called service endpoints.

Public IP addresses – Network Connectivity and Security

A public IP address is a discrete component that can be created and attached to many services, such as VMs. The public IP component is dedicated to a resource until you un-assign it – in other words, you cannot use the same public IP across multiple resources.

Public IP addresses can be either static or dynamic. With a static IP, once the resource has been created, the assigned IP address stays the same until that resource is deleted. A dynamic address can change in specific scenarios. For example, if you create a public IP address for a VM as a dynamic address, when you stop the VM, the address is released, and a different address may be assigned when you start the VM up again. With static addresses, the IP is assigned once you attach it to the VM, and it stays until you manually remove it.

Static addresses are useful if you have a firewall device controlling access to the service that can only be configured with IP addresses, or if you rely on DNS resolution, as a changed IP would mean the DNS record also needs updating. You also need to use a static address if you use TLS/SSL certificates linked to IP addresses.
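
For example, a static public IP can be requested with a single Azure CLI command – names here are illustrative; note that the Standard SKU only supports static assignment:

# Create a static public IP (Standard SKU is always static)
az network public-ip create --resource-group myRG --name web-ip \
  --sku Standard --allocation-method Static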

Private IP addresses

Private IP addresses can be assigned to various Azure components, such as VMs, network load balancers, or application gateways. The devices are connected to a VNET, and the IP range you wish to use for your resources is defined at the VNET level.

When creating VNETs, you assign an IP range; the default is 10.0.0.0/16, which provides 65,536 possible IP addresses. VNETs can contain multiple ranges if you wish; however, you need to be careful that those ranges do not overlap with ranges used elsewhere on your network or with public addresses.

When assigning IP ranges, you denote the range using CIDR notation – a forward slash (/) followed by a number that states how many bits of the address identify the network, which in turn determines the number of addresses within that range. The following are just some example ranges:

  • 10.0.0.0/8 – 16,777,216 addresses
  • 10.0.0.0/16 – 65,536 addresses
  • 10.0.0.0/24 – 256 addresses
  • 10.0.0.0/29 – 8 addresses

Tip

CIDR notation is a more compact way to state an IP address range based on a subnet mask. The number after the slash (/) is the count of leading 1 bits in the network mask. The complete range of addresses can be found here: https://bretthargreaves.com/ip-cheatsheet/.

For more in-depth details of CIDR, see https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing.

Subnets are then created within the VNET, and each subnet must also be assigned an IP range that is within the range defined at the VNET level, as we can see in the following example diagram:

Figure 8.2 – Subnets within VNETs
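
A quick Azure CLI sketch of this layout, with hypothetical names and example ranges:

# VNET with a 10.0.0.0/16 address space and a first subnet
az network vnet create --resource-group myRG --name VNET1 \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name frontend --subnet-prefixes 10.0.1.0/24

# Additional subnet carved from the same VNET range
az network vnet subnet create --resource-group myRG --vnet-name VNET1 \
  --name backend --address-prefixes 10.0.2.0/24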

For every subnet you create, Azure reserves five IPs for internal use – for smaller subnets, this has a significant impact on the number of available addresses. The reservations within a given range are as follows:

  • x.x.x.0: The network address
  • x.x.x.1: Reserved by Azure for the default gateway
  • x.x.x.2 and x.x.x.3: Reserved by Azure to map the Azure DNS IPs to the VNET space
  • x.x.x.255: The network broadcast address

With these reservations in mind, the minimum size of a subnet in Azure is a /29 network with eight IPs, of which only three are usable. The largest allowable range is /8, giving 16,777,216 IPs with 16,777,211 usable.

Private ranges in Azure can be used purely for services within your Azure subscriptions. If you don’t connect the VNETs or require communications between them, you can have more than one VNET with the same ranges.

If you plan to allow services within one VNET to communicate with another VNET, you must consider more carefully the ranges you assign to ensure they do not overlap. This is especially crucial if you use VNETs to extend your private corporate network into Azure, as creating ranges that overlap can cause routing and addressing problems.

As with public IPs, private IPs can also be static or dynamic. With dynamic addressing, Azure assigns the next available IP within the given range. For example, if you are using a 10.0.0.0 network, and 10.0.0.3–10.0.0.20 are already used, your new resource will be assigned 10.0.0.21.

Understanding IP addressing and DNS in Azure – Network Connectivity and Security

When building services in Azure, you will sometimes use internal IP addresses and sometimes external IP addresses. Internal IP addresses can only communicate internally and require VNETs. Many services can also use public IP addresses, which allow you to communicate with the service from the internet.

Before we delve into public and internal IP addresses, we need to understand the basics of IP addressing in general, and especially the use of subnets and subnet masks.

Understanding subnets and subnet masks

When devices are connected to a TCP/IP-based network, they are provided with an IP address in the notation xxx.xxx.xxx.xxx. Generally, all devices that are on the same local network can communicate with each other without any additional settings.

When devices on different networks need to communicate, they must do so via a router or gateway. Devices use a subnet mask to differentiate between addresses on the local network and those on a remote network.

The network mask breaks down an IP address into a device or host address component and a network component. It does this by laying a binary mask over the IP address with the host address to the right.

255 in binary is 11111111 and 0 in binary is 00000000. The mask says how many of those bits are the network, with 1 denoting a network address and 0 denoting a host address.

Thus, 255.0.0.0 becomes 11111111.00000000.00000000.00000000; therefore, in the address 10.0.0.1, 10 is the network and 0.0.1 is the host address. Similarly, with a mask of 255.255.0.0 and an address of 10.0.0.1, 10.0 becomes the network and 0.1 the host. The following diagram shows this concept more clearly:

Figure 8.1 – Example subnet mask

Splitting an address space into multiple networks is known as subnetting, and subnets can be broken down into even smaller subnets until the mask becomes too big.

When configuring IP settings for devices, you often supply an IP address, a subnet mask, and the address of the router on the local network that will connect you to other networks.

Sometimes, when denoting an IP address range, the subnet mask and range are written in a shorthand form known as CIDR notation. We will cover CIDR notation examples in the Private IP addresses sub-section.

This is a relatively simplified overview of network addressing and subnetting, and although the AZ-304 exam will not explicitly ask you questions on this, it does help to better understand the next set of topics.

Understanding Azure networking options – Network Connectivity and Security

In the previous chapter, we examined the different options when building compute services, from the different types of Virtual Machines (VMs) to web apps and containerization.

All solution components need to be able to communicate effectively and safely; therefore, in this chapter, we will discuss the options we have to control traffic flow using route tables and load balancing components, to secure traffic with different firewalling options, and to manage IP addressing and resolution.

With this in mind, we will cover the following topics:

  • Understanding Azure networking options
  • Understanding IP addressing and DNS in Azure
  • Implementing network security
  • Connectivity
  • Load balancing and advanced traffic routing

Technical requirements

This chapter will use the Azure portal (https://portal.azure.com) and you need an Azure subscription for the examples.

Understanding Azure networking options

Services in Azure need to communicate, and this communication is performed over a virtual network, or VNET.

There are essentially two types of networking in Azure – private VNETs and the Azure backbone. The Azure backbone is a fully managed service. The underlying details are never exposed to you – although the ranges used by many services are available, grouped by region, for download in a JSON file. The Azure backbone is generally used when non-VNET-connected services communicate with each other; for example, when storage accounts replicate data or when Azure functions communicate with SQL and Cosmos DB, Azure handles all aspects of these communications. This can cause issues when you need more control, especially if you want to limit access to your services at the network level, that is, by implementing firewall rules.

Important Note

The address ranges of services in Azure change continually as the services grow within any particular region, and can be downloaded from this link: https://www.microsoft.com/en-us/download/details.aspx?id=56519.

Some services can either be integrated with, or built on top of, a VNET. VMs are the most common example of this, and to build a VM, you must use a VNET. Other services can also be optionally integrated with VNETs in different ways. For example, VMs can communicate with an Azure SQL database using a service endpoint, enabling you to limit access and ensure traffic is kept private and off the public network. We look at service endpoints and other ways to secure internal communications later in this chapter, in the Implementing network security section.

The first subject we need to look at when dealing with VNETs and connectivity is that of addressing and the Domain Name System (DNS).

Deployments and YAML – Designing Compute Solutions

A pod’s resources are defined in a deployment, which is described within a YAML manifest. The manifest defines everything you need: how many copies or replicas of a pod to run, what resources each pod requires, the container image to use, and any other information necessary for your service.
A typical YAML file may look like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi

In this example, taken from the docs.microsoft.com site, we see a deployment using the nginx container image, requesting a minimum of 250m (millicores) of CPU and 64Mi (mebibytes) of RAM, and a maximum of 500m and 256Mi.

Tip

A mebibyte is equal to 1,024 kibibytes (1,048,576 bytes), whereas a millicore is one-thousandth of a CPU core.

Once we have our pods and applications defined within a YAML file, we can then use that file to tell our AKS cluster to deploy and then run our application. This can be performed by running the deployment commands against the AKS APIs or via DevOps pipelines.
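
For instance, assuming the preceding manifest is saved as nginx-deployment.yaml (a hypothetical filename) and kubectl is pointed at your AKS cluster, the deployment could be applied as follows:

# Apply the manifest to the cluster
kubectl apply -f nginx-deployment.yaml

# Verify the deployment and its three replicas
kubectl get deployments
kubectl get pods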

Kubernetes is a powerful tool for building resilient and dynamic applications that use microservices, and container images are incredibly efficient and portable; however, Kubernetes clusters are complex.

AKS abstracts much of the complexity of using and managing a Kubernetes cluster. Still, your development and support teams need to be fully conversant with the unique capabilities and configuration options available.

Summary

This chapter looked at the different compute options available to us in Azure and looked at the strengths and weaknesses of each. With any solution, the choice of technology is dependent on your requirements and the skills of the teams who are building them.

We then looked at how to design update management processes to ensure any VMs we use as part of our solution are kept up to date with the latest security patches.

Finally, we looked at how we can use containerization in our solutions, and specifically how Azure Kubernetes Service provides a flexible and dynamic approach to running microservices.

In the next chapter, we will look at the different networking options in Azure, including load balancing for resilience and performance.

Exam scenario

The solutions to the exam scenarios can be found at the end of this book.

Mega Corp is planning a new multi-service solution to help the business manage expenses. The application development team has decided to break the solution into different services that communicate with each other.

End users will upload expense claims as a Word document to the system, and these documents must flow through to different approvers.

The HR department also wants to amend some of the workflows themselves as they can change often.

The application will have a web frontend, and the application developers are used to building .NET websites. However, they would like to start moving to a more containerized approach.

Suggest some compute components that would be suited to this solution.

Azure Kubernetes Service – Designing Compute Solutions

We looked at containerization, and specifically at Docker (Azure’s default container engine), in the previous section. Now we have container images registered in Azure Container Registry, and from there, we can use those images to spin up instances, or running containers.

Two questions may spring to mind – the first is what now? Or perhaps more broadly, why bother? In theory, we could achieve some of what’s in the container with a simple virtual machine image.

Of course, one reason for containerization is that of portability – that is, we can run those images on any platform that runs Docker. However, the other main reason is it now allows us to run many more of those instances on the same underlying hardware because we can have greater density through the shared OS.

This fact, in turn, allows us to create software using a pattern known as microservices.

Traditionally, a software service may have been built as monolithic – that is, the software is just one big code base that runs on a server. The problem with this pattern is that it can be quite hard to scale – that is, if you need more power, you can only go so far as adding more RAM and CPU.

The first answer to this issue was to build applications that could be duplicated across multiple servers and then have requests load balanced between them – and in fact, this is still a pervasive pattern.

As software started to be developed in a more modular fashion, those individual modules would be broken up and run as separate services, each being responsible for a particular aspect of the system. For example, we might split off a product ordering component as an individual service that gets called by other parts of the system, and this service could run on its own server.

While we can quickly achieve this by running each service on its own virtual server, the additional memory overhead means that, as we break our system into more and more individual services, this overhead increases, and we soon become very inefficient from a resource usage point of view.

And here is where containers come in. Because they offer isolation without running a full OS each time, we can run our processes far more efficiently – that is, we can run far more on the same hardware than we could on standard virtual machines.

By this point, you might now be asking how do we manage all this? What controls the spinning up of new containers or shutting them down? And the answer is orchestration. Container orchestrators monitor containers and add additional instances in response to usage thresholds or even for resiliency if a running container becomes unhealthy for any reason. Kubernetes is an orchestration service for managing containers.

A Kubernetes cluster consists of worker machines, called nodes, that run containerized applications, and every cluster has at least one worker node. The worker node(s) host pods that are the application’s components, and a control plane or cluster master manages the worker nodes and the pods in the cluster. We can see a logical overview of a typical Kubernetes cluster, with all its components, in the following diagram:

Figure 7.11 – Kubernetes control plane and components

AKS is Microsoft’s implementation of a managed Kubernetes cluster. When you create an AKS cluster, a cluster master is automatically created and configured; there is no cost for the cluster master, only the nodes that are part of the AKS cluster.

The cluster master includes the following Kubernetes components:

  • kube-apiserver: The API server exposes the Kubernetes management services and provides access for management tools such as the kubectl command, which is used to manage the service.
  • etcd: A highly available key-value store that records the state of your cluster.
  • kube-scheduler: Manages the nodes and what workloads to run on them.
  • kube-controller-manager: Manages a set of smaller controllers that perform pod and node operations.

You define the nodes’ number and size, and the Azure platform configures secure communication between the cluster master and nodes.
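
Creating such a cluster is straightforward with the Azure CLI – a minimal sketch with hypothetical names, omitting the many production options (networking, node pools, identities):

# Create a three-node AKS cluster (the cluster master is managed and free)
az aks create --resource-group myRG --name myAKSCluster \
  --node-count 3 --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group myRG --name myAKSCluster
kubectl get nodes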

Automating virtual machine management – Designing Compute Solutions-2

For this example, you will need a Windows VM set up in your subscription:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the search bar, type and select Virtual Machines and select the virtual machine you wish to apply Update Management to.
  3. On the left-hand menu, click Guest + host updates under Operations.
  4. Click the Go to Update Management button.
  5. Complete the following details:
    a) Log Analytics workspace Location: The location of your VM, for example, East US
    b) Log Analytics workspace: Create default workspace
    c) Automation account subscription: Your subscription
    d) Automation account: Create a default account
  6. Click Enable.

The process can take around 15 minutes. Once completed, go back to the VM view and again select Guest + host updates under Operations, followed by Go to Update Management.

You will see a view similar to the following screenshot:

Figure 7.8 – Update Management blade

You can get to the same view, but for all the VMs you wish to manage, by searching for Automation Accounts in the portal and selecting the automation account that has been created. Then click Update management.

If you want to add more VMs, click the + Add Azure VMs button to see a list of VMs in your subscription and enable the agent on multiple machines simultaneously – as we see in the following screenshot:

Figure 7.9 – Adding more virtual machines for Update Management

The final step is to schedule the installation of patches:

  1. Navigate to the Azure portal by opening https://portal.azure.com.
  2. Type Automation into the search bar and select Automation Accounts.
  3. Select the automation account.
  4. Click Update Management.
  5. Click Schedule deployment and complete the details as follows:
    a) Name: Patch Tuesday
    b) Operating System: Windows
    c) Maintenance Window (minutes): 120
    d) Reboot options: Reboot if required
  6. Under Groups to update, click Click to Configure.
  7. Select your subscription and Select All under Resource Groups.
  8. Click Add, then OK.
  9. Click Schedule Settings.
  10. Set the following details:
    a) Start date: First Tuesday of the month
    b) Recurrence: Recurring
    c) Recur Every: 14 days
  11. Click OK.
  12. Click Create.

Through the Update Management feature, you can control how your virtual machines are patched and when and what updates to include or exclude. You can also set multiple schedules and group servers by resource group, location, or tag.

In the preceding example, we selected all VMs in our subscription, but as you saw, we had the option to choose a machine based on location, subscription, resource group, or tags.

In this way, you can create separate groups for a variety of purposes. For example, we mentioned earlier that a common practice would be to test patches before applying them to production servers. We can accommodate this by grouping non-production servers into a separate subscription, resource group, or simply tagging them. You can then create one patch group for your test machines, followed by another for production machines a week later – after you’ve had time to confirm the patches have not adversely affected workloads.
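
As a small illustration of the tagging approach – with hypothetical VM names and tag values – a tag such as patchGroup can be set with the Azure CLI and then used when defining each deployment schedule's groups:

# Tag a test VM and a production VM with different patch groups
az vm update --resource-group myRG --name vm-test-01 \
  --set tags.patchGroup=test
az vm update --resource-group myRG --name vm-prod-01 \
  --set tags.patchGroup=production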

As part of any solution design that utilizes VMs, provision must be made to ensure they are always running healthily and securely, and Update Management is a critical part of this. As we have seen, Azure makes the task of managing OS updates straightforward to set up.

Next, we will investigate another form of compute that is becoming increasingly popular – containerization and Kubernetes.