Category Exams of Microsoft AZ-304

Azure Functions – Designing Compute Solutions

Azure Functions falls into the Functions as a Service (FaaS) or serverless category. This means that you can run Azure Functions using a consumption plan whereby you only pay for the service as it is being executed. In comparison, Azure App Service runs on a service plan in which you define the CPU and RAM.

With Azure Functions, you don’t need to define CPU and RAM as the Azure platform automatically allocates whatever resources are required to complete the operation. Because of this, functions have a default timeout of 5 minutes with a maximum of 10 minutes – in other words, if you have a function that would run for longer than 10 minutes, you may need to consider an alternative approach.

Tip

Azure Functions can also be run on an App Service plan, the same as App Service. This can be useful if you have functions that will run for longer than 10 minutes, if you have spare capacity in an existing service plan, or if you require support for VNet integration. Using an App Service plan means you pay for the service in the same way as App Service; that is, you pay for the provisioned CPU and RAM whether you are using it or not.

Functions are event-driven; this means they will execute your code in response to a trigger being activated. The following triggers are available:

  • HTTPTrigger: The function is executed in response to a service calling an API endpoint over HTTP/HTTPS.
  • TimerTrigger: Executes on a schedule.
  • GitHub webhook: Responds to events that occur in your GitHub repositories.
  • CosmosDBTrigger: Processes Azure Cosmos DB documents when added or updated in collections in a NoSQL database.
  • BlobTrigger: Processes Azure Storage blobs when they are added to containers.
  • QueueTrigger: Responds to messages as they arrive in an Azure Storage queue.
  • EventHubTrigger: Responds to events delivered to an Azure Event Hub.
  • ServiceBusQueueTrigger: Connects your code to other Azure services or on-premises services by listening to message queues.
  • ServiceBusTopicTrigger: Connects your code to other Azure services or on-premises services by subscribing to topics.

Once triggered, an Azure function can then run code and interact with other Azure services for reading and writing data, including the following:

  • Azure Cosmos DB
  • Azure Event Hubs
  • Azure Event Grid
  • Azure Notification Hubs
  • Azure Service Bus (queues and topics)
  • Azure Storage (blob, queues, and tables)
  • On-premises (using Service Bus)

By combining different triggers and outputs, you can easily create a range of possible functions, as we see in the following diagram:

Figure 7.6 – Combining triggers and outputs with a Functions app

Azure Functions is therefore well suited to event-based microservice applications that are short-run and are not continuously activated. As with App Service, Functions supports a range of languages, including C#, F#, JavaScript, Python, and PowerShell Core.
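To make this concrete, here is a minimal sketch of a function written with the Python v2 programming model, combining an HTTP trigger with a storage queue output binding; the route, queue name, and connection setting name are illustrative assumptions rather than values from this chapter.

import azure.functions as func

app = func.FunctionApp()

# HTTP trigger: runs when a client calls the /api/expenses endpoint.
# Queue output binding: whatever we set on 'msg' is written to a storage queue.
@app.route(route="expenses", auth_level=func.AuthLevel.FUNCTION)
@app.queue_output(arg_name="msg", queue_name="expense-claims",
                  connection="AzureWebJobsStorage")
def submit_expense(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    claim = req.get_body().decode("utf-8")
    msg.set(claim)  # hand the claim off to the queue for downstream processing
    return func.HttpResponse("Claim received", status_code=202)

Running on a consumption plan, a function like this only incurs cost while requests are actually being processed.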

Service endpoints – Network Connectivity and Security

Many services are exposed via a public address or URL. For example, Blob Storage is accessed via <accountname>.blob.core.windows.net. Even if your application is running on a VM connected to a VNET, communication to the default endpoint goes to that public address, and by default access is allowed from all IPs, both internal and external.

For public-facing systems, this may be desirable; however, if you need the backend service to be protected from the outside and only accessible internally, you can use a service endpoint.

Service endpoints provide direct and secure access from one Azure service to another over the Azure backbone. When an endpoint is enabled, traffic from the subnet reaches the service using private source IP addresses rather than public addresses. Traffic from the configured source is then allowed, and external traffic can be blocked, as we see in the following example:

Figure 8.8 – Protecting access with service endpoints

Although service endpoints switch traffic to private source addresses, the service itself is not given a private IP address that you can see or manage. One effect of this is that although Azure-hosted services can connect to the service, on-premises systems cannot access it over a VPN or ExpressRoute. For these scenarios, you can use an alternative solution called a private endpoint, which we will cover in the next sub-section, or an ExpressRoute with Microsoft peering using a NAT IP address.

Important Note

When you set up an ExpressRoute into Azure, you have the option of using Microsoft peering or private peering. Microsoft peering routes connectivity to Microsoft public services, such as Office 365 and Azure public endpoints, over the ExpressRoute circuit, whereas private peering sends only traffic destined for your internal IP ranges over the ExpressRoute; public services are then accessed via their public endpoints. The most common form of connectivity is private peering; Microsoft peering is only recommended for specific scenarios. See https://docs.microsoft.com/en-us/microsoft-365/enterprise/azure-expressroute?view=o365-worldwide for more details.

To use service endpoints, the endpoint for the relevant service must be enabled on the subnet, and the service you wish to lock down must have its public network access turned off and the source subnet added as an allowed source.

Important Note

Service endpoints ignore NSGs – therefore, any rules you have in place and attached to the secure subnet are effectively ignored. This only affects the point-to-point connection between the subnet and the service endpoint. All other NSG rules still hold.

At the time of writing, the following Azure services support service endpoints:

  • Azure Storage
  • Azure Key Vault
  • Azure SQL Database
  • Azure Synapse Analytics
  • Azure PostgreSQL Server
  • Azure MySQL Server
  • Azure MariaDB
  • Azure Cosmos DB
  • Azure Service Bus
  • Azure Event Hubs
  • Azure App Service
  • Azure Cognitive Services
  • Azure Container Registry

To enable service endpoints on a subnet, in the Azure portal, go to the properties of the VNET you wish to use, select the Subnets blade on the left-hand menu, then select your subnet. The subnet configuration window appears with the option to choose one or more services, as we can see in the following screenshot. Once you have made changes, click Save:

Figure 8.9 – Enabling service endpoints on a subnet
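If you prefer to script this step, the following is a minimal sketch using the azure-mgmt-network Python SDK; the subscription ID, resource group, VNET, and subnet names are placeholders you would replace with your own.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import ServiceEndpointPropertiesFormat

subscription_id = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Read the existing subnet, add the Microsoft.Storage service endpoint, and write it back.
subnet = network_client.subnets.get("demo-rg", "demo-vnet", "app-subnet")
subnet.service_endpoints = (subnet.service_endpoints or []) + [
    ServiceEndpointPropertiesFormat(service="Microsoft.Storage")
]
network_client.subnets.begin_create_or_update(
    "demo-rg", "demo-vnet", "app-subnet", subnet
).result()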

Once enabled, you can then restrict access to your backend service. In the following example, we will limit access to a storage account from a subnet:

  1. Go to the Azure portal at https://portal.azure.com.
  2. In the search bar, search for and select Storage accounts.
  3. Select the storage account you wish to restrict access to.
  4. On the left-hand menu, click the Networking option.
  5. Change the Allow access from option from All networks to Selected networks.
  6. Click + Add existing virtual network.
  7. Select the VNET and subnet you want to restrict access to.
  8. Click Save.

The following screenshot shows an example of a secure storage account:

Figure 8.10 – Restricting VNET access

Once set up, any access except from the defined VNET will be denied, and traffic from services on that VNET to the storage account will now travel directly over the Azure backbone.
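As a rough scripted equivalent of the portal steps above, the following sketch uses the azure-mgmt-storage Python SDK to deny public access and allow only the subnet we enabled earlier; again, all names are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<subscription-id>"
storage_client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/demo-rg"
    "/providers/Microsoft.Network/virtualNetworks/demo-vnet/subnets/app-subnet"
)

# Deny all traffic by default and add the subnet as the only allowed source.
storage_client.storage_accounts.update(
    "demo-rg",
    "demostorageacct",
    {
        "network_rule_set": {
            "default_action": "Deny",
            "virtual_network_rules": [{"virtual_network_resource_id": subnet_id}],
        }
    },
)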

You may have noticed another option in the Networking tab – Private endpoint connections.

Application Security Groups – Network Connectivity and Security

An ASG is another way of grouping resources together, rather than simply allowing all traffic to all resources on your VNET. For example, you may want to define a single NSG that applies to all subnets; however, you may have a mixture of services, such as database servers and web servers, across those subnets.

You can define an ASG and attach your web servers to that ASG, and another ASG that groups your database servers. In your NSG, you then set the HTTPS inbound rule to use the ASG as the destination rather than the whole subnet, VNET, or individual IPs. In this configuration, even though you have a common NSG, you can still uniquely allow access to specific server groups.

The following diagram shows an example of this type of configuration:

Figure 8.6 – Example architecture using NSGs and ASGs

In the preceding example, App1 and App2 are part of the ASGApps ASG, and Db1 and Db2 are part of the ASGDb ASG.

The NSG rulesets would then be as follows:

With the preceding in place, HTTPS inbound would only be allowed to App1 and App2, and port 1433 would only be allowed from App1 and App2.
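As an illustration of this pattern, the following is a minimal sketch using the azure-mgmt-network Python SDK that creates an ASG and an NSG rule targeting it as the destination; the resource names and priority are assumptions, and the web servers' network interfaces would still need to be associated with the ASG separately.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Create an ASG to group the web servers.
asg = network_client.application_security_groups.begin_create_or_update(
    "demo-rg", "ASGApps", {"location": "eastus"}
).result()

# Allow HTTPS in, but only to members of the ASG rather than the whole subnet.
network_client.security_rules.begin_create_or_update(
    "demo-rg",
    "demo-nsg",
    "allow-https-to-apps",
    {
        "priority": 200,
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "source_address_prefix": "Internet",
        "source_port_range": "*",
        "destination_port_range": "443",
        "destination_application_security_groups": [{"id": asg.id}],
    },
).result()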

ASGs and NSGs are great for discrete services; however, there are some rules that you may always want to apply, for example, blocking all outbound access to certain services such as FTP. In this scenario, a better option might be to create a central firewall that all your services route through.

Azure Firewall

Individual NSGs and ASGs form part of your security strategy, but building multiple layers of network security, especially in enterprise systems, is even better.

Azure Firewall is a cloud-based, fully managed network security appliance that would typically be placed at the edge of your network. This means that you would not usually have one firewall per solution or even subscription. Instead, you would have one per region and have all other devices, even those in different subscriptions, route through to it, as in the following example:

Figure 8.7 – Azure Firewall in a hub/spoke model

Azure Firewall offers some of the functionality you can achieve from NSGs, such as network traffic filtering based on port and IP or service tags. Over and above these basic services, Azure Firewall also offers the following:

  • High availability and scalability: As a managed offering, you don’t need to worry about building multiple VMs with load balancers or how much your peak traffic might be. Azure Firewall will automatically scale as required, is fully resilient, and supports availability zones.
  • FQDN tags and FQDN filters: As well as IP addressing and service tags, Azure Firewall also allows you to define FQDNs. FQDN tags are similar to service tags but support a more comprehensive range of services, such as Windows Update.
  • Outgoing SNAT and inbound DNAT support: If you use public IP address ranges for private networks, Azure Firewall can perform Source Network Address Translation (SNAT) on your outgoing requests. Incoming traffic can be translated using Destination Network Address Translation (DNAT).
  • Threat intelligence: Azure Firewall can automatically block incoming traffic originating from IP addresses known to be malicious. These addresses and domains come from Microsoft’s threat intelligence feed.
  • Multiple IPs: Up to 250 IP addresses can be associated with your firewall, which helps with SNAT and DNAT.
  • Monitoring: Azure Firewall is fully integrated with Azure Monitor for data analysis and alerting.
  • Forced tunneling: You can route all internet-bound traffic to another device, such as an on-premises edge firewall.

Azure Firewall provides an additional and centralized security boundary to your systems, ensuring an extra layer of safety.

So far, we have looked at securing access into and between services that use VNETs, such as VMs. Some services don't use VNETs directly but instead have their own firewall options. These often include the ability to restrict access to the service based on IP addresses or VNETs; when the VNET option is selected, it uses a feature called service endpoints.

Network Security Groups – Network Connectivity and Security

NSGs allow you to define inbound and outbound rules that will allow or deny the flow of traffic from a source to a destination on a specific port. Although you define separate inbound and outbound rules, each rule is stateful. This means that the flow in any one direction is recorded so that the returning traffic can also be allowed using the same rule.

In other words, if you allow HTTPS traffic into a service, then that same traffic will be allowed back out for the same source and destination.

We create NSGs as components in Azure and then attach them to a subnet or to a network interface on a VM. Each subnet can only be connected to a single NSG, but any NSG can be attached to multiple subnets. This allows us to define rulesets independently for common use cases (such as allowing web traffic) and then reuse them across various subnets.

When NSGs are created, Azure applies several default rules that effectively block all access except essential Azure services.

If you create a VM in Azure, a default NSG is created for you and attached to the network interface of the VM; we can see such an example in the following screenshot:

Figure 8.5 – Example NSG ruleset

In the preceding figure, we can see five inbound rules and three outbound. The top two inbound rules highlighted in red were created with the VM – in the example, we specified to allow RDP (3389) and HTTP (80).

The three inbound and three outbound rules highlighted in green are created by Azure and cannot be removed or altered. These define a baseline set of rules that must be applied for the platform to function correctly while blocking everything else. As their names suggest, AllowVnetInBound allows traffic to flow freely between all devices in that VNET, the AllowAzureLoadBalancerInBound rule allows any traffic originating from an Azure load balancer, and DenyAllInBound blocks everything else.

Each rule requires a set of options to be provided:

  • Name and Description: For reference; these have no bearing on the actual service. They make it easier to determine what it is or what it is for.
  • Source and Destination port: The port is, of course, the network port that a particular service communicates on – for RDP, this is 3389; for HTTP, it is 80, and for HTTPS, it is 443. Some services require port mapping; that is, the source may expect to communicate on one port, but the actual service communicates on a different port.
  • Source and Destination location: The source and destination locations define where traffic is coming from (the source) and where it is trying to go to (the destination). The most common option is an IP address or list of IP addresses, and these will typically be used to define external services.

For Azure services, we can either choose the VNET – that is, the destination is any service on the VNET the NSG is attached to – or a service tag, which is a range of IPs managed by Azure. Examples may include the following:

– Internet: Any address that doesn’t originate from the Azure platform

– AzureLoadBalancer: An Azure load balancer

– AzureActiveDirectory: Communications from the Azure Active Directory service

– AzureCloud.EastUS: Any Azure service in the East US region

As we can see from these examples, with the exception of the internet option, they are IP sets that belong to Azure services. Using service tags to allow traffic from Azure services is safer than manually entering the IP ranges (which Microsoft publishes) as you don’t need to worry about them changing.

  • Protocol: Any, TCP, UDP, or ICMP. Services use different protocols, and some services require both TCP and UDP. You should always allow the least access; so, if only TCP is needed, only choose TCP. The ICMP protocol is used primarily for ping.
  • Priority: Rules are evaluated in priority order, with the lowest number (starting at 100) processed first. Azure applies a DenyAll rule to every NSG at the lowest possible priority (the highest number); therefore, any rule you create with a higher priority (a lower number) overrides it. DenyAll is a failsafe rule, meaning everything is blocked by default unless you specifically create a rule to allow access (see the sketch after this list).
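Tying these options together, the following minimal sketch creates an NSG with a single inbound rule using the azure-mgmt-network Python SDK; the resource names are placeholders, and the default Azure rules described above are added by the platform automatically.

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

network_client.network_security_groups.begin_create_or_update(
    "demo-rg",
    "web-nsg",
    {
        "location": "eastus",
        "security_rules": [
            {
                "name": "allow-https-inbound",       # Name (Description is optional)
                "priority": 100,                      # lowest number = evaluated first
                "direction": "Inbound",
                "access": "Allow",
                "protocol": "Tcp",
                "source_address_prefix": "Internet",  # service tag as the source
                "source_port_range": "*",
                "destination_address_prefix": "*",    # any address on the attached subnet
                "destination_port_range": "443",      # HTTPS
            }
        ],
    },
).result()

The resulting NSG can then be attached to a subnet or to a VM's network interface, as described earlier.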

Through the use of NSGs, we can create simple rules around our VNET-integrated services that form part of an effective defense strategy. There may be occasions, however, when you want to apply different firewall rules to different components within the same subnet; we can use Application Security Groups (ASGs) for these scenarios.

Azure public DNS zones – Network Connectivity and Security

If you own your domain, bigcorp.com, you can create a zone in Azure and then configure your domain to use the Azure name servers. Once set up, you can then use Azure to create, edit, and maintain the records for that domain.

You cannot purchase domain names through Azure DNS, and Azure does not become the registrar. However, by using Azure DNS to manage your domain, you can use RBAC roles to control which users can manage DNS, Azure activity logs to track changes, and resource locking to prevent the accidental deletion of records.
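If you want to script the zone itself, the following is a minimal sketch using the azure-mgmt-dns Python SDK, assuming a resource group named demo-rg; the name servers it prints are the values you would configure at your registrar.

from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient

subscription_id = "<subscription-id>"
dns_client = DnsManagementClient(DefaultAzureCredential(), subscription_id)

# Public DNS zones are global resources, so the location is always "global".
zone = dns_client.zones.create_or_update(
    "demo-rg", "bigcorp.com", {"location": "global"}
)

# The Azure name servers to configure at your domain registrar.
print(zone.name_servers)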

We have looked at the different options for setting up VNETs with IP addressing and name resolution; we will now investigate how to ensure secure communication to and between our services.

Implementing network security

Ensuring secure traffic flow to and between services is a core requirement for many solutions. An example is external communication to a VM running a website: you may only want to allow traffic to the server on a particular port, such as HTTPS over port 443. All other traffic, such as SMTP, FTP, or file share protocols, needs to be blocked.

It isn't just inbound traffic that needs to be controlled; blocking outbound traffic can be just as important. For many organizations today, protecting against insider threats is just as crucial as, if not more so than, protecting against external threats. For this reason, we may want to block all but specific outbound access so that if a service is infected by malware, it cannot send data out, a technique known as data exfiltration.

Important Note

Data exfiltration is a growing technique for stealing data. Either by manually logging on to a server or through malware infection, data is copied from an internal system to an external system.

As solutions become more distributed, the ability to control data flow between components has also become a key design element and can often work to our advantage. A typical and well-used architectural pattern is an n-tier architecture, where the services in a solution are hosted on different layers: a User Interface (UI) at the front, a data processing tier in the middle, and a database at the back. Each tier can be hosted on its own subnet with security controls between them. In this way, we can tightly control who and what has access to each tier individually, which helps prevent an attacker from gaining direct access to the data, as we can see in the following example:

Figure 8.4 – N-tier architecture helps protect resources

In the preceding figure, the UI tier only allows traffic from the user over HTTPS (port 443), and as the UI only contains frontend logic and no data, should an attacker compromise the service, they can only access that code.

The next tier only allows traffic from the UI tier; in other words, an external attacker has no direct access. If the frontend tier was compromised, an attacker could access the business logic tier, but this doesn’t contain any actual data.

The final tier only accepts SQL traffic (port 1433) from the business tier; therefore, a hacker would need to get past the first two tiers to gain access to it.

Of course, other security mechanisms such as authentication and authorization would be employed over these systems, but access by the network is often considered the first line of defense.

Firewalls are often employed to provide security at the network level. Although Azure provides discrete firewall services, another option is often used to provide simpler management and security – Network Security Groups (NSGs).

Understanding Azure networking options – Network Connectivity and Security

In the previous chapter, we examined the different options for building compute services, from the different types of Virtual Machines (VMs) to web apps and containerization.

All solution components need to be able to communicate effectively and safely; therefore, in this chapter, we will discuss the options we have for controlling traffic flow using route tables and load-balancing components, securing traffic with different firewalling options, and managing IP addressing and resolution.

With this in mind, we will cover the following topics:

  • Understanding Azure networking options
  • Understanding IP addressing and DNS in Azure
  • Implementing network security
  • Connectivity
  • Load balancing and advanced traffic routing

Technical requirements

This chapter will use the Azure portal (https://portal.azure.com) and you need an Azure subscription for the examples.

Understanding Azure networking options

Services in Azure need to communicate, and this communication is performed over a virtual network, or VNET.

There are essentially two types of networking in Azure – private VNETs and the Azure backbone. The Azure backbone is a fully managed service. The underlying details are never exposed to you – although the ranges used by many services are available, grouped by region, for download in a JSON file. The Azure backbone is generally used when non-VNET-connected services communicate with each other; for example, when storage accounts replicate data or when Azure functions communicate with SQL and Cosmos DB, Azure handles all aspects of these communications. This can cause issues when you need more control, especially if you want to limit access to your services at the network level, that is, by implementing firewall rules.

Important Note

The address ranges of services in Azure change continually as the services grow within any particular region, and can be downloaded from this link: https://www.microsoft.com/en-us/download/details.aspx?id=56519.

Some services can either be integrated with, or built on top of, a VNET. VMs are the most common example of this, and to build a VM, you must use a VNET. Other services can also be optionally integrated with VNETs in different ways. For example, VMs can communicate with an Azure SQL database using a service endpoint, enabling you to limit access and ensure traffic is kept private and off the public network. We look at service endpoints and other ways to secure internal communications later in this chapter, in the Implementing network security section.

The first subject we will need to look at when dealing with VNETs and connectivity is that of addressing and Domain Name Services (DNSes).

Deployments and YAML – Designing Compute Solutions

A pod's resources are defined in a deployment, which is described within a YAML manifest. The manifest defines everything you need to state: how many copies, or replicas, of a pod to run, what resources each pod requires, the container image to use, and any other information necessary for your service.
A typical YAML file may look like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi

In this example, taken from the docs.microsoft.com site, we see a deployment using the nginx container image, requesting a minimum of 250m (millicores) of CPU and 64Mi (mebibytes) of RAM, and a maximum of 500m and 256Mi.

Tip

A mebibyte (Mi) is equal to 1,024 kibibytes (1,048,576 bytes), whereas a millicore (m) is one-thousandth of a CPU core.

Once we have our pods and applications defined within a YAML file, we can use that file to tell our AKS cluster to deploy and run our application. This is typically done by running deployment commands, such as kubectl apply with the manifest file, against the cluster's API server, or via DevOps pipelines.
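As a minimal sketch, assuming the AKS cluster credentials have already been merged into your local kubeconfig (for example, with az aks get-credentials), the manifest could also be applied programmatically with the open source Kubernetes Python client:

import yaml
from kubernetes import client, config

# Load the cluster credentials from the local kubeconfig file.
config.load_kube_config()

with open("deployment.yaml") as f:
    manifest = yaml.safe_load(f)

# Create the Deployment defined in the manifest in the default namespace.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=manifest)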

Kubernetes is a powerful tool for building resilient and dynamic applications from microservices, and container images are incredibly efficient and portable; however, Kubernetes clusters are complex.

AKS abstracts much of the complexity of using and managing a Kubernetes cluster. Still, your development and support teams need to be fully conversant with the unique capabilities and configuration options available.

Summary

This chapter looked at the different compute options available to us in Azure and examined the strengths and weaknesses of each. With any solution, the choice of technology depends on your requirements and the skills of the teams who are building them.

We then looked at how to design update management processes to ensure any VMs we use as part of our solution are kept up to date with the latest security patches.

Finally, we looked at how we can use containerization in our solutions, and specifically how Azure Kubernetes Service provides a flexible and dynamic approach to running microservices.

In the next chapter, we will look at the different networking options in Azure, including load balancing for resilience and performance.

Exam scenario

The solutions to the exam scenarios can be found at the end of this book.

Mega Corp is planning a new multi-service solution to help the business manage expenses. The application development team has decided to break the solution into different services that communicate with each other.

End users will upload expense claims as a Word document to the system, and these documents must flow through to different approvers.

The HR department also wants to amend some of the workflows themselves as they can change often.

The application will have a web frontend, and the application developers are used to building .NET websites. However, they would like to start moving to a more containerized approach.

Suggest some compute components that would be suited to this solution.

Azure Kubernetes Service – Designing Compute Solutions

In the previous section, we looked at containerization, specifically with Docker (Azure's default container engine). Now that we have container images registered in Azure Container Registry, we can use those images to spin up instances, or running containers.

Two questions may spring to mind – the first is what now? Or perhaps more broadly, why bother? In theory, we could achieve some of what’s in the container with a simple virtual machine image.

Of course, one reason for containerization is that of portability – that is, we can run those images on any platform that runs Docker. However, the other main reason is that it allows us to run many more of those instances on the same underlying hardware, because the shared OS gives us greater density.

This fact, in turn, allows us to create software using a pattern known as microservices.

Traditionally, a software service may have been built as monolithic – that is, the software is just one big code base that runs on a server. The problem with this pattern is that it can be quite hard to scale – that is, if you need more power, you can only go so far as adding more RAM and CPU.

The first answer to this issue was to build applications that could be duplicated across multiple servers and then have requests load balanced between them – and in fact, this is still a pervasive pattern.

As software started to be developed in a more modular fashion, those individual modules would be broken up and run as separate services, each being responsible for a particular aspect of the system. For example, we might split off a product ordering component as an individual service that gets called by other parts of the system, and this service could run on its own server.

While we can quickly achieve this by running each service on its own virtual server, the additional memory overhead of each OS means that as we break our system into more and more individual services, this overhead increases, and we soon become very inefficient from a resource usage point of view.

And here is where containers come in. Because they offer isolation without running a full OS each time, we can run our processes far more efficiently – that is, we can run far more on the same hardware than we could on standard virtual machines.

By this point, you might now be asking how do we manage all this? What controls the spinning up of new containers or shutting them down? And the answer is orchestration. Container orchestrators monitor containers and add additional instances in response to usage thresholds or even for resiliency if a running container becomes unhealthy for any reason. Kubernetes is an orchestration service for managing containers.

A Kubernetes cluster consists of worker machines, called nodes, that run containerized applications, and every cluster has at least one worker node. The worker node(s) host pods that are the application’s components, and a control plane or cluster master manages the worker nodes and the pods in the cluster. We can see a logical overview of a typical Kubernetes cluster, with all its components, in the following diagram:

Figure 7.11 – Kubernetes control plane and components

AKS is Microsoft’s implementation of a managed Kubernetes cluster. When you create an AKS cluster, a cluster master is automatically created and configured; there is no cost for the cluster master, only the nodes that are part of the AKS cluster.

The cluster master includes the following Kubernetes components:

  • kube-apiserver: The API server exposes the Kubernetes management services and provides access for management tools such as the kubectl command, which is used to manage the service.
  • etcd: A highly available key-value store that records the state of your cluster.
  • kube-scheduler: Manages the nodes and what workloads to run on them.
  • kube-controller-manager: Manages a set of smaller controllers that perform pod and node operations.

You define the number and size of the nodes, and the Azure platform configures secure communication between the cluster master and the nodes.

Architecting for containerization and Kubernetes – Designing Compute Solutions

This section will look in more detail at AKS, Microsoft’s implementation of Kubernetes. To understand what AKS is, we need to take a small step back and understand containerization and Kubernetes itself.

Containerization

As we briefly mentioned earlier, containerization is a form of virtualization in that you can run multiple containers upon the same hardware, much like virtual machines. Unlike virtual machines, containers share the underlying OS of the host. This provides much greater efficiency and density. You can run many more containers upon the same hardware than you can run virtual machines because of the lower memory overhead of needing to run multiple copies of the OS – as we can see in the following diagram:

Figure 7.10 – Containers versus virtual machines

In addition to this efficiency, containers are portable. They can easily be moved from one host to another, and this is because containers are self-contained and isolated. A container includes everything it needs to run, including the application code, runtime, system tools, libraries, and settings.

To run containers, you need a container host – the most common is Docker, and in fact, container capabilities in Azure use the Docker runtime.

A container is a running instance, and what that instance contains is defined in an image. Images can be defined in code; for Docker images, this definition is called a Dockerfile.

The Dockerfile uses a specific syntax that defines which base image you wish to use – that is, either a vanilla OS or an existing image with other tools and components on it, followed by your unique configuration options, which may include additional software to install, networking, file shares, and so on. An example Dockerfile might look like this:

FROM node:current-slim
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
EXPOSE 8080
CMD [ "npm", "start" ]
COPY . .

In this example, we start with an image called node:current-slim, set a working directory, copy the package.json file into it, and run npm install to install the application's dependencies. We then expose the application on port 8080, set npm start as the command to run when the container launches, and finally copy the rest of the application code into the image.

This Dockerfile can create a new image but notice how it is based on an existing image. By extending existing images, you can more easily build your containers with consistent patterns.

The images we build, or use as a source, are held in a container registry. Docker has its own public container registry (Docker Hub), but you can create your own private registry with the Azure Container Registry service in Azure.

Once we have created our new image and stored it in a container registry, we can deploy that image as a running container. Containers in Azure can be run using Azure Container Instances (ACI), a containerized web app, or an AKS cluster.

Web apps for containers

Web apps for containers are a great choice if your development team is already used to using Azure Web Apps to run monolithic or N-tier apps and you want to start moving toward a containerized platform. Web Apps works best when you only need one or a few long-running instances or when you would benefit from a shared or free App Service plan.

An example use case might be when you have an existing .NET app that you wish to containerize that hasn’t been built as a microservice.

Azure Container Instances

ACI is a fully managed environment for containers, and you are only billed for the time you use them. As such, they suit short-lived microservices, although, like web apps for containers, you should only consider this option if you are running a few services.

Web apps for containers and ACI are great for simple services or when you are starting the containerization journey. Once your applications begin to fully embrace microservices and containerized patterns, you will need better control and management; for these scenarios, you should consider using AKS.

Automating virtual machine management – Designing Compute Solutions-1

What to watch out for

Power Automate is only for simpler workflows and is not suitable when deeper or more advanced integration is required.

In this section, we have briefly looked at the many different compute technologies available in Azure. PaaS options are fully managed by the platform, allowing architects and developers to focus on the solution rather than management. However, when traditional IaaS compute options are required, such as virtual machines, security and OS patches must be managed yourself. Next, we will look at the native tooling that Azure provides to make this management easier.

Automating virtual machine management

Virtual machines are part of the IaaS family of components. One of the defining features of VMs in Azure is that you are responsible for keeping the OS up to date with the latest security patches.

In an on-premises environment, this could be achieved by manually configuring individual servers to apply updates as they become available; however, many organizations require more control, for example, the ability to have patches verified and approved before mass rollout to production systems, to control when updates happen, and to control reboots when required.

Typically, this could be achieved using Windows Server Update Services (WSUS) and Configuration Manager, part of the Microsoft Endpoint Manager suite of products. However, these services require additional management and setup, which can be time-consuming.

As with most services, Azure helps simplify managing VM updates with a native Update Management service. Update Management uses several other Azure components, including the following:

  • Log Analytics: Along with the Log Analytics agent, reports on the current status of patching for a VM
  • PowerShell Desired State Configuration (DSC): Required for Linux patching
  • Automation Hybrid Runbooks / Automation Account: Used to perform updates

Automation accounts and Log Analytics workspaces are not supported together in all regions, and therefore you must plan region placement when setting up Update Management. For example, if your Log Analytics workspace is in East US, your Automation account must be created in East US 2.

See the following link for more details on region pairings: https://docs.microsoft.com/en-gb/azure/automation/how-to/region-mappings.

When setting up Update Management, you can either create the Log Analytics workspace and Automation account yourself or let the Azure portal create them for you. In the following example, we will select an Azure VM and have the portal set up Update Management.