
Private endpoint connections – Network Connectivity and Security

We have said that service endpoints assign an internal IP to a service, which is then used to direct the flow of traffic to it. However, the actual IP is hidden and therefore cannot be referenced by you.

There are times when you need to access a service such as SQL or a storage account via a private IP – either for direct connectivity from an on-premises network or when you have strict firewall policies between your users and your solution.

For these scenarios, private endpoint connections can be used to assign private IP addresses to certain Azure services. Private endpoints are very similar to service endpoints, except that you have visibility of the underlying IP address, so they can be used across VPNs and ExpressRoute.

However, private endpoints rely on DNS to function correctly. As most services use host headers (that is, an FQDN) to determine your individual backend service, connecting via the IP itself does not work. Instead, you must set up a DNS record that resolves your service's FQDN to the internal IP.

For example, if you create a private endpoint for your storage account called mystorage that uses an IP address of 10.0.0.10, to access the service securely, you must create a DNS record so that mystorage.blob.core.windows.net resolves to 10.0.0.10.

This can be performed by either creating DNS records in your DNS service or forwarding the request to an Azure private zone and having the internal Azure DNS service resolve it for you.
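For Blob Storage specifically, Azure's convention is a privatelink DNS zone; the record chain for the mystorage example above might look like the following sketch (layout simplified, shown zone-file style):

```text
; Public DNS (managed by Microsoft) - the account name is aliased into the privatelink zone:
mystorage.blob.core.windows.net.    CNAME    mystorage.privatelink.blob.core.windows.net.

; Azure private DNS zone "privatelink.blob.core.windows.net", linked to your VNET:
mystorage                           A        10.0.0.10
```

Clients inside the VNET resolve the CNAME through the private zone and receive 10.0.0.10, while clients elsewhere resolve it to the public endpoint.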

Azure private endpoints support more services than service endpoints and are, therefore, the only option in some circumstances. In addition to the services supported by service endpoints, private endpoints also support the following:

  • Azure Automation
  • Azure IoT Hub
  • Azure Kubernetes Service – Kubernetes API
  • Azure Search
  • Azure App Configuration
  • Azure Backup
  • Azure Relay
  • Azure Event Grid
  • Azure Machine Learning
  • SignalR
  • Azure Monitor
  • Azure File Sync

Using a combination of NSGs, ASGs, Azure Firewall, service endpoints, and private endpoints, you have the tools to secure your workloads internally and externally. Next, we will examine how we can extend the actual VNETs by exploring the different options for connecting into them or connecting different VNETs.

Connectivity

A simple, standalone solution may only require a single VNET, and especially if your service is an externally facing application for clients, you may not need to create anything more complicated.

However, for enterprise applications that contain many different services, or for hybrid scenarios where you need to connect securely to Azure from an on-premises network, you must consider the other options for providing connectivity.

We will start by looking at connecting two VNETs.

Previously, we separated services within different subnets. However, each of those subnets was in the same VNET. Because of this, connectivity between the devices was automatic – other than defining NSG rules, connectivity just happened.

More complex solutions may be built across multiple VNETs, and these VNETs may or may not be in the same region. By default, communication between VNETs is not enabled; therefore, you must set it up if required. The simplest way to achieve this connectivity is with VNET peering.

Service endpoints – Network Connectivity and Security

Many services are exposed via a public address or URL. For example, Blob Storage is accessed via <accountname>.blob.core.windows.net. Even if your application is running on a VM connected to a VNET, communication to the default endpoint will be the public address, and full access to all IPs, internal and external, is allowed.

For public-facing systems, this may be desirable; however, if you need the backend service to be protected from the outside and only accessible internally, you can use a service endpoint.

Service endpoints provide direct and secure access from one Azure service to another over the Azure backbone. Internally, the service is given a private IP address, which is used instead of the default public IP address. Traffic from the source is then allowed, and external traffic is blocked, as we see in the following example:

Figure 8.8 – Protecting access with service endpoints

Although using service endpoints enables a private IP address on the service, this address is not exposed or manageable by you. One effect of this is that although Azure-hosted services can connect to the service, on-premises systems cannot access it over a VPN or ExpressRoute. For these scenarios, you can either use an alternative solution called a private endpoint, which we will cover in the next sub-section, or use ExpressRoute with Microsoft peering and a NAT IP address.

Important Note

When you set up an ExpressRoute into Azure, you have the option of using Microsoft peering or private peering. With Microsoft peering, all connectivity to the Office 365 platform and Azure goes over the ExpressRoute, whereas private peering sends only traffic destined for internal IP ranges over the ExpressRoute, with public services accessed via their public endpoints. The most common form of connectivity is private peering; Microsoft peering is only recommended for specific scenarios. See https://docs.microsoft.com/en-us/microsoft-365/enterprise/azure-expressroute?view=o365-worldwide for more details.

To use service endpoints, the service itself must be enabled on the subnet, and the service you wish to lock down must have the public network option turned off and the source subnet added as an allowable source.

Important Note

Service endpoints ignore NSGs – therefore, any rules you have in place and attached to the secure subnet are effectively ignored. This only affects the point-to-point connection between the subnet and the service endpoint. All other NSG rules still hold.

At the time of writing, the following Azure services support service endpoints:

  • Azure Storage
  • Azure Key Vault
  • Azure SQL Database
  • Azure Synapse Analytics
  • Azure PostgreSQL Server
  • Azure MySQL Server
  • Azure MariaDB
  • Azure Cosmos DB
  • Azure Service Bus
  • Azure Event Hubs
  • Azure App Service
  • Azure Cognitive Services
  • Azure Container Registry

To enable service endpoints on a subnet, in the Azure portal, go to the properties of the VNET you wish to use, select the Subnets blade on the left-hand menu, then select your subnet. The subnet configuration window appears with the option to choose one or more services, as we can see in the following screenshot. Once you have made changes, click Save:

Figure 8.9 – Enabling service endpoints on a subnet

Once enabled, you can then restrict access to your backend service. In the following example, we will limit access to a storage account from a subnet:

  1. Go to the Azure portal at https://portal.azure.com.
  2. In the search bar, search for and select Storage accounts.
  3. Select the storage account you wish to restrict access to.
  4. On the left-hand menu, click the Networking option.
  5. Change the Allow access from option from All networks to Selected networks.
  6. Click + Add existing virtual network.
  7. Select the VNET and subnet you want to restrict access to.
  8. Click Save.

The following screenshot shows an example of a secure storage account:

Figure 8.10 – Restricting VNET access

Once set up, any access except the defined VNET will be denied, and any traffic from services on the VNET to the storage account will now be directly over the Azure backbone.

You may have noticed another option in the Networking tab – Private endpoint connections.

Application Security Groups – Network Connectivity and Security

An ASG is another way of grouping resources together, rather than simply allowing all traffic to all resources on your VNET. For example, you may want to define a single NSG that applies to all subnets; however, you may have a mixture of services, such as database servers and web servers, across those subnets.

You can define an ASG and attach your web servers to that ASG, and another ASG that groups your database servers. In your NSG, you then set the HTTPS inbound rule to use the ASG as the destination rather than the whole subnet, VNET, or individual IPs. In this configuration, even though you have a common NSG, you can still uniquely allow access to specific server groups.

The following diagram shows an example of this type of configuration:

Figure 8.6 – Example architecture using NSGs and ASGs

In the preceding example, App1 and App2 are part of the ASGApps ASG, and Db1 and Db2 are part of the ASGDb ASG.

The NSG ruleset would then contain two custom rules: an HTTPS inbound rule with ASGApps as the destination, and a SQL (port 1433) inbound rule with ASGApps as the source and ASGDb as the destination.

With the preceding in place, HTTPS inbound would only be allowed to App1 and App2, and port 1433 would only be allowed from App1 and App2.
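As an illustration, the HTTPS inbound rule could be expressed as an NSG security rule in an ARM template, with the ASG referenced as the destination rather than an address prefix. The rule name and priority here are assumptions for the sketch, and the ASG is assumed to exist as a resource named ASGApps:

```json
{
  "name": "Allow-HTTPS-To-Apps",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "Internet",
    "sourcePortRange": "*",
    "destinationPortRange": "443",
    "destinationApplicationSecurityGroups": [
      { "id": "[resourceId('Microsoft.Network/applicationSecurityGroups', 'ASGApps')]" }
    ]
  }
}
```

Because the destination is the ASG rather than a subnet or IP range, the rule follows the servers wherever they sit within the VNET.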

ASGs and NSGs are great for discrete services; however, there are some rules that you may always want to apply, for example, blocking all outbound access to certain services such as FTP. A better option might be to create a central firewall that all your services route through in this scenario.

Azure Firewall

Individual NSGs and ASGs form part of your security strategy; however, building multiple layers of network security, especially in enterprise systems, is even better.

Azure Firewall is a cloud-based, fully managed network security appliance that would typically be placed at the edge of your network. This means that you would not usually have one firewall per solution or even subscription. Instead, you would have one per region and have all other devices, even those in different subscriptions, route through to it, as in the following example:

Figure 8.7 – Azure Firewall in a hub/spoke model

Azure Firewall offers some of the functionality you can achieve from NSGs, such as network traffic filtering based on port and IP or service tags. Over and above these basic services, Azure Firewall also offers the following:

  • High availability and scalability: As a managed offering, you don’t need to worry about building multiple VMs with load balancers or how much your peak traffic might be. Azure Firewall will automatically scale as required, is fully resilient, and supports availability zones.
  • FQDN tags and FQDN filters: As well as IP addressing and service tags, Azure Firewall also allows you to define FQDNs. FQDN tags are similar to service tags but support a more comprehensive range of services, such as Windows Update.
  • Outgoing SNAT and inbound DNAT support: If you use public IP address ranges for private networks, Azure Firewall can perform Source Network Address Translation (SNAT) on your outgoing requests. Incoming traffic can be translated using Destination Network Address Translation (DNAT).
  • Threat intelligence: Azure Firewall can automatically block incoming traffic originating from IP addresses known to be malicious. These addresses and domains come from Microsoft’s threat intelligence feed.
  • Multiple IPs: Up to 250 IP addresses can be associated with your firewall, which helps with SNAT and DNAT.
  • Monitoring: Azure Firewall is fully integrated with Azure Monitor for data analysis and alerting.
  • Forced tunneling: You can route all internet-bound traffic to another device, such as an on-premises edge firewall.

Azure Firewall provides an additional and centralized security boundary to your systems, ensuring an extra layer of safety.

So far, we have looked at securing access into and between services that use VNETs, such as VMs. Some services don't use VNETs directly but instead have their own firewall options. These firewall options often include the ability to restrict access to the service based on IPs or VNETs, and when the VNET option is selected, it uses a feature called service endpoints.

Network Security Groups – Network Connectivity and Security

NSGs allow you to define inbound and outbound rules that will allow or deny the flow of traffic from a source to a destination on a specific port. Although you define separate inbound and outbound rules, each rule is stateful. This means that the flow in any one direction is recorded so that the returning traffic can also be allowed using the same rule.

In other words, if you allow HTTPS traffic into a service, then that same traffic will be allowed back out for the same source and destination.

We create NSGs as components in Azure and then attach them to a subnet or to a network interface on a VM. Each subnet can only be connected to a single NSG, but any NSG can be attached to multiple subnets. This allows us to define rulesets independently for everyday use cases (such as allowing web traffic) and then reuse them across various subnets.

When NSGs are created, Azure applies several default rules that effectively block all access except essential Azure services.

If you create a VM in Azure, a default NSG is created for you and attached to the network interface of the VM; we can see such an example in the following screenshot:

Figure 8.5 – Example NSG ruleset

In the preceding figure, we can see five inbound rules and three outbound. The top two inbound rules highlighted in red were created with the VM – in the example, we specified to allow RDP (3389) and HTTP (80).

The three rules in the inbound and outbound lists highlighted in green are created by Azure and cannot be removed or altered. These define a baseline set of rules that must be applied for the platform to function correctly while blocking everything else. As their names suggest, AllowVnetInBound allows traffic to flow freely between all devices in that VNET, and the AllowAzureLoadBalancerInBound rule allows any traffic originating from an Azure load balancer. DenyAllInBound blocks everything else.

Each rule requires a set of options to be provided:

  • Name and Description: For reference; these have no bearing on the actual service. They make it easier to determine what it is or what it is for.
  • Source and Destination port: The port is, of course, the network port that a particular service communicates on – for RDP, this is 3389; for HTTP, it is 80, and for HTTPS, it is 443. Some services require port mapping; that is, the source may expect to communicate on one port, but the actual service communicates on a different port.
  • Source and Destination location: The source and destination locations define where traffic is coming from (the source) and where it is trying to go to (the destination). The most common option is an IP address or list of IP addresses, and these will typically be used to define external services.

For Azure services, we can either choose the VNET – that is, the destination is any service on the VNET the NSG is attached to – or a service tag, which is a range of IPs managed by Azure. Examples may include the following:

– Internet: Any address that doesn’t originate from the Azure platform

– AzureLoadBalancer: An Azure load balancer

– AzureActiveDirectory: Communications from the Azure Active Directory service

– AzureCloud.EastUS: Any Azure service in the East US region

As we can see from these examples, with the exception of the internet option, they are IP sets that belong to Azure services. Using service tags to allow traffic from Azure services is safer than manually entering the IP ranges (which Microsoft publishes) as you don’t need to worry about them changing.

  • Protocol: Any, TCP, UDP, or ICMP. Services use different protocols, and some services require TCP and UDP. You should always define the least access; so, if only TCP is needed, only choose TCP. ICMP protocol is used primarily for Ping.
  • Priority: Firewall rules are applied one at a time in order of priority, with the lowest number (starting at 100) processed first. Azure applies a Deny All rule to all NSGs at the lowest possible priority (the highest number); therefore, any rule with a higher priority (a lower number) will override it. Deny All is a failsafe rule – it means everything is blocked by default unless you specifically create a rule to allow access.
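The first-match-by-priority behavior can be sketched in a few lines of Python (this is a simplified model for illustration, not an Azure API; real NSG rules also match on source, destination, and protocol):

```python
def evaluate(rules, packet_port):
    """Return the access decision for a packet, NSG-style.

    Rules are checked in ascending priority number (100 is checked
    first); the first match wins. A catch-all deny at the lowest
    priority (highest number) acts as the failsafe.
    """
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == packet_port or rule["port"] == "*":
            return rule["access"]
    return "Deny"  # nothing matched and no catch-all rule was present

rules = [
    {"priority": 100, "port": 443, "access": "Allow"},
    {"priority": 65500, "port": "*", "access": "Deny"},  # models DenyAllInBound
]

print(evaluate(rules, 443))  # HTTPS matches the priority-100 rule: Allow
print(evaluate(rules, 22))   # falls through to the catch-all: Deny
```

Because evaluation stops at the first match, an Allow rule at priority 100 takes effect even though a Deny All rule exists further down the list.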

Through the use of NSGs, we can create simple rules around our VNET-integrated services and form part of an effective defense strategy. There may be occasions, however, when you want to apply different firewall rules to other components within the same subnet; we can use Application Security Groups (ASGs) for these scenarios.

Understanding IP addressing and DNS in Azure – Network Connectivity and Security

When building services in Azure, you can choose between internal IP addresses and external IP addresses. Internal IP addresses can only communicate internally and use VNETs. Many services can also use public IP addresses, which allow you to communicate with the service from the internet.

Before we delve into public and internal IP addresses, we need to understand the basics of IP addressing in general, and especially the use of subnets and subnet masks.

Understanding subnets and subnet masks

When devices are connected to a TCP/IP-based network, they are provided with an IP address in the notation xxx.xxx.xxx.xxx. Generally, all devices that are on the same local network can communicate with each other without any additional settings.

When devices on different networks need to communicate, they must do so via a router or gateway. Devices use a subnet mask to differentiate between addresses on the local network and those on a remote network.

The network mask breaks an IP address down into a network component and a device or host address component. It does this by laying a binary mask over the IP address, with the network bits to the left and the host bits to the right.

255 in binary is 11111111 and 0 in binary is 00000000. The mask says how many of those bits are the network, with 1 denoting a network address and 0 denoting a host address.

Thus, 255.0.0.0 becomes 11111111.00000000.00000000.00000000; therefore, in the address 10.0.0.1, 10 is the network and 0.0.0.1 is the host address. Similarly, with a mask of 255.255.0.0 and an address of 10.0.0.1, 10.0 becomes the network and 0.1 the host. The following diagram shows this concept more clearly:

Figure 8.1 – Example subnet mask

Splitting an address space into multiple networks is known as subnetting, and subnets can be broken down into even smaller subnets until no host bits remain.

When configuring IP settings for devices, you often supply an IP address, a subnet mask, and the address of the router on the local network that will connect you to other networks.

Sometimes, when denoting an IP address range, the subnet mask and range are written in a shorthand form known as CIDR notation. We will cover CIDR notation examples in the Private IP addresses sub-section.
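Python's standard ipaddress module can be used to verify the mask arithmetic and the CIDR shorthand described above:

```python
import ipaddress

# 10.0.0.1 with a mask of 255.255.0.0 - the same example as above
iface = ipaddress.ip_interface("10.0.0.1/255.255.0.0")

print(iface.network)                # 10.0.0.0/16 - the network component
print(iface.with_prefixlen)         # 10.0.0.1/16 - the same mask in CIDR shorthand
print(iface.network.num_addresses)  # 65536 addresses in the range
```

The /16 suffix is simply the number of 1 bits in the mask: 255.255.0.0 has sixteen.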

This is a relatively simplified overview of network addressing and subnetting, and although the AZ-304 exam will not explicitly ask you questions on this, it does help to better understand the next set of topics.

Understanding Azure networking options – Network Connectivity and Security

In the previous chapter, we examined the different options when building compute services, from the different types of Virtual Machines (VMs) to web apps and containerization.

All solution components need to be able to communicate effectively and safely; therefore, in this chapter, we will discuss the options we have to control traffic flow using route tables and load-balancing components, to secure traffic with different firewall options, and to manage IP addressing and resolution.

With this in mind, we will cover the following topics:

  • Understanding Azure networking options
  • Understanding IP addressing and DNS in Azure
  • Implementing network security
  • Connectivity
  • Load balancing and advanced traffic routing

Technical requirements

This chapter will use the Azure portal (https://portal.azure.com) and you need an Azure subscription for the examples.

Understanding Azure networking options

Services in Azure need to communicate, and this communication is performed over a virtual network, or VNET.

There are essentially two types of networking in Azure – private VNETs and the Azure backbone. The Azure backbone is a fully managed service. The underlying details are never exposed to you – although the ranges used by many services are available, grouped by region, for download in a JSON file. The Azure backbone is generally used when non-VNET-connected services communicate with each other; for example, when storage accounts replicate data or when Azure functions communicate with SQL and Cosmos DB, Azure handles all aspects of these communications. This can cause issues when you need more control, especially if you want to limit access to your services at the network level, that is, by implementing firewall rules.

Important Note

The address ranges of services in Azure change continually as the services grow within any particular region, and can be downloaded from this link: https://www.microsoft.com/en-us/download/details.aspx?id=56519.
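As a sketch, the downloaded file can be filtered with a few lines of Python. The sample below is hand-written in the shape of the published Service Tags file, with the prefix values themselves invented for illustration – verify the field names against a current download:

```python
import json

# A trimmed, hand-written sample in the shape of the published file.
sample = json.loads("""
{
  "values": [
    {
      "name": "Storage.EastUS",
      "properties": { "addressPrefixes": ["20.38.96.0/23", "20.47.1.0/24"] }
    },
    {
      "name": "Sql.EastUS",
      "properties": { "addressPrefixes": ["40.121.158.0/24"] }
    }
  ]
}
""")

def prefixes_for(tag_name, data):
    """Return the address prefixes published for a given service tag."""
    for value in data["values"]:
        if value["name"] == tag_name:
            return value["properties"]["addressPrefixes"]
    return []

print(prefixes_for("Storage.EastUS", sample))
```

A script like this could feed firewall rules, although, as noted earlier, using service tags directly avoids having to track the changing ranges yourself.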

Some services can either be integrated with, or built on top of, a VNET. VMs are the most common example of this, and to build a VM, you must use a VNET. Other services can also be optionally integrated with VNETs in different ways. For example, VMs can communicate with an Azure SQL database using a service endpoint, enabling you to limit access and ensure traffic is kept private and off the public network. We look at service endpoints and other ways to secure internal communications later in this chapter, in the Implementing network security section.

The first subject we will need to look at when dealing with VNETs and connectivity is that of addressing and Domain Name Services (DNS).

Deployments and YAML – Designing Compute Solutions

A pod's resources are defined in a deployment, which is described within a YAML manifest. The manifest defines everything needed to state how many copies or replicas of a pod to run, what resources each pod requires, the container image to use, and any other information necessary for your service.
A typical YAML file may look like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi

In this example, taken from the docs.microsoft.com site, we see a deployment using the nginx container image, requesting a minimum of 250m (millicores) of CPU and 64Mi (mebibytes) of RAM, and a maximum of 500m of CPU and 256Mi of RAM.

Tip

A mebibyte is equal to 1,024 kibibytes (1,048,576 bytes), whereas a millicore is one-thousandth of a CPU core.
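These units can be sanity-checked with a couple of lines of Python:

```python
# 250m (millicores) is a quarter of one CPU core
cores = 250 / 1000
print(cores)  # 0.25

# 64Mi (mebibytes) expressed in bytes: 64 * 1024 * 1024
mebibytes_in_bytes = 64 * 1024 * 1024
print(mebibytes_in_bytes)  # 67108864
```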

Once we have our pods and applications defined within a YAML file, we can then use that file to tell our AKS cluster to deploy and run our application. This can be performed by running the deployment commands against the AKS APIs or via DevOps pipelines.

Kubernetes is a powerful tool for building resilient and dynamic applications that use microservices, and container images are incredibly efficient and portable; however, Kubernetes clusters are complex.

AKS abstracts much of the complexity of using and managing a Kubernetes cluster. Still, your development and support teams need to be fully conversant with the unique capabilities and configuration options available.

Summary

This chapter looked at the different compute options available to us in Azure and looked at the strengths and weaknesses of each. With any solution, the choice of technology is dependent on your requirements and the skills of the teams who are building them.

We then looked at how to design update management processes to ensure any VMs we use as part of our solution are kept up to date with the latest security patches.

Finally, we looked at how we can use containerization in our solutions, and specifically how Azure Kubernetes Service provides a flexible and dynamic approach to running microservices.

In the next chapter, we will look at the different networking options in Azure, including load balancing for resilience and performance.

Exam scenario

The solutions to the exam scenarios can be found at the end of this book.

Mega Corp is planning a new multi-service solution to help the business manage expenses. The application development team has decided to break the solution into different services that communicate with each other.

End users will upload expense claims as a Word document to the system, and these documents must flow through to different approvers.

The HR department also wants to amend some of the workflows themselves as they can change often.

The application will have a web frontend, and the application developers are used to building .NET websites. However, they would like to start moving to a more containerized approach.

Suggest some compute components that would be suited to this solution.

Nodes and node pools – Designing Compute Solutions

An AKS cluster has one or more nodes, which are virtual machines running the Kubernetes node components and container runtime:

  • kubelet is the Kubernetes agent that responds to requests from the cluster master and runs the requested containers.
  • kube-proxy manages virtual networking.
  • The container runtime is the Docker engine that runs your containers.

The following diagram shows these components and their relation to Azure:

Figure 7.12 – AKS nodes

When you define your AKS nodes, you choose the SKU of the VM you want, which in turn determines the number of CPUs, RAM, and type of disk. You can also run GPU-powered VMs, which are great for mathematical and AI-related workloads.

You can also set up the maximum and the minimum number of nodes to run in your cluster, and AKS will automatically add and remove nodes within those limits.

AKS nodes are built with either Ubuntu Linux or Windows Server 2019, and because the cluster is managed, you cannot change this. If you need a different OS or container runtime, you must build your own Kubernetes cluster using the appropriate engine.

When you define your node sizes, you need to be aware that Azure automatically reserves an amount of CPU and RAM to ensure each node performs as expected – the reservations are 60 millicores of CPU and 20% of RAM, up to a cap of 4 GB. So, if your VMs have 7 GB of RAM, the reservation will be 1.4 GB, but for any VM with 20 GB of RAM and above, the reservation will be 4 GB.
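Using the figures quoted above, the memory reservation can be sketched as a simple function (a rough model only; the exact AKS reservation formula is more granular in practice, so check the current documentation):

```python
def memory_reservation_gb(node_ram_gb):
    """Approximate AKS memory reservation using the rule quoted above:
    20% of node RAM, capped at 4 GB."""
    return min(round(0.2 * node_ram_gb, 2), 4.0)

print(memory_reservation_gb(7))   # 1.4 GB reserved on a 7 GB node
print(memory_reservation_gb(20))  # 4.0 - the cap is reached
```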

This means that the actual RAM and CPU amounts available to your nodes will always be slightly less than the size would otherwise indicate.

When you have more than one node of the same configuration, you group them into a node pool, and the first node is created within the default node pool. When you upgrade or scale an AKS cluster, the action will be performed against either the default node pool or a specific node pool of your choosing.

Pods

A node runs your applications within pods. Typically, a pod has a one-to-one mapping to a container, that is, a running instance. However, in advanced scenarios, you can run multiple containers within a single pod.

At the pod level, you define the resources to assign to your particular services, such as the amount of RAM and CPU. When pods are required to run, the Kubernetes scheduler attempts to place the pod on a node with available resources that match what you have defined.

Automating virtual machine management – Designing Compute Solutions

For this example, you will need a Windows VM set up in your subscription:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the search bar, type and select Virtual Machines and select the virtual machine you wish to apply Update Management to.
  3. On the left-hand menu, click Guest + host updates under Operations.
  4. Click the Go to Update Management button.
  5. Complete the following details:
    a) Log Analytics workspace Location: The location of your VM, for example, East US
    b) Log Analytics workspace: Create default workspace
    c) Automation account subscription: Your subscription
    d) Automation account: Create a default account
  6. Click Enable.

The process can take around 15 minutes. Once completed, go back to the VM view and again select Guest + host updates under Operations, followed by Go to Update Management.

You will see a view similar to the following screenshot:

Figure 7.8 – Update Management blade

You can get to the same view but for all the VMs you wish to manage in the portal by searching for Automation Accounts and selecting the automation account that has been created. Then click Update management.

If you want to add more VMs, click the + Add Azure VMs button to see a list of VMs in your subscription and enable the agent on multiple machines simultaneously – as we see in the following screenshot:

Figure 7.9 – Adding more virtual machines for Update Management

The final step is to schedule the installation of patches:

  1. Navigate to the Azure portal by opening https://portal.azure.com.
  2. Type Automation into the search bar and select Automation Accounts.
  3. Select the automation account.
  4. Click Update Management.
  5. Click Schedule deployment and complete the details as follows:
    a) Name: Patch Tuesday
    b) Operating System: Windows
    c) Maintenance Window (minutes): 120
    d) Reboot options: Reboot if required
  6. Under Groups to update, click Click to Configure.
  7. Select your subscription and Select All under Resource Groups.
  8. Click Add, then OK.
  9. Click Schedule Settings.
  10. Set the following details:
    a) Start date: First Tuesday of the month
    b) Recurrence: Recurring
    c) Recur Every: 14 days
  11. Click OK.
  12. Click Create.

Through the Update Management feature, you can control how your virtual machines are patched and when and what updates to include or exclude. You can also set multiple schedules and group servers by resource group, location, or tag.

In the preceding example, we selected all VMs in our subscription, but as you saw, we had the option to choose a machine based on location, subscription, resource group, or tags.

In this way, you can create separate groups for a variety of purposes. For example, we mentioned earlier that a common practice would be to test patches before applying them to production servers. We can accommodate this by grouping non-production servers into a separate subscription, resource group, or simply tagging them. You can then create one patch group for your test machines, followed by another for production machines a week later – after you’ve had time to confirm the patches have not adversely affected workloads.

As part of any solution design that utilizes VMs, provision must be made to ensure they are always running healthily and securely, and Update Management is a critical part of this. As we have seen, Azure makes the task of managing OS updates straightforward to set up.

Next, we will investigate another form of compute that is becoming increasingly popular – containerization and Kubernetes.

What to watch out for – Designing Compute Solutions

When running on a consumption plan, Azure Functions is best suited to short-lived tasks – for tasks that run longer than 10 minutes, you should consider alternatives or running them on an App Service plan.

You should also consider how often they will be executed because you pay per execution on a consumption plan. If it is continuously triggered, your costs could increase beyond that of a standard web app. Again, consider alternative approaches or the use of an App Service plan.

Finally, consumption-based apps cannot integrate with VNETs. Again, if this is required, running them on an App Service plan can provide this functionality.

Logic Apps

Azure Logic Apps is another serverless option – when creating logic apps, you do not need to be concerned with how much RAM or CPU to provision; instead, you pay per execution or trigger.

Important note

Consumption versus fixed price: Many serverless components, including Logic Apps and Functions, can be run in isolated environments – in the case of Logic Apps, an Integration Service Environment (ISE) – whereby you pay for provisioned resources in the same way as a virtual machine.

Logic Apps shares many concepts with Azure Functions; you can define triggers, actions, flow logic, and connectors for communicating with other services. Whereas you define this in code with Functions, Logic Apps provides a drag-and-drop interface that allows you to build workflows quickly.

Logic Apps has hundreds of pre-built connectors that allow you to interface with a wide range of systems – not just in Azure but also externally. By combining these connectors with if-then-else style logic flows and either scheduled or action-based triggers, you can develop complex workflows without writing a single line of code.

The following screenshot shows a typical workflow built purely in the Azure portal:

Figure 7.7 – Logic Apps example

With their extensibility features, you can also create your own custom logic and connectors for integrating with your own services.

Finally, although the solution can be built entirely in the Azure portal, you can also create workflows using traditional development tools such as Visual Studio or Visual Studio Code. This is because solutions are defined as ARM templates – which enables developers to define workflows and store them in code repositories. You can then automate deployments through DevOps pipelines.
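For illustration, the underlying definition is JSON written in the Workflow Definition Language; a minimal sketch with a daily recurrence trigger and a single HTTP action might look like the following (the trigger name, action name, and URI are hypothetical):

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "Recurrence": {
        "type": "Recurrence",
        "recurrence": { "frequency": "Day", "interval": 1 }
      }
    },
    "actions": {
      "Check_status": {
        "type": "Http",
        "inputs": { "method": "GET", "uri": "https://example.com/status" }
      }
    },
    "outputs": {}
  }
}
```

Because the designer simply edits this JSON, the same definition can live in a code repository and be deployed through a pipeline like any other ARM resource.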

What to watch out for

Logic Apps provides a quick and relatively simple mechanism for creating business workflows. When you need to build more complex business logic or create custom connectors, you need to balance the difficulty of doing this versus using an alternative approach such as Azure Functions. Logic Apps still requires a level of developer experience and is not suitable if business users may need to develop and amend the workflows.

Power Automate

Power Automate, previously called Flow, is also a GUI-driven workflow creation tool that allows you to build automated business processes. Like Logic Apps, using Power Automate, you can define triggers and logic flow connected to other services, such as email, storage, or apps, through built-in connectors.

The most significant difference between Power Automate and Logic Apps is that Power Automate workflows can only be built via the drag-and-drop interface – you cannot edit or store the underlying code.

Therefore, the primary use case for Power Automate is for office workers and business analysts to create simple workflows that can use only the built-in connectors.