Azure Functions – Designing Compute Solutions

Azure Functions falls into the Functions as a Service (FaaS), or serverless, category. This means that you can run Azure Functions on a Consumption plan, whereby you only pay while your code is executing. In comparison, Azure App Service runs on an App Service plan in which you define the CPU and RAM.

With Azure Functions, you don’t need to define CPU and RAM, as the Azure platform automatically allocates whatever resources are required to complete the operation. Because of this, functions on the Consumption plan have a default timeout of 5 minutes and a maximum of 10 minutes – in other words, if you have a function that needs to run for longer than 10 minutes, you may need to consider an alternative approach.

Tip

Azure Functions can be run on an App Service plan, just like App Service. This can be useful if you have functions that will run for longer than 10 minutes, if you have spare capacity in an existing App Service plan, or if you require support for VNET integration. Using an App Service plan means you pay for the service in the same way as App Service, that is, you pay for the provisioned CPU and RAM whether you are using it or not.

Functions are event-driven; this means they will execute your code in response to a trigger being activated. The following triggers are available (a minimal example follows the list):

  • HTTPTrigger: The function is executed in response to a service calling an API endpoint over HTTP/HTTPS.
  • TimerTrigger: Executes on a schedule.
  • GitHub webhook: Responds to events that occur in your GitHub repositories.
  • CosmosDBTrigger: Processes Azure Cosmos DB documents when they are added to or updated in a collection.
  • BlobTrigger: Processes Azure Storage blobs when they are added to containers.
  • QueueTrigger: Responds to messages as they arrive in an Azure Storage queue.
  • EventHubTrigger: Responds to events delivered to an Azure Event Hub.
  • ServiceBusQueueTrigger: Connects your code to other Azure services or on-premises services by listening to message queues.
  • ServiceBusTopicTrigger: Connects your code to other Azure services or on-premises services by subscribing to topics.
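
As a minimal sketch of what a triggered function looks like in code, the following Python function uses the v2 decorator programming model; the route name, function name, and response text are illustrative only:

```python
import azure.functions as func

app = func.FunctionApp()

# HTTPTrigger: executed whenever an HTTP request hits /api/hello.
@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```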

Once triggered, an Azure function can then run code and interact with other Azure services to read and write data, including the following (a sketch of an output binding follows the list):

  • Azure Cosmos DB
  • Azure Event Hubs
  • Azure Event Grid
  • Azure Notification Hubs
  • Azure Service Bus (queues and topics)
  • Azure Storage (blob, queues, and tables)
  • On-premises (using Service Bus)
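
To illustrate how a trigger and an output binding combine, here is a hedged sketch, again using the Python v2 model, of an HTTP-triggered function that writes its request body to an Azure Storage queue; the queue name and connection setting name are placeholders:

```python
import azure.functions as func

app = func.FunctionApp()

# HTTPTrigger in, Azure Storage queue out: whatever is assigned to 'msg'
# is written to the 'orders' queue in the storage account referenced by
# the AzureWebJobsStorage application setting.
@app.route(route="orders", auth_level=func.AuthLevel.FUNCTION)
@app.queue_output(arg_name="msg", queue_name="orders", connection="AzureWebJobsStorage")
def create_order(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    msg.set(req.get_body().decode("utf-8"))
    return func.HttpResponse("Order queued", status_code=202)
```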

By combining different triggers and outputs, you can easily create a range of possible functions, as we see in the following diagram:

Figure 7.6 – Combining triggers and outputs with a Functions app

Azure Functions is therefore well suited to event-based microservice applications that are short-running and not continuously active. As with App Service, Functions supports a range of languages, including C#, F#, JavaScript, Python, and PowerShell Core.

Routing – Network Connectivity and Security

By default, all traffic in Azure follows system routes that are created automatically within each VNET. These routes ensure traffic flows correctly between subnets, between peered VNETs, and out to the internet as required.

When more advanced routing is required, you can create your own routes to force traffic along specific paths, sometimes known as service chaining.

An example is where you need to route your Azure VM traffic back on-premises for your internal ranges. In this instance, you could create a route that sends all traffic destined for your internal ranges to the VPN gateway in your hub VNET.

Another example would be when you wish to have all internet traffic traverse a central firewall; in this instance, you would define a route to send all internet traffic to a firewall device you have in a peered VNET.

When creating routes, you can use either user-defined routes or Border Gateway Protocol (BGP) routes.

BGP automatically exchanges routing information between two or more networks. In Azure, it can be used to advertise routes from your on-premises network to Azure when using ExpressRoute or a site-to-site VPN.

Alternatively, you can create your own custom routes; although this is more manual and has a higher administrative overhead, it does provide complete control.

When defining a user-defined route, we set a descriptive name, an address prefix that specifies the address range whose traffic we will redirect, and the next hop. The next hop is where matching traffic will be routed and can be any of the following:

  • Virtual appliance: Such as a firewall or other routing device
  • VNET gateway: Used when directing traffic through a VPN gateway
  • VNET: Sends all traffic to a specific VNET
  • Internet: Sends traffic to Azure internet routers
  • None: Drops all data (that is, blocks all traffic for that range)

For example, if we want to route all traffic through a firewall device with the address of 10.0.0.10, we would create the following custom route:

Figure 8.15 – Example user-defined route
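
As a sketch only (the resource group, region, and address prefix are placeholders), the same route could be created programmatically with the azure-mgmt-network SDK for Python:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Route table with a single user-defined route: send all outbound
# traffic (0.0.0.0/0) to the firewall appliance at 10.0.0.10.
network_client.route_tables.begin_create_or_update(
    "my-rg",
    "firewall-routes",
    {
        "location": "westeurope",
        "routes": [
            {
                "name": "all-traffic-to-firewall",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": "10.0.0.10",
            }
        ],
    },
).result()

# The route table must then be associated with the subnets whose
# traffic should follow these routes.
```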

We can also add additional routes for other rules; for example, as well as routing internet-bound traffic through the firewall, we could add another route to send internally bound traffic to a VPN gateway.

Because we can have a mixture of user-defined routes, BGP routes, and system routes, Azure uses the following order of precedence to decide where to send traffic in the event of a conflict:

  1. User-defined routes
  2. BGP routes
  3. System routes

By using a combination, we can control traffic flow precisely, depending on our requirements.

Another aspect of routing traffic is when we need to use load balancing components to distribute traffic across multiple services, and we will discuss this in the next section.

On-premises resources – Network Connectivity and Security

To connect to an Azure VPN gateway, you will need a VPN device on your corporate network that supports policy-based or route-based VPNs. It also needs a public IPv4 address.

Azure resources

Within Azure, you need to set up the following components:

  • VNET: The address space used by the VNET must not overlap with your corporate ranges.
  • Gateway subnet: The VPN gateway must be installed in a specific subnet, and it must be called GatewaySubnet. It must have a range of at least /27 (32 addresses) – see the sketch after this list.
  • Public IP address: An IP address that can be connected to from the public network (internet).
  • Local network gateway: This defines the on-premises gateway and configuration.
  • VNET gateway: An Azure VPN or ExpressRoute gateway.
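
As a hedged sketch (the resource group, VNET name, and address range are placeholders), the gateway subnet could be created with the azure-mgmt-network SDK as follows:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# The subnet must be named exactly 'GatewaySubnet' and be at least /27.
network_client.subnets.begin_create_or_update(
    "my-rg",
    "hub-vnet",
    "GatewaySubnet",
    {"address_prefix": "10.1.255.0/27"},
).result()
```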

The following diagram shows how this might look:

Figure 8.12 – VPN gateway

As we can see from the preceding diagram, a VPN connection is made to a specific subnet and VNET within Azure. In most cases, you would need to connect multiple VNETs to the same connection, which we can perform by peering the connected VNET to your workload VNETs.

This is often called a hub-spoke model; we can see an example hub-spoke model in the following diagram:

Figure 8.13 – Hub-spoke architecture

Earlier, we stated that connections between VNETs are not transitive; therefore, to set up the hub-spoke architecture, we must use gateway transit – we do this when we create our peering connection between the spoke VNET (which contains our workloads) and the hub VNET (which contains the VNET gateway). In the options when creating a peering request from the spoke to the hub, select the Use the remote virtual network’s gateway option, as we can see in the following example:

Figure 8.14 – Setting the peering option to use gateway transit
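
A hedged sketch of the equivalent configuration with the azure-mgmt-network SDK (resource names and IDs are placeholders): the hub-side peering enables gateway transit, and the spoke-side peering consumes the remote gateway:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

hub_vnet_id = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/hub-vnet"
spoke_vnet_id = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/spoke-vnet"

# Hub-to-spoke peering: allow the spoke to use the hub's VPN gateway.
network_client.virtual_network_peerings.begin_create_or_update(
    "my-rg", "hub-vnet", "hub-to-spoke",
    {
        "remote_virtual_network": {"id": spoke_vnet_id},
        "allow_gateway_transit": True,
        "allow_forwarded_traffic": True,
    },
).result()

# Spoke-to-hub peering: equivalent to ticking 'Use the remote virtual
# network's gateway' in the portal.
network_client.virtual_network_peerings.begin_create_or_update(
    "my-rg", "spoke-vnet", "spoke-to-hub",
    {
        "remote_virtual_network": {"id": hub_vnet_id},
        "use_remote_gateways": True,
        "allow_forwarded_traffic": True,
    },
).result()
```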

Using a VPN is a simple way to connect securely to Azure. However, traffic still traverses the public internet; thus, connectivity and performance cannot be guaranteed. For a more robust and direct connection into Azure, companies can leverage ExpressRoute.

ExpressRoute

ExpressRoute provides a dedicated and completely private connection into Azure, Office 365, and Dynamics 365. Throughput is significantly higher, connections are more reliable, and latency is minimal.

Connectivity is via authorized network providers who ensure connections are highly available; this means you get redundancy built-in.

There are three different models to choose from when ordering an ExpressRoute – CloudExchange co-location, point-to-point Ethernet connection, and any-to-any connection:

  • CloudExchange co-location is for companies that house their existing data center with a cloud exchange provider.
  • Point-to-point connections are dedicated connections between your premises and Azure.
  • Any-to-any is for companies that have existing WAN infrastructure. Microsoft can connect to that existing network to provide connectivity from any of your offices.

A key aspect of ExpressRoute is that your connectivity is via private routes; it does not traverse the public internet – except for Content Delivery Network (CDN) components, which by design must leverage the internet to function.

As you leverage more advanced network options, you may need tighter control over traffic flow between VNETs and your on-premises network.

VNET peering – Network Connectivity and Security

Any two VNETs can be connected using peering, and there are two types of peering available:

  • VNET peering, which connects two VNETs in the same region
  • Global VNET peering, which connects two VNETs in different regions

You can connect two VNETs that are in different subscriptions. However, you must ensure that the address spaces in each VNET do not overlap. So, if VNET 1 and VNET 2 both use the address range of 10.0.0.0/16, the peering will fail.

Peerings between VNETs are also non-transitive – this means that if you have three VNETs – VNET 1, VNET 2, and VNET 3 – and you create a peering between VNET 1 and VNET 2 and VNET 2 and VNET 3, devices in VNET 1 will not be able to access a resource in VNET 3 – in other words, you cannot traverse the two peers. Instead, you would have to explicitly connect VNET 1 to VNET 3 as well, as we can see in the following diagram:

Figure 8.11 – Peerings are not transitive
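
As a sketch (all names and IDs are placeholders), a peering between two VNETs can be created with the azure-mgmt-network SDK; note that a peering must be created from each side before the link becomes connected:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

def peer(resource_group, local_vnet, peering_name, remote_vnet_id):
    # A peering only becomes 'Connected' once it exists on both sides.
    return network_client.virtual_network_peerings.begin_create_or_update(
        resource_group, local_vnet, peering_name,
        {"remote_virtual_network": {"id": remote_vnet_id}},
    ).result()

vnet1_id = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/vnet1"
vnet2_id = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/vnet2"

peer("my-rg", "vnet1", "vnet1-to-vnet2", vnet2_id)
peer("my-rg", "vnet2", "vnet2-to-vnet1", vnet1_id)
```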

Peerings between VNETs are not the only type of network you may need to connect; the other common scenario is connecting on-premises networks into Azure. For this, we can use a VPN gateway.

VPN gateways

When you need to connect an on-premises network to Azure, you can use a VPN gateway. A VPN gateway uses a gateway device on your corporate network and a gateway device in Azure. The two are then connected with a VPN that uses the public network to create an encrypted route between your two gateways. In other words, you use the internet but your traffic is encrypted and, therefore, secure.

You can use two types of VPN – a Point-to-Site (P2S) VPN, used by individual clients to connect directly to a remote gateway, and a Site-to-Site (S2S) VPN, used to connect networks.

When creating a VPN connection, you can choose between a policy-based VPN or a route-based VPN.

Policy-based VPNs

Policy-based VPNs are generally used for connections using legacy VPN gateways, as they are not as flexible as route-based. Policy-based VPNs use IKEv1 protocols and static routing to define the source and destination network ranges in the policy, rather than in a routing table.

Route-based VPNs

Route-based VPNs are the preferred choice and should be used unless legacy requirements prevent it. Route-based VPNs use IKEv2 and support dynamic routing protocols whereby routing tables direct traffic based on discovery.

Important Note

Internet Key Exchange (IKE) v1 and v2 are VPN encryption protocols that ensure traffic is encrypted between two points by authenticating both the client and the server and then agreeing on an actual encryption method. IKEv2 is the successor to IKEv1. It is faster and provides greater functionality.

When creating a VPN gateway, different sizes, or SKUs, are available, and the choice depends on your requirements; the SKUs differ in areas such as throughput, the number of supported tunnels and connections, and feature support.

The Basic SKU is only recommended for dev/test use and not for production. Basic also does not support IKEv2 or RADIUS authentication, which may affect the clients using the VPN; for example, Mac computers do not support IKEv1 and therefore cannot use a Basic SKU gateway for a P2S connection.
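
As a hedged sketch of creating a route-based gateway on a production SKU with the azure-mgmt-network SDK (all names, IDs, and the region are placeholders, and VpnGw1 is an illustrative SKU choice):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Route-based VPN gateway on the VpnGw1 SKU. The gateway must be placed
# in the VNET's 'GatewaySubnet' and needs a public IP address.
# Provisioning a gateway typically takes 30 to 45 minutes.
network_client.virtual_network_gateways.begin_create_or_update(
    "my-rg",
    "hub-vpn-gateway",
    {
        "location": "westeurope",
        "gateway_type": "Vpn",
        "vpn_type": "RouteBased",
        "sku": {"name": "VpnGw1", "tier": "VpnGw1"},
        "ip_configurations": [
            {
                "name": "gw-ipconfig",
                "private_ip_allocation_method": "Dynamic",
                "subnet": {"id": "<gateway-subnet-resource-id>"},
                "public_ip_address": {"id": "<public-ip-resource-id>"},
            }
        ],
    },
).result()
```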

When creating a VPN connection, you need several services and components set up.

Private endpoint connections – Network Connectivity and Security

We have said that service endpoints assign an internal IP address to a service, which is then used to direct the flow of traffic to it. However, that IP address is hidden and therefore cannot be referenced by you.

There are times when you need to access a service such as SQL or a storage account via a private IP – either for direct connectivity from an on-premises network or when you have strict firewall policies between your users and your solution.

For these scenarios, private endpoint connections can be used to assign private IP addresses to certain Azure services. Private endpoints are very similar to service endpoints, except that you have visibility of the underlying IP address, and so they can be used across VPN and ExpressRoute connections.

However, private endpoints rely on DNS to function correctly. As most services use host headers (that is, an FQDN) to identify your individual backend service, connecting via the IP address itself does not work. Instead, you must set up a DNS record that resolves your service's FQDN to the internal IP.

For example, if you create a private endpoint for your storage account called mystorage that uses an IP address of 10.0.0.10, to access the service securely, you must create a DNS record so that mystorage.blob.core.windows.net resolves to 10.0.0.10.

This can be performed either by creating DNS records in your own DNS service or by forwarding requests to an Azure private DNS zone and having the internal Azure DNS service resolve them for you.
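
A minimal sketch of creating such a private endpoint with the azure-mgmt-network SDK, assuming placeholder resource names and IDs:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

storage_account_id = (
    "/subscriptions/<subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.Storage/storageAccounts/mystorage"
)

# Private endpoint that projects the blob service of 'mystorage' into
# the given subnet as a private IP (for example, 10.0.0.10).
network_client.private_endpoints.begin_create_or_update(
    "my-rg",
    "mystorage-blob-pe",
    {
        "location": "westeurope",
        "subnet": {"id": "<subnet-resource-id>"},
        "private_link_service_connections": [
            {
                "name": "mystorage-blob",
                "private_link_service_id": storage_account_id,
                "group_ids": ["blob"],
            }
        ],
    },
).result()

# DNS must still resolve mystorage.blob.core.windows.net to the private
# IP, either via your own DNS or a privatelink.blob.core.windows.net
# Azure private DNS zone linked to the VNET.
```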

Azure private endpoints support more services than service endpoints and are, therefore, the only option in some circumstances. In addition to the services supported by service endpoints, private endpoints also support the following:

  • Azure Automation
  • Azure IoT Hub
  • Azure Kubernetes Service – Kubernetes API
  • Azure Search
  • Azure App Configuration
  • Azure Backup
  • Azure Relay
  • Azure Event Grid
  • Azure Machine Learning
  • SignalR
  • Azure Monitor
  • Azure File Sync

Using a combination of NSGs, ASGs, Azure Firewall, service endpoints, and private endpoints, you have the tools to secure your workloads internally and externally. Next, we will examine how we can extend the actual VNETs by exploring the different options for connecting into them or connecting different VNETs.

Connectivity

A simple, standalone solution may only require a single VNET, and especially if your service is an externally facing application for clients, you may not need to create anything more complicated.

However, for enterprise applications that contain many different services, or for hybrid scenarios where you need to connect securely to Azure from an on-premises network, you must consider the other options for providing connectivity.

We will start by looking at connecting two VNETs.

Previously, we separated services into different subnets. However, each of those subnets was in the same VNET. Because of this, connectivity between the devices was automatic – other than defining NSG rules, connectivity just happened.

More complex solutions may be built across multiple VNETs, and these VNETs may or may not be in the same region. By default, communication between VNETs is not enabled, so you must set this up if required. The simplest way to achieve this connectivity is with VNET peering.

Service endpoints – Network Connectivity and Security

Many services are exposed via a public address or URL. For example, Blob Storage is accessed via <accountname>.blob.core.windows.net. Even if your application is running on a VM connected to a VNET, communication to the default endpoint will be the public address, and full access to all IPs, internal and external, is allowed.

For public-facing systems, this may be desirable; however, if you need the backend service to be protected from the outside and only accessible internally, you can use a service endpoint.

Service endpoints provide direct and secure access from one Azure service to another over the Azure backbone. Internally, the service is given a private IP address, which is used instead of the default public IP address. Traffic from the source subnet is then allowed, and external traffic is blocked, as we see in the following example:

Figure 8.8 – Protecting access with service endpoints

Although using service endpoints enables a private IP address on the service, this address is not exposed to or manageable by you. One effect of this is that although Azure-hosted services can connect to the service, on-premises systems cannot access it over a VPN or ExpressRoute. For these scenarios, you can use an alternative solution called a private endpoint, which we will cover in the next sub-section, or an ExpressRoute with Microsoft peering and a NAT IP address.

Important Note

When you set up an ExpressRoute into Azure, you have the option of using Microsoft peering or private peering. Microsoft peering sends connectivity to Microsoft online services (such as Office 365 and Azure public services) over the ExpressRoute, whereas private peering sends only traffic destined for your internal IP ranges over the ExpressRoute, with public services accessed via their public endpoints. The most common form of connectivity is private peering; Microsoft peering is only recommended for specific scenarios. See https://docs.microsoft.com/en-us/microsoft-365/enterprise/azure-expressroute?view=o365-worldwide for more details.

To use service endpoints, the service endpoint for the relevant service must be enabled on the subnet, and the service you wish to lock down must have its public network access turned off and the source subnet added as an allowed source.

Important Note

Service endpoints ignore NSGs – therefore, any rules you have in place and attached to the secure subnet are effectively ignored. This only affects the point-to-point connection between the subnet and the service endpoint. All other NSG rules still hold.

At the time of writing, the following Azure services support service endpoints:

  • Azure Storage
  • Azure Key Vault
  • Azure SQL Database
  • Azure Synapse Analytics
  • Azure PostgreSQL Server
  • Azure MySQL Server
  • Azure MariaDB
  • Azure Cosmos DB
  • Azure Service Bus
  • Azure Event Hubs
  • Azure App Service
  • Azure Cognitive Services
  • Azure Container Registry

To enable service endpoints on a subnet, in the Azure portal, go to the properties of the VNET you wish to use, select the Subnets blade on the left-hand menu, then select your subnet. The subnet configuration window appears with the option to choose one or more services, as we can see in the following screenshot. Once you have made changes, click Save:

Figure 8.9 – Enabling service endpoints on a subnet
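
The same change can be made programmatically; here is a minimal sketch with the azure-mgmt-network SDK (the resource group, VNET, subnet, and address prefix are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Enable the Microsoft.Storage service endpoint on the app subnet.
# The subnet's existing address prefix must be supplied when updating it.
network_client.subnets.begin_create_or_update(
    "my-rg",
    "app-vnet",
    "app-subnet",
    {
        "address_prefix": "10.0.1.0/24",
        "service_endpoints": [{"service": "Microsoft.Storage"}],
    },
).result()
```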

Once enabled, you can then restrict access to your backend service. In the following example, we will limit access to a storage account from a subnet (a programmatic sketch follows the screenshot):

  1. Go to the Azure portal at https://portal.azure.com.
  2. In the search bar, search for and select Storage accounts.
  3. Select the storage account you wish to restrict access to.
  4. On the left-hand menu, click the Networking option.
  5. Change the Allow access from option from All networks to Selected networks.
  6. Click + Add existing virtual network.
  7. Select the VNET and subnet you want to restrict access to.
  8. Click Save.

The following screenshot shows an example of a secure storage account:

Figure 8.10 – Restricting VNET access

Once set up, any access except from the defined VNET will be denied, and traffic from services on the VNET to the storage account will now flow directly over the Azure backbone.
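
For reference, a hedged sketch of applying the same restriction with the azure-mgmt-storage SDK (the account name, resource group, and subnet ID are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

credential = DefaultAzureCredential()
storage_client = StorageManagementClient(credential, "<subscription-id>")

subnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.Network/virtualNetworks/app-vnet/subnets/app-subnet"
)

# Deny all traffic by default and allow only the service-endpoint-enabled
# subnet (equivalent to 'Selected networks' in the portal).
storage_client.storage_accounts.update(
    "my-rg",
    "mystorage",
    {
        "network_rule_set": {
            "default_action": "Deny",
            "virtual_network_rules": [
                {"virtual_network_resource_id": subnet_id}
            ],
        }
    },
)
```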

You may have noticed another option in the Networking tab – Private endpoint connections.