
On-premises resources – Network Connectivity and Security

To connect to an Azure VPN gateway, you need a VPN device on your corporate network that supports policy-based or route-based VPN gateways. It must also have a public IPv4 address.

Azure resources

Within Azure, you need to set up the following components:

  • VNET: The address space used by the VNET must not overlap with your corporate ranges.
  • Gateway subnet: The VPN gateway must be installed in a specific subnet, and it must be called GatewaySubnet. It must have a range of at least /27 (32 addresses).
  • Public IP address: An IP address that can be connected to from the public network (internet).
  • Local network gateway: This defines the on-premises gateway and configuration.
  • VNET gateway: An Azure VPN or ExpressRoute gateway.

The following diagram shows how this might look:

Figure 8.12 – VPN gateway

As we can see from the preceding diagram, a VPN connection is made to a specific subnet within a VNET in Azure. In many cases, you will want multiple VNETs to share the same connection, which you can achieve by peering the connected VNET with your workload VNETs.

This is often called a hub-spoke model; we can see an example hub-spoke model in the following diagram:

Figure 8.13 – Hub-spoke architecture

Earlier, we stated that connections between VNETs are not transitive; therefore, to set up the hub-spoke architecture, we must use gateway transit. We enable this when we create the peering connection between the spoke VNET (which contains our workloads) and the hub VNET (which contains the VNET gateway). In the options when creating a peering request from the spoke to the hub, select the Use the remote virtual network’s gateway option, as we can see in the following example:

Figure 8.14 – Setting the peering option to use gateway transit

Using a VPN is a simple way to connect securely to Azure. However, you are still using the public network; thus, connectivity and performance cannot be guaranteed. For a more robust and direct connection into Azure, companies can leverage ExpressRoute.

ExpressRoute

ExpressRoute provides a dedicated, entirely private connection into Azure, Office 365, and Dynamics 365. Connections are more reliable, offer significantly higher throughput, and have minimal latency.

Connectivity is via authorized network providers who ensure connections are highly available, meaning redundancy is built in.

There are three different models to choose from when ordering an ExpressRoute – CloudExchange co-location, point-to-point Ethernet connection, and any-to-any connection:

  • CloudExchange co-location is for companies whose existing data center is housed in a co-location facility with a cloud exchange.
  • Point-to-point connections are dedicated connections between your premises and Azure.
  • Any-to-any is for companies that have existing WAN infrastructure. Microsoft can connect to that existing network to provide connectivity from any of your offices.

A key aspect of ExpressRoute is that your connectivity is via private routes; it does not traverse the public internet – except for Content Delivery Network (CDN) components, which by design must leverage the internet to function.

As you leverage more advanced network options, you will need tighter control over traffic flow between VNETs and your on-premises network.

VNET peering – Network Connectivity and Security

Any two VNETs can be connected using peering, and there are two types of peering available:

  • VNET peering, which connects two VNETs in the same region
  • Global VNET peering, which connects two VNETs in different regions

You can connect two VNETs that are in different subscriptions. However, you must ensure that the address spaces in each VNET do not overlap. So, if VNET 1 and VNET 2 both use the address range of 10.0.0.0/16, the peering will fail.
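As a quick sketch of the overlap rule, Python's standard `ipaddress` module can check whether two candidate address spaces clash before you attempt the peering (the ranges here are hypothetical examples, not Azure defaults):

```python
import ipaddress

# Hypothetical address spaces for VNETs we want to peer
vnet1 = ipaddress.ip_network("10.0.0.0/16")
vnet2 = ipaddress.ip_network("10.0.0.0/16")  # same range - peering would fail
vnet3 = ipaddress.ip_network("10.1.0.0/16")  # non-overlapping alternative

print(vnet1.overlaps(vnet2))  # True - cannot peer
print(vnet1.overlaps(vnet3))  # False - safe to peer
```

Running a check like this against your planned address plan is a cheap way to catch overlaps before deployment.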

Peerings between VNETs are also non-transitive – this means that if you have three VNETs – VNET 1, VNET 2, and VNET 3 – and you create a peering between VNET 1 and VNET 2 and VNET 2 and VNET 3, devices in VNET 1 will not be able to access a resource in VNET 3 – in other words, you cannot traverse the two peers. Instead, you would have to explicitly connect VNET 1 to VNET 3 as well, as we can see in the following diagram:

Figure 8.11 – Peerings are not transitive
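The non-transitive behavior can be modeled in a few lines: reachability exists only where a direct peering exists, never through an intermediate VNET. This is an illustrative sketch, not an Azure API:

```python
# Model peerings as unordered pairs; reachability is ONLY a direct peering
peerings = {frozenset({"VNET1", "VNET2"}), frozenset({"VNET2", "VNET3"})}

def can_reach(a, b):
    # Non-transitive: traffic cannot hop through an intermediate VNET
    return frozenset({a, b}) in peerings

print(can_reach("VNET1", "VNET2"))  # True
print(can_reach("VNET1", "VNET3"))  # False - needs its own peering
```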

Peerings between VNETs are not the only type of network you may need to connect; the other common scenario is connecting on-premises networks into Azure. For this, we can use a VPN gateway.

VPN gateways

When you need to connect an on-premises network to Azure, you can use a VPN gateway. A VPN gateway uses a gateway device on your corporate network and a gateway device in Azure. The two are then connected with a VPN that uses the public network to create an encrypted route between your two gateways. In other words, you use the internet but your traffic is encrypted and, therefore, secure.

You can use two types of VPN – a Point to Site (P2S) VPN, used by individual clients to connect directly to a remote gateway, and a Site to Site (S2S) VPN, used to connect networks.

When creating a VPN connection, you can choose between a policy-based VPN or a route-based VPN.

Policy-based VPNs

Policy-based VPNs are generally used for connections using legacy VPN gateways, as they are not as flexible as route-based. Policy-based VPNs use IKEv1 protocols and static routing to define the source and destination network ranges in the policy, rather than in a routing table.

Route-based VPNs

Route-based VPNs are the preferred choice and should be used unless legacy requirements prevent it. Route-based VPNs use IKEv2 and support dynamic routing protocols whereby routing tables direct traffic based on discovery.

Important Note

Internet Key Exchange (IKE) v1 and v2 are VPN encryption protocols that ensure traffic is encrypted between two points by authenticating both the client and the server and then agreeing on an actual encryption method. IKEv2 is the successor to IKEv1. It is faster and provides greater functionality.

When creating a VPN, you have different sizes available, and the choice of size, or SKU, is dependent on your requirements. The following table shows the current differences:

The basic VPN SKU is only recommended for dev/test use, not for production. Also, basic does not support IKEv2 or RADIUS authentication, which may impact you depending on the clients using the VPN. For example, Mac computers do not support IKEv1 and so cannot use a basic VPN for a P2S connection.

When creating a VPN connection, you need several services and components set up.

Private endpoint connections – Network Connectivity and Security

We have said that service endpoints assign an internal IP address to a service, which is then used to direct the flow of traffic to it. However, the actual IP is hidden and therefore cannot be referenced by you.

There are times when you need to access a service such as SQL or a storage account via a private IP – either for direct connectivity from an on-premises network or when you have strict firewall policies between your users and your solution.

For these scenarios, private endpoint connections can be used to assign private IP addresses to certain Azure services. Private endpoints are very similar to service endpoints, except that you have visibility of the underlying IP address, so they can be used across VPNs and ExpressRoute.

However, private endpoints rely on DNS to function correctly. As most services use host headers (that is, an FQDN) to determine your individual backend service, connecting via the IP itself does not work. Instead, you must set up a DNS record that resolves your service's FQDN to the internal IP.

For example, if you create a private endpoint for your storage account called mystorage that uses an IP address of 10.0.0.10, to access the service securely, you must create a DNS record so that mystorage.blob.core.windows.net resolves to 10.0.0.10.

This can be performed by either creating DNS records in your DNS service or forwarding the request to an Azure private zone and having the internal Azure DNS service resolve it for you.
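Conceptually, the private zone simply maps the service FQDN to the endpoint's private IP; anything it does not hold falls through to public resolution. The following is a toy illustration of that lookup behavior (the names and the dictionary-based "zone" are hypothetical, not a real DNS implementation):

```python
# Hypothetical private DNS zone records for the storage private endpoint
private_zone = {"mystorage.blob.core.windows.net": "10.0.0.10"}

def resolve(fqdn):
    # Return the private IP if the zone holds a record; None means the
    # query would fall through to public DNS resolution instead
    return private_zone.get(fqdn)

print(resolve("mystorage.blob.core.windows.net"))    # 10.0.0.10
print(resolve("otherstorage.blob.core.windows.net")) # None
```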

Azure private endpoints support more services than service endpoints and are, therefore, the only option in some circumstances. In addition to the services supported by service endpoints, private endpoints also support the following:

  • Azure Automation
  • Azure IoT Hub
  • Azure Kubernetes Service – Kubernetes API
  • Azure Search
  • Azure App Configuration
  • Azure Backup
  • Azure Relay
  • Azure Event Grid
  • Azure Machine Learning
  • SignalR
  • Azure Monitor
  • Azure File Sync

Using a combination of NSGs, ASGs, Azure Firewall, service endpoints, and private endpoints, you have the tools to secure your workloads internally and externally. Next, we will examine how we can extend the actual VNETs by exploring the different options for connecting into them or connecting different VNETs.

Connectivity

A simple, standalone solution may only require a single VNET – especially if your service is an externally facing application for clients, you may not need anything more complicated.

However, for enterprise applications that contain many different services, or for hybrid scenarios where you need to connect securely to Azure from an on-premises network, you must consider the other options for providing connectivity.

We will start by looking at connecting two VNETs.

Previously, we separated services within different subnets. However, each of those subnets was in the same VNET. Because of this, connectivity between the devices was automatic – other than defining NSG rules, connectivity just happened.

More complex solutions may be built across multiple VNETs, and these VNETs may or may not be in the same region. By default, communication between VNETs is not enabled; therefore, you must set this up if required. The simplest way to achieve this connectivity is with VNET peering.

Service endpoints – Network Connectivity and Security

Many services are exposed via a public address or URL. For example, Blob Storage is accessed via <accountname>.blob.core.windows.net. Even if your application is running on a VM connected to a VNET, communication to the default endpoint will be the public address, and full access to all IPs, internal and external, is allowed.

For public-facing systems, this may be desirable; however, if you need the backend service to be protected from the outside and only accessible internally, you can use a service endpoint.

Service endpoints provide direct and secure access from one Azure service to another over the Azure backbone. Internally, the service is given a private IP address, which is used instead of the default public IP address. Traffic from the source is then allowed, and external traffic becomes blocked, as we see in the following example:

Figure 8.8 – Protecting access with service endpoints

Although using service endpoints enables a private IP address on the service, this address is not exposed to or manageable by you. One effect of this is that although Azure-hosted services can connect to the service, on-premises systems cannot access it over a VPN or ExpressRoute. For these scenarios, you can either use an alternative solution called a private endpoint, which we will cover in the next sub-section, or use ExpressRoute with Microsoft peering and a NAT IP address.

Important Note

When you set up an ExpressRoute into Azure, you have the option of using Microsoft peering or private peering. Microsoft peering sends all connectivity to the Office 365 platform and Azure public services over the ExpressRoute, whereas private peering sends only traffic destined for internal IP ranges over the ExpressRoute; public services are accessed via public endpoints. The most common form of connectivity is private peering; Microsoft peering is only recommended for specific scenarios. See https://docs.microsoft.com/en-us/microsoft-365/enterprise/azure-expressroute?view=o365-worldwide for more details.

To use service endpoints, the service itself must be enabled on the subnet, and the service you wish to lock down must have the public network option turned off and the source subnet added as an allowable source.

Important Note

Service endpoints ignore NSGs – therefore, any rules you have in place and attached to the secure subnet are effectively ignored. This only affects the point-to-point connection between the subnet and the service endpoint. All other NSG rules still hold.

At the time of writing, the following Azure services support service endpoints:

  • Azure Storage
  • Azure Key Vault
  • Azure SQL Database
  • Azure Synapse Analytics
  • Azure PostgreSQL Server
  • Azure MySQL Server
  • Azure MariaDB
  • Azure Cosmos DB
  • Azure Service Bus
  • Azure Event Hubs
  • Azure App Service
  • Azure Cognitive Services
  • Azure Container Registry

To enable service endpoints on a subnet, in the Azure portal, go to the properties of the VNET you wish to use, select the Subnets blade on the left-hand menu, then select your subnet. The subnet configuration window appears with the option to choose one or more services, as we can see in the following screenshot. Once you have made changes, click Save:

Figure 8.9 – Enabling service endpoints on a subnet

Once enabled, you can then restrict access to your backend service. In the following example, we will limit access to a storage account from a subnet:

  1. Go to the Azure portal at https://portal.azure.com.
  2. In the search bar, search for and select Storage accounts.
  3. Select the storage account you wish to restrict access to.
  4. On the left-hand menu, click the Networking option.
  5. Change the Allow access from option from All networks to Selected networks.
  6. Click + Add existing virtual network.
  7. Select the VNET and subnet you want to restrict access to.
  8. Click Save.

The following screenshot shows an example of a secure storage account:

Figure 8.10 – Restricting VNET access

Once set up, any access except from the defined VNET will be denied, and any traffic from services on the VNET to the storage account will now travel directly over the Azure backbone.

You may have noticed another option in the Networking tab – Private endpoint connections.

Azure DNS – Network Connectivity and Security

Once we have our resources built in Azure, we need to resolve names to IP addresses to communicate with them. By default, services in Azure use Azure-managed DNS servers. Azure-managed DNS provides name resolution for your Azure resources and doesn’t require any specific configuration from you.

Azure-managed DNS servers

Azure-managed DNS is highly available and fully resilient. VMs built in Azure can use Azure-managed DNS to communicate with other Azure services or other VMs in your VNETs without the need for a Fully Qualified Domain Name (FQDN).

However, this name resolution only works for Azure services; if you wish to communicate with on-premises servers or need more control over DNS, you must build and integrate with your DNS servers.

When configuring a VNET in Azure, you can override the default DNS servers. In this way, you can define your own DNS servers to ensure queries to your on-premises resources are resolved correctly. You can also append the Azure-managed DNS server; if your DNS solution cannot resolve a query, the request will then fall back to the Azure DNS service. The address for the Azure DNS service is 168.63.129.16.

To change the default DNS servers in Azure, perform the following steps:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the search bar, search for and select Virtual Networks.
  3. Select your VNET.
  4. On the left-hand menu, select DNS servers.
  5. Change the default option from Default (Azure-provided) to Custom.
  6. Enter your DNS servers, optionally followed by the Azure internal DNS server address.

The following screenshot shows an example of how this might look:

Figure 8.3 – Setting up custom DNS servers

These settings must be configured on each VNET for which you wish to use custom DNS.

Tip

Be careful how many DNS servers you set. Each DNS server will be queried in turn, and if you put too many, the request will time out before it reaches the final server. This can cause issues if you need to fall back to the Azure DNS service for Azure-hosted services.

You can also leverage Azure private DNS, using private zones, for your internal DNS needs, using your custom domain names.

Azure private DNS zones

Using Azure private DNS zones allows you to use your own domains with your Azure resources without the need to set up and maintain your own DNS servers for resolution.

This option can provide much tighter integration with your Azure-hosted resources as it allows automatic record updates and DNS resolution between VNETs. As a managed solution, it is also resilient without maintaining separate VMs to run the DNS server.

Azure also provides you with the ability to manage your external domain records. Using Azure DNS zones, you can delegate the name resolution for your custom domain to Azure’s DNS servers.

Private zones are also used with PrivateLink IP services, which we will examine in the next section, Implementing network security.

Public IP addresses – Network Connectivity and Security

A public IP address is a discrete component that can be created and attached to many services, such as VMs. The public IP component is dedicated to a resource until you un-assign it – in other words, you cannot use the same public IP across multiple resources.

Public IP addresses can be either static or dynamic. With a static IP, the assigned address stays the same from the moment the resource is created until it is deleted. A dynamic address can change in specific scenarios. For example, if you create a public IP address for a VM as a dynamic address, when you stop the VM, the address is released, and a different address is assigned when you start the VM up again. With static addresses, the IP is assigned once you attach it to the VM, and it stays until you manually remove it.

Static addresses are useful if access to your service is controlled by a firewall device that can only be configured with IP addresses, or if you rely on DNS resolution, since a changing IP would require the DNS record to be updated. You also need to use a static address if you use TLS/SSL certificates linked to IP addresses.

Private IP addresses

Private IP addresses can be assigned to various Azure components, such as VMs, network load balancers, or application gateways. The devices are connected to a VNET, and the IP range you wish to use for your resources is defined at the VNET level.

When creating VNETs, you assign an IP range; the default is 10.0.0.0/16 – which provides 65,536 possible IP addresses. VNETs can contain multiple ranges if you wish; however, you need to be careful that those ranges do not conflict with public address ranges.

When assigning IP ranges, you denote the range using CIDR notation – a forward slash (/) followed by a number that defines how many leading bits of the address identify the network, which in turn determines the number of addresses within that range. The following are just some example ranges:

Tip

CIDR notation is a more compact way to state an IP address and its range, based on a subnet mask. The number after the slash (/) is the count of leading 1 bits in the network mask. The complete range of addresses can be found here: https://bretthargreaves.com/ip-cheatsheet/.

For more in-depth details of CIDR, see https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing.
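To make the notation concrete, Python's `ipaddress` module can show how the prefix length translates into an address count (a /n prefix leaves 32 - n host bits, giving 2 ** (32 - n) addresses):

```python
import ipaddress

# A /n prefix leaves 32 - n host bits, so the range holds 2 ** (32 - n) addresses
for prefix in ("10.0.0.0/8", "10.0.0.0/16", "10.0.0.0/24", "10.0.0.0/29"):
    net = ipaddress.ip_network(prefix)
    print(prefix, "->", net.num_addresses, "addresses")
```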

Subnets are then created within the VNET, and each subnet must also be assigned an IP range that is within the range defined at the VNET level, as we can see in the following example diagram:

Figure 8.2 – Subnets within VNETs

For every subnet you create, Azure reserves five IPs for internal use – for smaller subnets, this has a significant impact on the number of available addresses. The reservations within a given range are as follows:

  • x.x.x.0: The network address
  • x.x.x.1: Reserved by Azure for the default gateway
  • x.x.x.2 and x.x.x.3: Reserved by Azure to map the Azure DNS IPs to the VNET space
  • The last address in the range: The network broadcast address

With these reservations in mind, the minimum size of a subnet in Azure is a /29 network with eight IPs, of which only three are usable. The largest allowable range is /8, giving 16,777,216 IPs with 16,777,211 usable.
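The arithmetic behind those figures is simply the range size minus the five platform reservations, which we can sketch as:

```python
import ipaddress

AZURE_RESERVED = 5  # network address, gateway, 2 x Azure DNS, broadcast

def usable_ips(cidr):
    # Usable addresses in an Azure subnet after the platform reservations
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED

print(usable_ips("10.0.0.0/29"))  # 3
print(usable_ips("10.0.0.0/8"))   # 16777211
```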

Private ranges in Azure can be used purely for services within your Azure subscriptions. If you don’t connect the VNETs or require communications between them, you can have more than one VNET with the same ranges.

If you plan to allow services within one VNET to communicate with another VNET, you must consider more carefully the ranges you assign to ensure they do not overlap. This is especially crucial if you use VNETs to extend your private corporate network into Azure, as creating ranges that overlap can cause routing and addressing problems.

As with public IPs, private IPs can also be static or dynamic. With dynamic addressing, Azure assigns the next available IP within the given range. For example, if you are using a 10.0.0.0 network, and 10.0.0.3–10.0.0.20 are already used, your new resource will be assigned 10.0.0.21.

Understanding IP addressing and DNS in Azure – Network Connectivity and Security

When building services in Azure, you choose between internal IP addresses, external IP addresses, or both. Internal IP addresses can only communicate internally, over VNETs. Many services can also use public IP addresses, which allow you to communicate with the service from the internet.

Before we delve into public and internal IP addresses, we need to understand the basics of IP addressing in general, and especially the use of subnets and subnet masks.

Understanding subnets and subnet masks

When devices are connected to a TCP/IP-based network, they are provided with an IP address in the notation xxx.xxx.xxx.xxx. Generally, all devices that are on the same local network can communicate with each other without any additional settings.

When devices on different networks need to communicate, they must do so via a router or gateway. Devices use a subnet mask to differentiate between addresses on the local network and those on a remote network.

The network mask breaks down an IP address into a device or host address component and a network component. It does this by laying a binary mask over the IP address with the host address to the right.

255 in binary is 11111111 and 0 in binary is 00000000. The mask says how many of those bits are the network, with 1 denoting a network address and 0 denoting a host address.

Thus, 255.0.0.0 becomes 11111111.00000000.00000000.00000000; therefore, in the address 10.0.0.1, 10 is the network and 0.0.1 is the host address. Similarly, with a mask of 255.255.0.0 and an address of 10.0.0.1, 10.0 becomes the network and 0.1 the host. The following diagram shows this concept more clearly:

Figure 8.1 – Example subnet mask
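The same network/host split can be reproduced with Python's `ipaddress` module, which accepts the dotted-mask notation directly:

```python
import ipaddress

# An address plus its mask; ipaddress accepts dotted-mask notation
iface = ipaddress.ip_interface("10.0.0.1/255.255.0.0")
net = iface.network

print(net)                  # 10.0.0.0/16 - the network component
print(net.network_address)  # 10.0.0.0
# The host component is the bits NOT covered by the mask
host_bits = int(iface.ip) & ~int(net.netmask) & 0xFFFFFFFF
print(host_bits)            # 1 (i.e. host 0.1 on network 10.0)
```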

Splitting an address space into multiple networks is known as subnetting, and subnets can be broken down into even smaller subnets until too few host bits remain.

When configuring IP settings for devices, you often supply an IP address, a subnet mask, and the address of the router on the local network that will connect you to other networks.

Sometimes, when denoting an IP address range, the subnet mask and range are written in a shorthand form known as CIDR notation. We will cover CIDR notation examples in the Private IP addresses sub-section.

This is a relatively simplified overview of network addressing and subnetting, and although the AZ-304 exam will not explicitly ask you questions on this, it does help to better understand the next set of topics.

Understanding Azure networking options – Network Connectivity and Security

In the previous chapter, we examined the different options when building computer services, from the different types of Virtual Machines (VMs) to web apps and containerization.

All solution components need to be able to communicate effectively and safely; therefore, in this chapter, we will discuss what options we have to control traffic flow using route tables and load balancing components, securing traffic with different firewalling options, and managing IP addressing and resolution.

With this in mind, we will cover the following topics:

  • Understanding Azure networking options
  • Understanding IP addressing and DNS in Azure
  • Implementing network security
  • Connectivity
  • Load balancing and advanced traffic routing

Technical requirements

This chapter will use the Azure portal (https://portal.azure.com) and you need an Azure subscription for the examples.

Understanding Azure networking options

Services in Azure need to communicate, and this communication is performed over a virtual network, or VNET.

There are essentially two types of networking in Azure – private VNETs and the Azure backbone. The Azure backbone is a fully managed service. The underlying details are never exposed to you – although the ranges used by many services are available, grouped by region, for download in a JSON file. The Azure backbone is generally used when non-VNET-connected services communicate with each other; for example, when storage accounts replicate data or when Azure functions communicate with SQL and Cosmos DB, Azure handles all aspects of these communications. This can cause issues when you need more control, especially if you want to limit access to your services at the network level, that is, by implementing firewall rules.

Important Note

The address ranges of services in Azure change continually as the services grow within any particular region, and can be downloaded from this link: https://www.microsoft.com/en-us/download/details.aspx?id=56519.

Some services can either be integrated with, or built on top of, a VNET. VMs are the most common example of this, and to build a VM, you must use a VNET. Other services can also be optionally integrated with VNETs in different ways. For example, VMs can communicate with an Azure SQL database using a service endpoint, enabling you to limit access and ensure traffic is kept private and off the public network. We look at service endpoints and other ways to secure internal communications later in this chapter, in the Implementing network security section.

The first subject we will need to look at when dealing with VNETs and connectivity is that of addressing and the Domain Name System (DNS).

Deployments and YAML – Designing Compute Solutions

A pod’s resources and behavior are defined in a deployment, which is described within a YAML manifest. The manifest defines everything you need to state: how many copies, or replicas, of a pod to run, what resources each pod requires, the container image to use, and other information necessary for your service.
A typical YAML file may look like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi

In this example, taken from the docs.microsoft.com site, we see a deployment using the nginx container image, requesting a minimum of 250m of CPU (millicores) and 64Mi (mebibytes) of RAM, and a maximum of 500m and 256Mi.

Tip

A mebibyte (MiB) is equal to 1,024 KiB (1,048,576 bytes), whereas a millicore is one-thousandth of a CPU core.

Once we have our pods and applications defined within a YAML file, we can use that file to tell our AKS cluster to deploy and then run our application. This can be performed by running deployment commands against the AKS API or via DevOps pipelines.

Kubernetes is a powerful tool for building resilient and dynamic applications using microservices, and container images make those applications efficient and portable; however, Kubernetes clusters are complex.

AKS abstracts much of the complexity of using and managing a Kubernetes cluster. Still, your development and support teams need to be fully conversant with the unique capabilities and configuration options available.

Summary

This chapter looked at the different compute options available to us in Azure, along with the strengths and weaknesses of each. With any solution, the choice of technology depends on your requirements and the skills of the teams who are building it.

We then looked at how to design update management processes to ensure any VMs we use as part of our solution are kept up to date with the latest security patches.

Finally, we looked at how we can use containerization in our solutions, and specifically how Azure Kubernetes Service provides a flexible and dynamic approach to running microservices.

In the next chapter, we will look at the different networking options in Azure, including load balancing for resilience and performance.

Exam scenario

The solutions to the exam scenarios can be found at the end of this book.

Mega Corp is planning a new multi-service solution to help the business manage expenses. The application development team has decided to break the solution into different services that communicate with each other.

End users will upload expense claims as a Word document to the system, and these documents must flow through to different approvers.

The HR department also wants to amend some of the workflows themselves as they can change often.

The application will have a web frontend, and the application developers are used to building .NET websites. However, they would like to start moving to a more containerized approach.

Suggest some compute components that would be suited to this solution.

Nodes and node pools – Designing Compute Solutions

An AKS cluster has one or more nodes, which are virtual machines running the Kubernetes node components and container runtime:

  • kubelet is the Kubernetes agent that responds to requests from the cluster master and runs the requested containers.
  • kube-proxy manages virtual networking.
  • The container runtime is the Docker engine that runs your containers.

The following diagram shows these components and their relation to Azure:

Figure 7.12 – AKS nodes

When you define your AKS nodes, you choose the SKU of the VM you want, which in turn determines the number of CPUs, the amount of RAM, and the type of disk. You can also run GPU-powered VMs, which are great for mathematical and AI-related workloads.

You can also set up the maximum and the minimum number of nodes to run in your cluster, and AKS will automatically add and remove nodes within those limits.

AKS nodes are built with either Ubuntu Linux or Windows Server 2019, and because the cluster is managed, you cannot change this. If you need a specific OS or a different container runtime, you must build your own Kubernetes cluster using the appropriate engine.

When you define your node sizes, you need to be aware that Azure automatically reserves an amount of CPU and RAM to ensure each node performs as expected – these reservations are 60 millicores for CPU and 20% of RAM, up to 4 GB. So, if your VMs have 7 GB RAM, the reservation will be 1.4 GB, but for any VM with 20 GB RAM and above, the reservation will be 4 GB.

This means that the actual RAM and CPU amounts available to your nodes will always be slightly less than the size would otherwise indicate.
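The RAM reservation rule stated above (20% of node RAM, capped at 4 GB) can be sketched as a small function; this is an illustration of the rule as described here, not an official Azure API:

```python
def aks_ram_reservation_gb(node_ram_gb):
    # Sketch of the stated rule: 20% of node RAM is reserved, capped at 4 GB
    return round(min(0.20 * node_ram_gb, 4.0), 2)

print(aks_ram_reservation_gb(7))   # 1.4
print(aks_ram_reservation_gb(20))  # 4.0
```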

When you have more than one node of the same configuration, you group them into a node pool, and the first node is created within the default node pool. When you upgrade or scale an AKS cluster, the action will be performed against either the default node pool or a specific node pool of your choosing.

Pods

A node runs your applications within pods. Typically, a pod has a one-to-one mapping to a container, that is, a running instance. However, in advanced scenarios, you can run multiple containers within a single pod.

At the pod level, you define the number of resources to assign to your particular services, such as the amount of RAM and CPU. When pods are required to run Kubernetes, the scheduler attempts to run the pod on a node with available resources to match what you have defined.