Comparing compute options
Each type of compute has its own strengths and primary use cases, and therefore might not be suitable for every scenario.
Virtual machines
As the closest technology to existing on-premises systems, VMs are best suited to use cases that require fast migration to the cloud, or to legacy systems that cannot run on other services without reworking the application.
The ability to quickly provision, test, and destroy VMs makes them ideal for developing and testing products, especially when you need to ascertain how a particular piece of software behaves on different operating systems.
Some solutions have stringent security requirements that prohibit the use of shared compute. Running such applications on VMs helps ensure that processing is not shared, and through the use of dedicated hosts, you can even provision your own physical hardware to run those VMs on.
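To give a flavor of how quickly a test machine can come and go, the following is a minimal sketch using the azure-mgmt-compute Python SDK. The subscription ID, resource group, network interface, and SSH key are placeholder assumptions, and the resource group and NIC are assumed to already exist:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, "<subscription-id>")

# Provision a small Ubuntu VM for testing (placeholder names and IDs).
poller = compute_client.virtual_machines.begin_create_or_update(
    "test-rg",
    "test-vm-01",
    {
        "location": "westeurope",
        "hardware_profile": {"vm_size": "Standard_B2s"},
        "storage_profile": {
            "image_reference": {
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-focal",
                "sku": "20_04-lts",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "test-vm-01",
            "admin_username": "azureuser",
            "linux_configuration": {
                "disable_password_authentication": True,
                "ssh": {
                    "public_keys": [
                        {
                            "path": "/home/azureuser/.ssh/authorized_keys",
                            "key_data": "<ssh-public-key>",
                        }
                    ]
                },
            },
        },
        "network_profile": {
            # Assumes a network interface has already been created.
            "network_interfaces": [{"id": "<nic-resource-id>"}]
        },
    },
)
vm = poller.result()  # blocks until the VM is provisioned
print(f"Provisioned {vm.name}")

# ...run tests against the VM, then destroy it when finished...
compute_client.virtual_machines.begin_delete("test-rg", "test-vm-01").result()
```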
What to watch out for
To make VMs scalable and resilient, you must architect and deploy supporting technologies or configure the machines accordingly. By default, a single VM is not resilient. Failure of the physical hardware can disrupt services, and the servers do not scale automatically.
Building multiple VMs in availability sets and across Availability Zones can protect you against many such events, and scale sets allow you to configure automatic scaling. However, these are optional configurations that may require additional components, such as load balancers, and they demand careful planning and can increase costs.
Important note
We will cover availability sets and scale sets in more detail in Chapter 14, High Availability and Redundancy Concepts.
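As an illustration, the following is a minimal sketch, using the azure-mgmt-compute Python SDK, of creating a scale set spread across three Availability Zones. The resource group, subnet ID, and admin password are placeholder assumptions, and the group and subnet are assumed to already exist:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute_client = ComputeManagementClient(credential, "<subscription-id>")

poller = compute_client.virtual_machine_scale_sets.begin_create_or_update(
    "web-rg",
    "web-vmss",
    {
        "location": "westeurope",
        "zones": ["1", "2", "3"],  # spread instances across three zones
        "sku": {"name": "Standard_B2s", "tier": "Standard", "capacity": 3},
        "upgrade_policy": {"mode": "Manual"},
        "virtual_machine_profile": {
            "os_profile": {
                "computer_name_prefix": "web",
                "admin_username": "azureuser",
                "admin_password": "<password>",
            },
            "storage_profile": {
                "image_reference": {
                    "publisher": "Canonical",
                    "offer": "0001-com-ubuntu-server-focal",
                    "sku": "20_04-lts",
                    "version": "latest",
                }
            },
            "network_profile": {
                "network_interface_configurations": [
                    {
                        "name": "web-nic",
                        "primary": True,
                        "ip_configurations": [
                            {
                                "name": "web-ipconfig",
                                "subnet": {"id": "<subnet-resource-id>"},
                            }
                        ],
                    }
                ]
            },
        },
    },
)
vmss = poller.result()
print(f"Created scale set {vmss.name} across zones {vmss.zones}")
```

Note that this only spreads instances across zones; the automatic scaling rules themselves are configured separately, through Azure Monitor autoscale settings.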
Azure Batch
With Azure Batch, you create applications that perform specific tasks, which run on pools of compute nodes. Node pools can contain thousands of VMs that are created, run a task, and are then decommissioned. No information is stored on the VMs themselves; however, input and output datasets can be read from and written to Azure Storage accounts.
Azure Batch is suited to the parallel processing of tasks and high-performance computing (HPC) batch jobs. Being able to provision thousands of VMs for short periods, combined with per-second billing, keeps costs efficient for such projects.
The following diagram shows how a typical batch service might work. As we can see, input files can be ingested from Azure Storage by the Batch service, which then distributes them to nodes in a node pool for processing. The code that performs the processing is held within Azure Batch as a ZIP file. All output is then written back to the storage account:

Figure 7.2 – Pool, job, and task management
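The following is a minimal sketch of that pool/job/task flow using the azure-batch Python SDK. The account details, node count, and command lines are placeholder assumptions; a real workload would also attach resource files from, and output destinations to, a storage account:

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

credentials = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<batch-account>.<region>.batch.azure.com"
)

# 1. Create a pool of nodes to process the work.
vm_config = batchmodels.VirtualMachineConfiguration(
    image_reference=batchmodels.ImageReference(
        publisher="canonical",
        offer="0001-com-ubuntu-server-focal",
        sku="20_04-lts",
        version="latest",
    ),
    node_agent_sku_id="batch.node.ubuntu 20.04",
)
batch_client.pool.add(
    batchmodels.PoolAddParameter(
        id="render-pool",
        vm_size="Standard_D2s_v3",
        virtual_machine_configuration=vm_config,
        target_dedicated_nodes=4,
    )
)

# 2. Create a job bound to the pool.
batch_client.job.add(
    batchmodels.JobAddParameter(
        id="render-job",
        pool_info=batchmodels.PoolInformation(pool_id="render-pool"),
    )
)

# 3. Submit tasks; the Batch service schedules them across the pool's nodes.
tasks = [
    batchmodels.TaskAddParameter(
        id=f"frame-{i}",
        command_line=f"/bin/bash -c 'echo rendering frame {i}'",
    )
    for i in range(100)
]
batch_client.task.add_collection("render-job", tasks)
```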
Typical workloads include the following:
- Financial risk modeling
- Image and video rendering
- Media transcoding
- Large data imports and transformation
With Azure Batch, you can also opt for low-priority VMs. These are cheaper but do not have guaranteed availability; instead, they are allocated from surplus capacity within the data center, which means your tasks may have to wait for surplus compute to become available.
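As a sketch of how this choice is expressed, the pool from the earlier example can be reworked to request only low-priority capacity (batch_client and vm_config are reused from that sketch, and the node counts are arbitrary):

```python
# Favor cheap, pre-emptible capacity: zero dedicated nodes, and up to
# 100 low-priority nodes allocated from the data center's surplus.
batch_client.pool.add(
    batchmodels.PoolAddParameter(
        id="low-priority-pool",
        vm_size="Standard_D2s_v3",
        virtual_machine_configuration=vm_config,
        target_dedicated_nodes=0,
        target_low_priority_nodes=100,
    )
)
```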