Container Instances – Designing Compute Solutions

Virtual machines offer a way to run multiple, isolated applications on a single piece of hardware. However, virtual machines are relatively inefficient in that every single instance contains a full copy of the operating system.

Containers wrap and isolate individual applications and their dependencies but share the same underlying operating system as other containers running on the host – as we can see in the following diagram:

Figure 7.4 – Containers versus virtual machines

This provides several advantages, including speed and the way containers are defined. Azure uses Docker as the container engine, and Docker images are defined in code, which enables easier and repeatable deployments.

Because containers are also lightweight, they are much faster to provision and start up, enabling applications based on them to react quickly to demands for resources.

Containers are ideal for a range of scenarios. Many legacy applications can be containerized relatively quickly, making them a great option when migrating to the cloud.

Containers’ lightweight and resource-efficient nature also lends itself to microservice architectures whereby applications are broken into smaller services that can scale out with more instances in response to demand.

We cover containers in more detail later in this chapter, in the Architecting for containerization and Kubernetes section.

What to watch out for

Not all applications can be containerized, and containerization removes some controls that would otherwise be available on a standard virtual machine.

As the number of images and containers increases in an application, it can become challenging to maintain and manage them; in these cases, an orchestration layer may be required, which we will cover next.

Azure Kubernetes Service (AKS)

Microservice-based applications often require specific capabilities to be effective, such as automated provisioning and deployment, resource allocation, monitoring and responding to container health events, load balancing, traffic routing, and more.

Kubernetes is an open source system that provides these capabilities, which are often referred to as orchestration.

Azure Kubernetes Service (AKS) is the ideal choice for microservice-based applications that need to respond dynamically to events such as individual node outages, or to automatically scale resources in response to demand. Because AKS is a managed service, much of the complexity of creating and managing the cluster is taken care of for you.

The following diagram shows a high-level overview of a typical AKS cluster; it is described in more detail in the Azure Kubernetes Service section later in this chapter:

Figure 7.5 – AKS cluster

AKS is also platform-independent – any application built to run on Kubernetes can easily be migrated from one cluster to another, regardless of whether it is in Azure, on-premises, or with another cloud vendor.

As already stated, we cover containers and AKS in more detail later in this chapter, in the Architecting for containerization and Kubernetes section.

What to watch out for – Designing Compute Solutions

Azure Batch is, of course, not suited to interactive applications such as websites or to services that must store files locally for extended periods – although, as already discussed, it can output results to Azure Storage.

Service Fabric

Modern applications are often built or run as microservices – smaller components that can be scaled independently of other services. To achieve greater efficiency, it is common to run multiple services on the same VM. However, because an application may be built from numerous services, each of which needs to scale, managing, distributing, and scaling those services – a process known as orchestration – can become difficult.

Azure Service Fabric is a container orchestrator that makes the management and deployment of software packages onto scalable infrastructure easier.

The following diagram shows a typical Service Fabric architecture; applications are deployed to VMs or VM scale sets:

Figure 7.3 – Azure Service Fabric example architecture

It is particularly suited to .NET applications that would traditionally run on a virtual machine, and one of its most significant benefits is that it supports stateful services. Service Fabric powers many of Microsoft’s services, such as Azure SQL, Cosmos DB, Power BI, and others.
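
To illustrate what a stateful service can look like in practice, the following is a minimal, hypothetical sketch of a Service Fabric Reliable Service in C#, assuming the Microsoft.ServiceFabric.Services SDK (the service and dictionary names are ours, purely for illustration). The state lives in a reliable dictionary that Service Fabric replicates across the cluster, so it survives the loss of a node without an external database:

    using System;
    using System.Fabric;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.ServiceFabric.Data.Collections;
    using Microsoft.ServiceFabric.Services.Runtime;

    internal sealed class VisitCounterService : StatefulService
    {
        public VisitCounterService(StatefulServiceContext context)
            : base(context) { }

        protected override async Task RunAsync(CancellationToken cancellationToken)
        {
            // State is kept in a replicated, reliable dictionary rather than
            // in local memory or an external store.
            var counts = await StateManager
                .GetOrAddAsync<IReliableDictionary<string, long>>("visitCounts");

            while (!cancellationToken.IsCancellationRequested)
            {
                using (var tx = StateManager.CreateTransaction())
                {
                    await counts.AddOrUpdateAsync(tx, "total", 1,
                        (key, current) => current + 1);
                    await tx.CommitAsync();
                }

                await Task.Delay(TimeSpan.FromSeconds(30), cancellationToken);
            }
        }
    }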

Tip

When building modern applications, there is often discussion around stateful and stateless applications. When a client is communicating with a backend system, such as a website, you need to keep track of its requests – for example, when you log in, how can the server confirm the next request comes from that same client? This is known as state. Stateless applications expect the client to track this information and provide it back to the server with every request – usually in the form of a token validated by the server. With stateful applications, the server keeps track of the client, but this requires the client to always use the same backend server – which is more difficult when your systems are spread across multiple servers.
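
As a brief, hypothetical C# illustration of the difference (the controller and route names are ours, not from any specific application): in a stateless design the identity arrives with every request inside the validated token, whereas anything stored in server-side session ties the client to a particular server or requires a shared session store:

    using Microsoft.AspNetCore.Authorization;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;

    [Authorize]
    [ApiController]
    [Route("api/profile")]
    public class ProfileController : ControllerBase
    {
        // Stateless: the validated bearer token supplies the identity, so any
        // server instance behind the load balancer can answer this request.
        [HttpGet]
        public string Get() => $"Hello, {User.Identity?.Name}";

        // Stateful: remembering data in session means subsequent requests must
        // reach the same server, or a shared session store must be configured.
        [HttpPost("theme")]
        public void SaveTheme(string theme) =>
            HttpContext.Session.SetString("theme", theme);
    }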

Using Service Fabric enables developers to build distributed systems without worrying about how those systems scale and communicate. It is an excellent choice for moving existing applications into a scalable environment without the need to completely re-architect.

What to watch out for

You will soon see that there are many similarities between Service Fabric and AKS clusters – one of the most significant differences between the two is portability. Because Service Fabric is tightly integrated into Azure and other Microsoft technologies, it may not work well if you need to move the solution to another platform.

Comparing compute options – Designing Compute Solutions

Each type of compute has its own set of strengths; however, each also has its primary use cases, and therefore, might not be suitable for some scenarios.

Virtual machines

As the closest technology to existing on-premises systems, VMs are best placed for use cases that require fast migration to the cloud, or for legacy systems that cannot run on other services without reworking the application.

The ability to quickly provision, test, and destroy a VM makes them ideal for testing and developing products, especially when you need to ascertain how a particular piece of software works on different operating systems.

Sometimes a solution has stringent security requirements that mean it cannot use shared compute. Running such applications on VMs helps ensure processing is not shared. Through the use of dedicated hosts, you can even provision your own physical hardware to run those VMs on.

What to watch out for

To make VMs scalable and resilient, you must architect and deploy supporting technologies or configure the machines accordingly. By default, a single VM is not resilient. Failure of the physical hardware can disrupt services, and the servers do not scale automatically.

Building multiple VMs in availability sets and across Availability Zones can protect you against many such events, and scale sets allow you to configure automatic scaling. However, these are optional configurations and may require additional components such as load balancers. These options require careful planning and can increase costs.

Important note

We will cover availability sets and scale sets in more detail in Chapter 14, High Availability and Redundancy Concepts.

Azure Batch

With Azure Batch, you create applications that perform specific tasks, which run in node pools. Node pools can contain thousands of VMs that are created, run a task, and are then decommissioned. No information is stored on the VMs themselves. However, the input and output of datasets can be achieved by reading and writing to Azure storage accounts.

Azure Batch is suited to the parallel processing of tasks and high-performance computing (HPC) batch jobs. Being able to provision thousands of VMs for short periods, combined with per-second billing, ensures efficient costs for such projects.

The following diagram shows how a typical Batch service might work. As we can see, input files can be ingested from Azure Storage by the Batch service, which then distributes them to nodes in a node pool for processing. The code that performs the processing is held within Azure Batch as a ZIP file. All output is then sent back out to the storage account:

Figure 7.2 – Pool, job, and task management

Some examples of a typical workload may include the following:

  • Financial risk modeling
  • Image and video rendering
  • Media transcoding
  • Large data imports and transformation

With Azure Batch, you can also opt for low-priority VMs – these are cheaper but do not have guaranteed availability. Instead, they are allocated from surplus capacity within the data center; in other words, your jobs may have to wait for surplus compute to become available.
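
To make the pool, job, and task model more concrete, the following is a minimal sketch using the Microsoft.Azure.Batch .NET SDK. The account details, pool size, VM image, and command lines are illustrative placeholders, and in a real solution the input and output files would be staged through an Azure storage account as described above:

    using System.Linq;
    using Microsoft.Azure.Batch;
    using Microsoft.Azure.Batch.Auth;

    var credentials = new BatchSharedKeyCredentials(
        "https://mybatchaccount.westeurope.batch.azure.com",
        "mybatchaccount",
        "<account-key>");

    using (BatchClient batchClient = BatchClient.Open(credentials))
    {
        // A pool of Ubuntu nodes that will run the tasks.
        CloudPool pool = batchClient.PoolOperations.CreatePool(
            poolId: "render-pool",
            virtualMachineSize: "Standard_D2s_v3",
            virtualMachineConfiguration: new VirtualMachineConfiguration(
                new ImageReference(
                    publisher: "Canonical",
                    offer: "UbuntuServer",
                    sku: "18.04-LTS",
                    version: "latest"),
                nodeAgentSkuId: "batch.node.ubuntu 18.04"),
            targetDedicatedComputeNodes: 10);
        pool.Commit();

        // A job binds a set of tasks to the pool.
        CloudJob job = batchClient.JobOperations.CreateJob();
        job.Id = "render-job";
        job.PoolInformation = new PoolInformation { PoolId = "render-pool" };
        job.Commit();

        // Each task is simply a command line executed on one of the nodes.
        var tasks = Enumerable.Range(0, 100)
            .Select(i => new CloudTask($"frame-{i}", $"render.sh frame{i}.dat"))
            .ToList();
        batchClient.JobOperations.AddTask("render-job", tasks);
    }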

Understanding different types of compute – Designing Compute Solutions-2

  • Scalability

Different services have different methods for scaling. Legacy applications may need to use traditional load balancing methods by building VMs in web farms with load balancers in front to distribute the load.

Modern web applications can make use of App Service or Azure Functions, which scale automatically without the need for additional components.

  • Availability

Each Azure service has a Service-Level Agreement (SLA) that sets a baseline for how much uptime the service offers. The mix of components used can also affect this value. For example, a single VM using premium storage has an SLA of 99.9%, whereas two VMs spread across Availability Zones with a load balancer in front have an SLA of 99.99%.

Azure Functions and App Service have an SLA of 99.95% without any additional components.

Important note

Service-Level Agreements (SLAs) define specific metrics by which a service is measured. In Azure, this is the amount of time a particular service is guaranteed to be available, usually expressed as a percentage of uptime – for example, 99.95% (referred to as three and a half nines) or 99.99% (referred to as four nines). Your choice of components and how they are architected will impact the SLA Microsoft offers.

An SLA of 99.95% means up to 4.38 hours of downtime a year is allowed, whereas 99.99% means only 52.60 minutes are permitted.
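
As a quick worked example – a rough sketch that simply restates the figures above – the allowed downtime is the complement of the SLA applied to a year, and a commonly used approximation for chained, dependent services is to multiply their SLAs together, which always lowers the composite figure:

    using System;

    // Downtime permitted per year for a given SLA.
    Func<double, double> hoursPerYear = sla => 365 * 24 * (1 - sla);

    Console.WriteLine(hoursPerYear(0.9995));        // ~4.38 hours
    Console.WriteLine(hoursPerYear(0.9999) * 60);   // ~52.6 minutes

    // Composite SLA of a web tier (99.95%) that depends on a database
    // tier (99.99%) - lower than either SLA on its own.
    double composite = 0.9995 * 0.9999;             // ~99.94%
    Console.WriteLine(composite);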

  • Security

As services move from IaaS to PaaS and FaaS, the security responsibility shifts. For VMs, Microsoft is responsible for the physical security and underlying infrastructure, whereas you are responsible for patching, anti-virus software, and applications that run on them. For PaaS and FaaS, Microsoft is also responsible for security on the service. However, you need to be careful of different configuration elements within the service that may not be compliant with your requirements.

For some organizations, all traffic flow needs to be tightly controlled, especially for internal services; most PaaS solutions support this but only as a configurable option, which can sometimes increase costs.

  • Cost

FaaS provides a very granular cost model in that you pay only for execution time, whereas IaaS and some PaaS services require you to provision set resources based on the required CPU and RAM. For example, a VM incurs costs for as long as it is running, which for many use cases is continually.

When migrating existing legacy applications, this may be the only option, but it isn't the most efficient from a cost perspective. Refactoring applications may cost more upfront but could be cheaper in the long run, as they only consume resources – and therefore incur costs – when they run.

Similarly, a new microservice built to respond to events on an ad hoc basis would suit an Azure function, whereas the same process running on a VM would not be cost-effective.
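
As a hypothetical sketch of such a microservice (assuming the in-process Azure Functions model with the Storage queue extension; the queue, connection setting, and function names are illustrative), the function below only executes – and, on the Consumption plan, only incurs cost – when a message actually arrives:

    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class OrderProcessor
    {
        // Runs only when a message lands on the queue; there is no VM sitting
        // idle between events.
        [FunctionName("ProcessOrder")]
        public static void Run(
            [QueueTrigger("orders", Connection = "StorageConnection")] string orderMessage,
            ILogger log)
        {
            log.LogInformation($"Processing order: {orderMessage}");
        }
    }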

  • Architecture styles

How an application is designed can directly impact the choice of technology. VMs are best suited to older architectures such as N-tier, whereas microservice and event-driven patterns are well suited to Azure Functions or containerization.

  • User skills

Azure provides several technologies for no-code development. Power Apps and Power Automate, the workflow development system, are specifically built to allow end users with no development knowledge to quickly create simple apps and workflows.

As you can see, to decide on a compute technology, you must factor in many different requirements. The following chart shows a simple workflow to help in this process:

Figure 7.1 – Compute options workflow

Next, we will look in more detail at each service and provide example use cases.

Understanding different types of compute – Designing Compute Solutions-1

In the previous chapter, we looked at how to secure our Azure applications using key vaults, security principals, and managed identities.

When building solutions in Azure, many components use some form of compute – such as a virtual machine (VM). However, there are many different types of compute, each with its own strengths. Therefore, in this chapter, we focus on the different types of compute services available to us and which options are best suited to which scenarios.

We will then look at maintaining the security and health of VMs by ensuring they are always kept up to date with the latest OS patches.

Finally, we’ll look at containerization and how we can use Azure Kubernetes Service (AKS).

With this in mind, we will be covering the following topics:

  • Understanding different types of compute
  • Automating virtual machine management
  • Architecting for containerization and Kubernetes

Technical requirements

This chapter will use the Azure portal (https://portal.azure.com) for examples.

Understanding different types of compute

When we architect solutions, there will often be at least one component that needs to host, or run, an application. The application could be built specifically for the task or an off-the-shelf package bought from a vendor.

Azure provides several compute services for hosting your application; each type can be grouped into one of three kinds of hosting model:

  • Infrastructure as a Service (IaaS): VMs fall within this category, along with supporting services such as storage (that is, disk drives) and networking. IaaS is the closest to a traditional on-premises environment, except that Microsoft manages the underlying infrastructure, including the hardware and the host operating system. You are still responsible for maintaining the guest operating system, however, including patching, monitoring, anti-virus software, and so on.
  • Platform as a Service (PaaS): Azure App Service is an example of a PaaS component. With PaaS, you do not need to worry about the operating system (other than to ensure what you deploy to it is compatible). Microsoft manages all maintenance, patching, and anti-virus software; you simply deploy your applications to it. When provisioning PaaS components, you generally specify an amount of CPU and RAM, and your costs will be based on this.
  • Serverless or Function as a Service (FaaS): FaaS, or serverless, is at the opposite end of the spectrum from IaaS. With FaaS, any notion of CPU, RAM, or management is completely abstracted away; you simply deploy your code, and the required resources are utilized to perform the task. Because of this, FaaS pricing models are calculated on exact usage – for example, the number of executions – as opposed to IaaS, where pricing is based on the allocated RAM and CPU.

Some services may appear to blur the line between the hosting options; for example, VMs can be built as scale sets that automatically scale up and down on demand.

Generally, as you move from IaaS to FaaS, management becomes easier; however, control, flexibility, and portability are lost.

When choosing a compute hosting model for your solution, you will need to consider many factors:

  • Deployment and compatibility

Not all applications can run on all services without modification. Older applications may have dependencies on installed services or can only be deployed via traditionally installed packages. For these legacy systems, an IaaS approach might be the only option.

Conversely, a modern application built using Agile DevOps processes, with regularly updated and redeployed components, might be better suited to Web Apps or Azure Functions.

  • Support

Existing enterprise systems typically have support teams and processes embedded within the organization, and these teams will patch and update systems in line with established support processes.

Smaller companies may have fewer IT resources to provide these support tasks. Therefore, they would benefit significantly from PaaS or FaaS systems that do not require maintenance as the Azure platform handles this.

Using managed identities in web apps – Building Application Security

In the following walk-through, we will replace the key vault access code that used a client ID and secret. This time, we will use an AzureServiceTokenProvider, which will use the assigned managed identity instead:

  1. Open your web app in Visual Studio Code.
  2. Open a Terminal window within Visual Studio Code and enter the following to install an additional NuGet package:
    dotnet add package Microsoft.Azure.Services.AppAuthentication
  3. Open the Program.cs file and add the following using statements to the top of the page:
    using Microsoft.Azure.KeyVault;
    using Microsoft.Azure.Services.AppAuthentication;
    using Microsoft.Extensions.Configuration.AzureKeyVault;
  4. Modify the CreateHostBuilder method as follows:
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((ctx, builder) =>
            {
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(
                    new KeyVaultClient.AuthenticationCallback(
                        azureServiceTokenProvider.KeyVaultTokenCallback));
                builder.AddAzureKeyVault(
                    "https://packtpubkeyvault01.vault.azure.net/",
                    keyVaultClient,
                    new DefaultKeyVaultSecretManager());
            })
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
  5. Open a Terminal window in Visual Studio Code to rebuild and republish the application by entering the following:
    dotnet build
    dotnet publish -c Release -o ./publish
  6. Next, right-click the publish folder and select Deploy Web App.
  7. Select your subscription and the web app to deploy to when prompted.
  8. Once deployed, browse to your website.

Your website is accessing the secret from the key vault as before; only this time, it is using the managed identity.

In this section, we have replaced a service principal with a managed identity. The use of managed identities offers a more secure way of connecting services as login details are never exposed.

Summary

This chapter covered three tools in Azure that can help secure our applications, particularly around managing data encryption keys and authentication between systems.

We looked at how to use key vaults for creating and managing secrets and keys and how we can then secure access to them using Access policies. We also looked at how we can use security principals and managed identities to secure our applications.

This chapter also concluded the Identity and Security requirement of the AZ-304 exam, looking at authentication, authorization, system governance, and application-level security.

Next, we will look at how we architect solutions around specific Azure infrastructure and storage components.

Exam Scenario

The solutions to the exam scenarios can be found at the end of the book.

Mega Corp plans a new internal web solution consisting of a frontend web app, multiple middle-tier API apps, and a SQL database.

The database’s data is highly sensitive, and the leadership team is concerned that providing database connection strings to the developers would compromise data protection laws and industry compliance regulations.

Part of the application includes the storage of documents in a Blob Storage account; however, the leadership team is not comfortable with Microsoft managing the encryption keys.

As this is an internal application, authentication needs to be integrated into the existing Active Directory. Also, each of the middle-tier services needs to know who the logged-in user is at all times – in other words, any authentication mechanism needs to pass through all layers of the system.

Design a solution that will alleviate the company’s security concerns but still provides a robust application.

Using managed identities – Building Application Security

In the previous section, we looked at working with security principals that can provide programmatic access to key vaults from our applications. There are a couple of problems with them – you must generate and provide a client ID and secret, and you must manage the rotation of those secrets yourself.

Managed identities provide a similar access option but are fully managed by Azure – there is no need to generate IDs or passwords; you simply set the appropriate access through role-based access control (RBAC). The managed identity mechanism can also be used to provide access to the following:

  • Azure Data Lake
  • Azure SQL
  • Azure Storage (Blobs and Queues)
  • Azure Analysis Services
  • Azure Event Hubs
  • Azure Service Bus

We have the option of using either a system-assigned or a user-assigned identity. System-assigned is the easiest route – and is ideal for simple scenarios – but the identity is tied to the resource in question, that is, a virtual machine or web app. User-assigned identities are discrete objects and can be assigned to multiple resources – this can be useful if your application uses numerous components and you want to give them all the same managed identity.
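
For example, once an identity has been assigned, code running on the resource can request tokens for these services directly. The following hedged sketch uses the same Microsoft.Azure.Services.AppAuthentication package covered in the Using managed identities in web apps walk-through to connect to Azure SQL; the server and database names are purely illustrative:

    using System.Data.SqlClient;
    using Microsoft.Azure.Services.AppAuthentication;

    // Ask Azure AD for a token using whatever managed identity is assigned to
    // this web app or VM - no client ID or secret is stored anywhere.
    var tokenProvider = new AzureServiceTokenProvider();
    string accessToken = await tokenProvider.GetAccessTokenAsync("https://database.windows.net/");

    using (var connection = new SqlConnection(
        "Server=tcp:packtpub-sql.database.windows.net;Database=packtpubdb;"))
    {
        connection.AccessToken = accessToken;
        await connection.OpenAsync();
        // ...run queries as normal
    }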

As well as Web Apps and Virtual Machines, the following services can also be set to use managed identities:

  • Azure Functions
  • Azure Logic Apps
  • Azure Kubernetes Service
  • Azure Data Explorer
  • Azure Data Factory

As with security principals, working through an example is the easiest way to understand how managed identities are used.

Assigning a managed identity

In the next example, we will modify the web app we created in the Working with security principals section to use a managed identity instead:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the top search bar, search for and select App Services.
  3. Select your web app – for example, packtpub-secureapp.
  4. On the left-hand menu, click Identity.
  5. System assigned is the default identity type; set the status to On as in the following example:

Figure 6.18 – Setting the app identity

  6. Click Save.
  7. In the top search bar, search for and select Key vaults.
  8. Click on your key vault.
  9. On the left-hand menu, click Access policies.
  10. Click Add Access Policy.
  11. Click the drop-down list next to Configure from template and choose Secret Management.
  12. Under Select Principal, click None selected. Search for the name of the web app you created earlier in the Deploying a web app section – in our example, packtpub-secureapp.
  13. Click Add.
  14. Click Save.

With the managed identity set up on our web app, and the necessary policy linked in our key vault, we can update our code to use the identity instead of the security principal.

Enabling AD integration – Building Application Security

To enable AD integration, we must first set a login redirect URI for our new website on the service principal we created earlier, and then configure the web app to use that principal:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the top search bar, search for and select Azure Active Directory.
  3. On the left-hand menu, click App registrations.
  4. Select the SecureWebApp registration.
  5. On the left-hand menu, click Authentication.
  6. Click + Add a Platform.
  7. In the Configure Platforms window that appears, choose Web.
  8. Paste in the URL from your web app into Redirect URIs and add the following to it: /.auth/login/aad/callback. In this example, the URI would be https://packtpub-secureapp.azurewebsites.net/.auth/login/aad/callback.
  9. Click Configure.
  10. Scroll down the page to Implicit grant, tick the ID tokens box, then click Save. The page should look like this:

Figure 6.16 – Setting app authentication

  11. We now need to configure your app to use the app registration – in the top search bar, search for and select App Services.
  12. Select your web app – for example, packtpub-secureapp.
  13. On the left-hand menu, click Authentication/Authorization.
  14. Set App Service Authentication to On.
  15. Under Action to take when a request is not authenticated, choose Log in with Azure Active Directory.
  16. Under Authentication Providers, click Active Directory.
  17. On the next page, set the first Management mode option to Express, and the second Management mode option to Select Existing AD App.
  18. Click Azure AD App, and select the app registration we created in the Creating the service principal step of the Working with security principals section.
  19. Click OK. The page should look like the following. Click Save.

Figure 6.17 – Setting authentication

Wait a few minutes for the changes to take effect, then browse to the web app; for example, https://packtpub-secureapp.azurewebsites.net. You will now be prompted to log in with your Active Directory account, and once authenticated, you will be directed back to your application. If you are not prompted to sign in, open a private browsing window instead, as your credentials may already be cached in the browser.

As you can see, integrating your application into your Azure Active Directory tenant is very easy and provides a secure and seamless login experience for your users.

The first half of this section involved using a security principal to access the key vault. Service principals can be used to access many different services; however, they rely on a client ID and secret being generated and shared.

Next, we will look at an alternative and more secure method of providing authenticated access to many Azure resources, called managed identities.