Microsoft AZ-304

What to watch out for – Designing Compute Solutions

In general, AKS and Kubernetes are more complicated than other technologies, especially Azure native alternatives such as App Service or Azure Functions. Additional tools are often required to better monitor and deploy solutions, which can sometimes lead to security concerns for some organizations. Although these can, of course, be satisfied, there is more work involved in setting up and using AKS for the first time.

Kubernetes is also designed to host multiple services and therefore may not be cost-effective for smaller, more straightforward applications such as a single, basic website. As an example, the recommended minimum number of nodes in a production AKS cluster is three nodes. In comparison, a resilient web app can be run on just a single node when using Azure App Service.

App Service

App Service is a fully managed hosting platform for web applications, RESTful APIs, mobile backend services, and background jobs.

App Service supports applications built in ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, and Python. Applications deployed to App Service are scalable, secure, and can meet many industry compliance standards.

App Service is linked to an App Service plan, which defines the amount of CPU and RAM available to your applications. You can also assign multiple app services to the same App Service plan to share resources.

For highly secure environments, App Service Environments (ASEs) provide isolated environments built on top of VNets.

App Service is, therefore, best suited to web apps, RESTful APIs, and mobile backend apps. It can be easily scaled by defining CPU- and RAM-based thresholds and is fully managed, so you do not need to worry about security patching or resilience within a region.

What to watch out for

Because App Service is always running, it always incurs cost – that is, it is never idle. However, using automated scaling can at least ensure a minimal cost during low usage, scaling out with additional instances in response to demand.

Container Instances – Designing Compute Solutions

Virtual machines offer a way to run multiple, isolated applications on a single piece of hardware. However, virtual machines are relatively inefficient in that every single instance contains a full copy of the operating system.

Containers wrap and isolate individual applications and their dependencies but share the same underlying operating system with other containers running on the host – as we can see in the following diagram:

Figure 7.4 – Containers versus virtual machines

This provides several advantages, including speed and the way containers are defined. Azure uses Docker as the container engine, and Docker images are defined in code, which enables easier, repeatable deployments.

Because containers are also lightweight, they are much faster to provision and start up, enabling applications based on them to react quickly to demands for resources.
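
To illustrate what defining an image in code looks like, here is a minimal, hypothetical Dockerfile; the base image, file names, and port are assumptions for illustration, not taken from this chapter:

```dockerfile
# Hypothetical image definition for a small Node.js web app.
# Each instruction produces a cached layer, making rebuilds fast and repeatable.
FROM node:18-alpine          # shared base layers; no full OS copy per container
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # repeatable, lockfile-driven dependency install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]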

Containers are ideal for a range of scenarios. Many legacy applications can be containerized relatively quickly, making them a great option when migrating to the cloud.

Containers’ lightweight and resource-efficient nature also lends itself to microservice architectures whereby applications are broken into smaller services that can scale out with more instances in response to demand.

We cover containers in more detail later in this chapter, in the Architecting for containerization and Kubernetes section.

What to watch out for

Not all applications can be containerized, and containerization removes some controls that would otherwise be available on a standard virtual machine.

As the number of images and containers increases in an application, it can become challenging to maintain and manage them; in these cases, an orchestration layer may be required, which we will cover next.

Azure Kubernetes Service (AKS)

Microservice-based applications often require specific capabilities to be effective, such as automated provisioning and deployment, resource allocation, monitoring and responding to container health events, load balancing, traffic routing, and more.

Kubernetes is an open-source platform that provides these capabilities, which are often referred to as orchestration.

AKS stands for Azure Kubernetes Service and is the ideal choice for microservice-based applications that need to dynamically respond to events such as individual node outages or automatically scaling resources in response to demand. Because AKS is a managed service, much of the complexity of creating and managing the cluster is taken care of for you.
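
As a sketch of the declarative style Kubernetes uses, the following minimal Deployment manifest (all names and the image reference are hypothetical) asks the cluster to keep three replicas of a container running, rescheduling them automatically if a node fails:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api                # hypothetical workload name
spec:
  replicas: 3                     # the orchestrator keeps three instances running
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
    spec:
      containers:
      - name: sample-api
        image: myregistry.azurecr.io/sample-api:1.0   # hypothetical image
        ports:
        - containerPort: 8080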

The following shows a high-level overview of a typical AKS cluster; it is described in more detail in the Azure Kubernetes Service section later in this chapter:

Figure 7.5 – AKS cluster

AKS is also platform-independent – any application built to run on Kubernetes can easily be migrated from one cluster to another, regardless of whether it is in Azure, on-premises, or even with another cloud vendor.

As already stated, we cover containers and AKS in more detail later in this chapter, in the Architecting for containerization and Kubernetes section.

What to watch out for – Designing Compute Solutions

Azure Batch is, of course, not suited to interactive applications such as websites, or to services that must store files locally for long periods – although, as already discussed, it can output results to Azure Storage.

Service Fabric

Modern applications are often built or run as microservices – smaller components that can be scaled independently of other services. To achieve greater efficiency, it is common to run multiple services on the same VM. However, because an application may be built from numerous services, each of which needs to scale, managing, distributing, and scaling those services – a task known as orchestration – can become difficult.

Azure Service Fabric is a container orchestrator that makes the management and deployment of software packages onto scalable infrastructure easier.

The following diagram shows a typical Service Fabric architecture; applications are deployed to VMs or VM scale sets:

Figure 7.3 – Azure Service Fabric example architecture

It is particularly suited to .NET applications that would traditionally run on a virtual machine, and one of its most significant benefits is that it supports stateful services. Service Fabric powers many of Microsoft’s services, such as Azure SQL, Cosmos DB, Power BI, and others.

Tip

When building modern applications, there is often discussion around stateful and stateless applications. When a client is communicating with a backend system, such as a website, you need to keep track of those requests – for example, when you log in, how can you confirm the next request is from that same client? This is known as state. Stateless applications expect the client to track this information and provide it back to the server with every request – usually in the form of a token validated by the server. With stateful applications, the server keeps track of the client, but this requires the client to always use the same backend server – which is more difficult when your systems are spread across multiple servers.
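
The stateless pattern described in this tip can be sketched in a few lines of Python. This is a simplified illustration with hypothetical names – real systems typically use a standard format such as JWT, with expiry and transport security:

```python
# Sketch of stateless authentication: the server signs a token instead of
# storing session state, so ANY backend instance holding the shared key can
# validate the client's next request. All names here are illustrative.
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"shared-signing-key"  # assumption: distributed to all backends


def issue_token(username):
    """Called once at login; the server keeps no record of the token."""
    payload = base64.urlsafe_b64encode(json.dumps({"user": username}).encode())
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + signature


def validate_token(token):
    """Any backend instance can verify the token with only the shared key."""
    payload, _, signature = token.partition(".")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))["user"]
```

Because validation needs only the shared key, the client is not pinned to one server – which is exactly why stateless designs scale across many backends.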

Using Service Fabric enables developers to build distributed systems without worrying about how those systems scale and communicate. It is an excellent choice for moving existing applications into a scalable environment without the need to completely re-architect.

What to watch out for

You will soon see that there are many similarities between Service Fabric and AKS clusters – one of the most significant differences between the two is portability. Because Service Fabric is tightly integrated into Azure and other Microsoft technologies, it may not work well if you need to move the solution to another platform.

Using managed identities in web apps – Building Application Security

In the following walkthrough, we will replace the key vault client that used a client ID and secret. This time, we will use an AzureServiceTokenProvider, which will use the assigned managed identity instead:

  1. Open your web app in Visual Studio Code.
  2. Open a Terminal window within Visual Studio Code and enter the following to install an additional NuGet package:
    dotnet add package Microsoft.Azure.Services.AppAuthentication
  3. Open the Program.cs file and add the following using statements to the top of the page:
    using Microsoft.Azure.KeyVault;
    using Microsoft.Azure.Services.AppAuthentication;
    using Microsoft.Extensions.Configuration.AzureKeyVault;
  4. Modify the CreateHostBuilder method as follows:
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((ctx, builder) =>
            {
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(
                    new KeyVaultClient.AuthenticationCallback(
                        azureServiceTokenProvider.KeyVaultTokenCallback));
                builder.AddAzureKeyVault(
                    "https://packtpubkeyvault01.vault.azure.net/",
                    keyVaultClient,
                    new DefaultKeyVaultSecretManager());
            })
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
  5. Open a Terminal window in Visual Studio Code to rebuild and republish the application by entering the following:
    dotnet build
    dotnet publish -c Release -o ./publish
  6. Next, right-click the publish folder and select Deploy Web App.
  7. Select your subscription and the web app to deploy to when prompted.
  8. Once deployed, browse to your website.

Your website is accessing the secret from the key vault as before; only this time, it is using the managed identity.

In this section, we have replaced a service principal with a managed identity. The use of managed identities offers a more secure way of connecting services as login details are never exposed.

Summary

This chapter covered three tools in Azure that can help secure our applications, particularly around managing data encryption keys and authentication between systems.

We looked at how to use key vaults for creating and managing secrets and keys and how we can then secure access to them using Access policies. We also looked at how we can use security principals and managed identities to secure our applications.

This chapter also concluded the Identity and Security requirement of the AZ-304 exam, looking at authentication, authorization, system governance, and application-level security.

Next, we will look at how we architect solutions around specific Azure infrastructure and storage components.

Exam Scenario

The solutions to the exam scenarios can be found at the end of the book.

Mega Corp plans a new internal web solution consisting of a frontend web app, multiple middle-tier API apps, and a SQL database.

The database’s data is highly sensitive, and the leadership team is concerned that providing database connection strings to the developers would compromise data protection laws and industry compliance regulations.

Part of the application includes the storage of documents in a Blob Storage account; however, the leadership team is not comfortable with Microsoft managing the encryption keys.

As this is an internal application, authentication needs to be integrated into the existing Active Directory. Also, each of the middle-tier services needs to know who the logged-in user is at all times – in other words, any authentication mechanism needs to pass through all layers of the system.

Design a solution that will alleviate the company’s security concerns but still provides a robust application.

Using managed identities – Building Application Security

In the previous section, we looked at working with security principals that can provide programmatic access to key vaults from our applications. There are a couple of problems with them – you must generate and provide a client ID and secret, and you must manage the rotation of those secrets yourself.

Managed identities provide a similar access option but are fully managed by Azure – there is no need to generate IDs or passwords; you simply set the appropriate access through role-based access control. The managed identity mechanism can also be used to provide access to the following:
• Azure Data Lake
• Azure SQL
• Azure Storage (Blobs and Queues)
• Azure Analysis Services
• Azure Event Hubs
• Azure Service Bus

We have the option of using either a system-assigned or a user-assigned identity. System-assigned identities are the easiest route – and are ideal for simple scenarios – but they are tied to the resource in question, that is, a virtual machine or web app. User-assigned identities are discrete objects and can be assigned to multiple resources – this can be useful if your application comprises numerous components that should all share the same managed identity.

As well as Web Apps and Virtual Machines, the following services can also be set to use managed identities:
• Azure Functions
• Azure Logic Apps
• Azure Kubernetes Service
• Azure Data Explorer
• Azure Data Factory

As with security principals, working through an example is the easiest way to understand managed identities.

Assigning a managed identity

In the next example, we will modify the web app we created in the Working with security principals section to use a managed identity instead:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the top search bar, search for and select App Services.
  3. Select your web app – for example, packtpub-secureapp.
  4. On the left-hand menu, click Identity.
  5. System assigned is the default identity type; set the status to On as in the following example:

Figure 6.18 – Setting the app identity

  6. Click Save.
  7. In the top search bar, search for and select Key vaults.
  8. Click on your key vault.
  9. On the left-hand menu, click Access policies.
  10. Click Add Access Policy.
  11. Click the drop-down list next to Configure from template and choose Secret Management.
  12. Under Select Principal, click None selected. Search for the name of the web app you created earlier in Deploying a web app – in our example, packtpub-secureapp.
  13. Click Add.
  14. Click Save.

With the managed identity set up on our web app, and the necessary policy linked in our key vault, we can update our code to use the identity instead of the security principal.

Enabling AD integration – Building Application Security

To enable AD integration, we must first set a login redirect URI for our new website on the service principal we created earlier, and then configure the web app to use that principal:

  1. Navigate to the Azure portal at https://portal.azure.com.
  2. In the top search bar, search for and select Azure Active Directory.
  3. On the left-hand menu, click App registrations.
  4. Select the SecureWebApp registration.
  5. On the left-hand menu, click Authentication.
  6. Click + Add a Platform.
  7. In the Configure Platforms window that appears, choose Web.
  8. Paste in the URL from your web app into Redirect URIs and add the following to it: /.auth/login/aad/callback. In this example, the URI would be https://packtpub-secureapp.azurewebsites.net/.auth/login/aad/callback.
  9. Click Configure.
  10. Scroll down the page to Implicit grant, tick the ID tokens box, then click Save. The page should look like this:

Figure 6.16 – Setting app authentication

  11. We now need to configure your app to use the app registration – in the top search bar, search for and select App Services.
  12. Select your web app – for example, packtpub-secureapp.
  13. On the left-hand menu, click Authentication/Authorization.
  14. Set App Service Authentication to On.
  15. Under Action to take when a request is not authenticated, choose Log in with Azure Active Directory.
  16. Under Authentication Providers, click Active Directory.
  17. On the next page, set the first Management mode option to Express, and the second Management mode option to Select Existing AD App.
  18. Click Azure AD App, and select the app registration we created in Creating the service principal in the Working with Security Principals section.
  19. Click OK. The page should look like the following. Click Save.

Figure 6.17 – Setting authentication

Wait a few minutes for the changes to take effect, then browse to the web app; for example, https://packtpub-secureapp.azurewebsites.net. You will now be prompted to log in with your Active Directory account, and once authenticated, you will be directed back to your application. If you are not prompted to sign in, open a private browsing window instead, as your credentials may already be cached in the browser.

As you can see, integrating your application into your Azure Active Directory tenant is very easy and provides a secure and seamless login experience for your users.

The first half of this section involved using a security principal to access the key vault. Service principals can be used to access many different services; however, they do rely on a client ID and secret being generated and shared.

Next, we will look at an alternative and more secure method of providing authenticated access to many Azure resources, called managed identities.

Load balancing and advanced traffic routing – Network Connectivity and Security

Many PaaS options in Azure, such as Web Apps and Functions, automatically scale as demand increases (and within limits you set). For this to function, Azure places services such as these behind a load balancer to distribute the load between them and redirect traffic from unhealthy nodes to healthy ones.

There are times when either a load balancer is not included, such as with VMs, or when you want additional functionality not provided by the standard load balancers – such as the ability to balance between regions. In these cases, we have the option to build and configure our own load balancers. You can choose from several options, each providing different capabilities depending on your requirements.

Azure Load Balancer

Azure Load Balancer allows you to distribute traffic across VMs, letting you scale apps by spreading load and offering high availability. If a node becomes unhealthy, traffic is not sent to it, as shown in the following diagram:

Figure 8.16 – Azure Load Balancer

Load balancers distribute traffic and manage the session persistence between nodes in one of two ways:

  • The default is a five-tuple hash. The tuple is composed of the source IP, source port, destination IP, destination port, and protocol type. Because the source port is included in the hash and the source port changes for each session, clients might be using different VMs between sessions. This means applications that need to maintain a state for a client between requests will not work.
  • The alternative is source IP affinity. This is also known as session affinity or client IP affinity. This mode uses a two-tuple hash (from the source IP address and destination IP address) or a three-tuple hash (from the source IP address, destination IP address, and protocol type). This ensures that a specific client’s requests are always sent to the same VM behind the load balancer. Thus, applications that need to maintain state will still function.
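
As an illustration of these two modes – a conceptual sketch in Python, not Azure's actual hashing implementation – note how the source-IP-affinity hash simply ignores the changing source port:

```python
# Conceptual sketch of load balancer distribution modes; backend names,
# field encoding, and the hash choice are illustrative assumptions.
import hashlib

BACKENDS = ["vm-0", "vm-1", "vm-2"]


def pick_backend(*fields):
    """Hash the tuple of fields deterministically and map it to a backend."""
    digest = hashlib.sha256("|".join(map(str, fields)).encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]


def five_tuple(src_ip, src_port, dst_ip, dst_port, proto):
    # Default mode: the ephemeral source port is part of the hash, so a new
    # session from the same client may land on a different VM.
    return pick_backend(src_ip, src_port, dst_ip, dst_port, proto)


def source_ip_affinity(src_ip, dst_ip):
    # Two-tuple mode: only the IPs are hashed, so every session from the
    # same client reaches the same VM, preserving server-side state.
    return pick_backend(src_ip, dst_ip)
```

With source IP affinity, repeated calls for the same client and destination always return the same backend, whatever source port each new session uses – which is precisely why stateful applications keep working in that mode.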

Load balancers can be configured to be either internal (private) or external (public) facing, and there are two SKUs – Basic and Standard. The Basic tier is free but supports only up to 300 instances, only VMs in availability sets or scale sets, and only HTTP and TCP protocols when configuring health probes. The Standard tier supports more advanced management features, such as zone-redundant frontends for inbound and outbound traffic and HTTPS health probes, and you can have up to 1,000 instances. Finally, the Standard tier has an SLA of 99.99%, whereas the Basic tier offers no SLA.