
Architecting for containerization and Kubernetes

This section will look in more detail at AKS, Microsoft’s implementation of Kubernetes. To understand what AKS is, we need to take a small step back and understand containerization and Kubernetes itself.

Containerization

As we briefly mentioned earlier, containerization is a form of virtualization: you can run multiple containers on the same hardware, much like virtual machines. Unlike virtual machines, however, containers share the underlying OS of the host, which provides much greater efficiency and density. Because there is no overhead from running multiple copies of the OS, you can run many more containers than virtual machines on the same hardware – as we can see in the following diagram:

Figure 7.10 – Containers versus virtual machines

In addition to this efficiency, containers are portable. They can easily be moved from one host to another, and this is because containers are self-contained and isolated. A container includes everything it needs to run, including the application code, runtime, system tools, libraries, and settings.

To run containers, you need a container host – the most common is Docker, and in fact, container capabilities in Azure use the Docker runtime.

A container is a running instance, and what that instance contains is defined in an image. Images can be defined in code; for Docker images, this code is called a Dockerfile.

The Dockerfile uses a specific syntax that defines the base image you wish to use – either a vanilla OS or an existing image with other tools and components on it – followed by your unique configuration options, which may include additional software to install, networking, file shares, and so on. An example Dockerfile might look like this:

# Start from an existing Node.js base image
FROM node:current-slim
# Set the working directory inside the image
WORKDIR /usr/src/app
# Copy the dependency manifest and install dependencies
COPY package.json .
RUN npm install
# Copy in the rest of the application code
COPY . .
# Document the port the application listens on
EXPOSE 8080
# Define the command to run when a container starts
CMD [ "npm", "start" ]

In this example, we start with an image called node:current-slim, set a working directory, copy the package.json file into it, and run npm install to install the application's dependencies. We then copy in the rest of the application code, expose the application over port 8080, and define the npm start command.

This Dockerfile can create a new image, but notice how it is based on an existing one. By extending existing images, you can build your containers more easily and with consistent patterns.
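As a quick illustration, assuming Docker is installed locally and the Dockerfile sits in the current directory, you could build and run the image with commands along these lines (the image name my-node-app is a placeholder):

# Build an image from the Dockerfile in the current directory
docker build -t my-node-app .

# Start a container from that image, mapping host port 8080 to the container
docker run -d -p 8080:8080 my-node-app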

The images we build, or use as a source, are held in a container registry. Docker has its own public registry, Docker Hub, but you can create your own private registry with the Azure Container Registry service in Azure.
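As a hedged sketch, the following Azure CLI commands show one way to create a private registry and build and push an image to it; the resource group and registry names are placeholders:

# Create a private container registry (the name must be globally unique)
az acr create --resource-group myResourceGroup --name myregistry01 --sku Basic

# Build the image in Azure and push it to the registry in one step
az acr build --registry myregistry01 --image my-node-app:v1 .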

Once we have created our new image and stored it in a container registry, we can deploy that image as a running container. Containers in Azure can be run using Azure Container Instances (ACI), a containerized web app, or an AKS cluster.

Web apps for containers

Web apps for containers are a great choice if your development team is already used to using Azure Web Apps to run monolithic or N-tier apps and you want to start moving toward a containerized platform. Web Apps works best when you only need one or a few long-running instances or when you would benefit from a shared or free App Service plan.

An example use case might be when you have an existing .NET app that you wish to containerize that hasn’t been built as a microservice.
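A minimal sketch of deploying a container image to a web app with the Azure CLI might look like the following; the plan, app, and image names are assumptions for illustration, and the exact flag names can vary by CLI version:

# Create a Linux App Service plan
az appservice plan create --resource-group myResourceGroup --name myPlan --is-linux --sku B1

# Create a web app that runs the container image from the registry
az webapp create --resource-group myResourceGroup --plan myPlan --name mycontainerapp01 --deployment-container-image-name myregistry01.azurecr.io/my-node-app:v1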

Azure Container Instances

ACI provides a fully managed environment for containers, and you are billed only for the time your containers run. As such, ACI suits short-lived microservices, although, like web apps for containers, you should only consider this option if you are running a few services.
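As a minimal, hedged example, the following Azure CLI command runs Microsoft's public sample image as a container instance; the group, instance, and DNS names are placeholders:

# Run a public sample image as a single container instance, billed per second
az container create --resource-group myResourceGroup --name my-aci-demo --image mcr.microsoft.com/azuredocs/aci-helloworld --ports 80 --dns-name-label my-aci-demo01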

Web apps for containers and ACI are great for simple services or when you are starting the containerization journey. Once your applications begin to fully embrace microservices and containerized patterns, you will need better control and management; for these scenarios, you should consider using AKS.
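For reference, creating a basic AKS cluster and fetching its credentials can be sketched with the Azure CLI as follows; the cluster and resource group names are placeholders:

# Create a two-node AKS cluster
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys

# Download credentials so kubectl can talk to the cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster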

What to watch out for

When running on a consumption plan, Azure Functions is best suited to short-lived tasks – for tasks that run longer than 10 minutes, you should consider alternatives or run them on an App Service plan.

You should also consider how often a function will be executed, because you pay per execution on a consumption plan. If it is continuously triggered, your costs could increase beyond those of a standard web app. Again, consider alternative approaches or the use of an App Service plan.

Finally, consumption-based function apps cannot integrate with VNets. Again, if this is required, running them on an App Service plan provides this functionality.
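To make the distinction concrete, the following hedged sketch contrasts creating a function app on a consumption plan with creating one on an existing App Service plan; all resource names and the region are placeholders:

# Consumption plan – pay per execution, with a 10-minute maximum timeout
az functionapp create --resource-group myResourceGroup --name myfuncapp01 --storage-account mystorageacct01 --consumption-plan-location westeurope --runtime dotnet --functions-version 3

# App Service plan – fixed cost, longer timeouts, and VNet integration
az functionapp create --resource-group myResourceGroup --name myfuncapp02 --storage-account mystorageacct01 --plan myPlan --runtime dotnet --functions-version 3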

Logic Apps

Azure Logic Apps is another serverless option – when creating logic apps, you do not need to be concerned with how much RAM or CPU to provision; instead, you pay per execution or trigger.

Important note

Consumption versus fixed price: Many serverless components, including Logic Apps and Functions, can be run in isolated environments – in the case of Logic Apps, an Integration Service Environment (ISE) – whereby you pay for provisioned resources in the same way as a virtual machine.

Logic Apps shares many concepts with Azure Functions; you can define triggers, actions, flow logic, and connectors for communicating with other services. Whereas you define this in code with Functions, Logic Apps provides a drag-and-drop interface that allows you to build workflows quickly.

Logic Apps has hundreds of pre-built connectors that allow you to interface with a wide range of systems – not just in Azure but also externally. By combining these connectors with if-then-else-style logic flows and either scheduled or action-based triggers, you can develop complex workflows without writing a single line of code.

The following screenshot shows a typical workflow built purely in the Azure portal:

Figure 7.7 – Logic Apps example

With its extensibility features, you can also create your own custom logic and connectors for integrating with your own services.

Finally, although solutions can be built entirely in the Azure portal, you can also create workflows using traditional development tools such as Visual Studio or Visual Studio Code. This is possible because solutions are defined as ARM templates, which enables developers to define workflows and store them in code repositories. You can then automate deployments through DevOps pipelines.
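For example, a workflow exported as an ARM template could be deployed from a pipeline with a single Azure CLI command; the template file name here is hypothetical:

# Deploy a Logic Apps workflow defined in an ARM template
az deployment group create --resource-group myResourceGroup --template-file logicapp-workflow.json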

What to watch out for

Logic Apps provides a quick and relatively simple mechanism for creating business workflows. When you need to build more complex business logic or create custom connectors, you need to balance the difficulty of doing so against an alternative approach such as Azure Functions. Logic Apps still requires a level of developer experience and is not suitable if business users need to develop and amend the workflows themselves.

Power Automate

Power Automate, previously called Microsoft Flow, is also a GUI-driven workflow creation tool that allows you to build automated business processes. Like Logic Apps, Power Automate lets you define triggers and logic flows connected to other services, such as email, storage, or apps, through built-in connectors.

The most significant difference between Power Automate and Logic Apps is that Power Automate workflows can only be built via the drag-and-drop interface – you cannot edit or store the underlying code.

Therefore, the primary use case for Power Automate is for office workers and business analysts to create simple workflows that can use only the built-in connectors.

What to watch out for

Azure Batch is, of course, not suited to interactive applications such as websites, or to services that must store files locally for extended periods – although, as already discussed, it can output results to Azure Storage.

Service Fabric

Modern applications are often built or run as microservices – smaller components that can be scaled independently of other services. To achieve greater efficiency, it is common to run multiple services on the same VM. However, because an application may be built from numerous services, each of which needs to scale, the work of managing, distributing, and scaling them – known as orchestration – can become difficult.

Azure Service Fabric is a container orchestrator that makes the management and deployment of software packages onto scalable infrastructure easier.

The following diagram shows a typical Service Fabric architecture; applications are deployed to VMs or VM scale sets:

Figure 7.3 – Azure Service Fabric example architecture

It is particularly suited to .NET applications that would traditionally run on a virtual machine, and one of its most significant benefits is that it supports stateful services. Service Fabric powers many of Microsoft’s services, such as Azure SQL, Cosmos DB, Power BI, and others.

Tip

When building modern applications, there is often discussion around stateful and stateless applications. When a client is communicating with a backend system, such as a website, you need to keep track of those requests – for example, when a user logs in, how can you confirm the next request is from that same client? This is known as state. Stateless applications expect the client to track this information and provide it back to the server with every request – usually in the form of a token validated by the server. With stateful applications, the server keeps track of the client, but this requires the client to always use the same backend server – which is more difficult when your systems are spread across multiple servers.

Using Service Fabric enables developers to build distributed systems without worrying about how those systems scale and communicate. It is an excellent choice for moving existing applications into a scalable environment without the need to completely re-architect.

What to watch out for

You will soon see that there are many similarities between Service Fabric and AKS clusters – one of the most significant differences between the two is portability. Because Service Fabric is tightly integrated into Azure and other Microsoft technologies, it may not work well if you need to move the solution to another platform.

Comparing compute options

Each type of compute has its own set of strengths; however, each also has its primary use cases, and therefore, might not be suitable for some scenarios.

Virtual machines

As the closest technology to existing on-premises systems, VMs are best suited to use cases requiring fast migration to the cloud, or to legacy systems that cannot run on other services without reworking the application.

The ability to quickly provision, test, and destroy a VM makes them ideal for testing and developing products, especially when you need to ascertain how a particular piece of software works on different operating systems.

Sometimes a solution may have stringent security requirements that prevent the use of shared compute. Running such applications on VMs helps ensure processing is not shared, and through the use of dedicated hosts, you can even provision your own physical hardware to run those VMs on.

What to watch out for

To make VMs scalable and resilient, you must architect and deploy supporting technologies or configure the machines accordingly. By default, a single VM is not resilient. Failure of the physical hardware can disrupt services, and the servers do not scale automatically.

Building multiple VMs in availability sets and across Availability Zones can protect you against many such events, and scale sets allow you to configure automatic scaling. However, these are optional configurations and may require additional components such as load balancers. These options require careful planning and can increase costs.
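As an illustrative sketch, a zone-spanning scale set can be created with the Azure CLI as follows; all names are placeholders, and the image alias may vary by CLI version:

# Create a scale set of two instances spread across Availability Zones;
# a load balancer is created automatically in front of the instances
az vmss create --resource-group myResourceGroup --name myScaleSet --image UbuntuLTS --instance-count 2 --zones 1 2 --admin-username azureuser --generate-ssh-keys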

Important note

We will cover availability sets and scale sets in more detail in Chapter 14, High Availability and Redundancy Concepts.

Azure Batch

With Azure Batch, you create applications that perform specific tasks, which run in node pools. Node pools can contain thousands of VMs that are created, run a task, and are then decommissioned. No information is stored on the VMs themselves. However, the input and output of datasets can be achieved by reading and writing to Azure storage accounts.

Azure Batch is suited to the parallel processing of tasks and high-performance computing (HPC) batch jobs. Being able to provision thousands of VMs for short periods, combined with per-second billing, ensures efficient costs for such projects.

The following diagram shows how a typical batch service might work. As we can see, input files can be ingested from Azure Storage by the Batch service, which then distributes them to nodes in a node pool for processing. The code that performs the processing is held within Azure Batch as a ZIP file. All output is then sent back out to the storage account:

Figure 7.2 – Pool, job, and task management

Some examples of a typical workload may include the following:

  • Financial risk modeling
  • Image and video rendering
  • Media transcoding
  • Large data imports and transformation

With Azure Batch, you can also opt for low-priority VMs – these are cheaper but do not have guaranteed availability. Instead, they are allocated from surplus capacity within the data center. In other words, you must wait for the surplus compute to become available.
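A hedged sketch of creating a pool that mixes dedicated and low-priority nodes might look like this; the account name, pool ID, VM size, and image details are assumptions for illustration:

# Log in to the Batch account so subsequent commands target it
az batch account login --name mybatchaccount --resource-group myResourceGroup

# Create a pool with 2 dedicated nodes and 8 cheaper low-priority nodes
az batch pool create --id mypool --vm-size Standard_D2s_v3 --target-dedicated-nodes 2 --target-low-priority-nodes 8 --image "canonical:ubuntuserver:18.04-lts" --node-agent-sku-id "batch.node.ubuntu 18.04"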

Understanding different types of compute

  • Scalability

Different services have different methods for scaling. Legacy applications may need to use traditional load balancing methods by building VMs in web farms with load balancers in front to distribute the load.

Modern web applications can make use of App Service or Azure Functions, which scale automatically without the need for additional components.

  • Availability

Each Azure service has a Service-Level Agreement (SLA) that determines a baseline for how much uptime a service offers. The mix of components used can also affect this value. For example, a single VM using premium storage has an SLA of 99.9%, whereas two VMs across Availability Zones with a load balancer in front have an SLA of 99.99%.

Azure Functions and App Service have an SLA of 99.95% without any additional components.

Important note

Service-Level Agreements (SLAs) define specific metrics by which a service is measured. In Azure, it is the amount of time any particular service is agreed to be available for. This is usually measured as a percentage of that uptime – for example, 99.95% (referred to as three and a half nines) or 99.99% (referred to as four nines). Your choice of components and how they are architected will impact the SLA Microsoft offers.

An SLA of 99.95% means up to 4.38 hours of downtime a year is allowed, whereas 99.99% means only 52.60 minutes are permitted.
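As a quick worked example: with roughly 8,766 hours in a year, 99.95% availability allows (1 − 0.9995) × 8,766 ≈ 4.38 hours of downtime, while 99.99% allows (1 − 0.9999) × 8,766 × 60 ≈ 52.6 minutes.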

  • Security

As services move from IaaS to PaaS and FaaS, the security responsibility shifts. For VMs, Microsoft is responsible for the physical security and underlying infrastructure, whereas you are responsible for patching, anti-virus software, and the applications that run on them. For PaaS and FaaS, Microsoft is also responsible for securing the service itself. However, you need to be careful of configuration elements within the service that may not be compliant with your requirements.

For some organizations, all traffic flow needs to be tightly controlled, especially for internal services; most PaaS solutions support this but only as a configurable option, which can sometimes increase costs.

  • Cost

FaaS provides a very granular cost model in that you pay for execution time, whereas IaaS and some PaaS services demand that you provision set resources based on required CPU and RAM. For example, a VM incurs costs as long as it is running, which for many use cases is continually.

When migrating existing legacy applications, this may be the only option, but it isn’t the most efficient from a cost perspective. Refactoring applications may cost more upfront but could be cheaper in the long run as they only consume resources and incur costs periodically.

Similarly, a new microservice built to respond to events on an ad hoc basis would suit an Azure function, whereas the same process running on a VM would not be cost-effective.

  • Architecture styles

How an application is designed can directly impact the choice of technology. VMs are best suited to older architectures such as N-tier, whereas microservice and event-driven patterns are well suited to Azure Functions or containerization.

  • User skills

Azure provides several technologies for no-code development. Power Automate, the workflow development tool, is specifically built to allow end users with no development knowledge to quickly create simple workflows.

As you can see, to decide on a compute technology, you must factor in many different requirements. The following chart shows a simple workflow to help in this process:

Figure 7.1 – Compute options workflow

Next, we will look in more detail at each service and provide example use cases.

Understanding different types of compute

In the previous chapter, we looked at how to secure our Azure applications using key vaults, security principals, and managed identities.

When building solutions in Azure, many components use some form of compute – such as a virtual machine (VM). However, there are many different types of compute, each with its own strengths. Therefore, in this chapter, we focus on the different types of compute services available to us and which options are best suited to which scenarios.

We will then look at how to maintain the security and health of VMs by ensuring they are always up to date with the latest OS patches.

Finally, we’ll look at containerization and how we can use Azure Kubernetes Service (AKS).

With this in mind, we will be covering the following topics:

  • Understanding different types of compute
  • Automating virtual machine management
  • Architecting for containerization and Kubernetes

Technical requirements

This chapter will use the Azure portal (https://portal.azure.com) for examples.

Understanding different types of compute

When we architect solutions, there will often be at least one component that needs to host, or run, an application. The application could be built specifically for the task or an off-the-shelf package bought from a vendor.

Azure provides several compute services for hosting your application; each type can be grouped into one of three kinds of hosting model:

  • Infrastructure as a Service (IaaS): VMs are within this category and support services such as storage (that is, disk drives) and networking. IaaS is the closest to a traditional on-premises environment, except Microsoft manages the underlying infrastructure, including hardware and the host operating system. You are still responsible for maintaining the guest operating system, however, including patching, monitoring, anti-virus software, and so on.
  • Platform as a Service (PaaS): Azure App Service is an example of a PaaS component. With PaaS, you do not need to worry about the operating system (other than to ensure what you deploy to it is compatible). Microsoft manages all maintenance, patching, and anti-virus software; you simply deploy your applications to it. When provisioning PaaS components, you generally specify an amount of CPU and RAM, and your costs will be based on this.
  • Serverless or Function as a Service (FaaS): FaaS, or serverless, is at the opposite end of the spectrum from IaaS. With FaaS, any notion of CPU, RAM, or management is completely abstracted away; you simply deploy your code, and the required resources are utilized to perform the task. Because of this, FaaS pricing models are calculated on exact usage, for example, the number of executions, as opposed to IaaS, where pricing is based on the provisioned RAM and CPU.

Some services may appear to blur the line between the hosting options; for example, VMs can be built as scale sets that automatically scale out and in on demand.

Generally, as you move from IaaS to FaaS, management becomes easier; however, control, flexibility, and portability are lost.

When choosing a compute hosting model for your solution, you will need to consider many factors:

  • Deployment and compatibility

Not all applications can run on all services without modification. Older applications may have dependencies on installed services or can only be deployed via traditionally installed packages. For these legacy systems, an IaaS approach might be the only option.

Conversely, a modern application built using Agile DevOps processes, with regularly updated and redeployed components, might be better suited to Web Apps or Azure Functions.

  • Support

Existing enterprise systems typically have support teams and processes embedded within the organization and will be used to patch and update systems in line with existing support processes.

Smaller companies may have fewer IT resources to provide these support tasks. Therefore, they would benefit significantly from PaaS or FaaS systems that do not require maintenance as the Azure platform handles this.

Using managed identities in web apps

In the following walkthrough, we will replace the key vault access that used a client ID and secret. This time, we will use an AzureServiceTokenProvider, which will use the assigned managed identity instead:

  1. Open your web app in Visual Studio Code.
  2. Open a Terminal window within Visual Studio Code and enter the following to install an additional NuGet package:
    dotnet add package Microsoft.Azure.Services.AppAuthentication
  3. Open the Program.cs file and add the following using statements to the top of the page:
    using Microsoft.Azure.KeyVault;
    using Microsoft.Azure.Services.AppAuthentication;
    using Microsoft.Extensions.Configuration.AzureKeyVault;
  4. Modify the CreateHostBuilder method as follows:
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((ctx, builder) =>
            {
                // Acquires tokens using the web app's managed identity
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(
                    new KeyVaultClient.AuthenticationCallback(
                        azureServiceTokenProvider.KeyVaultTokenCallback));
                // Pass the authenticated client to the configuration builder
                builder.AddAzureKeyVault(
                    "https://packtpubkeyvault01.vault.azure.net/",
                    keyVaultClient,
                    new DefaultKeyVaultSecretManager());
            })
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
  5. Open a Terminal window in Visual Studio Code to rebuild and republish the application by entering the following:
    dotnet build
    dotnet publish -c Release -o ./publish
  6. Next, right-click the publish folder and select Deploy Web App.
  7. Select your subscription and the web app to deploy to when prompted.
  8. Once deployed, browse to your website.

Your website is accessing the secret from the key vault as before; only this time, it is using the managed identity.

In this section, we have replaced a service principal with a managed identity. The use of managed identities offers a more secure way of connecting services as login details are never exposed.

Summary

This chapter covered three tools in Azure that can help secure our applications, particularly around managing data encryption keys and authentication between systems.

We looked at how to use key vaults for creating and managing secrets and keys and how we can then secure access to them using Access policies. We also looked at how we can use security principals and managed identities to secure our applications.

This chapter also concluded the Identity and Security requirement of the AZ-304 exam, looking at authentication, authorization, system governance, and application-level security.

Next, we will look at how we architect solutions around specific Azure infrastructure and storage components.

Exam Scenario

The solutions to the exam scenarios can be found at the end of the book.

Mega Corp plans a new internal web solution consisting of a frontend web app, multiple middle-tier API apps, and a SQL database.

The database’s data is highly sensitive, and the leadership team is concerned that providing database connection strings to the developers would compromise data protection laws and industry compliance regulations.

Part of the application includes the storage of documents in a Blob Storage account; however, the leadership team is not comfortable with Microsoft managing the encryption keys.

As this is an internal application, authentication needs to be integrated into the existing Active Directory. Also, each of the middle-tier services needs to know who the logged-in user is at all times – in other words, any authentication mechanism needs to pass through all layers of the system.

Design a solution that will alleviate the company’s security concerns but still provides a robust application.