Azure App Service recently introduced a feature called Run From Package. Rather than uploading our application binaries and other files to an App Service directly, we can instead package them into a zip file and provide App Services with the URL. This is a useful feature because it eliminates issues with file locking during deployments, it allows for atomic updates of application code, and it reduces the time required to boot an application. It also means that the 'release' of an application simply involves the deployment of a configuration setting. And because Azure Functions runs on top of App Services, the same technique can be used for Azure Functions too.
While we can store the packages anywhere on the internet and provide App Services with a URL to find them, a common approach is to use Azure Storage blobs to host the packages. Azure Storage is a relatively cheap and easy way to host files and get URLs to them, making it perfect for this type of deployment. Additionally, by permanently storing our packages in an Azure Storage account we keep a permanent record of all deployments - and we can even use features of Azure Storage like immutable blobs to ensure that the blobs can't be tampered with or deleted.
However, there are a few considerations to think through when using storage accounts in conjunction with App Services. In this post I'll describe one way to use this feature with Azure DevOps build and release pipelines, and some of the pros and cons of this approach.
Storage Account Considerations
When provisioning an Azure Storage account to contain your application packages, there are a few things you should consider.
First, I'd strongly recommend using SAS tokens in conjunction with a container access policy to ensure that your packages can't be accessed by anyone who shouldn't access them. Typically application packages that are destined for an App Service aren't files you want to be made available to everyone on the internet.
Second, consider the replication options you have for your storage account. For a storage account that backs an App Service, I'd generally recommend using the RA-GRS replication type to ensure that even if Azure Storage has a regional outage, App Services can still access the packages. This needs to be considered as part of a wider disaster recovery strategy, though. Also remember that if the primary region for your Azure Storage account becomes unavailable, you'll need to do some manual work to switch your App Service to read its package from the secondary region.
Third, virtual network integration is not currently possible for storage accounts that are used by App Services, although it should be soon. Today, Azure allows joining an App Service into a virtual network, but service endpoints - the feature that allows blocking access to Azure Storage outside of a virtual network - aren't supported by App Services yet. This feature is in preview, though, so I'm hopeful that we'll be able to lock down the storage accounts used by App Services packages soon.
Fourth, consider who deploys the storage account(s), and how many you need. In most cases, having a single storage account per App Service is going to be too much overhead to maintain. But if you decide to share a single storage account across many App Services throughout your organisation, you'll need to consider how to share the keys to generate SAS tokens.
Fifth, consider the lifetime of the application versus the storage account. In Azure, resource groups provide a natural boundary for resources that share a common lifetime. An application and all of its components would typically be deployed into a single resource group. If we're dealing with a storage account that may be used across multiple applications, it would be best to have the storage account in its own resource group so that decommissioning one application won't result in the storage account being removed, and so that Azure RBAC permissions can be separately assigned.
Using Azure Pipelines
Azure Pipelines is the new term for what used to be called VSTS's build and release management features. Pipelines let us define the steps involved in building, packaging, and deploying our applications.
There are many different ways we can create an App Service package from Azure Pipelines, upload it to Azure Storage, and then deploy it. Each option has its own pros and cons, and the choice will often depend on how pure you want your build and release processes to be. For example, I prefer that my build processes are completely standalone, and don't interact with Azure at all if possible. They emit artifacts and the release process then manages the communication with Azure to deploy them. Others may not be as concerned about this separation, though.
I also insist on 100% automation for all of my build and release tasks, and vastly prefer to script things out in text files and commit them to a version control system like Git, rather than changing options on a web portal. Therefore I typically use a combination of build YAMLs, ARM templates, and PowerShell scripts to build and deploy applications.
It's fairly straightforward to automate the creation, upload, and deployment of App Service packages using these technologies. In this post I'll describe one approach that I use to deploy an App Service package, but towards the end I'll outline some variations you might want to consider.
I'll use the hypothetical example of an ASP.NET web application. I'm not going to deploy a real application here - just the placeholder code that Visual Studio generates when creating a new ASP.NET MVC app - but this could easily be replaced with a real application. Similarly, you could replace this with a Node.js or PHP app, or any other language and framework supported by App Service.
You can access the entire repository on GitHub here, and I've included the relevant pieces below.
Step 1: Deploy Storage Account
The first step is to deploy a storage account to contain the packages. My preference is for a storage account to be created separately from the build and release process. As I noted above, this helps with reusability - that account can be used for any number of applications you want to deploy - and it also ensures that you're not tying the lifetime of your storage account to the lifetime of your application.
I've provided an ARM template below that deploys a storage account with a unique name, and creates a blob container called `packages` that is used to store all of our packages. The template also outputs the connection details necessary to upload the blobs and generate a SAS token in later steps.
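As a rough sketch, a template along these lines would do the job - the account naming scheme, SKU, and API versions here are illustrative assumptions rather than the exact template from the repository:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "variables": {
    "storageAccountName": "[concat('pkgs', uniqueString(resourceGroup().id))]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[variables('storageAccountName')]",
      "apiVersion": "2018-07-01",
      "location": "[resourceGroup().location]",
      "kind": "StorageV2",
      "sku": { "name": "Standard_RAGRS" },
      "resources": [
        {
          "type": "blobServices/containers",
          "name": "default/packages",
          "apiVersion": "2018-07-01",
          "dependsOn": [ "[variables('storageAccountName')]" ]
        }
      ]
    }
  ],
  "outputs": {
    "storageAccountName": {
      "type": "string",
      "value": "[variables('storageAccountName')]"
    }
  }
}
```

The `uniqueString` function gives us a deterministic but unique account name, and the nested resource creates the `packages` blob container alongside the account.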
Step 2: Build Definition
Our Azure Pipelines build definition is pretty straightforward. All we do here is build our app and publish a build artifact. In Azure Pipelines, 'publishing' a build artifact means that it's available for releases to use - the build process doesn't actually save the package to the Azure Storage account. I've used a build YAML file to define the process, which is provided here:
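As a sketch, a build YAML along these lines would cover those two steps - the task versions, VM image, and artifact name are assumptions:

```yaml
# Hedged sketch of the build definition; adjust tasks for your app's framework.
pool:
  vmImage: 'vs2017-win2016'

steps:
- task: DotNetCoreCLI@2
  displayName: Publish web app as a zip package
  inputs:
    command: publish
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true

- task: PublishBuildArtifacts@1
  displayName: Publish build artifact for the release to consume
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'package'
```

Note that the build stays completely standalone: it only emits a zip file as an artifact, and never touches Azure.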
Step 3: Release Definition
Our release definition runs through the remaining steps. I've provided a PowerShell script that executes the release process:
First, it uploads the build artifacts to our storage account's `packages` container. It generates a unique filename for each blob, ensuring that every release is fully independent and won't accidentally overwrite another release's package.
Next, it generates a SAS token for the packages it's uploaded. The token just needs to provide read access to the blob. I've used a 100-year expiry, but you could shorten this if you need to - just make sure you don't make it too short, or App Services won't be able to boot your application once the expiry date passes.
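These first two steps might look roughly like the following sketch, using the Az PowerShell module - the variable values, artifact path, and resource names are assumptions to adjust for your own pipeline:

```powershell
# Hedged sketch: upload the package and generate a read-only SAS token.
$resourceGroupName  = 'MyStorageResourceGroup'
$storageAccountName = 'mystorageaccount'
$packagePath        = "$env:SYSTEM_ARTIFACTSDIRECTORY\MyBuild\package\app.zip"

# Unique blob name so each release gets its own independent copy of the package
$blobName = "$([Guid]::NewGuid().ToString()).zip"

$storageKey = (Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName)[0].Value
$context = New-AzStorageContext -StorageAccountName $storageAccountName `
    -StorageAccountKey $storageKey

Set-AzStorageBlobContent -Container 'packages' -File $packagePath `
    -Blob $blobName -Context $context | Out-Null

# Read-only SAS with a long expiry so App Service can always fetch the package
$sasToken = New-AzStorageBlobSASToken -Container 'packages' -Blob $blobName `
    -Permission r -ExpiryTime (Get-Date).AddYears(100) -Context $context

# The token includes its leading '?', so it can be appended to the blob URL
$packageUrl = "https://$storageAccountName.blob.core.windows.net/packages/$blobName$sasToken"
```

The resulting `$packageUrl` is what gets passed into the ARM deployment in the final step.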
Finally, it deploys the App Service instance using an ARM template, and passes the full URL to the package - including its SAS token - into the ARM deployment parameters. The key part for the purposes of this post is the app settings section of the template, where we create a new app setting called `WEBSITE_RUN_FROM_PACKAGE` and set it to the full blob URL, including the SAS token part. Here's the ARM template we execute from the script:
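The relevant fragment of the `Microsoft.Web/sites` resource might look like this - the parameter names and API version are assumptions:

```json
{
  "type": "Microsoft.Web/sites",
  "name": "[parameters('appName')]",
  "apiVersion": "2016-08-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]",
    "siteConfig": {
      "appSettings": [
        {
          "name": "WEBSITE_RUN_FROM_PACKAGE",
          "value": "[parameters('packageUrl')]"
        }
      ]
    }
  }
}
```

Because the SAS token is embedded in the `packageUrl` parameter value, it's worth marking that parameter as `securestring` in the template so it doesn't appear in deployment logs.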
Note that if you want to use this PowerShell script in your own release process, you'll want to adjust the variables so that you're using the correct source alias name for the artifact, as well as the correct resource group name.
Pros and Cons
The approach I've outlined here has a few benefits: it allows for a storage account to be shared across multiple applications; it keeps the build process clean and simple, and doesn't require the build to interact with Azure Storage; and it ensures that each release runs independently, uploading its own private copy of a package.
However there are a few problems with it. First, the storage account credentials need to be available to the release definition. This may not be desirable if the account is shared by multiple teams or multiple applications. Second, while having independent copies of each package is useful, it also means there's some wasted space if we deploy a single app multiple times.
If these are concerns for you, there are a number of alternatives you could consider, depending on which issue matters most.
If your concern is that credentials are being shared, then you could consider creating a dedicated storage account as part of the release process. The release process can provision the storage account (if it doesn't already exist), retrieve the keys to it, upload the package, generate a SAS token, and then deploy the App Service with the URL to the package. The storage account's credentials would never leave the release process. Of course, this also makes it harder to share the storage account across multiple applications.
Keeping the storage account with the application also makes the release more complicated, since you can no longer deploy everything in a single ARM template deployment operation. You'd need at least two ARM deployments, with some scripting required in between. The first ARM template deployment would deploy the storage account and container. You'd then execute some PowerShell or another script to upload the package and generate a SAS token. Then you could execute a second ARM template deployment to deploy the App Service and point it to the package URL.
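The two-phase release described above might be sketched like this - the template file names, parameter names, and resource group name are assumptions:

```powershell
# Hedged sketch of a two-phase release within a single resource group.
# Phase 1: deploy the storage account and container.
New-AzResourceGroupDeployment -ResourceGroupName 'MyAppResourceGroup' `
    -TemplateFile 'storage.template.json'

# ... in between: retrieve the account keys, upload the package blob, and
# generate a SAS token, producing a full package URL in $packageUrl ...

# Phase 2: deploy the App Service, pointing it at the uploaded package.
# Template parameters surface as dynamic cmdlet parameters, hence -packageUrl.
New-AzResourceGroupDeployment -ResourceGroupName 'MyAppResourceGroup' `
    -TemplateFile 'appservice.template.json' `
    -packageUrl $packageUrl
```

The storage account's keys never leave the release process here, at the cost of the extra deployment step.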
Another alternative is to pre-create SAS tokens for your deployments to use. One SAS token would be used for the upload of the blobs (and would therefore need write permissions assigned), while a second would be used for the App Service to access all blobs within the container (and would only need read permissions assigned).
Yet another alternative is to use the preview Azure RBAC feature of Azure Storage to authenticate the release process to the storage account. This is outside the scope of this post, but this approach could be used to delegate permissions to the storage account without sharing any account keys.
If your concern is that the packages may be duplicated, you have a few options. One is to simply not create unique names during each release, but instead use a naming scheme that results in consistent names for the same build artifacts - for example, a package filename derived from the build number. Subsequent releases could check whether the package already exists and leave it alone if it does.
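A sketch of that check, assuming a build-number naming convention and that `$context` and `$packagePath` are set up as in the release script:

```powershell
# Hedged sketch: skip the upload when a package for this build already exists.
# The blob naming convention here is an assumption.
$blobName = "$env:BUILD_BUILDID.zip"

$existingBlob = Get-AzStorageBlob -Container 'packages' -Blob $blobName `
    -Context $context -ErrorAction SilentlyContinue

if ($null -eq $existingBlob) {
    Set-AzStorageBlobContent -Container 'packages' -File $packagePath `
        -Blob $blobName -Context $context | Out-Null
}
```

Since the same build artifact always maps to the same blob name, redeploying a build reuses the existing package instead of duplicating it.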
If you don't want to use Azure Storage at all, you can also upload a package directly to the App Service's `d:\home\data\SitePackages` folder. This way you gain some of the benefits of the Run From Package feature - namely the speed and atomicity of deployments - but lose the advantage of having a simpler deployment with immutable blobs. This is documented on the official documentation page. Also, you can of course use any file storage system you like, such as Amazon S3, to host your packages.
Also, bear in mind that App Services on Linux don't currently support the Run From Package feature at all.