The reliable execution model of Durable Functions requires that orchestrations be deterministic, which creates a challenge when you deploy updates. When a deployment contains breaking changes — such as modified activity function signatures or altered orchestrator logic — in-flight orchestration instances fail. This situation is especially a problem for long-running orchestrations, which might represent hours or days of work.
Note
The strategies in this article assume you're using the default Azure Storage provider for Durable Functions. If you're using a different storage provider, the guidance may not apply. The orchestration versioning strategy is the exception — it works with any storage backend. For more information on storage provider options, see Durable Functions storage providers.
The following table compares four strategies for achieving zero-downtime deployment. Choose the strategy that best matches your workload:
| Strategy | When to use | Pros | Cons |
|---|---|---|---|
| Orchestration versioning (recommended) | Applications with breaking changes that need multiple orchestration versions running concurrently. | Enables zero-downtime deployments with breaking changes. Built-in feature requiring minimal configuration. Works with any storage backend. | Requires careful orchestrator code modifications for version compatibility. |
| Name-based versioning | Applications with infrequent breaking changes where simplicity is preferred. | Simple to implement. | Increased function app size in memory and number of functions. Code duplication. |
| Status check with slot | Systems with short-lived orchestrations (under 24 hours) and predictable gaps between executions. | Simple code base. Doesn't require additional function app management. | Requires additional storage account or task hub management. Requires periods of time when no orchestrations are running. |
| Application routing | Systems with continuously running orchestrations (over 24 hours) or frequently overlapping executions with no idle windows. | Handles new versions of systems with continually running orchestrations that have breaking changes. | Requires an intelligent application router. Could max out the number of function apps allowed by your subscription (default is 100). |
Orchestration versioning
The orchestration versioning feature is the recommended strategy for zero-downtime deployments with breaking changes. It enables different versions of orchestrations to coexist and execute concurrently without conflicts.
With orchestration versioning:
- Each orchestration instance gets a version permanently associated with it when created.
- Workers running newer orchestrator versions can continue executing older version instances.
- Workers running older orchestration versions can't execute newer version instances.
- Orchestrator functions can examine their version and branch execution accordingly.
This approach facilitates rolling upgrades where workers running different versions of your application can coexist safely. Unlike the other strategies in this article, orchestration versioning is backend agnostic and works with any storage provider.
For full implementation steps — including how to configure versioning, handle version branching in orchestrator code, and manage rolling upgrades — see Orchestration versioning.
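As a sketch of the version-branching idea, an orchestrator can inspect the version pinned to its instance and choose between old and new logic. This is a minimal illustration only: the exact property name (shown here as `context.Version`) and SDK surface depend on your language and programming model, and the activity names are hypothetical — see the Orchestration versioning article for the actual API.

```csharp
[FunctionName("MyOrchestrator")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Assumption: the versioning feature exposes the instance's pinned
    // version on the orchestration context. Instances created before the
    // deployment keep their old version forever, so they replay the old path.
    if (Version.Parse(context.Version) >= new Version(2, 0))
    {
        // Logic introduced by the new deployment.
        await context.CallActivityAsync("ProcessOrderV2", null);
    }
    else
    {
        // Original logic, preserved so in-flight older instances replay correctly.
        await context.CallActivityAsync("ProcessOrder", null);
    }
}
```

Because the version is fixed at creation time, the branch taken is deterministic across replays, which keeps the orchestrator compliant with the deterministic-code constraint.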
The remaining strategies are alternatives for scenarios where orchestration versioning isn't suitable.
Name-based versioning
With this strategy, you create new versions of your functions alongside the old versions in the same function app. Each function's version becomes part of its name (for example, MyOrchestrator_v1, MyOrchestrator_v2). Because previous versions are preserved, in-flight orchestration instances can continue to reference them. Requests for new orchestration instances call the latest version, which your orchestration client function can reference from an app setting. The following diagram illustrates this approach.
In this strategy, every function must be copied, and its references to other functions must be updated. You can make it easier by writing a script. Here's a sample project with a migration script.
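The orchestration client side of this pattern can resolve the latest orchestrator name from an app setting, so that new instances always target the newest version while in-flight instances keep referencing the older, still-deployed functions. This is a sketch; the setting name `OrchestratorVersion` is illustrative.

```csharp
[FunctionName("HttpStart")]
public static async Task<IActionResult> HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient client)
{
    // "OrchestratorVersion" is an illustrative app setting name, e.g. "v2".
    // Updating this setting at deployment time points new work at the
    // latest orchestrator without touching running instances.
    string version = Environment.GetEnvironmentVariable("OrchestratorVersion");

    string instanceId = await client.StartNewAsync($"MyOrchestrator_{version}", null);
    return new OkObjectResult(new { id = instanceId });
}
```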
Note
This strategy uses deployment slots to avoid downtime during deployment. For more detailed information about how to create and use new deployment slots, see Azure Functions deployment slots.
Status check with slot
While the current version of your function app is running in your production slot, deploy the new version of your function app to your staging slot. Before you swap your production and staging slots, check to see if there are any running orchestration instances. After all orchestration instances are complete, you can do the swap. This strategy works when you have predictable periods when no orchestration instances are in flight. This is the best approach when your orchestrations aren't long-running and when your orchestration executions don't frequently overlap.
Function app configuration
Use the following procedure to set up this scenario.
1. Add deployment slots to your function app for staging and production.
2. For each slot, set the AzureWebJobsStorage application setting to the connection of a shared storage account. This storage account connection is used by the Azure Functions runtime to securely store the functions' access keys. For the highest level of security, use a managed identity connection to your storage account.
3. For each slot, create a new app setting, for example, DurableManagementStorage, and set its value to the connection string of a different storage account. These storage accounts are used by the Durable Functions extension for reliable execution. Use a separate storage account for each slot, and don't mark this setting as a deployment slot setting. Again, managed identity-based connections are the most secure.
4. In your function app's host.json file's durableTask section, specify connectionStringName (Durable 2.x) or azureStorageConnectionStringName (Durable 1.x) as the name of the app setting you created in step 3.
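Steps 1-3 can be scripted with the Azure CLI. The commands below are a sketch; the resource names and connection strings are placeholders.

```shell
# 1. Create a staging slot (illustrative resource names).
az functionapp deployment slot create \
  --name my-func-app --resource-group my-rg --slot staging

# 3. Point each slot's Durable Functions state at its own storage account.
#    Note: plain --settings, NOT --slot-settings — the value must swap
#    together with the app code during a slot swap.
az functionapp config appsettings set \
  --name my-func-app --resource-group my-rg \
  --settings "DurableManagementStorage=<connection-string-A>"

az functionapp config appsettings set \
  --name my-func-app --resource-group my-rg --slot staging \
  --settings "DurableManagementStorage=<connection-string-B>"
```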
The following diagram shows the described configuration of deployment slots and storage accounts. In this potential predeployment scenario, version 2 of a function app is running in the production slot, while version 1 remains in the staging slot.
host.json example
The following JSON fragment shows the connection string setting in the host.json file.
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "MyTaskHub",
      "storageProvider": {
        "connectionStringName": "DurableManagementStorage"
      }
    }
  }
}
Note
For legacy Functions 1.x apps, use the azureStorageConnectionStringName property directly in the durableTask section instead of storageProvider.connectionStringName.
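For reference, the equivalent Functions 1.x fragment might look like the following. In 1.x, the durableTask section sits at the root of host.json rather than under extensions.

```json
{
  "durableTask": {
    "hubName": "MyTaskHub",
    "azureStorageConnectionStringName": "DurableManagementStorage"
  }
}
```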
CI/CD pipeline configuration
Configure your CI/CD pipeline to deploy only when your function app has no pending or running orchestration instances. When you're using Azure Pipelines, you can create a function that checks for these conditions, as in the following C# example. The same pattern applies to other languages — query for orchestration instances with Pending or Running status and return whether any exist.
[FunctionName("StatusCheck")]
public static async Task<IActionResult> StatusCheck(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient client,
    ILogger log)
{
    // Look for orchestration instances that haven't finished yet.
    var runtimeStatus = new List<OrchestrationRuntimeStatus>
    {
        OrchestrationRuntimeStatus.Pending,
        OrchestrationRuntimeStatus.Running
    };

    var result = await client.ListInstancesAsync(
        new OrchestrationStatusQueryCondition { RuntimeStatus = runtimeStatus },
        CancellationToken.None);

    // The deployment gate proceeds only when HasRunning is false.
    return new OkObjectResult(new { HasRunning = result.DurableOrchestrationState.Any() });
}
Next, configure the staging gate to wait until no orchestrations are running. For more information, see Release deployment control using gates.
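If you use the Invoke Azure Function gate to call the StatusCheck function, the gate configuration might look like the following. The URL and key are placeholders; the success criteria expression checks the HasRunning field in the function's JSON response.

```
Azure function URL:  https://<your-app>-staging.azurewebsites.net/api/StatusCheck
Function key:        <function key for StatusCheck>
Completion event:    ApiResponse
Success criteria:    eq(root['HasRunning'], false)
```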
Azure Pipelines checks your function app for running orchestration instances before your deployment starts.
Now the new version of your function app should be deployed to the staging slot.
Finally, swap slots.
Application settings that aren't marked as deployment slot settings are also swapped, so the version 2 app keeps its reference to storage account A. Because orchestration state is tracked in the storage account, any orchestrations running on the version 2 app continue to run in the new slot without interruption.
To use the same storage account for both slots, you can change the names of your task hubs. In this case, you need to manage the state of your slots and your app's HubName settings. To learn more, see Task hubs in Durable Functions.
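One way to vary the task hub per slot is to resolve the hub name from an app setting marked as a deployment slot setting, so each slot keeps its own hub name across swaps. The setting name TaskHubName below is illustrative.

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "%TaskHubName%"
    }
  }
}
```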
Application routing
This strategy is the most complex, but it's the only option for systems with continuously running orchestrations that never have an idle window for slot swaps.
For this strategy, you create an application router in front of your Durable Functions — for example, an Azure Function with HTTP triggers or an API Management instance that routes based on version headers. The router is responsible for:
- Deploying the function app.
- Managing which version of the app is active.
- Routing orchestration requests to the correct function app based on version.
The first time an orchestration request is received, the router does the following tasks:
- Creates a new function app in Azure.
- Deploys your function app's code to the new function app in Azure.
- Forwards the orchestration request to the new app.
The router manages the state of which version of your app's code is deployed to which function app in Azure.
The router directs deployment and orchestration requests to the appropriate function app based on the version sent with the request. It ignores the patch version.
When you deploy a new version of your app without a breaking change, you can increment the patch version. The router deploys to your existing function app and sends requests for the old and new versions of the code, which are routed to the same function app.
When you deploy a new version of your app with a breaking change, you can increment the major or minor version. Then the application router creates a new function app in Azure, deploys to it, and routes requests for the new version of your app to it. In the following diagram, running orchestrations on the 1.0.1 version of the app keep running, but requests for the 1.1.0 version are routed to the new function app.
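The routing rule described above — key function apps by major.minor and ignore the patch number — can be sketched as a small helper. Everything here is hypothetical: the map of versions to app URLs stands in for whatever state store your router actually uses.

```csharp
// Hypothetical routing helper: requests carry a semantic version, and the
// router keys deployed function apps by major.minor only.
public static class VersionRouter
{
    private static readonly Dictionary<string, string> AppsByVersion = new()
    {
        // routing key -> function app base URL (illustrative values)
        ["1.0"] = "https://my-app-v1-0.azurewebsites.net",
        ["1.1"] = "https://my-app-v1-1.azurewebsites.net"
    };

    public static string Resolve(string requestedVersion)
    {
        var v = Version.Parse(requestedVersion);

        // "1.0.1" and "1.0.3" both map to "1.0": patch releases share an app.
        string key = $"{v.Major}.{v.Minor}";

        return AppsByVersion.TryGetValue(key, out var url)
            ? url
            : throw new InvalidOperationException($"No app deployed for version {key}.");
    }
}
```

A major or minor bump misses the lookup, which is the router's signal to create and deploy a new function app before forwarding the request.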
The router monitors the status of orchestrations on the 1.0.1 version and removes apps after all orchestrations are finished.
Tracking store settings
Each function app should use separate scheduling queues, possibly in separate storage accounts. If you want to query all orchestration instances across all versions of your application, you can share instance and history tables across your function apps. You can share tables by configuring the trackingStoreConnectionStringName and trackingStoreNamePrefix settings in the host.json settings file so that they all use the same values.
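A shared tracking store configuration might look like the following host.json fragment, applied identically in every function app. The setting names come from the article; the app setting name SharedTrackingStorage and the prefix value are illustrative.

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "storageProvider": {
        "connectionStringName": "DurableManagementStorage",
        "trackingStoreConnectionStringName": "SharedTrackingStorage",
        "trackingStoreNamePrefix": "DurableFunctions"
      }
    }
  }
}
```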
For more information, see Manage instances in Durable Functions in Azure.