In this tutorial, you deploy a Python Azure Functions app that uses Durable Functions to orchestrate parallel text file analysis. Your function app mounts an Azure Files share, analyzes multiple text files in parallel (fan-out), aggregates the results (fan-in), and returns them to the caller. This approach demonstrates a key advantage of storage mounts: shared file access across multiple function instances without per-request network overhead.
In this tutorial, you:
- Use Azure Developer CLI to deploy a Durable Functions app in a Flex Consumption plan with a mounted Azure Files share
- Trigger an orchestration to process sample text files in parallel
- Verify the aggregated analysis results
Note
The code samples for this article are available in the Azure Functions Flex Consumption with Azure Files OS Mount Samples GitHub repository.
Prerequisites
- An Azure account with an active subscription. Create an account for free.
- Azure Developer CLI (azd) version 1.9.0 or later
- Git
The CLI examples in this tutorial use Bash syntax and have been tested in Azure Cloud Shell (Bash) and Linux/macOS terminals.
Initialize the sample project
You can find the sample code for this tutorial in the Azure Functions Flex Consumption with Azure Files OS Mount Samples GitHub repository. The durable-text-analysis folder contains the function app code, a Bicep template that provisions the required Azure resources, and a post-deployment script that uploads sample text files.
Open a terminal and go to the directory where you want to clone the repository.
Clone the repository:
git clone https://github.com/Azure-Samples/Azure-Functions-Flex-Consumption-with-Azure-Files-OS-Mount-Samples.git

Go to the project folder:

cd Azure-Functions-Flex-Consumption-with-Azure-Files-OS-Mount-Samples/durable-text-analysis

Initialize the azd environment. When prompted, enter an environment name such as durable-text:

azd init
Review the code
The three key pieces that make this sample work are the infrastructure that creates the mount, the script that uploads sample files, and the function code that orchestrates the analysis.
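Before looking at the infrastructure, it helps to see the shape of the pattern the orchestration implements. The following plain-Python sketch (illustration only, not the sample's actual Durable Functions code; the helper names are assumptions) shows fan-out/fan-in over text files in a mounted directory:

```python
# A plain-Python sketch of the fan-out/fan-in pattern (illustration only; the
# deployed sample uses the Durable Functions SDK, and these helper names are
# assumptions): analyze each file in parallel, then aggregate the results.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def analyze_file(path: Path) -> dict:
    # Standard file I/O: the mounted share behaves like a local directory.
    text = path.read_text(encoding="utf-8")
    return {
        "file": path.name,
        "word_count": len(text.split()),
        "char_count": len(text),
    }

def analyze_all(mount_path: str) -> dict:
    files = sorted(Path(mount_path).glob("*.txt"))
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(analyze_file, files))  # fan-out
    return {                                           # fan-in
        "results": results,
        "total_words": sum(r["word_count"] for r in results),
        "total_chars": sum(r["char_count"] for r in results),
    }
```

In the deployed app, a Durable Functions orchestrator typically fans out by scheduling one activity function per file and waiting on all of them; the runtime handles the parallelism and checkpointing.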
The mounts.bicep module configures an Azure Files SMB mount on the function app. The mountPath value determines the local path where files appear at runtime. You pass the storage account access key as a parameter, and the platform resolves it at runtime through a Key Vault reference:
@description('Function app name')
param functionAppName string
@description('Storage account name')
param storageAccountName string
@description('Storage account access key or app setting reference for Azure Files SMB mount')
param accessKey string
@description('Array of mount configurations')
param mounts array
// Function app reference
resource functionApp 'Microsoft.Web/sites@2023-12-01' existing = {
name: functionAppName
}
// Azure Files OS mount configuration
// Deploys azureStorageAccounts site config with all mounts in one shot
resource mountConfig 'Microsoft.Web/sites/config@2023-12-01' = {
parent: functionApp
name: 'azurestorageaccounts'
properties: reduce(mounts, {}, (cur, mount) => union(cur, {
'${mount.name}': {
type: 'AzureFiles'
shareName: mount.shareName
mountPath: mount.mountPath
accountName: storageAccountName
accessKey: accessKey
}
}))
}
output mountPaths array = [for mount in mounts: mount.mountPath]
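At runtime, the share's contents appear as ordinary files under the configured mountPath. A hypothetical helper an activity function might use (the path matches the mount configured in main.bicep; the function name is an assumption):

```python
# Hypothetical helper: list the sample text files on the mounted share.
# MOUNT_PATH matches the mountPath value configured in main.bicep.
import os

MOUNT_PATH = "/mounts/data/"

def list_text_files(mount_path: str = MOUNT_PATH) -> list[str]:
    # Plain os calls work against the SMB mount; no storage SDK needed.
    return sorted(f for f in os.listdir(mount_path) if f.endswith(".txt"))
```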
Because Azure Files SMB mounts don't yet support managed identity authentication, you need a storage account key. As a best practice, store this key in Azure Key Vault and use a Key Vault reference in an app setting. The mount configuration references that app setting by using @AppSettingRef(), so the key never appears in your Bicep templates. The keyvault.bicep module creates the vault, stores the key, and grants RBAC roles:
@description('Key Vault name')
param name string
@description('Location')
param location string
@description('Tags')
param tags object = {}
@description('Storage account name')
param storageAccountName string
@description('Principal ID of the function app identity (receives Key Vault Secrets User role)')
param functionAppPrincipalId string
@description('Principal ID of the deploying user (receives Key Vault Secrets Officer role)')
param deployerPrincipalId string = ''
// Storage account reference
resource storage 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
name: storageAccountName
}
// Key Vault with RBAC authorization
resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
name: name
location: location
tags: tags
properties: {
sku: {
family: 'A'
name: 'standard'
}
tenantId: tenant().tenantId
enableRbacAuthorization: true
enabledForTemplateDeployment: true
enableSoftDelete: true
softDeleteRetentionInDays: 7
}
}
// Store storage account key as a secret (Azure Files mounts require shared key)
resource storageKeySecret 'Microsoft.KeyVault/vaults/secrets@2023-07-01' = {
parent: keyVault
name: 'storageAccountKey'
properties: {
value: storage.listKeys().keys[0].value
contentType: 'Storage account access key for Azure Files SMB mount'
}
}
// Built-in Key Vault RBAC role IDs
var roles = {
KeyVaultSecretsOfficer: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b86a8fe4-44ce-4948-aee5-eccb2c155cd7')
KeyVaultSecretsUser: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '4633458b-17de-408a-b874-0445c86b69e6')
}
// Grant the function app identity read access to secrets
resource functionAppSecretsUser 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid(keyVault.id, functionAppPrincipalId, roles.KeyVaultSecretsUser)
scope: keyVault
properties: {
roleDefinitionId: roles.KeyVaultSecretsUser
principalId: functionAppPrincipalId
principalType: 'ServicePrincipal'
}
}
// Grant the deployer manage access to secrets
resource deployerSecretsOfficer 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (!empty(deployerPrincipalId)) {
name: guid(keyVault.id, deployerPrincipalId, roles.KeyVaultSecretsOfficer)
scope: keyVault
properties: {
roleDefinitionId: roles.KeyVaultSecretsOfficer
principalId: deployerPrincipalId
principalType: 'User'
}
}
output name string = keyVault.name
output uri string = keyVault.properties.vaultUri
output storageKeySecretUri string = storageKeySecret.properties.secretUri
The main.bicep file invokes the mount and Key Vault modules:
// Key Vault for secure storage of Azure Files access key
module keyVault './app/keyvault.bicep' = {
name: 'keyVault'
scope: rg
params: {
name: !empty(keyVaultName) ? keyVaultName : '${abbrs.keyVaultVaults}${resourceToken}'
location: location
tags: tags
storageAccountName: storage.outputs.name
functionAppPrincipalId: processorIdentity.outputs.principalId
deployerPrincipalId: principalId
}
}
// Azure Files mount configuration (access key resolved via Key Vault reference)
module azureFilesMount './app/mounts.bicep' = {
name: 'azureFilesMount'
scope: rg
params: {
functionAppName: functionApp.outputs.name
storageAccountName: storage.outputs.name
accessKey: '@AppSettingRef(MOUNT_SECRET_REFERENCE)'
mounts: [
{
name: 'data'
shareName: 'data'
mountPath: '/mounts/data/'
}
]
}
  dependsOn: [
    functionAppRoleAssignments
  ]
}
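The @AppSettingRef(MOUNT_SECRET_REFERENCE) value tells the platform to read the access key from an app setting, which in turn holds a Key Vault reference. A sketch of what that app setting might look like (the setting name and output name are assumptions; check the repository for the exact wiring):

```
// Hypothetical fragment of the function app's appSettings in main.bicep.
appSettings: {
  // Resolved by the platform at runtime; the function app's managed identity
  // needs the Key Vault Secrets User role (granted in keyvault.bicep).
  MOUNT_SECRET_REFERENCE: '@Microsoft.KeyVault(SecretUri=${keyVault.outputs.storageKeySecretUri})'
}
```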
Deploy by using Azure Developer CLI
This sample is an Azure Developer CLI (azd) template. A single azd up command provisions infrastructure, deploys the function code, and uploads sample text files to the Azure Files share.
Sign in to Azure. The post-deployment script uses Azure CLI commands, so you need to authenticate by using both tools:
azd auth login
az login

Provision and deploy everything:

azd up

When prompted, select the Azure subscription and location to use. The command then:
- Creates a resource group, storage account, Key Vault, Flex Consumption function app with a Durable Functions configuration, Application Insights instance, and managed identity
- Deploys the Python function code
- Uploads sample text files to the Azure Files share
- Runs a health check
Note
Because Azure Files SMB mounts don't yet support managed identity authentication, you need a storage account key. As a best practice, the deployment stores this key in Azure Key Vault and uses a Key Vault reference so the key is never exposed in app settings. This approach provides centralized secret management, auditing, and support for key rotation.
The deployment takes a few minutes. When it completes, you see a summary of the created resources.
Save resource names as shell variables for the remaining steps:
RESOURCE_GROUP=$(azd env get-value AZURE_RESOURCE_GROUP)
FUNCTION_APP_NAME=$(azd env get-value AZURE_FUNCTION_APP_NAME)
FUNCTION_APP_URL=$(azd env get-value AZURE_FUNCTION_APP_URL)
Trigger the orchestration
Get the function host key:
HOST_KEY=$(az functionapp keys list \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP_NAME \
  --query "functionKeys.default" \
  -o tsv)

Start the orchestration:

curl -s -X POST "${FUNCTION_APP_URL}/api/start-analysis?code=${HOST_KEY}" | jq .

The response includes an instance ID and status query URIs:

{
  "id": "abc123def456",
  "statusQueryGetUri": "https://...",
  "sendEventPostUri": "https://...",
  "terminatePostUri": "https://..."
}
Verify results
Check orchestration status. Use the statusQueryGetUri from the previous response, or construct the URL manually:

INSTANCE_ID="<instance-id-from-trigger-response>"
curl -s "${FUNCTION_APP_URL}/api/orchestrators/TextAnalysisOrchestrator/${INSTANCE_ID}?code=${HOST_KEY}" | jq .

While the orchestration is running, the runtimeStatus is Running. When complete, the response looks like:

{
  "name": "TextAnalysisOrchestrator",
  "instanceId": "abc123def456",
  "runtimeStatus": "Completed",
  "output": {
    "results": [
      { "file": "sample1.txt", "word_count": 15, "char_count": 98, "sentiment": "positive" },
      { "file": "sample2.txt", "word_count": 18, "char_count": 120, "sentiment": "positive" },
      { "file": "sample3.txt", "word_count": 12, "char_count": 85, "sentiment": "neutral" }
    ],
    "total_words": 45,
    "total_chars": 303,
    "analysis_duration_seconds": 2.34
  }
}
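Rather than rerunning curl by hand, you can poll until the orchestration finishes. A minimal sketch (stdlib only), assuming the status URL is the statusQueryGetUri from the trigger response or the URL constructed above:

```python
# Hypothetical polling helper: fetch the status endpoint until the
# orchestration reaches a terminal state.
import json
import time
import urllib.request

TERMINAL_STATUSES = {"Completed", "Failed", "Terminated", "Canceled"}

def is_terminal(status: dict) -> bool:
    # Anything still Running (or Pending) means keep waiting.
    return status.get("runtimeStatus") in TERMINAL_STATUSES

def wait_for_completion(status_url: str, interval_seconds: float = 2.0) -> dict:
    while True:
        with urllib.request.urlopen(status_url) as resp:
            status = json.loads(resp.read())
        if is_terminal(status):
            return status
        time.sleep(interval_seconds)
```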
Tip
Your function app accesses all three files in parallel through the storage mount, reading them directly from the mounted share by using standard file I/O, with no per-request network calls. This approach demonstrates the power of combining storage mounts with Durable Functions.
Clean up resources
To avoid ongoing charges, delete all the resources created by this tutorial:
azd down --purge
Warning
This command deletes the resource group and all resources in it, including the function app, storage account, and Application Insights instance.