Tutorial: Process images by using FFmpeg on a mounted Azure Files share

In this tutorial, you deploy a Python app in Azure Functions that uses an ffmpeg binary on a mounted Azure Files share to process images. When you upload an image to the input container, the function triggers, calls ffmpeg from the mount to convert the image, and saves the result back to blob storage. By hosting large binaries like ffmpeg on a mounted share instead of in your deployment package, you keep deployments small and cold starts fast.

In this tutorial, you:

  • Deploy a Flex Consumption function app with a mounted Azure Files share by using Azure Developer CLI
  • Upload a sample image to trigger blob-based processing
  • Verify that the function called ffmpeg from the mount and saved the converted image

Note

The code samples for this article are available in the Azure Functions Flex Consumption with Azure Files OS Mount Samples GitHub repository.

Prerequisites

  • An Azure account with an active subscription.
  • Azure Developer CLI (azd) and Azure CLI installed locally, or access to Azure Cloud Shell.

The CLI examples in this tutorial use Bash syntax and are tested in Azure Cloud Shell (Bash) and in Linux/macOS terminals.

Initialize the sample project

The sample code for this tutorial is in the Azure Functions Flex Consumption with Azure Files OS Mount Samples GitHub repository. The ffmpeg-image-processing folder contains the function app code, a Bicep template that provisions the required Azure resources, and a post-deployment script that uploads the ffmpeg binary.

  1. Open a terminal and go to the directory where you want to clone the repository.

  2. Clone the repository:

    git clone https://github.com/Azure-Samples/Azure-Functions-Flex-Consumption-with-Azure-Files-OS-Mount-Samples.git
    
  3. Go to the project folder:

    cd Azure-Functions-Flex-Consumption-with-Azure-Files-OS-Mount-Samples/ffmpeg-image-processing
    
  4. Initialize the azd environment. When prompted, enter an environment name such as ffmpeg-processing:

    azd init
    

Review the code

The three key pieces that make OS mount–based processing work are the infrastructure that creates the mount, the script that uploads the binary, and the function code that calls it.

The mounts.bicep module configures an Azure Files SMB mount on the function app. The mountPath value determines the local path where files appear at runtime. You pass the storage account access key as a parameter, and the platform resolves it at runtime through a Key Vault reference:

@description('Function app name')
param functionAppName string

@description('Storage account name')
param storageAccountName string

@description('Storage account access key or app setting reference for Azure Files SMB mount')
param accessKey string

@description('Array of mount configurations')
param mounts array

// Function app reference
resource functionApp 'Microsoft.Web/sites@2023-12-01' existing = {
  name: functionAppName
}

// Azure Files OS mount configuration
// Deploys azureStorageAccounts site config with all mounts in one shot
resource mountConfig 'Microsoft.Web/sites/config@2023-12-01' = {
  parent: functionApp
  name: 'azurestorageaccounts'
  properties: reduce(mounts, {}, (cur, mount) => union(cur, {
    '${mount.name}': {
      type: 'AzureFiles'
      shareName: mount.shareName
      mountPath: mount.mountPath
      accountName: storageAccountName
      accessKey: accessKey
    }
  }))
}

output mountPaths array = [for mount in mounts: mount.mountPath]

Because Azure Files SMB mounts don't yet support managed identity authentication, you need a storage account key. As a best practice, store this key in Azure Key Vault and use a Key Vault reference in an app setting. The mount configuration references that app setting by using @AppSettingRef(), so the key never appears in your Bicep templates. The keyvault.bicep module creates the vault, stores the key, and grants RBAC roles:

@description('Key Vault name')
param name string

@description('Location')
param location string

@description('Tags')
param tags object = {}

@description('Storage account name')
param storageAccountName string

@description('Principal ID of the function app identity (receives Key Vault Secrets User role)')
param functionAppPrincipalId string

@description('Principal ID of the deploying user (receives Key Vault Secrets Officer role)')
param deployerPrincipalId string = ''

// Storage account reference
resource storage 'Microsoft.Storage/storageAccounts@2023-05-01' existing = {
  name: storageAccountName
}

// Key Vault with RBAC authorization
resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: name
  location: location
  tags: tags
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    tenantId: tenant().tenantId
    enableRbacAuthorization: true
    enabledForTemplateDeployment: true
    enableSoftDelete: true
    softDeleteRetentionInDays: 7
  }
}

// Store storage account key as a secret (Azure Files mounts require shared key)
resource storageKeySecret 'Microsoft.KeyVault/vaults/secrets@2023-07-01' = {
  parent: keyVault
  name: 'storageAccountKey'
  properties: {
    value: storage.listKeys().keys[0].value
    contentType: 'Storage account access key for Azure Files SMB mount'
  }
}

// Built-in Key Vault RBAC role IDs
var roles = {
  KeyVaultSecretsOfficer: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'b86a8fe4-44ce-4948-aee5-eccb2c155cd7')
  KeyVaultSecretsUser: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '4633458b-17de-408a-b874-0445c86b69e6')
}

// Grant the function app identity read access to secrets
resource functionAppSecretsUser 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(keyVault.id, functionAppPrincipalId, roles.KeyVaultSecretsUser)
  scope: keyVault
  properties: {
    roleDefinitionId: roles.KeyVaultSecretsUser
    principalId: functionAppPrincipalId
    principalType: 'ServicePrincipal'
  }
}

// Grant the deployer manage access to secrets
resource deployerSecretsOfficer 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (!empty(deployerPrincipalId)) {
  name: guid(keyVault.id, deployerPrincipalId, roles.KeyVaultSecretsOfficer)
  scope: keyVault
  properties: {
    roleDefinitionId: roles.KeyVaultSecretsOfficer
    principalId: deployerPrincipalId
    principalType: 'User'
  }
}

output name string = keyVault.name
output uri string = keyVault.properties.vaultUri
output storageKeySecretUri string = storageKeySecret.properties.secretUri
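
The app setting that the mount's @AppSettingRef() indirection points to holds a standard App Service Key Vault reference to the secret created above. Its value takes the following form; the vault name is illustrative, and the secret name storageAccountKey matches the keyvault.bicep module:

```
@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/storageAccountKey/)
```

At runtime, the platform uses the function app's managed identity (which holds the Key Vault Secrets User role granted above) to resolve this reference, so the key itself never appears in app settings or templates.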

The main.bicep file invokes the mount and Key Vault modules:


// Key Vault for secure storage of Azure Files access key
module keyVault './app/keyvault.bicep' = {
  name: 'keyVault'
  scope: rg
  params: {
    name: !empty(keyVaultName) ? keyVaultName : '${abbrs.keyVaultVaults}${resourceToken}'
    location: location
    tags: tags
    storageAccountName: storage.outputs.name
    functionAppPrincipalId: processorIdentity.outputs.principalId
    deployerPrincipalId: principalId
  }
}

// Azure Files mount configuration (access key resolved via Key Vault reference)
module azureFilesMount './app/mounts.bicep' = {
  name: 'azureFilesMount'
  scope: rg
  params: {
    functionAppName: functionApp.outputs.name
    storageAccountName: storage.outputs.name
    accessKey: '@AppSettingRef(MOUNT_SECRET_REFERENCE)'
    mounts: [
      {
        name: 'tools'
        shareName: 'tools'
        mountPath: '/mounts/tools/'
      }
    ]
  }
  dependsOn: [
    functionAppRoleAssignments
  ]
}
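
These modules cover the mount and the secret; the third piece is the function code that calls ffmpeg from the mount path (/mounts/tools/). The following Python sketch illustrates the shape of that call. The helper names are hypothetical, and the actual handler is in the sample repository:

```python
# Illustrative sketch only: helper names are hypothetical; the real handler is
# in the sample repository. ffmpeg is read from the Azure Files mount path
# configured in mounts.bicep, not from the deployment package.
import subprocess
import tempfile
from pathlib import Path

FFMPEG_PATH = "/mounts/tools/ffmpeg"  # mountPath ('/mounts/tools/') + binary name


def build_ffmpeg_args(ffmpeg: str, src: str, dst: str) -> list[str]:
    # ffmpeg infers the target format (PNG -> JPEG here) from the output
    # file's extension; -y overwrites any existing output file.
    return [ffmpeg, "-y", "-i", src, dst]


def convert_image(input_bytes: bytes, blob_name: str) -> bytes:
    """Write the triggering blob to a temp file, run ffmpeg from the mount,
    and return the converted bytes for upload to the output container."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / blob_name
        dst = src.with_suffix(".jpg")
        src.write_bytes(input_bytes)
        subprocess.run(
            build_ffmpeg_args(FFMPEG_PATH, str(src), str(dst)), check=True
        )
        return dst.read_bytes()
```

Because the share is mounted into the instance's file system, the function invokes ffmpeg like any local binary; no download step is needed at invocation time.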

Deploy by using Azure Developer CLI

This sample is an Azure Developer CLI (azd) template. A single azd up command provisions infrastructure, deploys the function code, uploads the ffmpeg binary to Azure Files, and creates the Event Grid subscription for blob triggers.

  1. Sign in to Azure. The post-deployment script uses Azure CLI commands, so you need to authenticate by using both tools:

    azd auth login
    az login
    
  2. Provision and deploy everything:

    azd up
    

    When prompted, select the Azure subscription and location to use. The command then:

    • Creates a resource group, storage account, Key Vault, Flex Consumption function app, Application Insights instance, and managed identity.
    • Deploys the Python function code.
    • Downloads and uploads the ffmpeg binary to the Azure Files share.
    • Creates an Event Grid subscription so blob uploads trigger your function.
    • Runs a health check.

    Note

    Because Azure Files SMB mounts don't yet support managed identity authentication, a storage account key is required. As a best practice, the deployment stores this key in Azure Key Vault and uses a Key Vault reference so the key is never exposed in app settings. This approach provides centralized secret management, auditing, and support for key rotation.

    The deployment takes a few minutes. When it completes, you see a summary of the created resources.

  3. Save resource names as shell variables for the remaining steps:

    RESOURCE_GROUP=$(azd env get-value AZURE_RESOURCE_GROUP)
    STORAGE_ACCOUNT=$(azd env get-value AZURE_STORAGE_ACCOUNT_NAME)
    FUNCTION_APP_NAME=$(azd env get-value AZURE_FUNCTION_APP_NAME)
    INPUT_CONTAINER=$(azd env get-value AZURE_STORAGE_INPUT_CONTAINER)
    OUTPUT_CONTAINER=$(azd env get-value AZURE_STORAGE_OUTPUT_CONTAINER)
    

Process an image

  1. Upload the sample image included in the repository to the input container. The Event Grid subscription created during deployment automatically triggers your function when a blob is uploaded.

    az storage blob upload \
      --container-name $INPUT_CONTAINER \
      --name sample_image.png \
      --file sample_image.png \
      --account-name $STORAGE_ACCOUNT \
      --auth-mode login
    

    Tip

    If the trigger doesn't fire immediately, wait 10-15 seconds, and then check the function's execution logs in the Azure portal.

  2. Verify the function processed the image by listing the blobs in the output container:

    az storage blob list \
      --container-name $OUTPUT_CONTAINER \
      --account-name $STORAGE_ACCOUNT \
      --auth-mode login \
      -o table
    

    You should see sample_image.jpg in the output container.

  3. Download the converted image:

    az storage blob download \
      --container-name $OUTPUT_CONTAINER \
      --name sample_image.jpg \
      --file ./output_image.jpg \
      --account-name $STORAGE_ACCOUNT \
      --auth-mode login
    

Note

The first execution might be slightly slower (cold start). Subsequent invocations are faster because the function container stays warm and ffmpeg is cached. To minimize cold starts, consider enabling always-ready instances.

Clean up resources

To avoid ongoing charges, delete all the resources created by this tutorial:

azd down --purge

Warning

This command deletes the resource group and all resources in it, including the function app, storage account, and Application Insights instance.