This article compares three ways to access files from Azure Functions: storage bindings, external databases, and Azure Files storage mounts. You learn the trade-offs between each approach, see when mounts are the right choice, and find patterns for real-world scenarios.
Storage bindings and external databases work on all hosting plans. Storage mounts are Linux only and aren't supported on the Consumption plan.
If you want to jump straight to working code, see the Tutorial: Durable text analysis with a mounted Azure Files share for parallel file processing or Tutorial: Process images by using FFmpeg on a mounted Azure Files share for hosting large binaries on a mount.
Note
The code samples for this article are available in the Azure Functions Flex Consumption with Azure Files OS Mount Samples GitHub repository.
File access options at a glance
When you need to access files from your functions, you have three main options:
| Approach | Pros | Cons | Best for | Learn more |
|---|---|---|---|---|
| Storage bindings | Simple, cloud-native, secure | Network overhead, eventual consistency | Moving data to/from cloud services (queues, blobs) | Blob, Queue, Table bindings |
| External database | Flexible, transactional | Network calls, complexity | Structured data, complex queries | Manage connections |
| Storage mount (Azure Files) | Direct file access, POSIX semantics, large binaries | Slower than local disk, Linux only | Large files, shared executables, frequent access | What is a storage mount? |
Not every option is available on every hosting plan:
| Hosting plan | Storage bindings | External database | Storage mount (Azure Files) |
|---|---|---|---|
| Flex Consumption | ✅ | ✅ | ✅ |
| Elastic Premium | ✅ | ✅ | ✅ |
| Dedicated (App Service) | ✅ | ✅ | ✅ |
| Consumption (Windows only) | ✅ | ✅ | ❌ |
The rest of this article focuses on mounts: when they're the right choice, and how to use them safely.
What is a storage mount?
A storage mount is a network file share that you mount as if it were a local directory. When you mount an Azure Files share on your function app, the path appears in the function container's file system:
```
┌─────────────────────────────────────┐
│ Your function code                  │
│ (reads/writes to /mnt/mydata/)      │
├─────────────────────────────────────┤
│ POSIX file-system layer             │
│ (appears as a local directory)      │
├─────────────────────────────────────┤
│ SMB protocol (over network)         │
├─────────────────────────────────────┤
│ Azure Files share                   │
│ (in your storage account)           │
└─────────────────────────────────────┘
```
Your code uses standard file system APIs (for example, open(), os.listdir() in Python, or equivalent calls in other languages) without knowing it's communicating over the network. This setup provides POSIX semantics, which means your code looks like local file I/O.
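As a sketch of what this looks like in practice, the following Python snippet reads files under a mount path with ordinary file-system calls. The `/mnt/mydata` path and the `MOUNT_PATH` environment-variable fallback are assumptions for illustration; your configured mount path may differ.

```python
import os
from pathlib import Path

# On a function app, this would be the configured mount path (assumed here).
MOUNT_PATH = os.environ.get("MOUNT_PATH", "/mnt/mydata")

def read_reference_files(base: str) -> dict:
    """Read every regular file under a directory with standard file-system APIs.

    The SMB transport is invisible at this layer: iterdir() and read_bytes()
    behave exactly as they would on a local directory.
    """
    return {p.name: p.read_bytes() for p in Path(base).iterdir() if p.is_file()}
```

Because the mount is just a directory to your code, the same function works unchanged against a local folder during development.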
When not to use mounts
Mounts aren't the right choice for every scenario. Consider these alternatives:
| Scenario | Recommended alternative |
|---|---|
| Small transient data | Azure Queue Storage or Azure Blob Storage |
| Frequent small reads/writes | Azure Cosmos DB or Azure Cache for Redis |
| Real-time streaming | Azure Event Hubs or Azure IoT Hub |
| Cross-region data sharing | Blob Storage replication |
Important
Storage mounts are Linux only and aren't supported on the Consumption plan.
Compare storage options
Consider the three main file access options when processing 1,000 images (1 MB each) stored in a reference folder:
| Approach | Mechanism | Network calls | Relative cost | Best for |
|---|---|---|---|---|
| Blob storage binding | Download each file | 1,000 GET requests | High bandwidth + latency | One-time or infrequent access |
| Storage mount | Read from share | One mount setup | Minimal bandwidth | Repeated or high-volume access |
| External database | Azure Cosmos DB | One query | RU charges + network latency | Structured data with complex queries |
Note
The following code examples use Python, but the same pattern applies to any language that supports file system APIs, including C#, Java, JavaScript, and PowerShell.
For example, downloading the reference files through the Blob Storage SDK issues one request per file:

```python
# Blob storage approach: one GET request per file (1,000 requests total)
files = container_client.list_blobs(name_starts_with="reference/")
for blob in files:
    stream = container_client.download_blob(blob.name)
```
For large shared files with repeated access, use share mounts. The following sections describe detailed scenarios that use share mounts.
Share mount scenarios
These example scenarios also benefit from using mounted storage shares:
| Scenario | Problem solved | Example |
|---|---|---|
| Parallel file analysis | Avoid packaging large reference data or downloading it per invocation | ML models, lookup tables, corpus data shared across 1,000+ instances |
| Shared executables | Keep large binaries out of the deployment package | ffmpeg, ImageMagick, or other 500+ MB tools |
| Cross-app data sharing | Share files between producer and consumer apps without message passing | App A writes results, App B reads them from the same mount |
The following example walks through the parallel file analysis scenario in detail:
Use case: You have 1,000 analysis tasks that all need to read from the same set of reference data files (for example, ML models, lookup tables, or corpus data).
Note
For a complete walkthrough of this pattern, see Tutorial: Durable text analysis with a mounted Azure Files share.
The problem: Without mounts, you have two suboptimal options:
- Package the reference files with your function: This approach results in a huge deployment artifact, slow cold starts, and storage redundancy.
- Download from Blob Storage each time: This approach introduces network latency on every function invocation and wastes bandwidth.
The mount-based solution: All instances read from the mounted share directly. After mount initialization, there's no per-request network overhead and no redundant storage.
```
┌─────────────────────────┐
│ Function Instance 1     │
│ Function Instance 2     ├──→ /mnt/models/ ──→ Azure Files share
│ Function Instance 3     │    (shared mount)
└─────────────────────────┘
```
Implementation pattern (Python):

```python
import pickle
from pathlib import Path

MOUNT_PATH = "/mnt/models"

def analyze_data(item: str) -> dict:
    """Activity function: reads from shared mount."""
    model_path = Path(MOUNT_PATH) / "model.pkl"

    # Direct file I/O — no SDK call, no network overhead
    with open(model_path, "rb") as f:
        model = pickle.load(f)

    result = model.predict(item)
    return {"item": item, "score": result}
```
Key points:
- All instances of your function app see the same mount.
- File reads are POSIX-compliant. You use standard file system APIs.
- No need to authenticate per read (the mount is authenticated once at startup).
- Changes written by one instance are visible to others immediately.
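Because changes by one instance are visible to all others, concurrent writers can race. A common safeguard (a general file-system pattern, not something specific to Azure Files, and an assumption beyond the scenario above) is to write to a temporary name and then rename into place, so readers never observe a partially written file. Note that rename atomicity over SMB has caveats; treat this as a sketch rather than a guarantee.

```python
import os
import tempfile
from pathlib import Path

def write_atomically(target: Path, data: bytes) -> None:
    """Write data to a temp file in the same directory, then rename into place.

    os.replace is atomic on POSIX file systems, so readers see either the old
    file or the complete new one, never a half-written file.
    """
    fd, tmp_name = tempfile.mkstemp(dir=target.parent)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_name, target)  # atomic swap into the final name
    except BaseException:
        os.unlink(tmp_name)  # clean up the temp file on failure
        raise
```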
Security considerations:
- Storage account key: Azure Files storage mounts on Flex Consumption authenticate by using a storage account access key configured in the function app's mount settings. Managed identity with the Storage File Data SMB Share Contributor RBAC role isn't supported for SMB mounts on Azure Functions. Keep the access key secure and rotate it periodically.
- Read-only option: If your workload doesn't need to write, restrict the mount to read-only.
- Quotas: Set Azure Files share quotas to prevent runaway costs if instances write large files.
Mount limits
These Azure Files storage limits apply to all hosting plans that support mounts:
| Limit | Value |
|---|---|
| Share size | Up to 100 TiB |
| File size | Up to 4 TiB |
| Throughput | ~60 MB/s (standard), ~100+ MB/s (premium) |
| Concurrency | Many (SMB handles it), but writes serialize |
For more information, see Azure Files scale targets.
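As a rough back-of-envelope check against the 1,000-image scenario from the comparison table (sequential reads, ignoring SMB round-trip latency and caching), the throughput figures in the limits table translate to read times like this:

```python
def sequential_read_seconds(total_mb: float, throughput_mb_per_s: float) -> float:
    """Time to read total_mb sequentially at a sustained throughput."""
    return total_mb / throughput_mb_per_s

# 1,000 files x 1 MB each, against the throughput figures in the limits table
standard_time = sequential_read_seconds(1000 * 1, 60)   # ~16.7 s at ~60 MB/s
premium_time = sequential_read_seconds(1000 * 1, 100)   # 10.0 s at ~100 MB/s
```

Real workloads that read in parallel across many instances finish much faster per instance, but this simple model helps size a share tier for your data volume.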
These limits vary by supported hosting plan:
| Limit | Flex Consumption | Elastic Premium | Dedicated (App Service) |
|---|---|---|---|
| Mount points per app | 5 | 5 | 5 |
| Protocols | SMB only | SMB, NFS, Azure Blobs (read-only) | SMB, NFS, Azure Blobs (read-only) |
To prevent runaway storage costs, set a quota on your Azure Files share:
```shell
az storage share-rm update \
  --resource-group $RESOURCE_GROUP \
  --storage-account $STORAGE_ACCOUNT \
  --name myshare \
  --quota 100  # 100 GB limit
```
Mount authentication
Azure Files storage mounts and the Azure SDK use different authentication mechanisms:
- Storage mounts (SMB): Authenticate by using a storage account access key at mount time. The key is stored in the function app's site configuration (`azureStorageAccounts`). Managed identity isn't currently supported for SMB mounts on Azure Functions.
- Azure SDK (REST API): For programmatic access by using the Azure Storage SDK, use managed identity when possible.
This Bicep example configures the storage mount by using the storage account shared secret key:
```bicep
resource mountConfig 'Microsoft.Web/sites/config@2023-12-01' = {
  parent: functionApp
  name: 'azurestorageaccounts'
  properties: {
    dataMount: {
      type: 'AzureFiles'
      shareName: shareName
      mountPath: '/mounts/data'
      accountName: storageAccountName
      accessKey: storageAccount.listKeys().keys[0].value
    }
  }
}
```
Important
Rotate storage account keys periodically. When you rotate keys, update the mount configuration on every function app that references the account.
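One possible rotation sequence with the Azure CLI is sketched below. The command names are real, but the resource variables and the `dataMount` identifier are placeholders, and `az webapp config storage-account update` applies to mounts configured through path mappings; verify the exact commands against your environment and hosting plan before use.

```shell
# 1. Regenerate the secondary key (the primary key stays valid during rollover).
NEW_KEY=$(az storage account keys renew \
  --resource-group $RESOURCE_GROUP \
  --account-name $STORAGE_ACCOUNT \
  --key key2 \
  --query "[?keyName=='key2'].value" -o tsv)

# 2. Point the mount at the new key on every function app that uses it.
az webapp config storage-account update \
  --resource-group $RESOURCE_GROUP \
  --name $FUNCTION_APP \
  --custom-id dataMount \
  --access-key "$NEW_KEY"
```

Repeat step 2 for each app, then rotate `key1` the same way on the next cycle so both keys are refreshed over time.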
Best practices
Use read-only mounts when possible. If your function only reads from the mount, configure it as read-only to prevent accidental writes.
Monitor file access. Enable diagnostics on your storage account to track mount access patterns:
```shell
az monitor metrics list \
  --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/$STORAGE_ACCOUNT/fileServices/default \
  --metric Transactions
```

Clean up temporary files. If your functions write to the mount, implement cleanup to avoid unbounded growth:

```python
import time
from pathlib import Path

MOUNT_PATH = "/mnt/temp"
MAX_AGE = 24 * 60 * 60  # 24 hours

def cleanup_old_files():
    cutoff = time.time() - MAX_AGE
    for f in Path(MOUNT_PATH).iterdir():
        if f.stat().st_mtime < cutoff:
            f.unlink()
```
Troubleshoot storage mounts
The following table lists common issues with Azure Files storage mounts on function apps:
| Issue | Resolution |
|---|---|
| Binary or file not found on mount path | Verify the file is in the correct Azure Files share. Check that the mount path configured on the function app matches the path your code references. In the Azure portal, check Settings > Configuration > Path Mappings. |
| Permission denied when accessing mounted files | Storage mounts authenticate by using a storage account access key. Verify the key in the mount configuration is correct and wasn't rotated. When you rotate keys, update the mount configuration on every function app that references the account. |
| Binary lacks execute permissions | Azure Files preserves POSIX permissions set at upload time. Re-upload the binary after running chmod +x locally, or set permissions after upload. |
| Mount adds latency to cold starts | SMB mount initialization adds approximately 200-500 ms on first execution. Subsequent invocations reuse the mount. For latency-sensitive apps, consider the always-ready instances feature. |