
Significant delay in Azure Communication Services (ACS) call recording BYOS export for specific recording chunks.

Alexander Karaberov 0 Reputation points
2026-03-03T15:33:51.6033333+00:00

Hello Microsoft support,

We observed a significant delay in Azure Communication Services (ACS) call recording BYOS (Bring your own Azure storage) export for specific recording chunks.

For one long call (~12.03 hours), some chunk metadata files appeared in our BYOS Blob Storage first, but the corresponding media blobs for chunk indexes 0 and 2 were not initially present. At first, StorageBlobLogs showed normal write operations (PutBlob / PutBlockList) for the available chunks, but no blob write logs for the missing chunk media files. It's worth noting that acsmetadata.json files were present for all 4 chunks from the beginning, including the initially missing ones.

Later, approximately 7 hours after the other chunks, the previously missing media blobs were finally written to Blob Storage, and we then also received the Microsoft.Communication.RecordingFileStatusUpdated event from ACS to Event Grid that was saved to an event queue. The event payload includes endReason: "ChunkMaximumTimeExceeded".
More precisely: chunks 1 and 3 were written at around 05:27 CET, but no event was received. Then, finally, at 12:44 and 12:47 CET, chunks 0 and 2 were written, respectively.

This suggests the chunks were not lost, but were processed/exported with an unusually long delay. This is the first time we have seen this, although our calls are usually shorter.
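For context, this is roughly how we detect the delayed-chunk end reason in the events we save to our queue. A minimal sketch: the nesting under data.recordingStorageInfo.recordingChunks matches the payload we received, but the field names should be treated as assumptions rather than a schema guarantee.

```python
# Sketch: list the chunk indexes in a RecordingFileStatusUpdated event whose
# endReason is "ChunkMaximumTimeExceeded". Payload shape is assumed from the
# event we observed, not from a documented schema.
def max_duration_chunks(event: dict) -> list:
    if event.get("eventType") != "Microsoft.Communication.RecordingFileStatusUpdated":
        return []
    chunks = (event.get("data", {})
                   .get("recordingStorageInfo", {})
                   .get("recordingChunks", []))
    return [c.get("index") for c in chunks
            if c.get("endReason") == "ChunkMaximumTimeExceeded"]

sample = {
    "eventType": "Microsoft.Communication.RecordingFileStatusUpdated",
    "data": {"recordingStorageInfo": {"recordingChunks": [
        {"index": 0, "endReason": "SessionEnded"},
        {"index": 1, "endReason": "ChunkMaximumTimeExceeded"},
    ]}},
}
print(max_duration_chunks(sample))  # [1]
```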

We would like Microsoft to investigate, if possible:

  • why specific recording chunks were delayed by ~7 hours before BYOS export,
  • why the corresponding RecordingFileStatusUpdated event was also delayed,
  • whether this behavior is expected for long recordings with chunking (ChunkMaximumTimeExceeded), as I was not able to find any mention of such delays in the public documentation,
  • and whether there was any service-side issue affecting recording chunk finalization or export timing.

ServerCallId for the ACS call in question is the following:
"aHR0cHM6Ly9hcGkuZmxpZ2h0cHJveHkuc2t5cGUuY29tL2FwaS92Mi9jcC9jb252LWZyY2UtMDUtcHJvZC1ha3MuY29udi5za3lwZS5jb20vY29udi9TXzFKdWExcGJFQ3p4TnhkNm51Q3JnP2k9MTAtMTI4LTExNy01MSZlPTYzOTA3ODM4MDc3NzMzNTg4OA"

Recording ID for the chunk which was exported 7 hours later and has "endReason": "ChunkMaximumTimeExceeded" is the following:
"eyJQbGF0Zm9ybUVuZHBvaW50SWQiOiIwZTAwM2M4MC04YmM0LTQwOTAtOWVmMy0xZTE3MmQ0MGVmN2YiLCJSZXNvdXJjZVNwZWNpZmljSWQiOiJkNWFlZDRlYS1lODFmLTQxMTEtOGMwZC1jYzlhYjViMjljN2MifQ"

Let us know what contextual information you need to help investigate the issue. We can provide:

  • ACS resource ID
  • other recording IDs
  • storage account/container/blob paths
  • Event Grid payload

Best regards,
Alexander

Azure Communication Services

1 answer

  1. kagiyama yutaka 1,170 Reputation points
    2026-03-12T22:24:40.1366667+00:00

I think the chunks fall into a cold-finalize path when they hit the maximum chunk duration under high backend load. Once ACS starts aging them in temporary storage, the only gentle workaround is to have your application watch for chunks whose metadata is present but whose media is missing, and let users trigger a small refresh, so you catch the stall early and don't wait hours again.
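    The "metadata present, media missing" watch could be sketched like this: given the blob names currently in the BYOS container, flag chunk indexes that have an acsmetadata.json but no media blob yet. The path convention ("<recording>/<chunk-index>/<file>") and file names here are assumptions modelled on what the question describes, not a documented layout.

    ```python
    # Sketch: detect chunks with metadata but no media blob yet.
    # Assumes blob names like "rec/0/acsmetadata.json" and "rec/0/audiomp3.mp3";
    # adapt the parsing to your container's actual layout.
    def stalled_chunks(blob_names):
        have_meta, have_media = set(), set()
        for name in blob_names:
            parts = name.rsplit("/", 2)
            if len(parts) < 2:
                continue
            chunk_index, filename = parts[-2], parts[-1]
            if not chunk_index.isdigit():
                continue
            if filename == "acsmetadata.json":
                have_meta.add(int(chunk_index))
            else:
                have_media.add(int(chunk_index))
        # Chunks whose metadata landed but whose media is still missing.
        return sorted(have_meta - have_media)

    blobs = [
        "rec/0/acsmetadata.json",
        "rec/1/acsmetadata.json", "rec/1/audiomp3.mp3",
        "rec/2/acsmetadata.json",
        "rec/3/acsmetadata.json", "rec/3/audiomp3.mp3",
    ]
    print(stalled_chunks(blobs))  # [0, 2]
    ```

    In practice the blob names would come from a periodic container listing (for example, the Azure SDK's container list operation); running this check on a timer would have surfaced the stall on chunks 0 and 2 hours before the delayed export completed.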

