Hello Argus Admin User,
Thank you for reaching out to the Microsoft Q&A forum.
It looks like your DCC-Fileserver VM’s replication cycle is failing with an AzureStorageException and a 412 ConditionNotMet error during the “WaitingForExportCompletion” phase. An HTTP 412 means a storage-side ETag or conditional header no longer matched, most often a transient quirk during snapshot export, but repeated failures mean we should dig deeper. Here’s a set of things you can try, plus some follow-up questions if it keeps happening:
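For context on what the service is checking: a 412 comes back when a conditional request carries a precondition (typically an If-Match ETag) that no longer holds because the resource changed between read and write. A minimal sketch of that semantics (the function name and ETag values are illustrative, not from any Azure SDK):

```python
def evaluate_if_match(request_etag: str, current_etag: str) -> int:
    """Mimic how a storage service evaluates an If-Match conditional write:
    412 (Precondition Failed) when the ETag no longer matches because the
    resource changed since it was read, otherwise 200 (OK)."""
    return 200 if request_etag == current_etag else 412

# The export worker reads the blob and caches ETag "0x1", but a concurrent
# snapshot operation bumps it to "0x2" before the conditional write lands,
# so the service answers 412 ConditionNotMet.
print(evaluate_if_match("0x1", "0x2"))  # -> 412
```

A one-off 412 like this is usually retried away by the next replication cycle; it is the repeated failures that point at something changing the underlying disks or snapshots mid-export.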
What you can try
Review the VM’s replication events
• In the Azure portal, go to your Azure Migrate project → Server migration → DCC-Fileserver → Events.
• Look at the full chain of “Replication cycle failed” messages to see if anything else is failing earlier.
Run the built-in replication diagnostics
• Still in Server migration, click “Run diagnostics” (this triggers the GatewayAgentlessVMwareReplicationCycleIssues checks).
• If any insights show up, follow the remediation steps there.
Restart the gateway/appliance services
• RDP into your Azure Migrate appliance VM.
• Open services.msc → find “Microsoft Azure Gateway Service” (and “cxpsprocessserver” or “InMage Scout VX Agent Sentinel/Outpost” on the process server, if used) → Stop and then Start.
• This clears out any hung VDDK or snapshot process.
Verify network and storage configuration
• Ensure TCP ports 443 and 9443 from your process/configuration server to Azure aren’t blocked by a firewall.
• Confirm that no storage vMotion or disk detach happened recently on the on-prem VM—agentless replication can fail if disks move or snapshots change outside of the Migrate tool.
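If you want to spot-check the outbound ports from the appliance or process server, a quick TCP connect test works. This is a generic sketch; the endpoints below are placeholders, so substitute the actual storage/service URLs from your replication settings:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints -- replace with the hosts your replication traffic
# actually targets. Both 443 and 9443 must be reachable.
for host, port in [("management.azure.com", 443), ("example.com", 9443)]:
    status = "open" if port_reachable(host, port) else "BLOCKED"
    print(f"{host}:{port} -> {status}")
```

A "BLOCKED" result here narrows the problem to the firewall/proxy path rather than the replication engine itself.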
Disable & re-enable replication (fresh initial sync)
• In the portal go to Recovery Services vault → Replicated items → select DCC-Fileserver → Disable replication.
• After it cleans up, re-enable replication. This kicks off a brand-new initial sync.
Keep your appliance up to date
• Make sure you’re running the latest Azure Migrate appliance version; updates regularly fix edge-case snapshot and VDDK issues.
Follow-up questions
• Has this VM ever completed an initial replication successfully, or is this the first cycle?
• Did you see any storage-vMotion, datastore changes, or manual snapshot operations on the VMware host right before the error?
• Are other VMs on the same appliance/project replicating fine, or are they failing with similar errors?
• Which version of the Azure Migrate appliance and VDDK are you running?
• After you run the replication diagnostics, do you get any specific insights flagged?
Reference documentation
• Resolve server migration replication issues in Azure Migrate – https://aka.ms/AzureMigrateReplicationTroubleshoot
• Troubleshoot replication in agentless VMware VM migration – https://aka.ms/AgentlessVMwareReplicationTroubleshoot
• Control bandwidth throttling & common replication-cycle errors – https://aka.ms/AzureMigrateCommonReplicationIssues
Hope this helps track down the root cause—let me know what you find!