Hello Ramachandran, Iaiswarya I,
Welcome to the Microsoft Q&A and thank you for posting your questions here.
I understand that your Custom Neural Voice (CNV Pro) model training is failing in East US and East US 2.
I recommend the following steps:

1. Re-run your CNV Pro training in a different supported Speech region (Pro availability varies by region, and trained voices can be copied to other regions after training). Use the official lists: Train your professional voice model and Supported regions.
2. In parallel, open an Azure Support request with the failing job IDs and timestamps so engineering can inspect backend training pipeline/node health: Create a support request (Azure portal) and Foundry/Speech support options.
3. Before re-attempting, validate network connectivity and authentication from your environment by fetching an STS token, for example:

   ```
   curl -v -X POST "https://YOUR_REGION.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
     -H "Ocp-Apim-Subscription-Key: YOUR_RESOURCE_KEY" \
     -H "Content-type: application/x-www-form-urlencoded" \
     -H "Content-Length: 0"
   ```

   Then confirm the regional endpoints: Troubleshoot Speech SDK, REST TTS auth/endpoints.
4. Re-check dataset quality against the Pro requirements (professionally recorded 300–2000 utterances with clean alignment and consent) and follow the Tech Community best practices (balanced scripts; single continuous export): CNV overview & Pro requirements, Practical CNV tips (Tech Community).
5. Run a quick CNV Lite smoke test to validate your project and data plumbing while Pro resources queue: Try CNV Lite in 5 minutes.
6. Once all the steps succeed, launch Pro training in the alternate region and keep the support ticket open until Microsoft confirms backend health, then proceed to deploy as described in Train your professional voice model.
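If you want to script the pre-flight checks before queueing a new run, here is a minimal sketch. The helper names are illustrative, not part of the Speech SDK; the endpoint shape mirrors the curl test above, and the 300–2000 utterance range comes from the CNV Pro requirements linked above:

```python
# Illustrative pre-flight helpers (assumption: these are NOT official
# Speech SDK APIs). The issueToken URL matches the curl test above;
# the 300-2000 utterance bounds come from the CNV Pro requirements.

MIN_UTTERANCES = 300   # CNV Pro lower bound per the requirements doc
MAX_UTTERANCES = 2000  # CNV Pro upper bound per the requirements doc


def sts_token_endpoint(region: str) -> str:
    """Build the STS issueToken URL for a given Speech region."""
    return f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"


def dataset_in_pro_range(utterance_count: int) -> bool:
    """Check a recorded utterance count against the CNV Pro range."""
    return MIN_UTTERANCES <= utterance_count <= MAX_UTTERANCES


# Example: pick an alternate region and sanity-check the dataset size
# before re-attempting Pro training there.
print(sts_token_endpoint("westus2"))
print(dataset_in_pro_range(500))  # within the 300-2000 range
```

You can POST to the printed endpoint with your resource key (as in the curl example) to confirm authentication works from your network before kicking off the training job.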
I hope this is helpful! Do not hesitate to let me know if you have any other questions or need further clarification.
Please don't forget to close the thread by upvoting and accepting this as the answer if it is helpful.