
Getting error while creating Databricks Classic Compute Cluster

Shivangi Pathak 0 Reputation points
2026-04-24T11:59:47.4433333+00:00

Cluster creation fails with the following error: "Allocation failed. We do not have sufficient capacity for the requested VM size in this region." All the SKUs in the dropdown are disabled. Is it because of the region I selected? If so, how can I change my region?

Azure Databricks

An Apache Spark-based analytics platform optimized for Azure.


3 answers

  1. SAI JAGADEESH KUDIPUDI 2,535 Reputation points Microsoft External Staff Moderator
    2026-05-03T09:50:27.57+00:00

    Hi Shivangi Pathak,
    It looks like your cluster is hitting an "Allocation failed. We do not have sufficient capacity for the requested VM size in this region" error. That usually means one of two things:

    1. Azure doesn’t have enough spare machines of that family (or SKU) where your workspace lives
    2. You’re requesting a VM family that’s no longer supported (e.g., the H-series was retired)

    Here’s what you can try:

    • Pick a different VM family that has capacity in your region

    – In many regions, the Dv3/Dv4 or Ev3/Ev4 series (for example Standard_D4s_v3, Standard_D8s_v4, Standard_E8s_v4, etc.) will be available when H-series isn't.

    – Go to your cluster configuration, open the Node Type dropdown and pick one of those.

    • Check your subscription quotas

    – Even if capacity exists, you might have hit your cores/instances quota for a VM family. In the Azure Portal go to Subscriptions → Usage + quotas and verify you haven’t exceeded the limit for that VM family.

    • Use a different region

    – If you truly need your preferred SKU in a region that's sold out, you'll have to spin up a new Databricks workspace in another Azure region where capacity is available.

    – Unfortunately, you can’t “move” an existing workspace to a new region. You’d create a new workspace via the Azure Portal (Azure Databricks → + Create → pick the new Region from the dropdown) and then recreate your clusters there.
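    The three checks above can also be run from the Azure CLI. A minimal sketch, assuming placeholder region, resource group, and workspace names (the `az databricks` command group requires the `databricks` CLI extension):

    ```shell
    # 1. See which VM SKUs are restricted for your subscription in the current
    #    region; the Restrictions column flags SKUs you cannot allocate.
    az vm list-skus --location eastus --resource-type virtualMachines --output table

    # 2. Check vCPU quota usage per VM family in that region
    #    (Current Value vs. Limit shows whether you have headroom).
    az vm list-usage --location eastus --output table

    # 3. If the region is sold out, create a new Databricks workspace in a
    #    region with capacity (names here are placeholders).
    az databricks workspace create \
      --resource-group my-rg \
      --name my-new-workspace \
      --location westus2 \
      --sku standard
    ```

    The workspace region fixes the region for all classic compute clusters created in it, which is why step 3 creates a new workspace rather than changing a setting on the existing one.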

    Hope one of these approaches gets you up and running! Let me know if you have questions on which VM sizes to pick or need help checking quotas.

    Reference list

    • H-series VM Family Retirement

    https://learn.microsoft.com/azure/virtual-machines/h-series-retirement

    • Classic compute termination error codes (CLOUD_PROVIDER_LAUNCH_FAILURE)

    https://learn.microsoft.com/azure/databricks/compute/troubleshooting/cluster-error-codes#cloud_provider_launch_failure

    • Subscription quotas

    https://learn.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits-quotas

    • Create an Azure Databricks workspace (to pick a different region)

    https://learn.microsoft.com/azure/databricks/getting-started/azure/quickstart


  2. kagiyama yutaka 1,745 Reputation points
    2026-04-26T11:55:46.2666667+00:00

    I think classic compute is pinned to the region chosen at workspace creation, so when that region's SKU pool is exhausted, the only path that reliably works is to spin up a small workspace in a region that can still allocate that size and rebuild there. If it still stalls, drop down one SKU tier or try a neighboring region and the allocator should succeed.


  3. Q&A Assist
    2026-04-24T12:00:11.1066667+00:00
    AI generated content. This answer was created with AI from Q&A Assist.

    The error indicates an Azure capacity issue for the selected VM SKU in that region. This is a cloud provider stockout problem, not specific to Databricks.

    To proceed:

    1. Change the VM size (SKU)
      • In the Databricks Classic Compute cluster configuration, choose a different worker/driver VM size from the list (for example, another D-series or E-series SKU).
      • If all SKUs in the dropdown are disabled for that region, capacity for those sizes is currently unavailable.
    2. Change availability zone or let Azure choose automatically
      • If the cluster configuration allows selecting an availability zone, switch to another zone in the same region or use an auto zone setting so Azure can pick a zone with capacity.
    3. Use a different region
      • If capacity issues persist for all SKUs in the current region, create the Databricks workspace (or a new workspace) in another Azure region where the desired VM sizes are available.
      • Then create the Classic Compute cluster in that new region (the cluster region is tied to the workspace region, so changing region means using a workspace in that region).
    4. Retry later
      • Capacity issues can be temporary. Retrying the cluster launch later may succeed once Azure capacity is restored.
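    As a rough illustration of steps 1 and 4 combined, a cluster launch can be retried across a list of fallback SKUs against the Databricks Clusters REST API. Everything here (host and token variables, node types, Spark version, cluster name) is a placeholder, and because allocation failures surface asynchronously, a real script would also poll `clusters/get` until the cluster leaves the PENDING state:

    ```shell
    # Hypothetical fallback loop: submit a create request for each candidate
    # node type until the API accepts one.
    for NODE in Standard_D8s_v4 Standard_D4s_v3 Standard_E8s_v4; do
      RESP=$(curl -s -X POST "$DATABRICKS_HOST/api/2.1/clusters/create" \
        -H "Authorization: Bearer $DATABRICKS_TOKEN" \
        -d "{\"cluster_name\": \"fallback-test\",
             \"spark_version\": \"15.4.x-scala2.12\",
             \"node_type_id\": \"$NODE\",
             \"num_workers\": 2}")
      if echo "$RESP" | grep -q '"cluster_id"'; then
        echo "Create request accepted with node type $NODE"
        break
      fi
    done
    ```

    Note that a successful create response only means the request was accepted; a capacity stockout would still show up later as a CLOUD_PROVIDER_LAUNCH_FAILURE termination reason, so checking the cluster's final state remains necessary.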

    If the problem persists even after trying different SKUs and regions, contact Azure support or Databricks support to confirm regional capacity constraints and get guidance on suitable alternative SKUs.

