Quotas and resource constraints in Apache Spark for Azure Synapse

Haijun Zhai 20 Reputation points Microsoft Employee
2026-04-27T23:49:27.94+00:00

Azure Synapse Analytics

An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.


Answer accepted by question author

  1. Manoj Kumar Boyini 13,850 Reputation points Microsoft External Staff Moderator
    2026-04-28T00:20:54.19+00:00

    Hi @Haijun Zhai,

    Azure Synapse Spark has two layers of constraints: a workspace-level vCore quota and Spark pool-level limits. Every workspace has a documented Apache Spark (vCore) quota; if a workload requests more vCores than are available, the job fails with the MAXIMUM_WORKSPACE_CAPACITY_EXCEEDED error. You can request a quota increase through the Azure portal under Azure Synapse Analytics → Apache Spark (vCore) per workspace.
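    To make the workspace-level arithmetic concrete, here is a minimal sketch of how a session's vCore request can exceed the remaining workspace quota and trigger that error. The node-size-to-vCore mapping reflects the common Synapse node sizes, but the quota and usage figures are assumed example values, not your workspace's actual limits:

    ```python
    # Illustrative vCore accounting only - the quota and in-use figures below are
    # assumed example values, not real workspace numbers.
    NODE_SIZE_VCORES = {"Small": 4, "Medium": 8, "Large": 16}  # common Synapse node sizes

    def requested_vcores(node_size: str, executors: int) -> int:
        """vCores a Spark session asks for: one driver node plus the executor nodes."""
        per_node = NODE_SIZE_VCORES[node_size]
        return per_node * (executors + 1)  # +1 for the driver

    workspace_quota_vcores = 50   # assumed Apache Spark (vCore) quota for the workspace
    vcores_in_use = 24            # assumed vCores already held by running sessions

    ask = requested_vcores("Medium", executors=5)   # 8 * (5 + 1) = 48 vCores
    remaining = workspace_quota_vcores - vcores_in_use
    if vcores_in_use + ask > workspace_quota_vcores:
        # This is the situation where Synapse rejects the request with
        # MAXIMUM_WORKSPACE_CAPACITY_EXCEEDED: the new session would push the
        # workspace past its Spark vCore quota.
        print(f"Request for {ask} vCores exceeds the {remaining} vCores still free.")
    ```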

    At the Spark pool level, the pool definition controls compute behavior such as node size, node count, autoscaling, and time to live. Compute charges start only when a Spark job runs and a Spark instance is created. Spark pools are shared across users, and all workloads consume from the same pool capacity, while job concurrency and resource usage are governed by pool-level and workspace-level limits.
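    If you manage those pool settings from code, the sketch below shows the general shape using the azure-mgmt-synapse management SDK together with azure-identity. The resource names are placeholders, and the property and parameter names follow the ARM Big Data Pool schema, so double-check them against your installed SDK version before relying on them:

    ```python
    # Minimal sketch, assuming the azure-mgmt-synapse and azure-identity packages.
    # All resource names below are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.synapse import SynapseManagementClient
    from azure.mgmt.synapse.models import (
        AutoPauseProperties,
        AutoScaleProperties,
        BigDataPoolResourceInfo,
    )

    client = SynapseManagementClient(DefaultAzureCredential(), "<subscription-id>")

    pool = BigDataPoolResourceInfo(
        location="eastus",                   # placeholder region
        node_size="Medium",                  # node size determines vCores per node
        node_size_family="MemoryOptimized",
        spark_version="3.4",
        auto_scale=AutoScaleProperties(enabled=True, min_node_count=3, max_node_count=10),
        auto_pause=AutoPauseProperties(enabled=True, delay_in_minutes=15),  # idle time to live
    )

    # Creating or updating the pool only changes its capacity ceiling; compute is
    # billed when a Spark job runs and an instance is actually created on the pool.
    poller = client.big_data_pools.begin_create_or_update(
        resource_group_name="<resource-group>",
        workspace_name="<workspace-name>",
        big_data_pool_name="<pool-name>",
        big_data_pool_info=pool,
    )
    print(poller.result().provisioning_state)
    ```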

    The documented concurrency limits are 50 running jobs per Spark pool, 200 queued jobs per Spark pool, 250 active jobs per Spark pool, and 1000 active jobs per workspace. Core usage is constrained by both pool capacity and the overall workspace vCore quota.
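    A small sketch of how those limits combine when deciding whether another submission will be accepted; the current job counts passed in at the bottom are assumptions for illustration:

    ```python
    # Documented Synapse Spark concurrency limits (see above); the job counts used
    # in the example call are assumed values.
    MAX_RUNNING_PER_POOL = 50
    MAX_QUEUED_PER_POOL = 200
    MAX_ACTIVE_PER_POOL = 250        # running + queued
    MAX_ACTIVE_PER_WORKSPACE = 1000

    def can_submit(pool_running: int, pool_queued: int, workspace_active: int,
                   new_jobs: int = 1) -> bool:
        """True if submitting `new_jobs` more jobs stays within the documented limits."""
        pool_active = pool_running + pool_queued
        return (pool_active + new_jobs <= MAX_ACTIVE_PER_POOL
                and pool_queued + new_jobs <= MAX_QUEUED_PER_POOL
                and workspace_active + new_jobs <= MAX_ACTIVE_PER_WORKSPACE)

    # A pool already at 50 running and 200 queued jobs sits at its 250 active-job
    # ceiling, so one more submission would be rejected.
    print(can_submit(pool_running=50, pool_queued=200, workspace_active=260))  # False
    ```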

    The documented API throttling limits are 2 requests/sec for create session and create batch job operations, and 200 requests/sec for GET operations, with an approximate 200 QPS overall cap.
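    In practice this means bulk submission scripts should pace their create calls and back off when they receive HTTP 429. Below is a minimal client-side sketch of that pattern; the endpoint URL, bearer token, and payloads are placeholders rather than real identifiers, and any HTTP client would do in place of requests:

    ```python
    import time
    import requests  # assumed available; any HTTP client works

    # Placeholders - substitute your workspace's Livy batches endpoint and a valid
    # Azure AD bearer token.
    batch_endpoint = "https://<workspace>.dev.azuresynapse.net/livyApi/versions/<api-version>/sparkPools/<pool>/batches"
    headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

    MIN_INTERVAL = 0.5   # pace create-batch calls to at most 2 requests/sec

    def submit_batches(payloads, max_retries=5):
        last_call = 0.0
        for payload in payloads:
            # Client-side pacing so we stay under the documented create-batch rate.
            wait = MIN_INTERVAL - (time.monotonic() - last_call)
            if wait > 0:
                time.sleep(wait)
            for attempt in range(max_retries):
                last_call = time.monotonic()
                resp = requests.post(batch_endpoint, json=payload, headers=headers)
                if resp.status_code != 429:      # not throttled - surface other errors
                    resp.raise_for_status()
                    break
                # Throttled: honor Retry-After if present, otherwise back off exponentially.
                time.sleep(float(resp.headers.get("Retry-After", 2 ** attempt)))
            else:
                raise RuntimeError("Still throttled after retries; slow down submissions.")
    ```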

    If the error is workspace-capacity related, reduce requested vCores or request a quota increase. If the pool capacity is exhausted, reduce concurrent workloads, lower resource usage, increase the pool size, or use another pool.

    Helpful References:
    https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-concepts
    https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-pool-configurations
    https://learn.microsoft.com/en-us/rest/api/synapse/concurrency-limits-spark-pools

    Please let us know if you have any questions or concerns.

