Hi @Haijun Zhai
Azure Synapse Spark has two layers of constraints: a workspace-level vCore quota and Spark pool-level limits. Microsoft documents that every workspace has a Spark vCore quota; if a workload requests more vCores than remain available, the job fails with the documented error MAXIMUM_WORKSPACE_CAPACITY_EXCEEDED. You can request a quota increase through the Azure portal under Azure Synapse Analytics → Apache Spark (vCore) per workspace.
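As a rough illustration of the workspace-level check (Synapse performs this server-side; the function name, quota figure, and error text below are illustrative, not a Synapse API):

```python
# Hedged sketch: client-side estimate of the workspace vCore check that
# Synapse performs server-side. The quota value and helper name are
# hypothetical; the real quota is shown in the Azure portal under
# Azure Synapse Analytics -> Apache Spark (vCore) per workspace.

def check_workspace_capacity(requested_vcores: int,
                             vcores_in_use: int,
                             workspace_quota: int) -> None:
    """Raise if a new workload would push usage past the workspace quota."""
    if vcores_in_use + requested_vcores > workspace_quota:
        raise RuntimeError(
            "MAXIMUM_WORKSPACE_CAPACITY_EXCEEDED: requested "
            f"{requested_vcores} vCores, but only "
            f"{workspace_quota - vcores_in_use} of {workspace_quota} remain"
        )

# Example: a 100-vCore quota with 80 vCores already in use.
check_workspace_capacity(16, 80, 100)   # 96 <= 100, succeeds
```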
At the Spark pool level, the pool definition controls compute behavior such as node size, node count, autoscaling, and time-to-live (TTL). Compute charges begin only when a Spark job runs and a Spark instance is created. A Spark pool is shared across users, and all workloads draw from the same pool capacity, while job concurrency and resource usage are governed by both pool-level and workspace-level limits.
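To make the pool-level sizing concrete, here is a small sketch that estimates a pool's maximum vCore footprint from its node size and autoscale ceiling. The vCores-per-node figures follow the commonly documented Synapse node sizes, but verify them against the current Microsoft documentation:

```python
# Hedged sketch: estimate the maximum vCores a Spark pool can consume.
# The vCores-per-node-size mapping follows commonly documented Synapse
# values; confirm against the current docs before relying on it.
NODE_SIZE_VCORES = {
    "Small": 4,
    "Medium": 8,
    "Large": 16,
    "XLarge": 32,
    "XXLarge": 64,
}

def pool_max_vcores(node_size: str, max_node_count: int) -> int:
    """Upper bound on vCores the pool can request at full autoscale."""
    return NODE_SIZE_VCORES[node_size] * max_node_count

# A Medium pool autoscaling up to 10 nodes can request up to 80 vCores,
# all of which count against the workspace-level vCore quota.
print(pool_max_vcores("Medium", 10))  # 80
```

Comparing this upper bound against the workspace quota is a quick way to see whether several pools could collide with the quota when they scale out at the same time.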
The documented concurrency limits are 50 running jobs per Spark pool, 200 queued jobs per Spark pool, 250 active jobs per Spark pool, and 1000 active jobs per workspace. Core usage is constrained by both pool capacity and the overall workspace vCore quota.
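The limits above can be sketched as a simple admission check. Synapse enforces these server-side; the function and counters below are illustrative only:

```python
# Hedged sketch of the documented Synapse Spark concurrency limits.
# Synapse enforces these server-side; the names here are illustrative.
MAX_RUNNING_PER_POOL = 50
MAX_QUEUED_PER_POOL = 200
MAX_ACTIVE_PER_POOL = 250        # running + queued
MAX_ACTIVE_PER_WORKSPACE = 1000

def can_submit(pool_running: int, pool_queued: int,
               workspace_active: int) -> bool:
    """True if one more queued job fits within the documented limits."""
    pool_active = pool_running + pool_queued
    return (pool_active + 1 <= MAX_ACTIVE_PER_POOL
            and pool_queued + 1 <= MAX_QUEUED_PER_POOL
            and workspace_active + 1 <= MAX_ACTIVE_PER_WORKSPACE)

print(can_submit(50, 199, 900))   # True: the job takes the 200th queue slot
print(can_submit(50, 200, 900))   # False: the pool's queue is full
```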
The documented API throttling limits are 2 requests/sec for create-session and create-batch operations and 200 requests/sec for GET operations, with an overall cap of roughly 200 queries per second.
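Given the 2 requests/sec limit on session and batch creation, a client submitting many jobs may want to pace itself rather than hit 429 responses. The limits come from the docs above; the pacing code and the stand-in submit function are illustrative, not the Synapse SDK:

```python
import time

# Hedged sketch: client-side pacing to stay under the documented
# 2 requests/sec limit for create-session / create-batch calls.
# submit_fn is a stand-in for whatever client actually submits the job.
CREATE_LIMIT_PER_SEC = 2
MIN_INTERVAL = 1.0 / CREATE_LIMIT_PER_SEC  # 0.5 s between create calls

def paced_submit(jobs, submit_fn):
    """Submit jobs one at a time, spacing calls to respect the limit."""
    results = []
    last = float("-inf")
    for job in jobs:
        wait = MIN_INTERVAL - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        results.append(submit_fn(job))
    return results

# Usage with a stand-in submit function:
submitted = paced_submit(["job-a", "job-b", "job-c"],
                         lambda j: f"accepted:{j}")
print(submitted)
```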
If the error is workspace-capacity related, reduce requested vCores or request a quota increase. If the pool capacity is exhausted, reduce concurrent workloads, lower resource usage, increase the pool size, or use another pool.
Helpful References:
https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-concepts
https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-pool-configurations
https://learn.microsoft.com/en-us/rest/api/synapse/concurrency-limits-spark-pools
Please let us know if you have any questions or concerns.