
Unable to run a Synapse pipeline when triggered from ADF, though it runs fine inside Synapse.

Indu Jaishwal 0 Reputation points Microsoft Employee
2026-04-28T21:30:37.91+00:00

We have two Synapse pipelines inside our Synapse workspace that run fine when triggered from Synapse, but when triggered from ADF they fail with the error below:

Error details

This application failed due to the total number of errors: 1.
Error code: 1 (Spark_Ambiguous_UserApp_NullPointer)
Message: Job failed during run time with state=[dead].
TSG: The code tried to dereference a null value. At some point the code attempted to call a method or access a property on a null value. To avoid de-referencing null, add null check guards around method calls or property accesses on values that can potentially be null.
1. Check the logs for this Spark application. Inspect the logs for a clearer indication of what was de-referenced.
Source: Unknown

The Synapse workspace has the following roles assigned to the ADF resource: Synapse Artifact User, Synapse Compute Operator, and Synapse Credential User.
The storage account where the Synapse workspace stores the JAR invoked by the Synapse pipeline has granted the Storage Blob Data Contributor role to the ADF resource.
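For reference, the role assignments described above could be granted to the ADF managed identity with the Azure CLI roughly as follows. This is a sketch, not a verified fix: the workspace, factory, resource group, and storage account names are placeholders, and `az datafactory` requires the `datafactory` CLI extension.

```shell
# Hypothetical resource names -- substitute your own.
WORKSPACE="my-synapse-ws"
ADF_NAME="my-data-factory"
RG="my-resource-group"
STORAGE_ID="/subscriptions/<sub-id>/resourceGroups/$RG/providers/Microsoft.Storage/storageAccounts/mystorageacct"

# Object ID of the ADF system-assigned managed identity.
ADF_MI=$(az datafactory show --name "$ADF_NAME" --resource-group "$RG" \
  --query identity.principalId -o tsv)

# Synapse RBAC roles listed in the question.
for ROLE in "Synapse Artifact User" "Synapse Compute Operator" "Synapse Credential User"; do
  az synapse role assignment create --workspace-name "$WORKSPACE" \
    --role "$ROLE" --assignee "$ADF_MI"
done

# Storage role on the account holding the job JAR.
az role assignment create --role "Storage Blob Data Contributor" \
  --assignee "$ADF_MI" --scope "$STORAGE_ID"
```

Note that Synapse RBAC role assignments (managed with `az synapse role assignment`) are separate from Azure RBAC assignments on the storage account (managed with `az role assignment`); both layers need to be in place.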

Azure Synapse Analytics

An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.


1 answer

  1. Salamat Shah 180 Reputation points MVP
    2026-05-04T22:01:41.1466667+00:00

    The failure occurs because the Azure Data Factory (ADF) trigger runs the Synapse Spark job under ADF’s managed identity, which differs from the identity used when running the pipeline directly in Synapse Studio. This causes missing runtime context (linked service / Spark config / parameter values) and results in a Spark NullPointerException.

    Try the following fix steps:

    1. Ensure the ADF managed identity has Synapse Administrator (or all required roles) plus explicit permissions on:
      • The Spark pool
      • Linked services used by the pipeline
      • All storage paths referenced at runtime (input, output, temp)
    2. Verify the pipeline does not rely on Studio-only defaults (parameters, Spark config), and explicitly pass all required values when triggered from ADF.
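    As a rough illustration of step 2, an ADF Spark job definition activity can pass the job's arguments and Spark configuration explicitly rather than relying on values that Synapse Studio fills in. This is a hypothetical fragment: the activity, reference, and storage names are made up, and the exact property names should be checked against the activity's JSON in your own factory.

    ```json
    {
      "name": "RunSynapseSparkJob",
      "type": "SparkJob",
      "typeProperties": {
        "sparkJob": {
          "referenceName": "MySparkJobDefinition",
          "type": "SparkJobDefinitionReference"
        },
        "args": [
          "--input", "abfss://data@mystorageacct.dfs.core.windows.net/input/",
          "--output", "abfss://data@mystorageacct.dfs.core.windows.net/output/"
        ],
        "conf": {
          "spark.example.setting": "value-that-Studio-supplied-by-default"
        }
      }
    }
    ```

    If any of these values are left empty and the job's code reads them without a null check, a NullPointerException like the one in the question is a plausible result.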
