Timestamp being logged incorrectly some of the time.

Greg R 0 Reputation points
2026-05-05T19:57:37.6533333+00:00

I have an ADF pipeline that starts by writing utcNow() to a string pipeline variable, then performs operations that cast it to assorted formats with formatDateTime, including several script blocks that log to a Snowflake log table, all based on the pipeline variable. For some reason, the final script (and only the final script), which logs pipeline completion, will sometimes insert a timestamp that's a few seconds earlier than all of the others. The timestamp difference is reflected in the input JSON for the logging script, so it appears to be on the ADF side of things, yet the Set Variable block that runs concurrently to set the output variables reads the correct timestamp. This only happens on some of the runs but is consistent across dev environments. Does anyone know why this might be happening, and how to fix it?

Azure Data Factory

An Azure service for ingesting, preparing, and transforming data at scale.


1 answer

  1. SAI JAGADEESH KUDIPUDI 2,625 Reputation points Microsoft External Staff Moderator
    2026-05-05T23:29:36.64+00:00

    Hi Greg R,
    Thanks for sharing the details. I understand how confusing it can be to see timestamps drift slightly within the same pipeline run, especially when you expect them to be consistent.

    Based on your description, what you’re seeing is actually a common behavior in Azure Data Factory when timestamps are evaluated at different points during pipeline execution.

    Functions like utcNow() return the current time at the moment the expression is evaluated. Depending on how your activities are structured, that evaluation can happen at slightly different times across activities, which can lead to small differences (a few seconds) in the generated timestamps. Microsoft's documentation confirms that utcNow() simply returns the current timestamp at evaluation time.

    At the same time, each pipeline run has a fixed trigger time that represents when the pipeline started. This can be accessed using pipeline().TriggerTime, which remains consistent throughout the run.
    To fix the issue

    To make your timestamps consistent across all logging steps, I recommend the following:

    • Capture the timestamp once at the beginning of the pipeline
      • Use a Set Variable activity: startTime = @utcNow()
    • Use that variable everywhere
      • Reference @variables('startTime') in all downstream activities, including your final script
    • Alternatively, use a built‑in stable value
      • Replace utcNow() with: @pipeline().TriggerTime
      • This ensures all activities use the same timestamp tied to the pipeline invocation
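
    The steps above can be sketched in pipeline JSON. This is a minimal, illustrative fragment (the activity and variable names `SetStartTime` and `startTime` are assumptions, not from your pipeline):

    ```json
    {
      "name": "SetStartTime",
      "type": "SetVariable",
      "typeProperties": {
        "variableName": "startTime",
        "value": "@utcNow()"
      }
    }
    ```

    Downstream activities then reference `@variables('startTime')` (or `@pipeline().TriggerTime`) instead of calling `@utcNow()` again, so every log row carries the same value.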

    Make sure your final logging activity has a clear dependency on the step where the variable is set. This ensures the correct value is used and avoids any timing inconsistencies in expression evaluation during execution.
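
    In pipeline JSON, that dependency looks roughly like the following sketch (names are illustrative, matching the hypothetical `SetStartTime` activity above): the final Script activity lists the Set Variable activity in its `dependsOn` with a `Succeeded` condition, so it cannot start before the variable is written.

    ```json
    {
      "name": "LogCompletion",
      "type": "Script",
      "dependsOn": [
        {
          "activity": "SetStartTime",
          "dependencyConditions": [ "Succeeded" ]
        }
      ]
    }
    ```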

    This behavior is expected when timestamps are evaluated independently across activities. By standardizing on a single timestamp source (either a pipeline variable or pipeline().TriggerTime), you should see consistent logging across all steps.
    Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.
