Microsoft OpenTelemetry Distro

Microsoft OpenTelemetry Distro is a unified observability distribution that provides a single onboarding experience for collecting traces, metrics, and logs from agentic and nonagentic applications. It supports observability for Microsoft Agent 365, Microsoft Foundry, Azure Monitor, and any OpenTelemetry Protocol (OTLP)-compatible backend. The distro supports .NET, Node.js, and Python, and replaces fragmented setup across multiple observability stacks with one import and one configuration call.

Key benefits

The Microsoft OpenTelemetry Distro provides these benefits:

  • One package, one API: Replace multiple exporter and instrumentation packages with a single dependency.
  • Multi-backend support: Send telemetry to Azure Monitor, any OpenTelemetry Protocol (OTLP)-compatible endpoint such as Datadog, Grafana, or New Relic, and Microsoft Agent 365 at the same time.
  • Built-in instrumentations: Use automatic instrumentation for HTTP, databases, Azure SDK, Azure Functions, and more with no extra configuration.
  • Standards-based: Build on OpenTelemetry, the industry-standard observability framework.
  • Minimal boilerplate: Add one import and one function call to your application entry point.

Installation and configuration

This guidance shows you how to add observability to your application with Microsoft OpenTelemetry Distro. The Distro automatically collects traces, metrics, and logs with built-in instrumentations, and exports the telemetry to Azure Monitor, any OpenTelemetry Protocol (OTLP) endpoint, or Microsoft Agent 365.

Install library

To get started with the Microsoft OpenTelemetry Distro, install the appropriate library for your development platform by using your language's package manager.

Prerequisites: Python 3.10 or later.

pip install microsoft-opentelemetry

Configuration

The Agent 365 exporter doesn't use a connection string. It discovers its endpoint automatically based on tenant. To enable export to Agent 365, set the exporter target and provide a token resolver that returns an access token for a given agent ID and tenant ID.

Call use_microsoft_opentelemetry() to enable observability.

import asyncio

from microsoft.opentelemetry import use_microsoft_opentelemetry
from microsoft.opentelemetry.a365.hosting.token_cache_helpers import AgenticTokenCache

token_cache = AgenticTokenCache()

def resolve_token(agent_id, tenant_id):
    # The resolver is synchronous, so block on the async token cache here.
    token = asyncio.run(token_cache.get_observability_token(agent_id, tenant_id))
    return token.token if token else None

use_microsoft_opentelemetry(
    enable_a365=True,
    a365_token_resolver=resolve_token,
)

For custom token resolution (instead of the default token resolver), see Manual token resolver.

You can customize the exporter behavior by passing optional a365_* kwargs to use_microsoft_opentelemetry().

  • a365_use_s2s_endpoint: When True, uses the service-to-service endpoint path. Default: False.
  • a365_max_queue_size: Maximum queue size for the batch processor. Default: 2048.
  • a365_scheduled_delay_ms: Delay in milliseconds between export batches. Default: 5000.
  • a365_exporter_timeout_ms: Timeout in milliseconds for the export operation. Default: 30000.
  • a365_max_export_batch_size: Maximum batch size for export operations. Default: 512.
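To see how the queue and batch parameters interact, here is an illustrative pure-Python sketch of a batch processor's drop-and-flush behavior. This is not the distro's implementation; the class and method names are hypothetical.

```python
from collections import deque

class BatchSketch:
    """Illustrates max_queue_size / max_export_batch_size semantics."""

    def __init__(self, max_queue_size=2048, max_export_batch_size=512):
        self.queue = deque()
        self.max_queue_size = max_queue_size
        self.max_export_batch_size = max_export_batch_size
        self.dropped = 0

    def on_end(self, span):
        # Spans beyond the queue capacity are dropped rather than blocking.
        if len(self.queue) >= self.max_queue_size:
            self.dropped += 1
            return
        self.queue.append(span)

    def export_batch(self):
        # Each timer tick (the scheduled delay) exports up to one batch.
        batch = []
        while self.queue and len(batch) < self.max_export_batch_size:
            batch.append(self.queue.popleft())
        return batch

proc = BatchSketch(max_queue_size=3, max_export_batch_size=2)
for s in ["a", "b", "c", "d"]:
    proc.on_end(s)
```

With a queue capacity of 3, the fourth span is dropped; each export then drains at most two spans, which is why raising the scheduled delay or batch size reduces export frequency.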

Propagate context

To maintain observability across distributed Agent 365 operations, propagate context through your agents and services. Context propagation ensures that traces, logs, and metrics are correlated across the entire request lifecycle, which is required for a complete Microsoft Agent 365 monitoring experience.
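In OpenTelemetry, trace context typically travels between services in the W3C Trace Context traceparent header. The following pure-Python sketch (illustrative only, not part of the distro's API) shows what that header carries:

```python
def parse_traceparent(header: str) -> dict:
    """Parse a W3C traceparent header: version-traceid-spanid-flags."""
    version, trace_id, span_id, flags = header.split("-")
    if len(trace_id) != 32 or len(span_id) != 16:
        raise ValueError("malformed traceparent")
    return {
        "version": version,
        "trace_id": trace_id,       # 16-byte trace ID, hex encoded
        "parent_span_id": span_id,  # 8-byte span ID, hex encoded
        "sampled": int(flags, 16) & 0x01 == 1,
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```

Downstream spans that adopt the incoming trace ID land in the same trace, which is what makes cross-service correlation possible.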

Baggage attributes

Use BaggageBuilder to set contextual information that flows through all spans in a request. The SDK implements a SpanProcessor that copies all nonempty baggage entries to newly started spans without overwriting existing attributes.

from microsoft.opentelemetry.a365.core import BaggageBuilder

with (
    BaggageBuilder()
    .tenant_id("tenant-123")
    .agent_id("agent-456")
    .conversation_id("conv-789")
    .build()
):
    # Any spans started in this context will receive these as attributes
    pass

To auto-populate the BaggageBuilder from the TurnContext, use the populate helper in the microsoft-opentelemetry package. This helper automatically extracts caller, agent, tenant, channel, and conversation details from the activity.

from microsoft.opentelemetry.a365.core import BaggageBuilder
from microsoft.opentelemetry.a365.hosting.scope_helpers.populate_baggage import populate

builder = BaggageBuilder()
populate(builder, turn_context)

with builder.build():
    # Baggage is auto-populated from the TurnContext activity
    pass
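The copy-without-overwrite behavior of the baggage span processor can be pictured with a small pure-Python sketch. This is a hypothetical helper for illustration, not the SDK's actual processor:

```python
def apply_baggage(span_attributes: dict, baggage: dict) -> dict:
    """Copy nonempty baggage entries onto span attributes without
    overwriting attributes the span already set."""
    merged = dict(span_attributes)
    for key, value in baggage.items():
        if value and key not in merged:
            merged[key] = value
    return merged

attrs = apply_baggage(
    {"microsoft.tenant.id": "explicit-tenant"},
    {"microsoft.tenant.id": "tenant-123", "gen_ai.agent.id": "agent-456", "x": ""},
)
# The span's own tenant ID wins, and the empty baggage entry is skipped.
```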

Baggage middleware

If your agent uses the hosting integration package, register baggage middleware to automatically populate baggage for every incoming request. This step removes the need to call BaggageBuilder manually in each activity handler.

In Python, register baggage middleware through ObservabilityHostingManager.configure() rather than directly on the adapter.

from microsoft.opentelemetry.a365.hosting import ObservabilityHostingManager, ObservabilityHostingOptions

options = ObservabilityHostingOptions(enable_baggage=True)
ObservabilityHostingManager.configure(adapter.middleware_set, options)

The middleware skips baggage setup for async replies (ContinueConversation events) to avoid overwriting baggage that the originating request already set.

Validate data is flowing in product

To view agent telemetry in Microsoft Purview or Microsoft Defender, make sure your agent meets the instrumentation and attribute requirements described in the following sections.

Automatic instrumentation

The Microsoft OpenTelemetry Distro combines standard OpenTelemetry pipelines with Microsoft-curated instrumentation. The Distro can collect application telemetry, infrastructure telemetry, and agent or generative AI telemetry depending on language and configuration.

  • Signal pipelines: Traces, metrics, and logs.
  • Resource detection: Service, host, cloud, and Azure runtime context where supported.
  • Infrastructure instrumentation: HTTP, ASP.NET Core, Azure SDK, database clients, and logging frameworks where supported.
  • Generative AI instrumentation: OpenAI, Azure OpenAI, Semantic Kernel, LangChain, OpenAI Agents SDK, and Agent Framework where supported.
  • Manual agent scopes: Agent invocation, tool execution, inference, and output telemetry where supported.
  • Exporters and processors: Azure Monitor, Microsoft Agent 365, OTLP, console output, span processors, log processors, and metric readers.

Instrumentation coverage

  • Python
    • Application instrumentation: OpenTelemetry resources, processors, readers, logging, metrics, and traces.
    • Agent and generative AI instrumentation: Semantic Kernel, OpenAI Agents SDK, Agent Framework, LangChain, Microsoft Agent 365 baggage, and Microsoft Agent 365 scopes.
  • Node.js
    • Application instrumentation: HTTP, Azure SDK, Azure Functions, MongoDB, MySQL, PostgreSQL, Redis, Bunyan, and Winston.
    • Agent and generative AI instrumentation: OpenAI Agents SDK, LangChain, Microsoft Agent 365 baggage, and Microsoft Agent 365 scopes.
  • .NET
    • Application instrumentation: ASP.NET Core, HttpClient, SQL Client, Azure SDK, resource detection, metrics, and logs.
    • Agent and generative AI instrumentation: Semantic Kernel, OpenAI and Azure OpenAI, Agent Framework, Microsoft Agent 365 baggage, and Microsoft Agent 365 scopes.

Automatic instrumentation listens to telemetry signals emitted by supported libraries and frameworks. Manual instrumentation is used when an application needs to describe agent-specific operations, such as invocation, tool execution, inference, or asynchronous output.

Add custom OpenTelemetry sources, meters, processors, or readers when your application emits telemetry that isn't covered by the built-in instrumentations.

Built-in instrumentation libraries

Auto-instrumentation listens to telemetry emitted by supported frameworks and forwards it through the Distro's OpenTelemetry pipeline. For agent scenarios, set baggage such as tenant ID and agent ID before the instrumented framework creates spans.

  • Semantic Kernel: Supported in Python and .NET. Not supported in Node.js.
  • OpenAI and OpenAI Agents SDK: Supported in Python, Node.js, and .NET.
  • Agent Framework: Supported in Python and .NET. Not supported in Node.js.
  • LangChain: Supported in Python and Node.js. Not listed for .NET.

Semantic Kernel

from microsoft.opentelemetry import use_microsoft_opentelemetry

def token_resolver(agent_id, tenant_id):
    return "your-token"

use_microsoft_opentelemetry(
    enable_a365=True,
    a365_token_resolver=token_resolver,
    instrumentation_options={
        "semantic_kernel": {"enabled": True},
    },
)

OpenAI

from microsoft.opentelemetry import use_microsoft_opentelemetry

def token_resolver(agent_id, tenant_id):
    return "your-token"

use_microsoft_opentelemetry(
    enable_a365=True,
    a365_token_resolver=token_resolver,
    instrumentation_options={
        "openai_agents": {"enabled": True},
    },
)

Agent Framework

from microsoft.opentelemetry import use_microsoft_opentelemetry

def token_resolver(agent_id, tenant_id):
    return "your-token"

use_microsoft_opentelemetry(
    enable_a365=True,
    a365_token_resolver=token_resolver,
    instrumentation_options={
        "agent_framework": {"enabled": True},
    },
)

LangChain

from microsoft.opentelemetry import use_microsoft_opentelemetry

def token_resolver(agent_id, tenant_id):
    return "your-token"

use_microsoft_opentelemetry(
    enable_a365=True,
    a365_token_resolver=token_resolver,
    instrumentation_options={
        "langchain": {"enabled": True},
    },
)

Manual instrumentation

Use manual instrumentation when automatic instrumentation doesn't describe the agent operation with enough detail. Manual scopes let an application describe common agent activities in a consistent way across languages.

  • InvokeAgentScope: The start and completion of an agent invocation.
  • ExecuteToolScope: A tool call made by an agent.
  • InferenceScope: An AI model inference operation.
  • OutputScope: Output that must be recorded after the originating scope has already completed.

Reuse the same request and agent identity values across scopes in a request so related telemetry can be correlated.

Agent invocation

from microsoft.opentelemetry.a365.core import (
    AgentDetails,
    Channel,
    InvokeAgentScope,
    InvokeAgentScopeDetails,
    Request,
    ServiceEndpoint,
)

agent_details = AgentDetails(
    agent_id="agent-456",
    agent_name="Email Assistant",
    agent_description="An AI agent powered by Azure OpenAI",
    agentic_user_id="auid-123",
    agentic_user_email="agent@contoso.com",
    agent_blueprint_id="blueprint-789",
    tenant_id="tenant-123",
)

request = Request(
    content="Please help me organize my emails",
    session_id="session-42",
    conversation_id="conv-xyz",
    channel=Channel(name="msteams"),
)

scope_details = InvokeAgentScopeDetails(
    endpoint=ServiceEndpoint(hostname="myagent.contoso.com", port=443),
)

with InvokeAgentScope.start(
    request=request,
    scope_details=scope_details,
    agent_details=agent_details,
) as invoke_scope:
    invoke_scope.record_input_messages(["Please help me organize my emails"])

    # Run the agent invocation.

    invoke_scope.record_output_messages(["I found 15 urgent emails."])

Tool execution

from microsoft.opentelemetry.a365.core import (
    ExecuteToolScope,
    ServiceEndpoint,
    ToolCallDetails,
    ToolType,
)

tool_details = ToolCallDetails(
    tool_name="email-search",
    arguments={"query": "from:manager@contoso.com"},
    tool_call_id="tool-call-456",
    description="Search emails by criteria",
    tool_type=ToolType.FUNCTION.value,
    endpoint=ServiceEndpoint(
        hostname="tools.contoso.com",
        port=8080,
        protocol="https",
    ),
)

with ExecuteToolScope.start(
    request=request,
    details=tool_details,
    agent_details=agent_details,
) as scope:
    result = search_emails(tool_details.arguments)
    scope.record_response(result)

Inference

from microsoft.opentelemetry.a365.core import (
    InferenceCallDetails,
    InferenceOperationType,
    InferenceScope,
)

inference_details = InferenceCallDetails(
    operation_name=InferenceOperationType.CHAT,
    model="gpt-4o-mini",
    provider_name="azure-openai",
)

with InferenceScope.start(
    request=request,
    details=inference_details,
    agent_details=agent_details,
) as scope:
    scope.record_input_messages(["Summarize the following emails for me."])
    response = call_llm()
    scope.record_output_messages([response.text])
    scope.record_input_tokens(response.usage.input_tokens)
    scope.record_output_tokens(response.usage.output_tokens)
    scope.record_finish_reasons(["stop"])

Output

from microsoft.opentelemetry.a365.core import OutputScope, Response, SpanDetails

# Capture this before exiting the originating InvokeAgentScope context.
parent_context = invoke_scope.get_span_context()
response = Response(
    messages=["Here is your organized inbox."],
)

with OutputScope.start(
    request=request,
    response=response,
    agent_details=agent_details,
    user_details=None,
    span_details=SpanDetails(parent_context=parent_context),
) as scope:
    pass


Local validation

Local validation confirms that the application produces telemetry before a product-specific destination is validated. Use console output or a local OTLP endpoint to check that traces, metrics, and logs are created.

Validate with a local OTLP endpoint

Configure the Distro to send telemetry to a local collector or another OTLP-compatible endpoint.

export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

from microsoft.opentelemetry import use_microsoft_opentelemetry

use_microsoft_opentelemetry()

Validate with local output

Use local output when you want to confirm instrumentation before sending telemetry to a remote destination.

export ENABLE_A365_OBSERVABILITY_EXPORTER=false

from microsoft.opentelemetry import use_microsoft_opentelemetry

def token_resolver(agent_id, tenant_id):
    return "local-validation-token"

use_microsoft_opentelemetry(
    enable_a365=True,
    a365_token_resolver=token_resolver,
)

# Run instrumented application code.

Review the local output for spans from expected sources, such as HTTP requests, OpenAI or Azure OpenAI calls, agent invocation scopes, tool execution scopes, or inference scopes. Destination-specific validation belongs in the product documentation for that destination.

Manually set up authentication

When you use the Agent 365 exporter, you must provide a mechanism to supply an authentication token. The token resolver runs once per export batch, using the agent ID and tenant ID from the active baggage context. The distro supports the following approaches.

Manual token resolver

Use a manual resolver when you acquire tokens outside the Agent Framework pipeline, or when you're building non-Agent Framework apps.

The resolver must be synchronous. Acquire the token in your async activity handler and cache it for the resolver.

from microsoft.opentelemetry import use_microsoft_opentelemetry
from microsoft.opentelemetry.a365.runtime import get_observability_authentication_scope

_cached_token: str | None = None

def my_token_resolver(agent_id: str, tenant_id: str) -> str | None:
    return _cached_token

use_microsoft_opentelemetry(enable_a365=True, a365_token_resolver=my_token_resolver)

@AGENT_APP.activity("message", auth_handlers=["AGENTIC"])
async def on_message(context: TurnContext, _state: TurnState):
    global _cached_token
    _cached_token = await AGENT_APP.auth.exchange_token(
        context,
        scopes=get_observability_authentication_scope(),
        auth_handler_id="AGENTIC",
    )

Agentic token cache with Agent Framework apps

For Agent Framework apps, the distro handles token acquisition and refresh for you. In .NET, the hosting package registers IExporterTokenCache<AgenticTokenStruct> via dependency injection when you don't set a custom TokenResolver. In Python, create an AgenticTokenCache and call register_observability() at runtime to supply credentials; the cache then handles acquiring and refreshing tokens.

from microsoft.opentelemetry import use_microsoft_opentelemetry
from microsoft.opentelemetry.a365.hosting.token_cache_helpers import AgenticTokenCache, AgenticTokenStruct
from microsoft.opentelemetry.a365.runtime import get_observability_authentication_scope

token_cache = AgenticTokenCache()

_cached_tokens: dict[tuple[str, str], str | None] = {}

# Keep the sync resolver side-effect free; refresh the cache in the async request handler.
def sync_token_resolver(agent_id: str, tenant_id: str) -> str | None:
    return _cached_tokens.get((agent_id, tenant_id))

use_microsoft_opentelemetry(enable_a365=True, a365_token_resolver=sync_token_resolver)

@AGENT_APP.activity("message", auth_handlers=["AGENTIC"])
async def on_message(context: TurnContext, _state: TurnState):
    agent_id = context.activity.recipient.id
    tenant_id = context.activity.recipient.tenant_id
    token_cache.register_observability(
        agent_id=agent_id,
        tenant_id=tenant_id,
        token_generator=AgenticTokenStruct(
            authorization=AGENT_APP.auth,
            turn_context=context,
        ),
        observability_scopes=get_observability_authentication_scope(),
    )
    _cached_tokens[(agent_id, tenant_id)] = await token_cache.get_observability_token(
        agent_id, tenant_id,
    )

Store validation attributes

For successful store validation, your agent must implement InvokeAgentScope, InferenceScope, and ExecuteToolScope. The following tables list the required and optional attributes for each scope, including OutputScope.

InvokeAgentScope

Attribute Status
gen_ai.agent.id Required
gen_ai.agent.name Required
gen_ai.operation.name Required
gen_ai.input.messages Required
gen_ai.output.messages Required
gen_ai.conversation.id Required
microsoft.tenant.id Required
microsoft.a365.agent.blueprint.id Required
microsoft.agent.user.id Required
microsoft.agent.user.email Required
microsoft.channel.name Required
user.id Required
user.email Required
client.address Required
server.address Required
server.port Required
error.type Optional
gen_ai.agent.description Optional
gen_ai.agent.version Optional
microsoft.a365.agent.platform.id Optional
microsoft.channel.link Optional
microsoft.conversation.item.link Optional
microsoft.session.id Optional
microsoft.session.description Optional
user.name Optional
microsoft.a365.caller.agent.id Optional
microsoft.a365.caller.agent.name Optional
microsoft.a365.caller.agent.blueprint.id Optional
microsoft.a365.caller.agent.platform.id Optional
microsoft.a365.caller.agent.user.email Optional
microsoft.a365.caller.agent.user.id Optional
microsoft.a365.caller.agent.version Optional

ExecuteToolScope

Attribute Status
gen_ai.agent.id Required
gen_ai.agent.name Required
gen_ai.operation.name Required
gen_ai.tool.name Required
gen_ai.tool.call.id Required
gen_ai.tool.call.arguments Required
gen_ai.tool.call.result Required
gen_ai.tool.type Required
gen_ai.conversation.id Required
microsoft.tenant.id Required
microsoft.a365.agent.blueprint.id Required
microsoft.agent.user.id Required
microsoft.agent.user.email Required
microsoft.channel.name Required
user.id Required
user.email Required
client.address Required
error.type Optional
gen_ai.agent.description Optional
gen_ai.agent.version Optional
gen_ai.tool.description Optional
microsoft.a365.agent.platform.id Optional
microsoft.channel.link Optional
microsoft.conversation.item.link Optional
microsoft.session.id Optional
microsoft.session.description Optional
server.address Optional
server.port Optional
user.name Optional

InferenceScope

Attribute Status
gen_ai.agent.id Required
gen_ai.agent.name Required
gen_ai.operation.name Required
gen_ai.input.messages Required
gen_ai.output.messages Required
gen_ai.request.model Required
gen_ai.provider.name Required
gen_ai.conversation.id Required
microsoft.tenant.id Required
microsoft.a365.agent.blueprint.id Required
microsoft.agent.user.id Required
microsoft.agent.user.email Required
microsoft.channel.name Required
user.id Required
user.email Required
client.address Required
error.type Optional
gen_ai.agent.description Optional
gen_ai.agent.version Optional
gen_ai.response.finish_reasons Optional
gen_ai.usage.input_tokens Optional
gen_ai.usage.output_tokens Optional
microsoft.a365.agent.platform.id Optional
microsoft.a365.agent.thought.process Optional
microsoft.channel.link Optional
microsoft.conversation.item.link Optional
microsoft.session.id Optional
microsoft.session.description Optional
server.address Optional
server.port Optional
user.name Optional

OutputScope

Attribute Status
gen_ai.agent.id Required
gen_ai.agent.name Required
gen_ai.operation.name Required
gen_ai.output.messages Required
gen_ai.conversation.id Required
microsoft.tenant.id Required
microsoft.a365.agent.blueprint.id Required
microsoft.agent.user.id Required
microsoft.agent.user.email Required
microsoft.channel.name Required
user.id Required
user.email Required
client.address Required
gen_ai.agent.description Optional
gen_ai.agent.version Optional
microsoft.a365.agent.platform.id Optional
microsoft.channel.link Optional
microsoft.conversation.item.link Optional
microsoft.session.id Optional
microsoft.session.description Optional
user.name Optional
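As a quick sanity check before export, you can compare a span's attributes against these required lists. The following minimal helper is illustrative and not part of the distro; it uses a subset of the InvokeAgentScope required attributes from the table above:

```python
INVOKE_AGENT_REQUIRED = [
    "gen_ai.agent.id",
    "gen_ai.agent.name",
    "gen_ai.operation.name",
    "gen_ai.conversation.id",
    "microsoft.tenant.id",
]

def missing_required(attributes: dict, required: list[str]) -> list[str]:
    # Treat absent or empty values as missing.
    return [name for name in required if not attributes.get(name)]

gaps = missing_required(
    {"gen_ai.agent.id": "agent-456", "microsoft.tenant.id": "tenant-123"},
    INVOKE_AGENT_REQUIRED,
)
# gaps names the required attributes that are still unset
```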

Test your agent with observability

After implementing observability, verify that telemetry is being captured:

  1. Go to https://admin.cloud.microsoft/#/agents/all.
  2. Select your agent, and then select Activity.
  3. Verify that sessions and tool calls appear.

Sample applications and advanced configuration

For working samples and advanced configuration options, see the GitHub repositories for each language.

Troubleshooting

This section describes common problems when implementing and using the Microsoft OpenTelemetry Distro with Agent 365.

Tip

The Agent 365 Troubleshooting Guide contains high-level troubleshooting recommendations, best practices, and links to troubleshooting content for each part of the Agent 365 development lifecycle.

Observability data doesn't appear

Symptoms:

  • Agent is running
  • No telemetry in admin center
  • Can't see agent activity

Root cause:

  • Agent 365 export isn't enabled
  • Configuration errors
  • Token resolver problems

Solutions: Try the following steps to resolve the problem:

  • Verify Agent 365 export is enabled

    You must explicitly enable the Agent 365 exporter. When you don't set it, the distro might fall back to a console exporter or export nothing. Enable it in code:

    from microsoft.opentelemetry import use_microsoft_opentelemetry
    
    use_microsoft_opentelemetry(
        enable_a365=True,
        a365_enable_observability_exporter=True,
        a365_token_resolver=my_token_resolver,
    )
    

    Or set the environment variable:

    export ENABLE_A365_OBSERVABILITY_EXPORTER=true
    

    Note

    ENABLE_A365_OBSERVABILITY_EXPORTER is a secondary toggle that only takes effect when enable_a365=True is set in code. You can also control it via the a365_enable_observability_exporter kwarg.


  • Check token resolver configuration

    The exporter requires a valid token resolver that returns a Bearer token for each export request. If the token resolver is missing or returns null, the export is silently skipped.

  • Enable console export and check for telemetry locally

    Add a console exporter to verify telemetry is being generated before it reaches the Agent 365 endpoint:

    use_microsoft_opentelemetry(enable_a365=True, enable_console=True)
    

  • Enable verbose logging

    import logging
    
    logging.basicConfig(level=logging.DEBUG)
    

  • Check logs for export errors

    Use the az webapp log tail command to search logs for observability-related errors. This example pipes through PowerShell's Select-String; in bash, pipe through grep -i "observability" instead:

    az webapp log tail --name <your-app-name> --resource-group <your-resource-group> | Select-String "observability"
    

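The silent-skip behavior from the token resolver step above can be pictured with a small sketch. This is illustrative only and not the exporter's actual code:

```python
def export_with_token(batch, token_resolver, agent_id, tenant_id):
    """Skip the export entirely when no token can be resolved."""
    token = token_resolver(agent_id, tenant_id) if token_resolver else None
    if not token:
        return "skipped"  # nothing is sent, and no error is raised
    return f"exported {len(batch)} spans"

result = export_with_token(["span"], lambda a, t: None, "agent-456", "tenant-123")
# A resolver that returns None means the batch never leaves the process.
```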
Missing tenant ID or agent ID — spans skipped

Symptoms: The system silently drops spans and never exports them. Some platforms log a count of skipped spans or a message such as No spans with tenant/agent identity found. Others drop them without logging.

Resolution:

  • Before export, the distro partitions spans by tenant and agent identity. Spans that lack either a tenant ID or agent ID are dropped and never sent to the service.
  • Ensure BaggageBuilder is set up with the tenant ID and agent ID before creating spans. These values propagate through the OpenTelemetry context and attach to all spans created within the baggage scope. For the platform-specific API, see Baggage attributes.
  • If you're using the baggage middleware or turn context helper from the hosting integration package, confirm the TurnContext activity has a valid recipient with agent identity.
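The identity-based partitioning described above can be sketched in pure Python. The attribute names follow the tables in this article; the function itself is hypothetical:

```python
def partition_by_identity(spans: list[dict]):
    """Group spans by (tenant ID, agent ID); drop spans missing either."""
    groups: dict[tuple[str, str], list[dict]] = {}
    dropped = 0
    for span in spans:
        tenant = span.get("microsoft.tenant.id")
        agent = span.get("gen_ai.agent.id")
        if not tenant or not agent:
            dropped += 1  # never exported
            continue
        groups.setdefault((tenant, agent), []).append(span)
    return groups, dropped

groups, dropped = partition_by_identity([
    {"microsoft.tenant.id": "tenant-123", "gen_ai.agent.id": "agent-456"},
    {"gen_ai.agent.id": "agent-456"},  # no tenant ID: dropped
])
```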

Token resolution failure — export skipped or unauthorized

Symptoms: The token resolver returns null or throws an error. Depending on the platform, the export is either skipped entirely or fails with HTTP 401.

Resolution:

  • The token resolver is required. If it's missing, the exporter throws an error on startup. Verify that a token resolver is provided and returns a valid Bearer token.
  • Make sure the correct tenant ID and agent ID are passed to BaggageBuilder, because these values are forwarded to the token resolver.
  • For Azure-hosted agents, verify the Managed Identity has the required API permission for the observability scope.
  • For .NET apps using the Agent Framework hosting package, token exchange is handled automatically via DI. If tokens are missing, confirm Microsoft.Agents.A365.Observability.Hosting is installed and registered.

HTTP 401 Unauthorized

Symptoms: Export fails with HTTP 401. The exporter doesn't retry this error.

Resolution:

  • Verify the token audience matches the observability endpoint scope.
  • Check that the token resolver isn't returning a delegated user token, a token for an incorrect audience, or an expired token.

HTTP 403 Forbidden

Symptoms: Export fails with HTTP 403. The exporter doesn't retry this error.

Root cause: An HTTP 403 error can have different causes. Check the following resolutions in order.

Resolution:

  • Missing license — Verify that your tenant has one of the following licenses assigned in Microsoft 365 admin center:

    • Test - Microsoft 365 E7
    • Microsoft 365 E7
    • Microsoft Agent 365 Frontier
  • Missing Agent365.Observability.OtelWrite permission — You must grant this permission to your identity (Managed Identity or app registration). Without it, telemetry export fails with HTTP 403.

    Grant the permission using one of the following options.

    Option A — Agent 365 CLI (requires a365.config.json and a365.generated.config.json in your config directory, a Global Administrator account, and Agent 365 CLI v1.1.139-preview or later)

    a365 setup admin --config-dir "<path-to-config-dir>"
    

    Option B — Entra Portal (no config files required; requires Global Administrator access to the blueprint app registration)

    1. Go to Entra portal > App registrations > select your Blueprint app.
    2. Go to API permissions > Add a permission > APIs my organization uses > search for 9b975845-388f-4429-889e-eab1ef63949c.
    3. Select Delegated permissions > check Agent365.Observability.OtelWrite > Add permissions.
    4. Repeat steps 2–3, this time select Application permissions > check Agent365.Observability.OtelWrite > Add permissions.
    5. Click Grant admin consent and confirm.

    Both Agent365.Observability.OtelWrite (Delegated) and Agent365.Observability.OtelWrite (Application) should show Granted status.

HTTP 429 / 5xx — Transient errors

Symptoms: Export fails with a transient HTTP status code such as 429 or 5xx.

Resolution:

  • These errors are usually transient and resolve on their own. The Python and JavaScript distros automatically retry on HTTP 408, 429, and 5xx status codes. The .NET distro doesn't retry automatically.
  • If errors persist, check the service health dashboard.
  • Consider reducing export frequency by increasing the scheduled delay between batches or the max export batch size. For Python and JavaScript, use the relevant exporterOptions or a365_* parameters documented in the GitHub repositories. For .NET, use o.Agent365.Exporter.ScheduledDelayMilliseconds and o.Agent365.Exporter.MaxExportBatchSize.
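The Python retry behavior can be sketched as a status-code check with exponential backoff. This is illustrative; the actual retry policy and delays are defined by the distro:

```python
import time

def is_transient(status: int) -> bool:
    # HTTP 408, 429, and all 5xx responses are treated as retryable.
    return status in (408, 429) or 500 <= status <= 599

def export_with_retry(send, max_attempts=3, base_delay=0.01):
    for attempt in range(max_attempts):
        status = send()
        if not is_transient(status):
            return status
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return status

responses = iter([503, 429, 200])
final = export_with_retry(lambda: next(responses))
```

Non-retryable statuses such as 401 and 403 return immediately, which matches the "doesn't retry this error" behavior described for those responses.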

Export timeout

Symptoms: Export attempts time out.

Resolution:

  • Check network connectivity to the observability endpoint.

  • The default HTTP request timeout is 30 seconds across all platforms. If timeouts occur frequently, increase a365_exporter_timeout_ms (or the equivalent option for your language) in your exporter options:

    use_microsoft_opentelemetry(
        enable_a365=True,
        a365_token_resolver=my_token_resolver,
        a365_exporter_timeout_ms=60000,  # default is 30000
    )
    

    See the Python repository for the full list of a365_* options.


Export succeeds but telemetry doesn't appear in Defender or Purview

Symptoms: Logs show a successful export (HTTP 200) but telemetry isn't visible in Microsoft Defender or Microsoft Purview.

Resolution:

  • Verify that you meet the prerequisites for viewing exported logs, such as the licensing and permission requirements described in the HTTP 403 Forbidden section.
  • Telemetry can take several minutes to populate after a successful export. Wait before investigating further.
  • Verify that spans contain valid microsoft.tenant.id and gen_ai.agent.id attributes. Missing identity attributes cause spans to be dropped server-side even if the HTTP export returns 200.