Description
In agent-framework-core==1.0.0rc5 and agent-framework-azure-ai==1.0.0rc5, the default Azure AI observability setup does not produce end-to-end trace continuity in Azure AI Foundry.
This is not a regression introduced in rc5. We first observed and reproduced this during internal evaluation at my company in December 2025, and the same behavior is still present in python-1.0.0rc5, released on March 20, 2026.
The user-visible problem is that a single logical agent run that triggers multiple Responses API calls is split into multiple unrelated trace IDs in Azure AI Foundry instead of appearing as one coherent trace tree.
That makes Foundry observability effectively broken for this path:
- one run does not appear as one trace
- each Responses call looks like an independent server-side trace
- debugging a single run across multiple model calls becomes much harder
This issue is specifically about the default non-streaming Azure AI path. Streaming appears to have a separate root cause.
Code Sample
Error Messages / Stack Traces
Package Versions
agent-framework-core: 1.0.0rc5, agent-framework-azure-ai: 1.0.0rc5
Python Version
Python 3.12
Additional Context
Likely root cause
AzureAIClient.configure_azure_monitor() instruments Azure Monitor and the Agent Framework's own spans, but it does not instrument the httpx transport that the OpenAI Python SDK uses internally. The relevant rc5 code paths are:
- python/packages/azure-ai/agent_framework_azure_ai/_client.py#L242-L328
- python/packages/azure-ai/agent_framework_azure_ai/_client.py#L683-L685
The effective flow today is:
- `configure_azure_monitor()` calls `azure.monitor.opentelemetry.configure_azure_monitor(…)`
- It then calls `agent_framework.observability.enable_instrumentation(…)`
- `AzureAIClient._initialize_client()` later creates the OpenAI client via `self.project_client.get_openai_client()`
- That client uses `httpx.AsyncClient` under the hood
- Because `httpx` is uninstrumented, outbound Responses requests never propagate the active trace context to Foundry
This is why local/in-process spans may still appear coherent while Foundry server-side traces are fragmented across separate trace IDs.
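To make the fragmentation mechanism concrete, here is a minimal, stdlib-only sketch of W3C Trace Context propagation, which is the mechanism httpx instrumentation adds to outbound requests. `make_traceparent` and the header dicts are illustrative, not Agent Framework or OpenTelemetry APIs:

```python
import secrets

def make_traceparent(trace_id: str, span_id: str) -> str:
    # W3C Trace Context header: version 00, sampled flag 01
    return f"00-{trace_id}-{span_id}-01"

# One logical agent run: a single trace_id shared by every outbound call.
trace_id = secrets.token_hex(16)  # 32 hex chars

# Each Responses call gets its own span_id but reuses the run's trace_id.
headers_call_1 = {"traceparent": make_traceparent(trace_id, secrets.token_hex(8))}
headers_call_2 = {"traceparent": make_traceparent(trace_id, secrets.token_hex(8))}

# Both calls carry the same trace_id, so the backend can join them into one trace.
assert headers_call_1["traceparent"].split("-")[1] == trace_id
assert headers_call_2["traceparent"].split("-")[1] == trace_id
```

Without instrumentation, no `traceparent` header is sent at all, so the server mints a fresh trace ID per request and the run fragments exactly as observed in Foundry.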
How we worked around it in our codebase
In our codebase, we restored non-streaming Foundry trace continuity by explicitly instrumenting httpx after the normal MAF / Azure Monitor setup.
Our workaround is effectively:
```python
await client.configure_azure_monitor(...)

from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor

HTTPXClientInstrumentor().instrument()
```

This workaround solved the trace fragmentation for the non-streaming path in our repo, but it should not be required from every consumer when `AzureAIClient.configure_azure_monitor()` is presented as the supported observability setup path.