
Python: agent_with_hosted_mcp sample fails - AzureOpenAIChatClient sends unsupported mcp tool type to API #4861

@Arturo-Quiroga-MSFT

Description

Bug Description

The agent_with_hosted_mcp hosted agent sample fails at runtime when the MCP tool config is sent to Azure OpenAI. AzureOpenAIChatClient passes the MCP tool dict with "type": "mcp" straight through to the Azure OpenAI Chat Completions API, which rejects it with a 400 error.

Error Message

Error code: 400 - {'error': {'message': "Invalid value: 'mcp'. Supported values are: 'function' and 'custom'.", 'type': 'invalid_request_error', 'param': 'tools[0].type', 'code': 'invalid_value'}}

Steps to Reproduce

  1. Install agent-framework 1.0.0rc5 and azure-ai-agentserver-agentframework 1.0.0b17
  2. Navigate to python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/
  3. Create .env with AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
  4. Run python main.py
  5. Send a request:
    curl -X POST http://localhost:8088/responses \
      -H "Content-Type: application/json" \
      -d '{"input": "What is Azure AI Foundry?", "stream": false}'
  6. Response: 400 error — Invalid value: 'mcp'

Root Cause

The sample's main.py passes the MCP tool spec as a raw dict:

mcp_tool = {
    "type": "mcp",
    "server_label": "Microsoft_Learn_MCP",
    "server_url": "https://learn.microsoft.com/api/mcp",
}

agent = AzureOpenAIChatClient(credential=DefaultAzureCredential()).as_agent(
    name="DocsAgent",
    instructions="...",
    tools=mcp_tool,
)

The AzureOpenAIChatClient does not translate the "type": "mcp" tool config into a format the Azure OpenAI Chat Completions API supports. The API only accepts "function" and "custom" as tool types.
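The rejection can be reproduced locally without calling the service. Below is a minimal sketch that mirrors the server-side check implied by the error message; the SUPPORTED_CHAT_TOOL_TYPES set and validate_chat_tools helper are illustrative, not part of agent-framework or the openai SDK:

```python
# Illustrative client-side check mirroring the API's tool-type validation.
# The supported set comes from the 400 error above; the helper itself is
# hypothetical and exists only to demonstrate why the request is rejected.
SUPPORTED_CHAT_TOOL_TYPES = {"function", "custom"}

def validate_chat_tools(tools):
    for i, tool in enumerate(tools):
        tool_type = tool.get("type")
        if tool_type not in SUPPORTED_CHAT_TOOL_TYPES:
            raise ValueError(
                f"Invalid value: {tool_type!r}. Supported values are: "
                f"'function' and 'custom'. (param: tools[{i}].type)"
            )

mcp_tool = {
    "type": "mcp",
    "server_label": "Microsoft_Learn_MCP",
    "server_url": "https://learn.microsoft.com/api/mcp",
}

try:
    validate_chat_tools([mcp_tool])
except ValueError as e:
    print(e)  # Invalid value: 'mcp'. Supported values are: ...
```

A tool dict with "type": "function" passes this check, which is exactly the gap: nothing in the client converts the MCP config into that shape before the request is sent.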

Expected Behavior

Either:

  1. AzureOpenAIChatClient should handle MCP tool configs by resolving them (listing available tools from the MCP server and converting them to function tool definitions), or
  2. The sample should use AzureOpenAIResponsesClient instead of AzureOpenAIChatClient, since the Responses API may natively support MCP tool types, or
  3. The sample README should document that AzureOpenAIChatClient does not support MCP tools and recommend an alternative approach
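For option 1, the translation could look roughly like the following. This is a sketch under assumptions: the input descriptors are shaped like entries in an MCP tools/list result (name, description, inputSchema), and mcp_tools_to_function_tools is a hypothetical helper, not an existing agent-framework API:

```python
# Hypothetical converter: MCP tool descriptors (as an MCP server's
# tools/list call would return them) -> Chat Completions "function"
# tool definitions, which the API does accept.
def mcp_tools_to_function_tools(mcp_tools):
    return [
        {
            "type": "function",  # a tool type the Chat Completions API supports
            "function": {
                "name": t["name"],
                "description": t.get("description", ""),
                # MCP inputSchema is already JSON Schema, the format the
                # "parameters" field expects.
                "parameters": t.get(
                    "inputSchema", {"type": "object", "properties": {}}
                ),
            },
        }
        for t in mcp_tools
    ]

# Example descriptor shaped like one tools/list entry (illustrative):
listed = [
    {
        "name": "search_docs",
        "description": "Search Microsoft Learn documentation",
        "inputSchema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    }
]
converted = mcp_tools_to_function_tools(listed)
print(converted[0]["type"])  # function
```

The client would still need to dispatch the model's resulting tool calls back to the MCP server at runtime; the conversion above only covers advertising the tools in a form the API accepts.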

Additional Context

Note: The same hosted agents directory has other samples that work correctly:

  • agent_with_local_tools (PASS) — uses AzureOpenAIResponsesClient with @tool decorator
  • agent_with_text_search_rag (PASS) — uses AzureOpenAIChatClient with BaseContextProvider
  • agents_in_workflow (PASS) — uses AzureOpenAIChatClient with ConcurrentBuilder
  • writer_reviewer_agents_in_workflow (PASS) — uses AzureOpenAIResponsesClient with WorkflowBuilder

Also note: The agent_with_hosted_mcp/requirements.txt pins azure-ai-agentserver-agentframework==1.0.0b3 while agent_with_local_tools pins ==1.0.0b16. The latest version on PyPI is 1.0.0b17. The inconsistent version pinning across samples in the same directory could also confuse users.

Environment

  • agent-framework 1.0.0rc5 from PyPI
  • azure-ai-agentserver-agentframework 1.0.0b17
  • Python 3.12.13
  • macOS (Apple Silicon)
  • Azure OpenAI endpoint (Azure AI Foundry)

Found during V1.0 bug bash (March 23, 2026)

Metadata

  • Labels: python, v1.0 (Features being tracked for the version 1.0 GA)
  • Status: In Review