
feat: add tool_name_prefix to MCPServer for tool name collision avoidance#2677

Draft

yang1002378395-cmyk wants to merge 2 commits into openai:main from yang1002378395-cmyk:main

Conversation

@yang1002378395-cmyk

Problem

Fixes #1167

When using multiple MCP servers that provide tools with the same name (e.g., multiple filesystem servers), tool names collide and cause errors.

Solution

Add tool_name_prefix parameter to MCPServer base class:

  1. Prefix tool names: When tool_name_prefix is set, tool names are prefixed (e.g., read_file becomes filesystem_read_file)
  2. Strip prefix on call: When calling a tool, the prefix is stripped to call the actual tool
  3. Backward compatible: Default is None, no change to existing behavior

Usage

server1 = MCPServerStdio(
    params={"command": "mcp-server-filesystem", "args": ["/path"]},
    tool_name_prefix="fs1"  # tools become fs1_read_file, fs1_write_file, etc.
)

server2 = MCPServerStdio(
    params={"command": "mcp-server-filesystem", "args": ["/other"]},
    tool_name_prefix="fs2"  # tools become fs2_read_file, fs2_write_file, etc.
)

Testing

  • Syntax check passed
  • Changes are minimal and focused
  • No breaking changes to existing API

Fixes openai#1167

- Add tool_name_prefix parameter to MCPServer base class
- Apply prefix to tool names in list_tools()
- Strip prefix when calling tools
- Allows multiple MCP servers with same tool names

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fe6d387bbd

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

require_approval: RequireApprovalSetting = None,
failure_error_function: ToolErrorFunction | None | _UnsetType = _UNSET,
tool_meta_resolver: MCPToolMetaResolver | None = None,
tool_name_prefix: str | None = None,


P1 Badge Thread tool_name_prefix through concrete MCP server APIs

Adding tool_name_prefix only to MCPServer.__init__ makes the new feature unusable for the public server classes (MCPServerStdio, MCPServerSse, and MCPServerStreamableHttp), whose constructors (and _MCPServerWithClientSession.__init__) still reject that keyword; the usage shown in the commit message (MCPServerStdio(..., tool_name_prefix="fs1")) will raise TypeError before runtime, so users cannot actually enable collision avoidance.


Comment on lines +386 to +390
prefixed_tool = MCPTool(
    name=f"{self.tool_name_prefix}_{tool.name}",
    description=tool.description,
    inputSchema=tool.inputSchema,
)


P2 Badge Preserve MCPTool metadata when prefixing tool names

_apply_tool_name_prefix reconstructs each MCPTool with only name, description, and inputSchema, which drops other MCP metadata fields (for example title/annotations) that this codebase uses to derive model-facing descriptions; in cases where a server provides only a title, enabling prefixes can silently erase that metadata and degrade tool definitions sent to the model.


Fixes openai#2669

- Add _supports_reasoning_content helper method
- Support DeepSeek, Zhipu GLM, Moonshot Kimi
- Required for multi-turn tool-calling with these models
- Kimi models fail with 400 error without reasoning_content

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0374eade7c


(item_model and "deepseek" in item_model.lower())
or item_provider_data == {}
)
and self._supports_reasoning_content(model, item_model, item_provider_data)


P1 Badge Call reasoning helper via cls in classmethod

items_to_messages is a @classmethod, but this branch calls self._supports_reasoning_content(...); self is undefined in this scope, so any reasoning item that reaches this condition (for DeepSeek/GLM/Kimi targets) will raise NameError and abort message conversion before the tool-call assistant message is emitted.


Comment on lines +444 to +447
if item_model:
item_model_lower = item_model.lower()
return any(m in item_model_lower for m in reasoning_models)
return True


P2 Badge Enforce model origin check in reasoning-content helper

_supports_reasoning_content currently returns True when item_provider_data is non-empty but item_model is missing, which bypasses the model-match guard the caller relies on. That regresses the prior behavior (which only skipped model matching when provider metadata was entirely absent) and can incorrectly carry reasoning_content across mixed or legacy histories where provider metadata contains other keys but no model identifier.


@seratch seratch marked this pull request as draft March 16, 2026 02:55


Development

Successfully merging this pull request may close these issues.

Please fix the issue where duplicate tool names in multiple MCP tools cause the agent list to hang
