feat: add tool_name_prefix to MCPServer for tool name collision avoidance #2677
yang1002378395-cmyk wants to merge 2 commits into openai:main
Conversation
Fixes openai#1167
- Add `tool_name_prefix` parameter to MCPServer base class
- Apply prefix to tool names in list_tools()
- Strip prefix when calling tools
- Allows multiple MCP servers with the same tool names
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fe6d387bbd
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```python
require_approval: RequireApprovalSetting = None,
failure_error_function: ToolErrorFunction | None | _UnsetType = _UNSET,
tool_meta_resolver: MCPToolMetaResolver | None = None,
tool_name_prefix: str | None = None,
```
Thread tool_name_prefix through concrete MCP server APIs
Adding tool_name_prefix only to MCPServer.__init__ leaves the feature unusable from the public server classes (MCPServerStdio, MCPServerSse, and MCPServerStreamableHttp): their constructors (and _MCPServerWithClientSession.__init__) still reject the keyword. The usage shown in the commit message (MCPServerStdio(..., tool_name_prefix="fs1")) therefore raises TypeError, so users cannot actually enable collision avoidance.
```python
prefixed_tool = MCPTool(
    name=f"{self.tool_name_prefix}_{tool.name}",
    description=tool.description,
    inputSchema=tool.inputSchema,
)
```
Preserve MCPTool metadata when prefixing tool names
_apply_tool_name_prefix reconstructs each MCPTool with only name, description, and inputSchema, which drops other MCP metadata fields (for example title/annotations) that this codebase uses to derive model-facing descriptions; in cases where a server provides only a title, enabling prefixes can silently erase that metadata and degrade tool definitions sent to the model.
Fixes openai#2669
- Add _supports_reasoning_content helper method
- Support DeepSeek, Zhipu GLM, Moonshot Kimi
- Required for multi-turn tool-calling with these models
- Kimi models fail with a 400 error without reasoning_content
💡 Codex Review
Reviewed commit: 0374eade7c
```python
    (item_model and "deepseek" in item_model.lower())
    or item_provider_data == {}
)
and self._supports_reasoning_content(model, item_model, item_provider_data)
```
Call reasoning helper via cls in classmethod
items_to_messages is a @classmethod, but this branch calls self._supports_reasoning_content(...); self is undefined in this scope, so any reasoning item that reaches this condition (for DeepSeek/GLM/Kimi targets) will raise NameError and abort message conversion before the tool-call assistant message is emitted.
```python
if item_model:
    item_model_lower = item_model.lower()
    return any(m in item_model_lower for m in reasoning_models)
return True
```
Enforce model origin check in reasoning-content helper
_supports_reasoning_content currently returns True when item_provider_data is non-empty but item_model is missing, which bypasses the model-match guard the caller relies on. That regresses the prior behavior (which only skipped model matching when provider metadata was entirely absent) and can incorrectly carry reasoning_content across mixed or legacy histories where provider metadata contains other keys but no model identifier.
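A sketch of the tightened guard the reviewer describes: skip the model match only when provider metadata is entirely absent, and treat non-empty provider data without a model id as a non-match. The function name mirrors the PR; the model list and exact signature are illustrative.

```python
# Illustrative helper: only carry reasoning_content when the history item
# can be attributed to a model known to need it.

REASONING_MODELS = ("deepseek", "glm", "kimi")


def supports_reasoning_content(item_model, item_provider_data) -> bool:
    if not item_provider_data:
        # No provider metadata at all: keep the prior permissive behavior.
        return True
    if item_model:
        item_model_lower = item_model.lower()
        return any(m in item_model_lower for m in REASONING_MODELS)
    # Provider metadata present but no model id: do not assume support.
    return False
```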
Problem
Fixes #1167
When using multiple MCP servers that provide tools with the same name (e.g., multiple filesystem servers), tool names collide and cause errors.
Solution
Add tool_name_prefix parameter to the MCPServer base class:
- When tool_name_prefix is set, tool names are prefixed (e.g., read_file becomes filesystem_read_file)
- When None, existing behavior is unchanged

Usage
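An end-to-end sketch with a stand-in class (not the real SDK): two servers exposing the same tool name stay distinguishable once prefixed, and the prefix is stripped again before a call is dispatched.

```python
# FakeMCPServer is an illustrative stand-in for the PR's behavior:
# prefix names in list_tools(), strip the prefix in call_tool().

class FakeMCPServer:
    def __init__(self, tools, tool_name_prefix=None):
        self._tools = list(tools)
        self.tool_name_prefix = tool_name_prefix

    def list_tools(self):
        if not self.tool_name_prefix:
            return list(self._tools)
        return [f"{self.tool_name_prefix}_{t}" for t in self._tools]

    def call_tool(self, name):
        head = f"{self.tool_name_prefix}_" if self.tool_name_prefix else ""
        if head and name.startswith(head):
            name = name[len(head):]  # strip the prefix before dispatch
        return f"called {name}"


fs1 = FakeMCPServer(["read_file"], tool_name_prefix="fs1")
fs2 = FakeMCPServer(["read_file"], tool_name_prefix="fs2")
print(fs1.list_tools() + fs2.list_tools())  # ['fs1_read_file', 'fs2_read_file']
print(fs1.call_tool("fs1_read_file"))       # called read_file
```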
Testing