Replies: 1 comment
The Responses API does support custom/self-hosted MCP servers, but there are a few requirements that commonly trip people up:

**1. Your server must implement the Streamable HTTP transport**

OpenAI's servers connect to your MCP endpoint directly. They expect the MCP Streamable HTTP transport (not stdio, and not the older SSE-only transport). Your endpoint should handle POST requests with JSON-RPC payloads and respond accordingly. If you're using FastMCP, make sure you're running it with the HTTP transport:

```python
from fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def my_tool(query: str) -> str:
    return f"Result for: {query}"

# Run with the Streamable HTTP transport
mcp.run(transport="http", host="0.0.0.0", port=8000)
```

**2. The server must be publicly reachable from OpenAI's infrastructure**

When you pass a `server_url`, the request comes from OpenAI's servers, not from your machine, so `localhost` and private-network addresses won't work. For quick testing, a tunnel service that gives your local server a public URL is the easiest option.
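Under the hood, the Streamable HTTP transport carries JSON-RPC messages over POST. As a rough sketch of what a compliant endpoint does with each request body (the `handle_jsonrpc` helper and the tool schema here are illustrative assumptions; FastMCP handles all of this for you):

```python
import json

def handle_jsonrpc(body: str) -> str:
    """Dispatch one JSON-RPC request body and return the response body.
    Illustrative only: handles just the tools/list method for a single
    hypothetical tool named my_tool."""
    req = json.loads(body)
    if req.get("method") == "tools/list":
        result = {
            "tools": [{
                "name": "my_tool",
                "description": "Example tool",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                },
            }]
        }
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})
    # Anything else gets a standard JSON-RPC "method not found" error.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req.get("id"),
        "error": {"code": -32601, "message": "Method not found"},
    })

# The same request the curl check below sends:
print(handle_jsonrpc('{"jsonrpc": "2.0", "method": "tools/list", "id": 1}'))
```

If your endpoint can't produce a response of this shape for `tools/list`, OpenAI's side will fail before any tool is ever called.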
**3. HTTPS is strongly recommended**

Some MCP configurations require HTTPS. Use a reverse proxy with TLS, or a tunnel service that provides it.

**4. Verify your endpoint externally**

Test from a machine that isn't your server:

```shell
curl -X POST https://your-public-url/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "tools/list", "id": 1}'
```

If this returns your tool list, OpenAI should be able to reach it too. If it doesn't, the issue is network/transport configuration, not the SDK.
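Once the endpoint checks out, wiring it into a Responses API call looks roughly like this (a sketch, not a definitive recipe: the URL and `server_label` are placeholders, and the actual API call is shown commented out because it needs the `openai` package and an API key):

```python
import json

# Tool configuration the Responses API expects for a remote MCP server.
# server_url must be the publicly reachable HTTPS endpoint from step 2.
mcp_tool = {
    "type": "mcp",
    "server_label": "my-server",                  # any identifier you choose
    "server_url": "https://your-public-url/mcp",  # placeholder URL
    "require_approval": "never",                  # skip per-call approval prompts
}

# The call itself would look something like:
# from openai import OpenAI
# client = OpenAI()
# response = client.responses.create(
#     model="gpt-4.1",
#     tools=[mcp_tool],
#     input="Use my_tool to look something up",
# )
# print(response.output_text)

print(json.dumps(mcp_tool, indent=2))
```

If this configuration fails while the external curl check succeeds, compare the exact path in `server_url` against the path your server actually mounts the MCP endpoint on.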
Does the new Responses API support an MCP server that I deployed myself? When I change the `server_url` to my MCP server's URL, it doesn't work. Why?
