Add support for tool calling during sampling requests, enabling MCP servers
to execute agentic workflows using client LLM capabilities.
Key changes:
- Add ToolUseContent type for assistant tool invocation requests
- Add ToolResultContent type for tool execution results
- Add ToolChoice type to control tool usage behavior
- Add UserMessage and AssistantMessage types for role-specific messages
- Extend SamplingMessage to support tool content (backward compatible)
- Add SamplingToolsCapability for capability negotiation
- Update CreateMessageRequestParams with tools and toolChoice fields
- Update CreateMessageResult to support tool use content
- Update StopReason to include "toolUse" value
- Add comprehensive unit tests for all new types
The implementation maintains backward compatibility by keeping SamplingMessage
as a flexible BaseModel while adding more specific UserMessage and
AssistantMessage types for type-safe tool interactions.
All new types follow existing patterns:
- Use Pydantic V2 BaseModel
- Allow extra fields with ConfigDict(extra="allow")
- Include proper docstrings and field descriptions
- Support optional fields where appropriate
Github-Issue: #1577
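The new request fields are easiest to see in code. Below is a minimal sketch, assuming the field names listed above (`tools`, `toolChoice`) and reusing the existing `mcp.types.Tool` shape for the tool definition; the exact parameter types may differ from the final diff, and the `get_weather` tool is purely illustrative:

```python
from mcp.types import CreateMessageRequestParams, SamplingMessage, TextContent, Tool

params = CreateMessageRequestParams(
    messages=[
        SamplingMessage(
            role="user",
            content=TextContent(type="text", text="What is the weather in Paris?"),
        )
    ],
    maxTokens=512,
    # New field from this change: tool definitions the client-side LLM may call.
    tools=[
        Tool(
            name="get_weather",  # hypothetical tool, for illustration only
            description="Look up the current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        )
    ],
    # toolChoice (also new) can further constrain tool use; see the ToolChoice type below.
)
```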
+    See [MCP specification](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/47339c03c143bb4ec01a26e721a1b8fe66634ebe/docs/specification/draft/basic/index.mdx#general-fields)
+    for notes on _meta usage.
+    """
+    model_config = ConfigDict(extra="allow")
+
+
+class ToolResultContent(BaseModel):
+    """
+    Content representing the result of a tool execution.
+
+    This content type appears in user messages as a response to a ToolUseContent
+    from the assistant. It contains the output of executing the requested tool.
+    """
+
+    type: Literal["tool_result"]
+    """Discriminator for tool result content."""
+
+    toolUseId: str
+    """The unique identifier that corresponds to the tool call's id field."""
…
+    See [MCP specification](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/47339c03c143bb4ec01a26e721a1b8fe66634ebe/docs/specification/draft/basic/index.mdx#general-fields)
+    for notes on _meta usage.
+    """
+    model_config = ConfigDict(extra="allow")
+
+
 class SamplingMessage(BaseModel):
-    """Describes a message issued to or received from an LLM API."""
+    """
+    Describes a message issued to or received from an LLM API.
+
+    For backward compatibility, this class accepts any role and any content type.
+    For type-safe usage with tool calling, use UserMessage or AssistantMessage instead.
…
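Backward compatibility is concrete here: a plain text message validates exactly as it did before this change, because `SamplingMessage` keeps its permissive role/content typing, and per the commit message it now additionally accepts tool content such as `ToolUseContent` and `ToolResultContent`.

```python
from mcp.types import SamplingMessage, TextContent

# Works unchanged against the pre-existing API...
legacy_message = SamplingMessage(
    role="user",
    content=TextContent(type="text", text="Summarize this document."),
)
# ...while tool content (ToolUseContent / ToolResultContent) is now also accepted.
```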
…
+    See [MCP specification](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/47339c03c143bb4ec01a26e721a1b8fe66634ebe/docs/specification/draft/basic/index.mdx#general-fields)
+    for notes on _meta usage.
+    """
+    model_config = ConfigDict(extra="allow")
+
+
+class AssistantMessage(BaseModel):
+    """
+    A message from the assistant (LLM) in a sampling conversation.
+
+    Assistant messages can include tool use requests when the LLM wants to call tools.
…
+    See [MCP specification](https://github.com/modelcontextprotocol/modelcontextprotocol/blob/47339c03c143bb4ec01a26e721a1b8fe66634ebe/docs/specification/draft/basic/index.mdx#general-fields)
+    for notes on _meta usage.
+    """
     model_config = ConfigDict(extra="allow")
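Taken together, these types support the agentic loop the commit message describes. The sketch below is an assumption-heavy outline, not the PR's API: only the `"toolUse"` stop reason and the content/message type names come from this change, while the `tools=` keyword on `create_message`, the shape of `result.content`, and the `ToolUseContent` fields are all hypothetical.

```python
from mcp.types import SamplingMessage, TextContent, ToolResultContent


async def run_agent_turn(session, messages, tools):
    """Keep sampling until the model stops asking for tools."""
    while True:
        result = await session.create_message(  # assumed: gains a tools= parameter
            messages=messages,
            max_tokens=1024,
            tools=tools,
        )
        if result.stopReason != "toolUse":
            return result  # normal completion: endTurn, stopSequence, maxTokens, ...

        tool_use = result.content  # assumed: a ToolUseContent block
        # Run the requested tool with the server's own dispatch logic (stubbed here).
        output = f"(result of {tool_use.name} with {tool_use.input})"
        messages = [
            *messages,
            SamplingMessage(role="assistant", content=tool_use),
            SamplingMessage(
                role="user",
                content=ToolResultContent(
                    type="tool_result",
                    toolUseId=tool_use.id,  # assumed ToolUseContent field
                    content=[TextContent(type="text", text=output)],  # assumed field
                ),
            ),
        ]
```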
@@ -1035,6 +1198,31 @@ class ModelPreferences(BaseModel):
     model_config = ConfigDict(extra="allow")


+class ToolChoice(BaseModel):
+    """
+    Controls tool usage behavior during sampling.
+
+    Allows the server to specify whether and how the LLM should use tools
…
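The diff cuts off before `ToolChoice`'s fields, so there is nothing concrete to show for that type yet. A related piece from the change list, capability negotiation via `SamplingToolsCapability`, can be sketched with fewer unknowns; the nesting under the existing `sampling` client capability is not shown anywhere in this excerpt and is purely an assumption:

```python
from mcp.types import ClientCapabilities, SamplingCapability, SamplingToolsCapability

# Hypothetical: a client advertising that its sampling handler can run tools.
capabilities = ClientCapabilities(
    sampling=SamplingCapability(
        tools=SamplingToolsCapability(),  # assumed to nest under the sampling capability
    ),
)
```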