Python: Added tests for OpenAI content types + Unit test improvement #3259
Conversation
Pull request overview
This PR adds comprehensive unit tests for OpenAI content types across three test files, improving test coverage for various content handling scenarios in the agent framework's OpenAI clients.
Changes:
- Added 15+ new test cases for OpenAI Responses client covering content type preparation, parsing, response format handling, and edge cases
- Added 14+ new test cases for OpenAI Chat client covering reasoning content, approval content, usage content, refusal handling, and various edge cases
- Added 2 new test cases for OpenAI Assistants client covering code interpreter and MCP server tool call parsing
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| python/packages/core/tests/openai/test_openai_responses_client.py | Adds tests for FunctionApprovalResponseContent, ErrorContent, UsageContent, HostedVectorStoreContent, HostedFileContent, MCP server tool handling, response format validation, and conversation ID handling |
| python/packages/core/tests/openai/test_openai_chat_client.py | Adds tests for TextReasoningContent parsing/preparation, FunctionApprovalContent skipping, UsageContent in streaming, refusal handling, and various edge cases for options preparation |
| python/packages/core/tests/openai/test_openai_assistants_client.py | Adds tests for parsing code interpreter and MCP server tool calls from assistant run steps |
Comments suppressed due to low confidence (1)
python/packages/core/tests/openai/test_openai_responses_client.py:683
- The Python tests do not follow the Arrange, Act, Assert comment convention from the C# test guidelines. Because those custom guidelines apply specifically to C# unit tests, this is a minor observation rather than a requirement for Python code, but consider adding the comments for consistency and readability in complex tests (see the sketch after the snippet below).
```python
# Imports assumed from the test module's context; exact paths may differ.
from agent_framework import (
    FunctionApprovalResponseContent,
    FunctionCallContent,
    Role,
)
from agent_framework.openai import OpenAIResponsesClient


def test_prepare_content_for_openai_function_approval_response() -> None:
    """Test _prepare_content_for_openai with FunctionApprovalResponseContent."""
    client = OpenAIResponsesClient(model_id="test-model", api_key="test-key")

    # Test an approved response.
    function_call = FunctionCallContent(
        call_id="call_123",
        name="send_email",
        arguments='{"to": "user@example.com"}',
    )
    approval_response = FunctionApprovalResponseContent(
        approved=True,
        id="approval_001",
        function_call=function_call,
    )
    result = client._prepare_content_for_openai(Role.ASSISTANT, approval_response, {})
    assert result["type"] == "mcp_approval_response"
    assert result["approval_request_id"] == "approval_001"
    assert result["approve"] is True
```
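Following the suggestion in the review comment, here is a sketch of how this test might read with Arrange/Act/Assert comments. It reuses the classes and the private `_prepare_content_for_openai` helper from the snippet above; the appended denied-approval case (the `_aaa` test name, `approval_002`, and the assumption that the helper returns the same dict shape with `approve` set to `False`) is hypothetical, not taken from the PR.

```python
def test_prepare_content_for_openai_function_approval_response_aaa() -> None:
    """Sketch: the same test with Arrange/Act/Assert comments, plus a denied case."""
    # Arrange
    client = OpenAIResponsesClient(model_id="test-model", api_key="test-key")
    function_call = FunctionCallContent(
        call_id="call_123",
        name="send_email",
        arguments='{"to": "user@example.com"}',
    )
    approval_response = FunctionApprovalResponseContent(
        approved=True,
        id="approval_001",
        function_call=function_call,
    )

    # Act
    result = client._prepare_content_for_openai(Role.ASSISTANT, approval_response, {})

    # Assert
    assert result["type"] == "mcp_approval_response"
    assert result["approval_request_id"] == "approval_001"
    assert result["approve"] is True

    # Arrange (hypothetical denied counterpart; only the `approved` flag flips)
    denial = FunctionApprovalResponseContent(
        approved=False,
        id="approval_002",
        function_call=function_call,
    )

    # Act
    denied_result = client._prepare_content_for_openai(Role.ASSISTANT, denial, {})

    # Assert (assumes the helper mirrors the approved case's dict shape)
    assert denied_result["approve"] is False
```

Grouping each phase under an explicit comment keeps even multi-scenario tests scannable, which is the readability benefit the guideline is after.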
Motivation and Context
This PR adds comprehensive unit tests for OpenAI content types across three test files, improving test coverage for various content handling scenarios in the agent framework's OpenAI clients.