
fix: chat completions extra_args collision with omitted kwargs #3192

Open
SuperMarioYL wants to merge 2 commits into openai:main from
SuperMarioYL:fix/chatcompletions-extra-args-omit-parity

Conversation

@SuperMarioYL

Summary

Mirrors the Responses-path fix from #3185 in the parallel chat-completions
path. OpenAIChatCompletionsModel._fetch_response builds create_kwargs
where many fields go through _non_null_or_omit(...) and become the
OpenAI omit sentinel when the user did not set them. The duplicate-key
check then incorrectly counted those omitted entries as collisions with
ModelSettings.extra_args, raising:

TypeError: chat.completions.create() got multiple values for keyword argument '<key>'

even though the first-class field was never explicitly set. As a result,
users could not legitimately route fields like reasoning_effort through
extra_args while leaving model_settings.reasoning unset.

This patch ports the same one-clause filter introduced for Responses in
#3185: skip duplicate-detection on keys whose create_kwargs value is
the OpenAI Omit/NotGiven sentinel, and only raise when both the
first-class field and extra_args actually carry a value. The companion
_is_openai_omitted_value helper is defined locally for parity with
openai_responses.py.
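
For context, a minimal sketch of the shape of this check, assuming the standalone merge helper below (the real code builds create_kwargs inline inside _fetch_response rather than in a separate function; only _is_openai_omitted_value and the error message follow the patch description):

```python
from typing import Any, Optional

from openai import NotGiven  # the SDK's "value not provided" sentinel type


def _is_openai_omitted_value(value: Any) -> bool:
    # The real helper also recognizes the SDK's Omit sentinel; NotGiven is
    # enough to illustrate the shape of the check.
    return isinstance(value, NotGiven)


def merge_extra_args(
    create_kwargs: dict[str, Any], extra_args: Optional[dict[str, Any]]
) -> dict[str, Any]:
    """Fold ModelSettings.extra_args into create_kwargs, raising only on real collisions."""
    if not extra_args:
        return create_kwargs
    merged = dict(create_kwargs)
    for key, value in extra_args.items():
        if key in merged and not _is_openai_omitted_value(merged[key]):
            # Both the first-class field and extra_args carry a value.
            raise TypeError(
                "chat.completions.create() got multiple values for "
                f"keyword argument '{key}'"
            )
        merged[key] = value
    return merged
```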

Test plan

  • make format
  • make lint
  • uv run mypy src/agents/models/openai_chatcompletions.py tests/models/test_openai_chatcompletions.py
  • uv run pytest tests/models/test_openai_chatcompletions.py -q
  • New regression tests (see the behavior sketch after this list):
    • tests/models/test_openai_chatcompletions.py::test_fetch_response_chat_completions_allows_extra_arg_when_explicit_arg_is_omitted — supplies extra_args={"reasoning_effort": "high"} while model_settings.reasoning is unset; the underlying chat.completions.create(**kwargs) call now receives reasoning_effort="high" instead of failing with a spurious duplicate-argument error.
    • tests/models/test_openai_chatcompletions.py::test_fetch_response_chat_completions_rejects_duplicate_extra_args_keys — supplies temperature=0.5 together with extra_args={"temperature": 0.7}; the legitimate duplicate is still rejected with the original TypeError.
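
Expressed against the illustrative merge_extra_args sketch above (the real tests drive OpenAIChatCompletionsModel._fetch_response through a mocked client, so this shows only the behavior being asserted, not the test code):

```python
import pytest
from openai import NOT_GIVEN

# Omitted first-class field plus an extra_args value: falls through cleanly.
kwargs = merge_extra_args(
    {"model": "gpt-4o", "reasoning_effort": NOT_GIVEN},
    {"reasoning_effort": "high"},
)
assert kwargs["reasoning_effort"] == "high"

# Both sides carry a real value: still rejected as a genuine duplicate.
with pytest.raises(TypeError):
    merge_extra_args({"model": "gpt-4o", "temperature": 0.5}, {"temperature": 0.7})
```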

Issue number

N/A — parity follow-up to #3185.

Checks

  • I've added new tests (if relevant)
  • I've added/updated the relevant documentation
  • I've run `make lint` and `make format`
  • I've made sure tests pass where not blocked by unrelated local-only failures

Mirror the Responses-path fix from openai#3185: when a key in
ModelSettings.extra_args matches an entry in create_kwargs whose value
is the OpenAI omit/NotGiven sentinel, the user has not actually set the
first-class field, so the duplicate-detection check should let the
extra_args entry through instead of raising TypeError.

Adds two regression tests mirroring tests/models/test_openai_responses.py:
one that succeeds when only extra_args supplies a key whose first-class
value is omitted, and one that still rejects a true duplicate where both
the first-class field and extra_args set the same key.
github-actions bot added the bug (Something isn't working) and feature:chat-completions labels on May 8, 2026
seratch added this to the 0.17.x milestone on May 8, 2026
@seratch
Member

seratch commented May 8, 2026

@codex review


@chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 09555d52d1

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread on src/agents/models/openai_chatcompletions.py (Outdated)
Member

@seratch left a comment


Can you fix the issue pointed out by Codex and add sufficient tests covering this pattern?

Codex review noted that the previous duplicate-key filter — which ignores
create_kwargs entries whose value is the OpenAI omit sentinel — also lets
`extra_args={"stream": True}` slip through on the non-streaming path.
`get_response()` pins `create_kwargs["stream"]` to omit when stream=False
and then expects a `ChatCompletion`; if extra_args overrode it to True the
client would return an async stream that the non-streaming code path
cannot consume.

Mark `stream` as a reserved key that must always trigger duplicate-key
detection regardless of its current sentinel value. Other create_kwargs
slots (`reasoning_effort`, `tool_choice`, etc.) keep the omit-aware
behavior, so legitimate fall-through via extra_args still works.

Add two regression tests:
- extra_args={"stream": True} with stream=False must raise TypeError
- extra_args={"stream": False} with stream=True must raise TypeError
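
A sketch of the reserved-key variant, again illustrative rather than the literal patch (the constant name _RESERVED_CREATE_KWARGS is assumed, not taken from the diff):

```python
from typing import Any, Optional

from openai import NotGiven

# Keys the SDK must keep control of even when their current value is the omit
# sentinel: get_response() pins `stream` and expects a ChatCompletion back, so
# extra_args may never flip it.
_RESERVED_CREATE_KWARGS = frozenset({"stream"})


def _is_openai_omitted_value(value: Any) -> bool:
    return isinstance(value, NotGiven)  # the real helper also handles Omit


def merge_extra_args(
    create_kwargs: dict[str, Any], extra_args: Optional[dict[str, Any]]
) -> dict[str, Any]:
    if not extra_args:
        return create_kwargs
    merged = dict(create_kwargs)
    for key, value in extra_args.items():
        reserved = key in _RESERVED_CREATE_KWARGS
        carries_value = key in merged and not _is_openai_omitted_value(merged[key])
        if (reserved and key in merged) or carries_value:
            raise TypeError(
                "chat.completions.create() got multiple values for "
                f"keyword argument '{key}'"
            )
        merged[key] = value
    return merged
```
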
@SuperMarioYL
Author

Thanks for the review @seratch and @codex. You're right — the previous filter let extra_args={"stream": True} slip through on the non-streaming path because get_response() pins create_kwargs["stream"] to the OpenAI omit sentinel, which the duplicate-key check was treating as "no value present".

Fixed in 4a5f382 by marking stream as a reserved key that always triggers duplicate-key detection, regardless of whether the SDK currently passes it as omit or as a real value. Other create_kwargs slots (e.g. reasoning_effort, tool_choice) keep the omit-aware behavior so legitimate fall-through via extra_args still works.

Added two regression tests in tests/models/test_openai_chatcompletions.py:

  • extra_args={"stream": True} with stream=False (non-streaming get_response) → TypeError
  • extra_args={"stream": False} with stream=True (streaming) → TypeError

Verified make format, make lint, focused mypy, and pytest tests/models/test_openai_chatcompletions.py all pass locally.

@seratch
Member

seratch commented May 9, 2026

@codex review

openai deleted a comment from chatgpt-codex-connector Bot on May 9, 2026
@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Keep it up!

