fix(types): correct prompt_cache_retention literal from in-memory to in_memory #2991
NIK-TIGER-BILL wants to merge 1 commit into openai:main from
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 33fdffc119
```diff
- prompt_cache_retention: Optional[Literal["in-memory", "24h"]]
+ prompt_cache_retention: Optional[Literal["in_memory", "24h"]]
```
Update public Responses overloads to accept `in_memory`

This only fixes the generated model/param types; the public `Responses.create` signatures still declare `Literal["in-memory", "24h"]` in `src/openai/resources/responses/responses.py` (for example line 130, and the same literal is repeated in the other overloads in that file). In a typed codebase, `client.responses.create(prompt_cache_retention="in_memory")` still fails pyright/mypy, and forwarding a `ResponseCreateParamsBase["prompt_cache_retention"]` value into `Responses.create` is now an incompatible call even though `"in_memory"` is the API-accepted value.
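The mismatch the reviewer describes can be sketched without the SDK at all. Below, `ParamLiteral` and `MethodLiteral` are hypothetical stand-ins for the fixed generated param type and the still-unfixed method overload; `typing.get_args` lets us compare the two declared value sets at runtime:

```python
from typing import Literal, Optional, get_args

# Stand-ins (not the SDK's real names) for the two declarations in question:
ParamLiteral = Optional[Literal["in_memory", "24h"]]    # fixed by this PR
MethodLiteral = Optional[Literal["in-memory", "24h"]]   # still in the overloads

def accepted_values(tp) -> set:
    # Optional[Literal[...]] is Union[Literal[...], NoneType]; collect the
    # literal strings from each union member (NoneType contributes nothing).
    return {v for arg in get_args(tp) for v in get_args(arg)}

# Values the param type allows that the method signature still rejects:
print(sorted(accepted_values(ParamLiteral) - accepted_values(MethodLiteral)))
# → ['in_memory']
```

At runtime both strings pass through unchanged; the divergence only bites under a type checker, which is exactly why typed callers are blocked.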
```diff
- prompt_cache_retention: Optional[Literal["in-memory", "24h"]]
+ prompt_cache_retention: Optional[Literal["in_memory", "24h"]]
```
Keep chat completion method signatures in sync with this literal

`CompletionCreateParamsBase` now exposes `"in_memory"`, but the public chat-completions entry points still use `Literal["in-memory", "24h"]` in `src/openai/resources/chat/completions/completions.py` (for example lines 112 and 267, with the same mismatch repeated across the overloads). That means typed callers of `client.chat.completions.create(...)`/`.parse(...)` are still blocked from passing the corrected value, and code that forwards `CompletionCreateParamsBase["prompt_cache_retention"]` into those methods no longer type-checks.
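The forwarding problem in this comment can be reduced to a minimal sketch, assuming a miniature `TypedDict`/function pair standing in for `CompletionCreateParamsBase` and `create` (these are illustrative names, not the SDK's code):

```python
from typing import Literal, Optional, TypedDict

class CompletionCreateParams(TypedDict, total=False):
    # Corrected by this PR: underscore variant.
    prompt_cache_retention: Optional[Literal["in_memory", "24h"]]

def create(prompt_cache_retention: Optional[Literal["in-memory", "24h"]] = None):
    # Stand-in for the public method whose signature still has the hyphen.
    return prompt_cache_retention

params: CompletionCreateParams = {"prompt_cache_retention": "in_memory"}

# Runs fine at runtime, but pyright/mypy flag this call: "in_memory" is not
# a member of the method's declared Literal, so forwarding the params value
# is an incompatible call until the overloads are updated too.
value = create(params["prompt_cache_retention"])  # type: ignore[arg-type]
print(value)
# → in_memory
```

This is why fixing only the generated types is half a fix: the handwritten overloads must carry the same literal for forwarding to type-check.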
Fixes #2883

Problem

The SDK declares `prompt_cache_retention` with `Literal["in-memory", "24h"]` (hyphen), but the OpenAI API rejects `"in-memory"` with a 400 error and only accepts `"in_memory"` (underscore).

Fix

Replaced `"in-memory"` with `"in_memory"` in all affected type declarations.

Testing

Users relying on the typed value will now get the API-accepted value.
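A lightweight regression check along these lines could guard against the literal drifting again. It is a sketch under the assumption that `{"in_memory", "24h"}` is the API-accepted set (per the 400 behavior described above); `PromptCacheRetention` here is a local stand-in, not an SDK export:

```python
from typing import Literal, get_args

# Corrected declaration (stand-in for the SDK's generated type).
PromptCacheRetention = Literal["in_memory", "24h"]

# Assumed API-accepted values, per the 400 response for "in-memory".
API_ACCEPTED = {"in_memory", "24h"}

declared = set(get_args(PromptCacheRetention))
assert declared == API_ACCEPTED, f"literal drift: {declared} != {API_ACCEPTED}"
print("prompt_cache_retention literal matches API-accepted values")
```

Pointing such a check at the real generated type (and at the public method signatures) would also have caught the remaining overload mismatches flagged in the review comments.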