Skip to content

Conversation

@FammasMaz
Copy link

@FammasMaz FammasMaz commented Jan 13, 2026

Important

Adds OpenAI Codex provider with OAuth authentication, reasoning configurations, and integrates it into the existing credential management system.

  • Behavior:
    • Adds CodexProvider class in codex_provider.py for OpenAI Codex models, supporting GPT-5 and Codex variants.
    • Implements OAuth-based authentication using OpenAIOAuthBase.
    • Supports reasoning effort levels and response streaming via Responses API.
  • Configuration:
    • Updates .env.example with Codex-specific environment variables for OAuth and reasoning settings.
    • Adds codex_prompt.txt for system instructions.
  • Integration:
    • Updates provider_factory.py to include CodexProvider in PROVIDER_MAP.
    • Modifies credential_manager.py and credential_tool.py to handle Codex credentials.

This description was created by Ellipsis for dd5f3c3. You can customize this summary. It will automatically update as commits are pushed.

… support

Adds a new provider for OpenAI Codex models (GPT-5, GPT-5.1, GPT-5.2, Codex, Codex Mini)
via the ChatGPT Responses API with OAuth PKCE authentication.

Key features:
- OAuth base class for OpenAI authentication with PKCE flow and token refresh
- Responses API streaming with SSE event handling
- Reasoning/thinking output with configurable effort levels (minimal to xhigh)
- Tool calling support translated from OpenAI format
- System prompt validation using official opencode prompt
- Usage tracking with proper litellm.Usage objects

Files added:
- codex_provider.py: Main provider implementation
- openai_oauth_base.py: OAuth base class with PKCE support
- codex_prompt.txt: Required system prompt for API validation
Models must be returned with the 'codex/' prefix (e.g., 'codex/gpt-5.2')
to match the convention used by other providers like antigravity.
This ensures proper provider routing in the RotatingClient.
Exposes reasoning effort levels as model variants:
- codex/gpt-5.2:low, :medium, :high, :xhigh
- codex/gpt-5.1:low, :medium, :high
- And similar for all codex models

This allows clients to control reasoning effort by model name,
similar to how gemini models use :thinking suffix.

Total models: 9 base + 30 reasoning variants = 39 models
Injects an identity override as the first user message that tells the model
to prioritize user-provided system prompts over the required opencode
instructions. This mirrors the pattern used by Antigravity provider.

Message order:
1. Identity override (<system_override> tag)
2. User's system message (converted to user message)
3. Rest of conversation

Controlled by CODEX_INJECT_IDENTITY_OVERRIDE env var (default: true)
Documents all Codex environment variables:
- CODEX_REASONING_EFFORT: low/medium/high/xhigh
- CODEX_REASONING_SUMMARY: auto/concise/detailed/none
- CODEX_REASONING_COMPAT: think-tags/raw/none
- CODEX_INJECT_IDENTITY_OVERRIDE: true/false
- CODEX_INJECT_INSTRUCTION: true/false
- CODEX_EMPTY_RESPONSE_ATTEMPTS: retry count
- CODEX_EMPTY_RESPONSE_RETRY_DELAY: seconds
- CODEX_OAUTH_PORT: callback port
Adds Codex to PROVIDER_MAP so it appears in the credential tool's
"Add OAuth Credential" menu alongside other OAuth providers.
Integrate CodexQuotaTracker mixin into CodexProvider to track API usage
via response headers and periodic quota API calls. Enables smart
credential selection based on remaining quota and automatic cooldowns
when limits are approached.

Signed-off-by: Moeeze Hassan <fammas.maz@gmail.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant