docs(openapi): align GET /models examples and parameter semantics#135

Open
akhilmmenon wants to merge 1 commit into master from hotfix/4312-models-api-fix

Conversation

@akhilmmenon

Description

This PR updates the OpenAPI spec for GET /models to match the current models API behavior and avoid confusion in generated docs/examples.

What changed

  • Updated x-code-samples to use ai_service=openai instead of provider=openai:
    • cURL (default + self-hosted)
    • Python SDK examples (extra_query)
    • JavaScript SDK examples (client.models.list)
  • Clarified query parameter descriptions:
  • ai_service: AI service to filter by (e.g., openai, anthropic)
    • provider: virtual key slug (provider segment in @provider/model)
    • limit: maximum models returned per page/request
    • offset: number of models to skip before collecting results
    • sort: clarified that provider sort refers to virtual key slug
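As a sketch of how the clarified parameters compose into a request, the snippet below builds the query string for GET /models; the base URL is a placeholder (the spec's samples send an x-portkey-api-key header to the gateway URL), and the specific limit/offset values are illustrative only:

```python
from urllib.parse import urlencode

# Updated GET /models query parameters:
#   ai_service - filter by AI service (not the virtual key slug, which is `provider`)
#   limit      - maximum models returned per request
#   offset     - number of models to skip before collecting results
params = {
    "ai_service": "openai",
    "limit": 50,
    "offset": 100,
}

# Placeholder base URL; substitute your gateway URL.
url = f"https://GATEWAY_URL/models?{urlencode(params)}"
print(url)
# https://GATEWAY_URL/models?ai_service=openai&limit=50&offset=100
```

Note that ai_service and provider remain independent filters, so a request may combine both (e.g., filter by AI service and by a specific virtual key slug).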

Why

The previous docs examples implied filtering with provider=openai, which conflicted with the intended semantics of the provider parameter (the virtual key slug) and created a mismatch between documentation and implementation. This update keeps the docs and API behavior consistent and reduces integration errors for users.

Validation

  • Verified GET /models parameter block and all code samples render with updated query params.
  • Confirmed examples are consistent with current backend contract for models endpoint filtering and pagination semantics.


Copilot AI left a comment


Pull request overview

Updates the OpenAPI spec for GET /models so the documented query parameters and generated code samples reflect the endpoint’s current filtering and pagination semantics (notably using ai_service for filtering by AI service rather than provider).

Changes:

  • Clarified GET /models query parameter semantics for ai_service, provider, limit, offset, and sort.
  • Updated x-code-samples (cURL, Python extra_query, JS client.models.list) to filter via ai_service=openai instead of provider=openai.
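For illustration only, the limit/offset pagination semantics documented above (skip `offset` models, then return at most `limit`) behave like a list slice; this is a client-side sketch, not the backend implementation:

```python
def page(models, limit, offset):
    # offset: number of models to skip before collecting results
    # limit: maximum models returned per page/request
    return models[offset:offset + limit]

catalog = [f"model-{i}" for i in range(7)]
print(page(catalog, limit=3, offset=0))  # first page: model-0, model-1, model-2
print(page(catalog, limit=3, offset=6))  # partial last page: model-6 only
```

An empty result (offset past the end of the catalog) signals that there are no further pages to fetch.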


  name: ai_service
  required: false
- description: Filter models by the AI service (e.g., 'openai', 'anthropic').
+ description: Filter models by AI service (e.g., 'openai', 'anthropic').

Copilot AI Apr 8, 2026


Minor consistency: this description uses single quotes around example values ('openai', 'anthropic'), but most other parameter descriptions in this spec either omit quotes or use backticks for inline code (e.g., "AI provider organization (e.g., openai, anthropic)" at openapi.yaml:33920). Consider aligning the formatting to match the rest of the document.

Comment on lines 5077 to 5081
  label: Self-Hosted
  source: |
    # Example of sending a query parameter in the URL
-   curl 'https://YOUR_SELF_HOSTED_URL/models?provider=openai' \
+   curl 'https://YOUR_SELF_HOSTED_URL/models?ai_service=openai' \
    -H "x-portkey-api-key: $PORTKEY_API_KEY"

Copilot AI Apr 8, 2026


The self-hosted samples use a different placeholder (YOUR_SELF_HOSTED_URL) than the rest of the spec and the declared DataPlaneServers (SELF_HOSTED_GATEWAY_URL). Consider switching these self-hosted examples to SELF_HOSTED_GATEWAY_URL for consistency (this placeholder also appears in the adjacent Python/JS self-hosted samples at openapi.yaml:5103 and :5135).
