
feat: Update openrouter package, models, and scripts to generate openrouter models#312

Open
tombeckenham wants to merge 18 commits into TanStack:main from tombeckenham:tombeckenham/issue310

Conversation

Contributor

@tombeckenham tombeckenham commented Feb 23, 2026

Fixes #310

🎯 Changes

  • Upgraded @openrouter/sdk from 0.8.0 to 0.9.11
  • Updated model list with latest models (Opus 4.6, Sonnet 4.6, Gemini 3.1 Pro, etc.)
  • Added native structured output support for OpenRouter using JSON Schema response formats
  • Refactored text-provider-options to derive types from SDK's ChatGenerationParams
  • Refactored options passthrough to use camelCase naming convention
  • Improved error handling and cleanup
  • Added scripts to fetch and compare OpenRouter model lists
  • Removed deprecated "openrouter/auto" model and unified request payload envelope
  • Updated and added tests for nested payloads, structured output parsing, and error cases

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

@tombeckenham tombeckenham requested a review from a team February 23, 2026 21:34
Contributor

coderabbitai bot commented Feb 23, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Bumps the OpenRouter SDK from 0.8.0 to 0.9.11; wraps text and image adapter request payloads inside chatGenerationParams; updates tests and the image model string; removes openrouter/auto and excludes parallel_tool_calls from generated model metadata; adds scripts to fetch and compare OpenRouter models, plus an npm script to run the fetcher.

Changes

  • Release & Config — .changeset/giant-garlics-crash.md, package.json, packages/typescript/ai-openrouter/package.json: adds a changeset for the version bump, adds a fetch:models npm script, and updates @openrouter/sdk to 0.9.11.
  • Adapters (text & image) — packages/typescript/ai-openrouter/src/adapters/text.ts, packages/typescript/ai-openrouter/src/adapters/image.ts: request payloads are now nested under chatGenerationParams. The image adapter sets modalities: ['image'] and stream: false, and nests image options in imageConfig (handles n/numberOfImages, aspect_ratio, image_size). Structured output uses the responseFormat: { type: 'json_schema', ... } path and parses JSON directly from the final message content.
  • Provider Options — packages/typescript/ai-openrouter/src/text/text-provider-options.ts: expands the response_format type to a discriminated union, adding { type: 'json_schema'; json_schema: { ... } }.
  • Tests — packages/typescript/ai-openrouter/tests/image-adapter.test.ts, packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts: mocks and assertions updated to read payloads from .chatGenerationParams; image model string changed from google/gemini-2.5-flash-image-preview to google/gemini-2.5-flash-image; added structured output and inline error chunk tests.
  • Model generation & tooling — scripts/convert-openrouter-models.ts, scripts/fetch-openrouter-models.ts, scripts/compare-openrouter-models.ts, scripts/openrouter.models.ts: removed openrouter/auto from generated lists/types and excluded parallel_tool_calls from per-model supported params; added fetch-openrouter-models.ts to fetch/serialize models and compare-openrouter-models.ts to diff models against main.
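The structured-output path described above can be sketched roughly as follows. The shape mirrors the walkthrough (responseFormat with type: 'json_schema'); the exact property names are defined by @openrouter/sdk, so treat this as illustrative only.

```typescript
// Illustrative request-format sketch; authoritative types live in @openrouter/sdk.
// The schema name and fields below are hypothetical.
const responseFormat = {
  type: 'json_schema' as const,
  jsonSchema: {
    name: 'weather_report',
    strict: true,
    schema: {
      type: 'object',
      properties: {
        city: { type: 'string' },
        tempC: { type: 'number' },
      },
      required: ['city', 'tempC'],
    },
  },
}

// The adapter then parses the final message content directly as JSON.
const finalContent = '{"city":"Berlin","tempC":7}' // stand-in for message.content
const parsed = JSON.parse(finalContent) as { city: string; tempC: number }
```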

Sequence Diagram(s)

sequenceDiagram
  participant Fetcher as scripts/fetch-openrouter-models.ts
  participant Remote as openrouter.ai API
  participant FS as Local filesystem (scripts/openrouter.models.ts)

  Fetcher->>Remote: GET /api/v1/models
  Remote-->>Fetcher: 200 JSON models
  Note right of Fetcher: validate & serialize to TypeScript literals
  Fetcher->>FS: write updated scripts/openrouter.models.ts
  FS-->>Fetcher: write success

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Suggested reviewers

  • AlemTuzlak

Poem

🐰 I hopped through models, byte by byte,

Wrapped chat params snug and tidy, right?
I fetched the list from far-off lands,
Updated tests with nimble hands,
A tiny rabbit cheers the new release tonight!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 16.67%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check — ✅ Passed: the title accurately describes the main changes (updating the openrouter package, updating models, and adding a script to generate openrouter models), aligning with the changeset and file modifications.
  • Linked Issues check — ✅ Passed: the PR addresses linked issue #310 by upgrading @openrouter/sdk, with corresponding package.json updates and enhanced structured output support matching the SDK upgrade.
  • Out of Scope Changes check — ✅ Passed: all changes are within scope — the SDK upgrade and test updates, model list updates via the fetch script, adapter refactoring for the new SDK payload structure, and the new model-management utility scripts all relate directly to the issue #310 objectives.
  • Description check — ✅ Passed: the PR description follows the required template, with a changes overview, completed checklist items, and a release impact declaration.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



nx-cloud bot commented Feb 23, 2026

View your CI Pipeline Execution ↗ for commit ed027a1

  • nx run-many --targets=build --exclude=examples/** — ✅ Succeeded (1m 20s)
  • nx affected --targets=test:sherif,test:knip,tes... — ✅ Succeeded (7s)

☁️ Nx Cloud last updated this comment at 2026-03-04 07:24:13 UTC


pkg-pr-new bot commented Feb 23, 2026


@tanstack/ai

npm i https://pkg.pr.new/@tanstack/ai@312

@tanstack/ai-anthropic

npm i https://pkg.pr.new/@tanstack/ai-anthropic@312

@tanstack/ai-client

npm i https://pkg.pr.new/@tanstack/ai-client@312

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/@tanstack/ai-devtools-core@312

@tanstack/ai-fal

npm i https://pkg.pr.new/@tanstack/ai-fal@312

@tanstack/ai-gemini

npm i https://pkg.pr.new/@tanstack/ai-gemini@312

@tanstack/ai-grok

npm i https://pkg.pr.new/@tanstack/ai-grok@312

@tanstack/ai-ollama

npm i https://pkg.pr.new/@tanstack/ai-ollama@312

@tanstack/ai-openai

npm i https://pkg.pr.new/@tanstack/ai-openai@312

@tanstack/ai-openrouter

npm i https://pkg.pr.new/@tanstack/ai-openrouter@312

@tanstack/ai-preact

npm i https://pkg.pr.new/@tanstack/ai-preact@312

@tanstack/ai-react

npm i https://pkg.pr.new/@tanstack/ai-react@312

@tanstack/ai-react-ui

npm i https://pkg.pr.new/@tanstack/ai-react-ui@312

@tanstack/ai-solid

npm i https://pkg.pr.new/@tanstack/ai-solid@312

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/@tanstack/ai-solid-ui@312

@tanstack/ai-svelte

npm i https://pkg.pr.new/@tanstack/ai-svelte@312

@tanstack/ai-vue

npm i https://pkg.pr.new/@tanstack/ai-vue@312

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/@tanstack/ai-vue-ui@312

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/@tanstack/preact-ai-devtools@312

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/@tanstack/react-ai-devtools@312

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/@tanstack/solid-ai-devtools@312

commit: f53bcf2


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
scripts/fetch-openrouter-models.ts (1)

69-73: Content after the original models array will be silently dropped.

Line 73 reconstructs the file as preamble + ARRAY_START + entries + "]", discarding anything that existed after the array's closing bracket in the original file. If openrouter.models.ts ever gains trailing exports or content, they'll be lost. This is fine if the file is known to end with the array, but worth a note.
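A minimal sketch of the suggested fix, using variable names that mirror the comment above (preamble, ARRAY_START, entries are assumptions standing in for the script's actual identifiers):

```typescript
// Hypothetical sketch: rebuild the file while keeping anything after the
// original array's closing bracket, so trailing exports are not dropped.
function rebuildModelsFile(
  original: string,
  preamble: string,
  arrayStart: string,
  entries: Array<string>,
): string {
  const startIdx = original.indexOf(arrayStart)
  // Locate the closing "]" of the original array (naive: first "\n]" after it).
  const closeIdx = original.indexOf('\n]', startIdx)
  // Capture everything after the closing bracket as the suffix to preserve.
  const suffix = closeIdx === -1 ? '\n' : original.slice(closeIdx + 2)
  return `${preamble}${arrayStart}\n${entries.join('\n')}\n]${suffix}`
}
```

A real implementation would also need to handle comments or string literals containing "]" when locating the array end, as the comment notes.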

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/fetch-openrouter-models.ts` around lines 69 - 73, The current
reconstruction builds output as
`${preamble}${ARRAY_START}\n${modelEntries.join('\n')}\n]\n` which discards any
content after the original array; modify the logic that writes `output`
(referencing preamble, ARRAY_START, modelEntries, serializeValue and models) to
preserve the original file's trailing content by locating the end of the
original array in the source (e.g., find the closing `]` for ARRAY_START or use
a regex) and capturing the suffix (originalSuffix) and then produce output as
`preamble + ARRAY_START + entries + closingBracket + originalSuffix` so any
trailing exports or other content remain intact. Ensure you handle cases with
trailing whitespace or comments when computing the array end index.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/fetch-openrouter-models.ts`:
- Around line 36-44: The serializer emits object keys raw which breaks
TypeScript when keys contain hyphens or start with digits; update the
serializeValue logic (the branch handling typeof value === 'object' that builds
entries and lines) to detect whether each key is a valid JS identifier (e.g.
match /^[A-Za-z_$][A-Za-z0-9_$]*$/) and if not, quote and escape the key (use a
safe stringifier like JSON.stringify(key)) before producing `${childPad}${k}:
...`; keep existing comma/formatting but replace unquoted k with the quoted form
so keys like "some-key" or "123a" produce valid TS output.
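The identifier check described above is small enough to sketch directly; the regex and the JSON.stringify escape come straight from the comment:

```typescript
// Quote object keys that are not valid JS identifiers so the generated
// TypeScript compiles (e.g. "some-key" or "123a" must be quoted).
const IDENTIFIER_RE = /^[A-Za-z_$][A-Za-z0-9_$]*$/

function serializeKey(key: string): string {
  return IDENTIFIER_RE.test(key) ? key : JSON.stringify(key)
}
```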


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1f800aa and 2bbda65.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (11)
  • .changeset/giant-garlics-crash.md
  • package.json
  • packages/typescript/ai-openrouter/package.json
  • packages/typescript/ai-openrouter/src/adapters/image.ts
  • packages/typescript/ai-openrouter/src/adapters/text.ts
  • packages/typescript/ai-openrouter/src/model-meta.ts
  • packages/typescript/ai-openrouter/tests/image-adapter.test.ts
  • packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
  • scripts/convert-openrouter-models.ts
  • scripts/fetch-openrouter-models.ts
  • scripts/openrouter.models.ts


@tombeckenham tombeckenham left a comment


I created a script to compare the model list. I'm wondering whether it should be sorted in the future.

Added models (28)

  • allenai/molmo-2-8b
  • anthropic/claude-opus-4.6
  • anthropic/claude-sonnet-4.6
  • arcee-ai/trinity-large-preview:free
  • google/gemini-3.1-pro-preview
  • liquid/lfm-2.5-1.2b-instruct:free
  • liquid/lfm-2.5-1.2b-thinking:free
  • minimax/minimax-m2-her
  • minimax/minimax-m2.5
  • moonshotai/kimi-k2.5
  • openai/gpt-5.2-codex
  • openai/gpt-audio
  • openai/gpt-audio-mini
  • openrouter/auto
  • openrouter/bodybuilder
  • openrouter/free
  • qwen/qwen3-coder-next
  • qwen/qwen3-max-thinking
  • qwen/qwen3-next-80b-a3b-instruct:free
  • qwen/qwen3.5-397b-a17b
  • qwen/qwen3.5-plus-02-15
  • stepfun/step-3.5-flash
  • stepfun/step-3.5-flash:free
  • upstage/solar-pro-3:free
  • writer/palmyra-x5
  • xiaomi/mimo-v2-flash
  • z-ai/glm-4.7-flash
  • z-ai/glm-5

Removed models (39)

  • ai21/jamba-mini-1.7
  • allenai/molmo-2-8b:free
  • anthropic/claude-3.5-haiku-20241022
  • deepcogito/cogito-v2-preview-llama-109b-moe
  • deepcogito/cogito-v2-preview-llama-405b
  • deepcogito/cogito-v2-preview-llama-70b
  • deepseek/deepseek-prover-v2
  • deepseek/deepseek-r1-0528-qwen3-8b
  • deepseek/deepseek-r1-distill-qwen-14b
  • google/gemini-2.0-flash-exp:free
  • google/gemini-2.5-flash-image-preview
  • google/gemini-2.5-flash-preview-09-2025
  • meta-llama/llama-3.1-405b-instruct:free
  • meta-llama/llama-3.2-90b-vision-instruct
  • microsoft/phi-4-multimodal-instruct
  • microsoft/phi-4-reasoning-plus
  • minimax/minimax-m1-80k
  • mistral/ministral-8b
  • mistralai/devstral-2512:free
  • mistralai/devstral-small-2505
  • mistralai/ministral-3b
  • mistralai/ministral-8b
  • mistralai/mistral-7b-instruct:free
  • mistralai/mistral-tiny
  • mistralai/pixtral-12b
  • moonshotai/kimi-dev-72b
  • moonshotai/kimi-k2:free
  • nousresearch/deephermes-3-mistral-24b-preview
  • openai/chatgpt-4o-latest
  • openai/codex-mini
  • qwen/qwen-2.5-vl-7b-instruct:free
  • qwen/qwen3-vl-4b-instruct
  • stepfun-ai/step3
  • tngtech/deepseek-r1t-chimera
  • tngtech/deepseek-r1t-chimera:free
  • tngtech/deepseek-r1t2-chimera:free
  • tngtech/tng-r1t-chimera
  • tngtech/tng-r1t-chimera:free
  • xiaomi/mimo-v2-flash:free


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
scripts/compare-openrouter-models.ts (2)

97-105: stderr from git leaks to the console before the friendly error message

execSync inherits the parent's stdio by default, so when git show main:… fails (e.g., wrong branch name), git's own error is printed before your console.error message. Suppress it explicitly:

♻️ Proposed fix
   mainSource = execSync(`git show main:${modelsPath}`, {
     encoding: 'utf-8',
+    stdio: ['pipe', 'pipe', 'pipe'],
   })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/compare-openrouter-models.ts` around lines 97 - 105, The call to
execSync when populating mainSource (execSync(`git show main:${modelsPath}`))
inherits the parent's stdio so git's stderr leaks to the console before your
friendly message; change the execSync options to explicitly capture stdio (e.g.,
add stdio: 'pipe' alongside encoding) and when catching the error log the
captured error.message (or error.stderr) in your console.error so the raw git
output is suppressed and you still surface useful error details before calling
process.exit(1).

12-19: snake_case identifiers violate the camelCase guideline

context_length, pricing_prompt, and pricing_completion appear as both interface properties (Lines 15–17) and local variable names (Lines 36–40). Even if the interface mirrors the OpenRouter API payload, the local variables are pure in-script identifiers and should use camelCase.

♻️ Proposed rename
 interface ModelSnapshot {
   id: string
   name: string
-  context_length: string
-  pricing_prompt: string
-  pricing_completion: string
+  contextLength: string
+  pricingPrompt: string
+  pricingCompletion: string
   modality: string
 }
-    const context_length =
-      block.match(/^\s*context_length:\s*(.+),?\s*$/m)?.[1] ?? ''
-    const pricing_prompt = block.match(/prompt:\s*'([^']+)'/)?.[1] ?? ''
-    const pricing_completion = block.match(/completion:\s*'([^']+)'/)?.[1] ?? ''
+    const contextLength =
+      block.match(/^\s*context_length:\s*(.+),?\s*$/m)?.[1] ?? ''
+    const pricingPrompt = block.match(/prompt:\s*'([^']+)'/)?.[1] ?? ''
+    const pricingCompletion = block.match(/completion:\s*'([^']+)'/)?.[1] ?? ''

Update models.set(...) and describeChanges accordingly.

As per coding guidelines: "Use camelCase for function and variable names throughout the codebase."

Also applies to: 35-40

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/compare-openrouter-models.ts` around lines 12 - 19, Rename the
snake_case identifiers to camelCase: change ModelSnapshot properties
context_length, pricing_prompt, pricing_completion to contextLength,
pricingPrompt, pricingCompletion and update any local variables/destructuring
that use those names; then update the call sites referenced (models.set(...) and
describeChanges) to use the new camelCase names. If the interface must mirror
the external API payload, add a mapping step when parsing the API response to
convert context_length -> contextLength, pricing_prompt -> pricingPrompt,
pricing_completion -> pricingCompletion before storing in ModelSnapshot and
before calling models.set or describeChanges.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/compare-openrouter-models.ts`:
- Around line 36-37: The regex assigned to context_length (the block.match call)
is greedy and can capture a trailing comma; update the pattern to avoid greedy
capture — e.g., replace the `(.+),?\s*$` part with a non-greedy or
comma-excluding capture like `(.+?)\s*,?\s*$` or `([^,]+)\s*,?\s*$`, then trim
the resulting match before using it so context_length stores the clean value;
locate the block.match(...) expression and update its regex and/or apply .trim()
to the captured group.
- Around line 91-94: The code uses import.meta.dirname when building the path
for readFileSync (currentSource) but import.meta.dirname is only defined on Node
>=20.11.0; add a Node-compatibility fallback: detect if import.meta.dirname is
undefined and compute a dirname from import.meta.url using fileURLToPath +
dirname (or use an existing __dirname helper) and then use that resolvedDir in
the resolve(...) call that builds the path for modelsPath before calling
readFileSync; update references to import.meta.dirname in this file (e.g., the
expression passed to resolve for currentSource) to use the fallback variable so
the script works on Node 18.x and earlier 20.x releases.
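The fallback described above can be sketched as a small helper; the `meta` parameter stands in for `import.meta` so the logic is testable, and the helper name is hypothetical:

```typescript
import { dirname } from 'node:path'
import { fileURLToPath } from 'node:url'

// Use import.meta.dirname when the runtime provides it (Node >= 20.11),
// otherwise derive the directory from the module's file URL.
function resolveScriptDir(meta: { dirname?: string; url: string }): string {
  return meta.dirname ?? dirname(fileURLToPath(meta.url))
}
```

At the call site this would be invoked as `resolveScriptDir(import.meta)` before building the path passed to resolve().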



📥 Commits

Reviewing files that changed from the base of the PR and between 2bbda65 and 6f8873b.

📒 Files selected for processing (2)
  • scripts/compare-openrouter-models.ts
  • scripts/fetch-openrouter-models.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • scripts/fetch-openrouter-models.ts


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
scripts/compare-openrouter-models.ts (2)

18-25: Rename snake_case fields to camelCase per coding guidelines

context_length, pricing_prompt, and pricing_completion appear as both interface properties and local variables — all in snake_case. These are purely internal identifiers with no external schema dependency, so they should follow the project's camelCase convention.

♻️ Proposed rename
 interface ModelSnapshot {
   id: string
   name: string
-  context_length: string
-  pricing_prompt: string
-  pricing_completion: string
+  contextLength: string
+  pricingPrompt: string
+  pricingCompletion: string
   modality: string
 }
-    const context_length = (
+    const contextLength = (
       block.match(/^\s*context_length:\s*([^,]+)\s*,?\s*$/m)?.[1] ?? ''
     ).trim()
-    const pricing_prompt = block.match(/prompt:\s*'([^']+)'/)?.[1] ?? ''
-    const pricing_completion = block.match(/completion:\s*'([^']+)'/)?.[1] ?? ''
+    const pricingPrompt = block.match(/prompt:\s*'([^']+)'/)?.[1] ?? ''
+    const pricingCompletion = block.match(/completion:\s*'([^']+)'/)?.[1] ?? ''

     models.set(id, {
       id,
       name,
-      context_length,
-      pricing_prompt,
-      pricing_completion,
+      contextLength,
+      pricingPrompt,
+      pricingCompletion,
       modality,
     })

And update describeChanges property accesses accordingly:

-  if (oldModel.context_length !== newModel.context_length) {
-    changes.push(`context_length: ${oldModel.context_length} → ${newModel.context_length}`)
+  if (oldModel.contextLength !== newModel.contextLength) {
+    changes.push(`context_length: ${oldModel.contextLength} → ${newModel.contextLength}`)
   }
-  if (oldModel.pricing_prompt !== newModel.pricing_prompt) {
-    changes.push(`prompt price: ${oldModel.pricing_prompt} → ${newModel.pricing_prompt}`)
+  if (oldModel.pricingPrompt !== newModel.pricingPrompt) {
+    changes.push(`prompt price: ${oldModel.pricingPrompt} → ${newModel.pricingPrompt}`)
   }
-  if (oldModel.pricing_completion !== newModel.pricing_completion) {
-    changes.push(`completion price: ${oldModel.pricing_completion} → ${newModel.pricing_completion}`)
+  if (oldModel.pricingCompletion !== newModel.pricingCompletion) {
+    changes.push(`completion price: ${oldModel.pricingCompletion} → ${newModel.pricingCompletion}`)
   }

As per coding guidelines: "Use camelCase for function and variable names throughout the codebase" (**/*.{ts,tsx,js,jsx}).

Also applies to: 41-47

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/compare-openrouter-models.ts` around lines 18 - 25, The interface
ModelSnapshot and its related local variables use snake_case for internal fields
(context_length, pricing_prompt, pricing_completion); rename these to camelCase
(contextLength, pricingPrompt, pricingCompletion) in the ModelSnapshot interface
and everywhere they're referenced (including any local vars and in
describeChanges property accesses) to follow project conventions; update all
property reads/writes and any destructuring or assignments that reference
context_length, pricing_prompt, pricing_completion (including the other
occurrences noted around the compare/describe logic) so the code compiles with
the new names.

117-123: currentSet/mainSet are redundant — Map.has() already gives O(1) lookup

currentModels and mainModels are already Maps, so creating intermediate Sets adds allocations without benefit.

♻️ Proposed simplification
 const currentIds = [...currentModels.keys()]
 const mainIds = [...mainModels.keys()]
-const currentSet = new Set(currentIds)
-const mainSet = new Set(mainIds)

-const added = currentIds.filter((id) => !mainSet.has(id)).sort()
-const removed = mainIds.filter((id) => !currentSet.has(id)).sort()
+const added = currentIds.filter((id) => !mainModels.has(id)).sort()
+const removed = mainIds.filter((id) => !currentModels.has(id)).sort()

And update the showUpdated block:

-    if (!mainSet.has(id)) continue
+    if (!mainModels.has(id)) continue
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/compare-openrouter-models.ts` around lines 117 - 123, The current
code creates unnecessary Sets (currentSet/mainSet) from currentModels and
mainModels maps; remove those allocations and use Map.has() directly: compute
currentIds and mainIds as before, then compute added by filtering currentIds
with !mainModels.has(id) and removed by filtering mainIds with
!currentModels.has(id), and update any related logic in the showUpdated block to
call currentModels.has(...) / mainModels.has(...) instead of checking the
removed/added Sets so all lookups use the existing Map.has O(1) method.


📥 Commits

Reviewing files that changed from the base of the PR and between 50ec22e and cde3155.

📒 Files selected for processing (1)
  • scripts/compare-openrouter-models.ts


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-openrouter/src/adapters/text.ts`:
- Around line 239-242: The code currently does JSON.parse(rawText) even when
result.choices is empty or message.content is null/empty; update the parsing
block in the adapter (the code around
result.choices[0]?.message.content/rawText) to first check that result.choices
exists and that content is a non-empty string, and only then call JSON.parse; if
content is missing or empty return a clear failure value (e.g., { data: null,
rawText: '' } or throw a specific error) so the caller can distinguish "no
content" from "invalid JSON" — reference the variables result, choices,
message.content, rawText and perform the guard before JSON.parse.
- Around line 116-119: The call to this.client.chat.send currently wraps the
ChatGenerationParams in a { chatGenerationParams: ... } object which produces a
malformed payload; change both the streaming call (where you pass {
chatGenerationParams: { ...requestParams, stream: true } }) and the
non-streaming call to pass the flat ChatGenerationParams directly as the first
argument (i.e., pass { ...requestParams, stream: true } for streaming and {
...requestParams } for non-streaming). Locate usages where mapTextOptionsToSDK()
(which already returns ChatGenerationParams) is used and update chat.send(...)
invocations accordingly (also fix the same pattern in other adapters such as
image.ts).

In `@packages/typescript/ai-openrouter/src/text/text-provider-options.ts`:
- Around line 281-291: The response_format type in OpenRouterBaseOptions uses
snake_case (json_schema) which doesn't match the SDK's ChatGenerationParams
expecting camelCase (jsonSchema); update the response_format union to rename the
json_schema property to jsonSchema so the shape matches what structuredOutput
and mapTextOptionsToSDK expect, ensuring modelOptions spread into
mapTextOptionsToSDK preserves the schema.
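The guard suggested for the structuredOutput path (first inline comment above) can be sketched like this; `ChatResult` is a hypothetical stand-in for the SDK's response type:

```typescript
interface ChatResult {
  choices?: Array<{ message: { content: string | null } }>
}

// Distinguish "no content" from "invalid JSON": only call JSON.parse on a
// non-empty string, and return an explicit empty result otherwise.
function parseStructuredOutput(result: ChatResult): {
  data: unknown
  rawText: string
} {
  const rawText = result.choices?.[0]?.message.content ?? ''
  if (rawText.trim() === '') {
    return { data: null, rawText: '' }
  }
  return { data: JSON.parse(rawText), rawText } // may still throw on invalid JSON
}
```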


📥 Commits

Reviewing files that changed from the base of the PR and between cde3155 and 70f0bcf.

📒 Files selected for processing (4)
  • .changeset/giant-garlics-crash.md
  • packages/typescript/ai-openrouter/src/adapters/text.ts
  • packages/typescript/ai-openrouter/src/text/text-provider-options.ts
  • packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/typescript/ai-openrouter/tests/openrouter-adapter.test.ts
  • .changeset/giant-garlics-crash.md

…leanup

- Add explicit guard for empty content in structuredOutput before JSON.parse
- Remove redundant Sets in compare script (Map.has() is already O(1))
- Suppress stderr leaks from execSync in compare script
- Update example to use valid model name

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@AlemTuzlak
Contributor

let's update the PR!


@tombeckenham tombeckenham left a comment


I'll take a look tomorrow at the conflicts. Did you change much on main?

tombeckenham and others added 6 commits March 4, 2026 08:05
…refactor parameter handling

- Introduced new models: AI21 Jamba Large 1.7, AionLabs Aion-1.0, AionLabs Aion-1.0 Mini, AionLabs Aion-2.0, AionLabs Aion-RP Llama 3.1 8B, AlfredPros CodeLLaMa 7B Instruct Solidity, and Tongyi DeepResearch 30B A3B.
- Updated existing model parameters, including context windows and max output tokens.
- Refactored parameter handling in scripts to improve consistency and readability, including the introduction of a mapping function for API parameters.
- Adjusted pricing structures for several models to reflect updated costs.
- Ensured all model entries are sorted for better organization.

@tombeckenham tombeckenham left a comment


This is updated now. Great that everything is now camelCase. I've also added sorting to the model meta generation to make it easier to compare changes in the future.


@tombeckenham tombeckenham left a comment


Wait... just realised openrouter updated the sdk again since we started this PR

tombeckenham and others added 3 commits March 4, 2026 14:34
Spread modelOptions first in request construction so provider-specific
options pass through correctly, only override with explicit options when
defined. Remove unused InternalTextProviderOptions import. Bump
@fal-ai/client to ^1.9.4.

Fixes TanStack#310

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ionParams

Replace hand-written interfaces with type aliases derived from
@openrouter/sdk's ChatGenerationParams, eliminating type drift and
keeping provider options aligned with the SDK automatically.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@tombeckenham tombeckenham left a comment


I've updated this now. The types we needed are now exported from the OpenRouter SDK, so I've removed our duplicate definitions. A few parameters are no longer supported through the SDK. By doing this, I could remove the cast in the text adapter and preserve type safety all the way through.

OpenRouter appears to be moving further down the track of "taking anything" and then letting the backend accept or reject. There are quite a few instances where the supported_parameters don't cleanly map to what the model supports.

Parameters that map cleanly (snake_case in supported_parameters → camelCase in ChatGenerationParams):

  • temperature → temperature
  • top_p → topP
  • frequency_penalty → frequencyPenalty
  • presence_penalty → presencePenalty
  • max_tokens → maxTokens / maxCompletionTokens
  • logit_bias → logitBias
  • logprobs → logprobs
  • top_logprobs → topLogprobs
  • seed → seed
  • response_format → responseFormat
  • stop → stop
  • tools → tools
  • tool_choice → toolChoice
  • parallel_tool_calls → parallelToolCalls
  • reasoning → reasoning

Parameters in the enum with NO corresponding ChatGenerationParams field:

  • top_k
  • min_p
  • top_a
  • repetition_penalty
  • include_reasoning (separate from reasoning)
  • reasoning_effort (separate from reasoning)
  • web_search_options
  • verbosity
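The two lists above can be summarized as a lookup table (a sketch only; null marks parameters with no ChatGenerationParams equivalent, and the table name is hypothetical):

```typescript
// supported_parameters value -> ChatGenerationParams field (null = no SDK field).
const paramFieldMap: Record<string, string | null> = {
  temperature: 'temperature',
  top_p: 'topP',
  frequency_penalty: 'frequencyPenalty',
  presence_penalty: 'presencePenalty',
  max_tokens: 'maxTokens', // also maps to maxCompletionTokens
  logit_bias: 'logitBias',
  logprobs: 'logprobs',
  top_logprobs: 'topLogprobs',
  seed: 'seed',
  response_format: 'responseFormat',
  stop: 'stop',
  tools: 'tools',
  tool_choice: 'toolChoice',
  parallel_tool_calls: 'parallelToolCalls',
  reasoning: 'reasoning',
  // No corresponding ChatGenerationParams field:
  top_k: null,
  min_p: null,
  top_a: null,
  repetition_penalty: null,
  include_reasoning: null,
  reasoning_effort: null,
  web_search_options: null,
  verbosity: null,
}
```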

Image generation needs a bit of work: there's no type safety at all for image generation, and it's clear the OpenRouter team hasn't done much to support image generation in the SDK yet either.

ready to go through

@tombeckenham tombeckenham requested a review from AlemTuzlak March 4, 2026 21:58

Development

Successfully merging this pull request may close these issues.

Update openrouter package

2 participants