feat: extract @tanstack/openai-base and @tanstack/ai-utils packages#409
AlemTuzlak wants to merge 28 commits into main
Conversation
🚀 Changeset Version Preview
11 package(s) bumped directly, 3 bumped as dependents.
🟨 Minor bumps
🟩 Patch bumps
CI pipeline executed for commit da20ffb.
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-utils
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/openai-base
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Note
Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Adds two new packages, @tanstack/openai-base and @tanstack/ai-utils.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App
    participant ProviderAdapter
    participant OpenAIBase
    participant OpenAI_SDK
    App->>ProviderAdapter: chatStream(options)
    ProviderAdapter->>OpenAIBase: delegate chatStream(options)
    OpenAIBase->>OpenAIBase: mapOptionsToRequest()
    OpenAIBase->>OpenAI_SDK: responses.create(request, stream=true)
    OpenAI_SDK-->>OpenAIBase: async chunks
    OpenAIBase->>OpenAIBase: processStreamChunks() -> emit RUN_/TEXT_/TOOL_ events
    OpenAIBase-->>ProviderAdapter: StreamChunk events
    ProviderAdapter-->>App: StreamChunk events forwarded
```
```mermaid
sequenceDiagram
    participant Provider
    participant Utils
    participant Env
    Provider->>Utils: getApiKeyFromEnv('ENV_VAR')
    Utils->>Env: check globalThis.window.env / process.env
    alt found
        Env-->>Utils: apiKey
        Utils-->>Provider: apiKey
    else not found
        Env-->>Utils: undefined
        Utils-->>Provider: throw Error("ENV_VAR not set")
    end
    Provider-->>Caller: apiKey or thrown error
```
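Under the assumptions in the diagram, the env lookup can be sketched as follows; `getApiKeyFromEnv` is the real export name, but the body here is an illustrative reconstruction of the flow, not the actual `@tanstack/ai-utils` source:

```typescript
// Sketch of the lookup flow shown above: a browser-injected
// globalThis.window.env is checked first, then process.env;
// a missing key throws with the env var name in the message.
function getApiKeyFromEnv(envVar: string): string {
  const g = globalThis as {
    window?: { env?: Record<string, string | undefined> }
  }
  const fromWindow = g.window?.env?.[envVar]
  const fromProcess =
    typeof process !== 'undefined' ? process.env[envVar] : undefined
  const apiKey = fromWindow ?? fromProcess
  if (!apiKey) {
    throw new Error(`${envVar} not set`)
  }
  return apiKey
}
```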
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~90+ minutes
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/typescript/ai-openai/src/adapters/image.ts (1)
100-103: ⚠️ Potential issue | 🟡 Minor
Fix the env lookup order in the JSDoc.
`getOpenAIApiKeyFromEnv()` now prefers `window.env` before `process.env`, so this list documents the opposite behavior. The same copy appears in the sibling OpenAI factory helpers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/image.ts` around lines 100 - 103, JSDoc lists env lookup order incorrectly; update the comment for getOpenAIApiKeyFromEnv to state it checks window.env (Browser injected env) first, then process.env (Node.js), and apply the same change to the matching OpenAI factory helper docs so both accurately reflect the current lookup order.
🟡 Minor comments (10)
packages/typescript/ai-utils/src/id.ts-1-5 (1)
1-5: ⚠️ Potential issue | 🟡 Minor
`substring(7)` may produce inconsistent or empty random segments.
`Math.random().toString(36)` generates strings of varying length (e.g., `"0.i"` for 0.5, or `"0.abc123..."` for typical values). Using `substring(7)` starts extraction at index 7, which can yield an empty string or very short segment for values with short base-36 expansions.
Consider using `substring(2, 10)` to skip the `"0."` prefix and take a consistent 8-character segment:
🔧 Proposed fix

```diff
 export function generateId(prefix: string): string {
   const timestamp = Date.now()
-  const randomPart = Math.random().toString(36).substring(7)
+  const randomPart = Math.random().toString(36).substring(2, 10)
   return `${prefix}-${timestamp}-${randomPart}`
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-utils/src/id.ts` around lines 1 - 5, The random segment in generateId can be empty because substring(7) may exceed the length of Math.random().toString(36); update the randomPart generation in function generateId to consistently extract characters after the "0." (for example use Math.random().toString(36).substring(2, 10) to get an 8-character segment) so the id always includes a predictable non-empty random portion; ensure the change is applied to the randomPart variable in generateId and keep the rest of the template literal intact.
packages/typescript/openai-base/src/tools/web-search-preview-tool.ts-12-17 (1)
12-17: ⚠️ Potential issue | 🟡 Minor
Inconsistent `type` field handling.
Other tool converters in this package hardcode the `type` value (e.g., `type: 'apply_patch'`, `type: 'image_generation'`), but this converter reads `metadata.type` from the input. This could produce invalid output if `metadata.type` is undefined or incorrect.
🔧 Proposed fix for consistency

```diff
 export function convertWebSearchPreviewToolToAdapterFormat(
   tool: Tool,
 ): WebSearchPreviewTool {
   const metadata = tool.metadata as WebSearchPreviewTool
   return {
-    type: metadata.type,
+    type: 'web_search_preview',
     search_context_size: metadata.search_context_size,
     user_location: metadata.user_location,
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/tools/web-search-preview-tool.ts` around lines 12 - 17, The returned object's type field should be hardcoded for consistency rather than reading metadata.type; update the converter that builds the returned object (the code using metadata: WebSearchPreviewTool and returning { type, search_context_size, user_location }) to set type to a fixed literal (e.g., 'web_search_preview') and optionally fall back to that literal when metadata.type is falsy, leaving the other fields (search_context_size and user_location) as-is.
packages/typescript/ai-grok/package.json-42-52 (1)
42-52: ⚠️ Potential issue | 🟡 Minor
Remove redundant entries from `devDependencies`.
`@tanstack/ai-utils` and `@tanstack/openai-base` are runtime dependencies (imported in src files like adapters and utilities). They should only appear in `dependencies`, not duplicated in `devDependencies`. Other adapter packages in the monorepo (e.g., `ai-groq`) follow this pattern successfully.
Remove redundant devDependencies

```diff
 "devDependencies": {
-  "@tanstack/ai-utils": "workspace:*",
-  "@tanstack/openai-base": "workspace:*",
   "@vitest/coverage-v8": "4.0.14",
   "vite": "^7.2.7"
 },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/package.json` around lines 42 - 52, Remove the duplicate runtime packages from devDependencies in package.json so `@tanstack/ai-utils` and `@tanstack/openai-base` only appear under "dependencies"; open the package.json in the ai-grok package, delete the two entries from the "devDependencies" block, ensure the versions remain defined in the top-level "dependencies" block, and then reinstall (npm/yarn/pnpm) to update the lockfile consistent with other adapter packages like ai-groq.
packages/typescript/ai-groq/src/utils/client.ts-1-3 (1)
1-3: ⚠️ Potential issue | 🟡 Minor
Fix import order to satisfy ESLint (`import/order`).
Static analysis reports this ordering as invalid; moving the value import before the type import should resolve it.
🔧 Proposed fix

```diff
 import Groq_SDK from 'groq-sdk'
-import type { ClientOptions } from 'groq-sdk'
 import { generateId as _generateId, getApiKeyFromEnv } from '@tanstack/ai-utils'
+import type { ClientOptions } from 'groq-sdk'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-groq/src/utils/client.ts` around lines 1 - 3, ESLint import/order flags the type import position; reorder the imports so value imports come before the type-only import. Specifically, keep import Groq_SDK from 'groq-sdk' and move the value import "import { generateId as _generateId, getApiKeyFromEnv } from '@tanstack/ai-utils'" before the type import "import type { ClientOptions } from 'groq-sdk'"; ensure Groq_SDK and the ai-utils value import appear above the ClientOptions type-only import.
packages/typescript/openai-base/src/tools/code-interpreter-tool.ts-22-31 (1)
22-31: ⚠️ Potential issue | 🟡 Minor
Parameter naming is misleading and metadata structure may cause round-trip issues.
The parameter `container: CodeInterpreterTool` receives the entire `CodeInterpreterTool` object, not just the container portion. This creates a confusing API and potentially incorrect metadata structure.
When `codeInterpreterTool(toolData)` is called, it stores `toolData` (the full `CodeInterpreterTool`) under `metadata.container`. But `convertCodeInterpreterToolToAdapterFormat` expects `metadata.container` to be just the container value, not the full tool object.
Suggested fix

```diff
 /**
  * Creates a standard Tool from CodeInterpreterTool parameters
  */
-export function codeInterpreterTool(container: CodeInterpreterTool): Tool {
+export function codeInterpreterTool(toolData: CodeInterpreterTool): Tool {
   return {
     name: 'code_interpreter',
     description: 'Execute code in a sandboxed environment',
-    metadata: {
-      type: 'code_interpreter',
-      container,
-    },
+    metadata: toolData,
   }
 }
```

This aligns with the pattern used in `webSearchTool` where metadata stores the entire tool configuration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/tools/code-interpreter-tool.ts` around lines 22 - 31, The codeInterpreterTool function currently takes a misleading parameter named container (typed CodeInterpreterTool) and stores the whole tool object under metadata.container, which breaks convertCodeInterpreterToolToAdapterFormat expectations; rename the parameter to toolData (or similar) and change the metadata assignment so metadata.container = toolData.container (i.e., store only the container value), optionally keeping the full tool under a separate key like metadata.tool if you need the full config; update any references to the function signature accordingly.
packages/typescript/ai-utils/package.json-40-43 (1)
40-43: ⚠️ Potential issue | 🟡 Minor
Add `vitest` to `devDependencies`.
The `test:lib` script uses `vitest run`, but `vitest` is missing from `devDependencies`. While workspace hoisting (via `pnpm-workspace.yaml`) allows this to work in the monorepo, sibling packages like `ai-vue` and `ai-solid` explicitly declare `vitest ^4.0.14`. Add it to make the package consistent and more portable outside the workspace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-utils/package.json` around lines 40 - 43, The package.json devDependencies block is missing "vitest" even though the test:lib script uses `vitest run`; update the package.json devDependencies to add "vitest" with the same version used in siblings (e.g. "^4.0.14") so the package is portable outside the workspace and consistent with other packages — modify the "devDependencies" entry in packages/typescript/ai-utils/package.json accordingly.
packages/typescript/openai-base/src/adapters/summarize.ts-51-53 (1)
51-53: ⚠️ Potential issue | 🟡 Minor
Don't return the same blank id from every `summarize()` call.
`const id = ''` makes every result indistinguishable for tracing, cache keys, and UI bookkeeping. Seed a synthetic id at minimum, then overwrite it if a provider id becomes available later.
💡 Proposed fix

```diff
+import { generateId } from '@tanstack/ai-utils'
 import { BaseSummarizeAdapter } from '@tanstack/ai/adapters'

-  const id = ''
+  let id = generateId()
```

Also applies to: 78-78
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/adapters/summarize.ts` around lines 51 - 53, The summarize() function currently sets const id = '' so every call returns the same empty id; change it to generate a synthetic unique id (e.g., UUID or timestamp+random) when initializing id and keep it mutable so that you can overwrite it later if/when a provider returns a real id; update the declaration for id in summarize() (and the similar spot around the later block that currently uses '' at line ~78) to be initialized to a generated unique string and ensure later assignment from provider response replaces that value.
packages/typescript/openai-base/src/adapters/image.ts-120-130 (1)
120-130: ⚠️ Potential issue | 🟡 Minor
Reject fractional and `NaN` image counts.
`NaN` and non-integers both bypass the current range check, so this can still forward `n: NaN` or `n: 1.5` to the provider.
🛠️ Suggested fix

```diff
-  if (numberOfImages < 1 || numberOfImages > 10) {
+  if (
+    !Number.isInteger(numberOfImages) ||
+    numberOfImages < 1 ||
+    numberOfImages > 10
+  ) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/adapters/image.ts` around lines 120 - 130, validateNumberOfImages currently only checks range and allows NaN or fractional values through; update validateNumberOfImages to reject non-integer and NaN inputs by verifying Number.isFinite(numberOfImages) and Number.isInteger(numberOfImages) (or equivalent) before the range check, and throw a clear Error if the value is NaN or not an integer (include the invalid value in the error message) so calls with n: NaN or n: 1.5 are rejected.packages/typescript/openai-base/src/types/provider-options.ts-7-7 (1)
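As a standalone sketch (the `validateNumberOfImages` name and the 1 to 10 range come from the comment above; the error wording is illustrative), the tightened guard behaves like this:

```typescript
// Number.isInteger is false for NaN, Infinity and fractional values,
// so a single check rejects them along with the out-of-range cases.
function validateNumberOfImages(numberOfImages: number): void {
  if (
    !Number.isInteger(numberOfImages) ||
    numberOfImages < 1 ||
    numberOfImages > 10
  ) {
    throw new Error(
      `numberOfImages must be an integer between 1 and 10, got ${numberOfImages}`,
    )
  }
}
```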
packages/typescript/openai-base/src/types/provider-options.ts-7-7 (1)
7-7: ⚠️ Potential issue | 🟡 Minor
Use the repo's configured array style for `stop`.
This new shared type trips `@typescript-eslint/array-type`, so it will keep reporting a lint error until it's rewritten as `Array<string>`.
🛠️ Suggested fix

```diff
-  stop?: string | string[]
+  stop?: string | Array<string>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/types/provider-options.ts` at line 7, The 'stop' property in the shared type currently uses the bracket array syntax (stop?: string | string[]), which trips `@typescript-eslint/array-type`; change it to use the repo's configured generic array style by updating the property to stop?: string | Array<string> so the union remains but the array form satisfies the lint rule and keeps the type meaning identical.packages/typescript/openai-base/src/adapters/transcription.ts-78-85 (1)
packages/typescript/openai-base/src/adapters/transcription.ts-78-85 (1)
78-85: ⚠️ Potential issue | 🟡 Minor
Don't drop `avg_logprob = 0`.
The truthy check turns a valid `0` into `undefined`, so perfect-confidence segments lose their confidence field.
🛠️ Suggested fix

```diff
-        confidence: seg.avg_logprob ? Math.exp(seg.avg_logprob) : undefined,
+        confidence:
+          seg.avg_logprob == null
+            ? undefined
+            : Math.exp(seg.avg_logprob),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/adapters/transcription.ts` around lines 78 - 85, The mapping in the segments conversion (response.segments?.map -> TranscriptionSegment) currently uses a truthy check (avg_logprob ? Math.exp(...) : undefined) which drops valid zero values; update the confidence assignment in the segments mapping (inside the function that builds TranscriptionSegment objects) to test for presence explicitly (e.g., seg.avg_logprob !== undefined/null or typeof seg.avg_logprob === "number") and only then compute Math.exp(seg.avg_logprob), so avg_logprob = 0 yields confidence = Math.exp(0) instead of undefined.
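The behavioral difference is easy to see in isolation; `toConfidence` below is a hypothetical helper mirroring only the confidence mapping from the diff above:

```typescript
// avg_logprob is a log-probability, so Math.exp maps it into (0, 1].
// The nullish check keeps avg_logprob = 0 (Math.exp(0) === 1, i.e. full
// confidence), which a truthy check would silently turn into undefined.
function toConfidence(avgLogprob: number | null | undefined): number | undefined {
  return avgLogprob == null ? undefined : Math.exp(avgLogprob)
}
```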
🧹 Nitpick comments (14)
packages/typescript/openai-base/src/tools/computer-use-tool.ts (1)
23-31: Inconsistent function signature compared to other tool factories.
The `computerUseTool` function accepts the full `ComputerUseTool` type including `type`, while similar factories like `imageGenerationTool` use `Omit<..., 'type'>`. This requires callers to redundantly specify `type: 'computer_use_preview'`.
♻️ Proposed fix for consistency

```diff
-export function computerUseTool(toolData: ComputerUseTool): Tool {
+export function computerUseTool(
+  toolData: Omit<ComputerUseTool, 'type'>,
+): Tool {
   return {
     name: 'computer_use_preview',
     description: 'Control a virtual computer',
     metadata: {
       ...toolData,
     },
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/tools/computer-use-tool.ts` around lines 23 - 31, Function signature inconsistent: change the parameter of computerUseTool from ComputerUseTool to Omit<ComputerUseTool, 'type'> so callers don't have to pass type. Update the factory function computerUseTool to accept toolData: Omit<ComputerUseTool, 'type'>, keep returning the same object with name: 'computer_use_preview' and metadata: { ...toolData }, and ensure any call sites that previously passed type no longer do so (or remove the redundant type property).
packages/typescript/openai-base/src/tools/file-search-tool.ts (1)
15-26: Consider adding validation or a type guard in `convertFileSearchToolToAdapterFormat`.
The function casts `tool.metadata` to `OpenAI.Responses.FileSearchTool` without validation, while `fileSearchTool` validates `max_num_results`. If a `Tool` with invalid/mismatched metadata is passed, this could produce malformed API requests silently.
Consider adding a runtime check or documenting that callers must ensure the `Tool` was created via `fileSearchTool`:
♻️ Optional: Add validation for consistency

```diff
 export function convertFileSearchToolToAdapterFormat(
   tool: Tool,
 ): OpenAI.Responses.FileSearchTool {
   const metadata = tool.metadata as OpenAI.Responses.FileSearchTool
+  validateMaxNumResults(metadata.max_num_results)
   return {
     type: 'file_search',
     vector_store_ids: metadata.vector_store_ids,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/tools/file-search-tool.ts` around lines 15 - 26, convertFileSearchToolToAdapterFormat currently casts tool.metadata to OpenAI.Responses.FileSearchTool without validation; add a runtime type guard or validation at the start of convertFileSearchToolToAdapterFormat to verify required fields (e.g., vector_store_ids, max_num_results, ranking_options, filters) and that max_num_results meets the same constraints enforced by fileSearchTool, and throw or return a clear error if validation fails so malformed metadata cannot produce invalid adapter requests.
packages/typescript/ai-gemini/package.json (1)
42-54: Same redundancy concern: `@tanstack/ai-utils` in both dependency sections.
As noted for `ai-grok`, the package appears in both `dependencies` and `devDependencies`. Consider removing it from `devDependencies` unless there's a specific monorepo tooling requirement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/package.json` around lines 42 - 54, The package.json for ai-gemini lists "@tanstack/ai-utils" in both "dependencies" and "devDependencies", creating redundancy; remove the duplicate entry from "devDependencies" so "@tanstack/ai-utils" only appears once (leave it in "dependencies"), then run the package manager install to refresh the lockfile and ensure the workspace resolution is correct; locate the entries by the symbols "dependencies", "devDependencies", and the package name "@tanstack/ai-utils" in the package.json to make the change.
packages/typescript/ai-utils/tests/env.test.ts (1)
23-27: Add one regression test for `window.env` precedence.
Given the shared helper is isomorphic, adding an explicit browser-env precedence case would prevent future regressions.
Proposed test addition

```diff
 describe('getApiKeyFromEnv', () => {
   afterEach(() => {
     vi.unstubAllEnvs()
+    delete (globalThis as { window?: { env?: Record<string, string> } }).window
   })
 @@
   it('should include the env var name in the error message', () => {
     expect(() => getApiKeyFromEnv('MY_PROVIDER_API_KEY')).toThrow(
       'MY_PROVIDER_API_KEY',
     )
   })
+
+  it('should prefer globalThis.window.env over process.env when both exist', () => {
+    vi.stubEnv('TEST_API_KEY', 'from-process')
+    ;(globalThis as { window?: { env?: Record<string, string> } }).window = {
+      env: { TEST_API_KEY: 'from-window' },
+    }
+    expect(getApiKeyFromEnv('TEST_API_KEY')).toBe('from-window')
+  })
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-utils/tests/env.test.ts` around lines 23 - 27, Add a regression test in the existing env.test.ts that verifies browser (window.env) precedence for the isomorphic helper: set process.env.MY_PROVIDER_API_KEY to one value and set global.window = { env: { MY_PROVIDER_API_KEY: 'browser_value' } } then assert getApiKeyFromEnv('MY_PROVIDER_API_KEY') returns 'browser_value'; finally clean up/reset global.window and process.env to avoid test pollution. Use the getApiKeyFromEnv symbol and window.env/process.env references so the test prevents regressions where process.env accidentally overrides browser env.
packages/typescript/ai-utils/tests/model-meta.test.ts (1)
5-106: Add tests for the remaining validation branches.
The suite doesn't currently assert the `max_output_tokens` positivity check or the negative `pricing.input.cached` path from `defineModelMeta`.
✅ Suggested test additions

```diff
 describe('defineModelMeta', () => {
+  it('should throw for zero max output tokens', () => {
+    expect(() =>
+      defineModelMeta({
+        name: 'test',
+        supports: {
+          input: ['text'] as Array<Modality>,
+          output: ['text'] as Array<Modality>,
+        },
+        max_output_tokens: 0,
+      }),
+    ).toThrow('max_output_tokens')
+  })
+
+  it('should throw for negative cached input pricing', () => {
+    expect(() =>
+      defineModelMeta({
+        name: 'test',
+        supports: {
+          input: ['text'] as Array<Modality>,
+          output: ['text'] as Array<Modality>,
+        },
+        pricing: {
+          input: { normal: 1, cached: -0.1 },
+          output: { normal: 1 },
+        },
+      }),
+    ).toThrow('pricing.input.cached')
+  })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-utils/tests/model-meta.test.ts` around lines 5 - 106, Tests are missing coverage for the positivity check on max_output_tokens and the validation that rejects negative pricing.input.cached in defineModelMeta; add two new test cases in model-meta.test.ts that call defineModelMeta with (1) max_output_tokens set to 0 (or negative) and assert it throws 'max_output_tokens', and (2) pricing.input.cached set to a negative value and assert it throws 'pricing' (referencing the defineModelMeta function and the max_output_tokens and pricing.input.cached fields to locate the validation branches).
packages/typescript/ai-utils/tests/transforms.test.ts (1)
1-51: Good test coverage for `transformNullsToUndefined`.
The tests comprehensively cover the main scenarios. One edge case that might be worth adding: arrays containing `null` values directly (e.g., `[1, null, 3]`) to verify whether nulls in arrays are filtered, converted, or preserved.
💡 Optional: Add test for direct null values in arrays

```diff
+  it('should handle arrays with direct null values', () => {
+    const result = transformNullsToUndefined({
+      items: [1, null, 'hello', null],
+    })
+    // Verify expected behavior - adjust assertion based on intended behavior
+    expect(result.items).toEqual([1, null, 'hello', null]) // or filtered?
+  })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-utils/tests/transforms.test.ts` around lines 1 - 51, Add a test case to verify how transformNullsToUndefined treats direct null entries inside arrays (e.g., items: [1, null, 3]) by calling transformNullsToUndefined with an object or array containing such an array and asserting the expected behavior (whether nulls are removed, converted to undefined, or preserved). Place the new spec alongside existing cases in transforms.test.ts and reference transformNullsToUndefined in the test name and assertions so reviewers can see the intended outcome clearly.
packages/typescript/ai-fal/src/utils/client.ts (1)
19-22: Dead code after migration to shared utility.
The `if (!apiKey)` check is now unreachable. According to the AI summary, `getApiKeyFromEnv` throws an error when the key is not found, so `getFalApiKeyFromEnv()` will throw before returning a falsy value. This error handling block is dead code.
Consider removing the redundant check or catching the exception from `getFalApiKeyFromEnv()` if you want to customize the error message:
Option 1: Remove dead code

```diff
 export function configureFalClient(config?: FalClientConfig): void {
   if (config?.proxyUrl) {
     fal.config({
       proxyUrl: config.proxyUrl,
     })
   } else {
     const apiKey = config?.apiKey ?? getFalApiKeyFromEnv()
-    if (!apiKey) {
-      throw new Error('API key is required')
-    }
     fal.config({
       credentials: apiKey,
     })
   }
 }
```

Option 2: Wrap for custom error message

```diff
   } else {
-    const apiKey = config?.apiKey ?? getFalApiKeyFromEnv()
-    if (!apiKey) {
-      throw new Error('API key is required')
-    }
+    let apiKey = config?.apiKey
+    if (!apiKey) {
+      try {
+        apiKey = getFalApiKeyFromEnv()
+      } catch {
+        throw new Error('API key is required')
+      }
+    }
     fal.config({
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-fal/src/utils/client.ts` around lines 19 - 22, The if (!apiKey) branch is dead because getFalApiKeyFromEnv() throws when the key is missing; remove the redundant check or wrap the call in a try/catch to provide a custom error message. Specifically, update the apiKey assignment that uses config?.apiKey ?? getFalApiKeyFromEnv(): either delete the subsequent if (!apiKey) throw new Error('API key is required') block, or change the fallback call to be try { getFalApiKeyFromEnv() } catch (err) { throw new Error(`Fal API key missing: ${err.message}`) } (adjusting throw text as desired) so error handling around getFalApiKeyFromEnv and the apiKey variable is correct.
packages/typescript/ai-gemini/src/utils/client.ts (1)
24-28: Consider improving the fallback error message.
When both `GOOGLE_API_KEY` and `GEMINI_API_KEY` are missing, the error will only mention `GEMINI_API_KEY`. Users expecting `GOOGLE_API_KEY` might be confused.
Suggested improvement

```diff
 export function getGeminiApiKeyFromEnv(): string {
   try {
     return getApiKeyFromEnv('GOOGLE_API_KEY')
   } catch {
-    return getApiKeyFromEnv('GEMINI_API_KEY')
+    try {
+      return getApiKeyFromEnv('GEMINI_API_KEY')
+    } catch {
+      throw new Error('GOOGLE_API_KEY or GEMINI_API_KEY not found in environment')
+    }
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/utils/client.ts` around lines 24 - 28, The fallback currently calls getApiKeyFromEnv('GOOGLE_API_KEY') and on any error returns getApiKeyFromEnv('GEMINI_API_KEY'), which leads to an error that only mentions GEMINI_API_KEY when both are missing; update the logic to attempt the second call inside its own try/catch (or capture both errors) and when both attempts fail throw a single error that clearly lists both environment variables (GOOGLE_API_KEY and GEMINI_API_KEY) and their failure reasons so callers see that both keys are missing/invalid; reference getApiKeyFromEnv('GOOGLE_API_KEY') and getApiKeyFromEnv('GEMINI_API_KEY') when implementing the combined error.
packages/typescript/openai-base/src/tools/web-search-tool.ts (1)
9-12: Consider adding defensive checks for metadata.
The function directly casts and returns `tool.metadata` without validation. If `metadata` is `undefined` or malformed, this will silently pass invalid data to OpenAI.
Suggested defensive check

```diff
 export function convertWebSearchToolToAdapterFormat(tool: Tool): WebSearchTool {
   const metadata = tool.metadata as WebSearchTool
+  if (!metadata || metadata.type !== 'web_search') {
+    throw new Error('Invalid web_search tool: missing or invalid metadata')
+  }
   return metadata
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/tools/web-search-tool.ts` around lines 9 - 12, convertWebSearchToolToAdapterFormat currently casts tool.metadata to WebSearchTool without validation; add defensive checks to ensure tool.metadata exists and has the expected fields before returning it. In convertWebSearchToolToAdapterFormat, validate that tool.metadata is not undefined/null and that required WebSearchTool properties (e.g., any unique keys your type requires) are present and correctly typed, and either throw a clear error or return a safe default/converted object when validation fails; reference the Tool type, WebSearchTool type, and the metadata variable to locate where to add these checks.
packages/typescript/openai-base/src/tools/tool-converter.ts (1)
16-28: Consider adding a runtime exhaustiveness check to guard against drift.
The `SPECIAL_TOOL_NAMES` set and the `switch` statement must stay synchronized. If a name is added to the set but the corresponding case is missing, the tool silently falls through to the function tool converter at line 66, which is potentially incorrect behavior.
A defensive `default` with an error would catch this during development:
♻️ Proposed exhaustiveness guard

```diff
     case 'custom':
       return convertCustomToolToAdapterFormat(tool)
+    default:
+      // This should never be reached if SPECIAL_TOOL_NAMES and switch are in sync
+      throw new Error(`Unhandled special tool: ${toolName}`)
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/tools/tool-converter.ts` around lines 16 - 28, The SPECIAL_TOOL_NAMES set and the switch in the tool conversion logic must be kept in sync to avoid falling back to functionToolConverter incorrectly; add a defensive exhaustiveness guard by updating the switch that branches on tool names (the same switch that currently falls through to functionToolConverter) to include a default branch that throws or logs an error referencing the unexpected tool name, and also add a runtime check that every entry in SPECIAL_TOOL_NAMES is covered by a corresponding switch case (or vice versa) so any drift fails fast during development.
packages/typescript/openai-base/tests/schema-converter.test.ts (1)
32-45: Add a test case for already-nullable fields to improve coverage.
The implementation correctly prevents adding a duplicate 'null' via the `!prop.type.includes('null')` check at line 55 of the converter, but the test suite doesn't verify this edge case. Add a test where a field is already nullable (`type: ['string', 'null']`) to confirm it remains unchanged.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/tests/schema-converter.test.ts` around lines 32 - 45, Add a test to verify makeStructuredOutputCompatible preserves existing nullable types: create a schema where one property already has type ['string','null'] (e.g., nickname) and required includes only name, call makeStructuredOutputCompatible(schema, ['name']) and assert that nickname.type remains ['string','null'] (and name stays 'string'); reference the existing test in schema-converter.test.ts and the makeStructuredOutputCompatible function to place the new case next to the current optional-fields test.
packages/typescript/openai-base/src/tools/custom-tool.ts (1)
9-17: Tool converters should validate metadata schema at runtime using Zod, consistent with input/output validation guidelines.All tool converters in
openai-base/src/tools/(custom-tool, web-search-tool, mcp-tool, file-search-tool, code-interpreter-tool, computer-use-tool, image-generation-tool, web-search-preview-tool) casttool.metadatawithout runtime validation. This pattern risks silently propagating undefined values if metadata structure is malformed. Consider adding Zod schema validation for each tool type to ensure compile-time type safety and runtime correctness, aligning with the schema validation guideline for tool definitions.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/tools/custom-tool.ts` around lines 9 - 17, The convertCustomToolToAdapterFormat function currently casts tool.metadata to CustomTool without runtime checks; add a Zod schema for the CustomTool metadata, validate tool.metadata at the start of convertCustomToolToAdapterFormat (using schema.parse or safeParse), and handle validation failures by throwing or logging a descriptive error so malformed metadata doesn't propagate; reference the convertCustomToolToAdapterFormat function, the metadata variable and the CustomTool shape when adding the Zod schema and validation call.

packages/typescript/ai-openai/src/utils/client.ts (1)
38-41: Consolidate the config type to prevent silent interface drift.

Both `OpenAIClientConfig` and `OpenAICompatibleClientConfig` are structurally identical. The double-cast pattern at line 41 (and its duplicate in `ai-grok`) bypasses TypeScript's structural checking. If either interface diverges, the mismatch only surfaces at runtime or in consuming code downstream.

Options:

- Export `OpenAICompatibleClientConfig` from `openai-base` and reuse it directly (eliminate the local `OpenAIClientConfig` wrapper).
- Create an explicit field-by-field mapper function that documents the boundary contract.
- Or, define a shared internal config type that both packages import.
The current approach leaves the package boundary unchecked.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/utils/client.ts` around lines 38 - 41, The double-cast in toCompatibleConfig permits silent interface drift between OpenAIClientConfig and OpenAICompatibleClientConfig; fix by consolidating types or mapping explicitly: either import and use a single shared config type exported from openai-base (replace OpenAIClientConfig/OpenAICompatibleClientConfig uses and remove the cast in toCompatibleConfig), or implement a field-by-field mapper inside toCompatibleConfig that constructs an OpenAICompatibleClientConfig from the OpenAIClientConfig properties (no unknown/as casts) so the boundary is explicit; update all callers of toCompatibleConfig accordingly (function name toCompatibleConfig is the change point).

packages/typescript/openai-base/src/adapters/responses-tool-converter.ts (1)
46-49: Avoid mutating the schema object returned by `schemaConverter()`.

If a custom converter returns a shared object, forcing `additionalProperties` in place leaks state across later conversions. Build a new object before tightening strict mode.

♻️ Proposed fix

```diff
-  const jsonSchema = schemaConverter(inputSchema, inputSchema.required || [])
-
-  // Ensure additionalProperties is false for strict mode
-  jsonSchema.additionalProperties = false
+  const jsonSchema = {
+    ...schemaConverter(inputSchema, inputSchema.required || []),
+    additionalProperties: false,
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/adapters/responses-tool-converter.ts` around lines 46 - 49, The code mutates the object returned by schemaConverter(inputSchema, ...) by setting jsonSchema.additionalProperties = false which can leak state; instead create a new object (shallow or deep clone as appropriate) from the schemaConverter result and set additionalProperties=false on that new object before returning/using it. Locate the usage of schemaConverter and the jsonSchema variable in responses-tool-converter.ts and replace the in-place mutation with constructing a new schema object (e.g., via object spread or a deep clone helper) then set additionalProperties on that new object so the original converter output is not modified.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-groq/src/utils/schema-converter.ts`:
- Around line 18-37: The post-pass in removeEmptyRequired only recurses into
result.properties and result.items, so empty required arrays inside combinators
and other schema-valued branches remain; update removeEmptyRequired to also
detect and recurse into combinator arrays (result.anyOf, result.oneOf,
result.allOf) by mapping each entry through removeEmptyRequired, and recurse
into object-valued branches like result.additionalProperties (and any other
schema-valued fields you spot) by calling removeEmptyRequired on them so empty
required arrays are pruned throughout the schema tree.
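The recursive prune described above can be sketched as follows. This is a loose JSON-Schema-like shape, not the actual Groq converter types; the point is that `anyOf`/`oneOf`/`allOf` arrays and object-valued `additionalProperties` are walked just like `properties` and `items`:

```typescript
type Schema = Record<string, unknown>

// Drop empty `required` arrays everywhere in the schema tree, including
// combinator branches, not just properties/items.
function removeEmptyRequired(schema: Schema): Schema {
  const result: Schema = { ...schema }
  if (Array.isArray(result.required) && result.required.length === 0) {
    delete result.required
  }
  if (result.properties && typeof result.properties === 'object') {
    result.properties = Object.fromEntries(
      Object.entries(result.properties as Record<string, Schema>).map(
        ([key, value]) => [key, removeEmptyRequired(value)],
      ),
    )
  }
  if (result.items && typeof result.items === 'object') {
    result.items = removeEmptyRequired(result.items as Schema)
  }
  // Combinator arrays: recurse into each branch.
  for (const key of ['anyOf', 'oneOf', 'allOf']) {
    if (Array.isArray(result[key])) {
      result[key] = (result[key] as Array<Schema>).map(removeEmptyRequired)
    }
  }
  // Object-valued additionalProperties is itself a schema.
  if (
    result.additionalProperties &&
    typeof result.additionalProperties === 'object'
  ) {
    result.additionalProperties = removeEmptyRequired(
      result.additionalProperties as Schema,
    )
  }
  return result
}
```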
In `@packages/typescript/openai-base/src/adapters/chat-completions-text.ts`:
- Around line 412-448: The adapter drops provider-specific settings by not
forwarding options.modelOptions in mapOptionsToRequest; update
mapOptionsToRequest (in chat-completions-text.ts) to merge options.modelOptions
into the returned request object (e.g., spread options.modelOptions after the
known top-level fields or merge with tools/stream fields) so provider-specific
chat-completions settings reach the request; ensure the merged keys are
typed/compatible with
OpenAI_SDK.Chat.Completions.ChatCompletionCreateParamsStreaming and cast where
necessary (e.g., when adding tools or stream_options) to avoid type errors.
- Around line 141-152: structuredOutput is passing requestParams (from
mapOptionsToRequest) which may contain stream_options (set by
mapOptionsToRequest) into a non-stream request (stream: false); remove or omit
stream_options when calling structuredOutput so stream_options is only sent when
stream: true. Update structuredOutput (the call that creates
chat.completions.create) to shallow-copy requestParams and delete or
conditionally exclude requestParams.stream_options when stream is false, or
modify mapOptionsToRequest to attach stream_options only when options specify
streaming; reference the functions mapOptionsToRequest and
structuredOutput/requestParams to locate the change.
- Around line 75-81: mapOptionsToRequest currently only returns request
parameters and ignores the optional request/requestInit in TextOptions; update
mapOptionsToRequest to return a tuple or object containing both the
requestParams and requestOptions (derived from TextOptions.request), then change
both calls to this.client.chat.completions.create(...) (the streaming call that
feeds processStreamChunks and the structured-output create call) to pass the
requestOptions as the second argument to create(). Ensure requestOptions is the
same forwarded Request | RequestInit so per-request headers and other
RequestOptions are preserved.
In `@packages/typescript/openai-base/src/adapters/responses-text.ts`:
- Around line 79-82: The tool-call metadata map (toolCallMetadata) currently
only stores index/name/started so emitted tool-call events use the internal id;
update the map value type (used in responses-text adapter) to also store the
Responses API function_call.call_id when you parse function_call items, then
change all emissions of TOOL_CALL_START, TOOL_CALL_ARGS, and TOOL_CALL_END to
include toolCallId: call_id (not the internal id) so that callers using
convertMessagesToInput() can correlate results via function_call_output.call_id;
reference the toolCallMetadata map and the event emission sites in
responses-text.ts and ensure the stored call_id is propagated into those events.
In `@packages/typescript/openai-base/src/adapters/transcription.ts`:
- Around line 61-65: The transcription base currently sets useVerbose true
whenever responseFormat is falsy and model !== 'whisper-1', which hardcodes an
OpenAI model name into the shared adapter; remove that heuristic from the base
by making useVerbose only true when responseFormat === 'verbose_json', and move
any model-specific defaults into the OpenAI-specific adapter via a protected
override (e.g., add a protected method like shouldDefaultToVerbose(model) that
the OpenAI adapter overrides to enable the legacy behavior). Update uses of
responseFormat, model, and useVerbose in
packages/typescript/openai-base/src/adapters/transcription.ts to rely on the
base opt-in rule and implement the override in the OpenAI adapter so non-OpenAI
providers remain provider-agnostic.
- Around line 105-147: The prepareAudioFile method uses ArrayBuffer and atob
without runtime guards; update prepareAudioFile to check for typeof ArrayBuffer
!== 'undefined' before using ArrayBuffer (in the ArrayBuffer branch) and check
typeof atob === 'function' before calling atob (in both data URL and raw base64
branches); if those globals are unavailable, either delegate to a runtime-safe
helper or throw a clear error indicating the missing runtime capability (mention
prepareAudioFile to locate the change) so the method behaves consistently with
the existing File/Blob guards.
In `@packages/typescript/openai-base/src/adapters/tts.ts`:
- Around line 49-56: The returned format/contentType is derived from the
original `format` variable even though `modelOptions` (spread into `request`)
can override `response_format`; update the code around the `request:
OpenAI_SDK.Audio.SpeechCreateParams` construction so the final returned `format`
and `contentType` are computed from the merged `request` (e.g., read
`request.response_format` after spreading `modelOptions`, falling back to a
default like 'mp3'), and use that derived value for the response payload
metadata (affects the same logic referenced on lines ~65-66).
In `@packages/typescript/openai-base/src/adapters/video.ts`:
- Around line 131-139: The method probes should check for the OpenAI SDK's
videos.downloadContent(jobId) entrypoint instead of content/getContent/download
so update the detection to call client.videos.downloadContent(jobId) when
available (fall back to client.videos.retrieve(jobId) only otherwise) and avoid
always falling through to the manual fetch; also fix the expires_at conversion
where new Date(expires_at) is used (in the branches handling
videoInfo.expires_at and the manual download path) by multiplying the seconds
timestamp by 1000 (new Date(expires_at * 1000)) so expiration dates are correct.
- Around line 140-146: The code returns expiresAt using new
Date(videoInfo.expires_at) but OpenAI's expires_at is in seconds, while Date
expects milliseconds; update the construction in the adapter (the block using
videoInfo and jobId) to convert seconds to milliseconds (e.g., multiply the
numeric expires_at by 1000 or use Number(...) * 1000) before passing to new
Date, and preserve the existing undefined handling when expires_at is absent.
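The unit fix above is small enough to sketch in full; the helper name is illustrative, not the adapter's actual API:

```typescript
// OpenAI returns expires_at in epoch seconds; the Date constructor expects
// milliseconds, so multiply by 1000. Absent values stay undefined.
function toExpiresAt(expiresAt?: number): Date | undefined {
  return expiresAt === undefined ? undefined : new Date(expiresAt * 1000)
}
```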
In `@packages/typescript/openai-base/src/tools/mcp-tool.ts`:
- Around line 18-24: In convertMCPToolToAdapterFormat the runtime metadata may
include a type field that currently gets overridden by spreading metadata before
setting type, allowing non-'mcp' types to leak through; fix by ensuring the
'type' property is applied after spreading metadata (or by creating a shallow
copy of metadata with any existing type removed) so the resulting MCPTool always
has type: 'mcp' regardless of tool.metadata contents; reference
convertMCPToolToAdapterFormat, Tool, MCPTool and the local metadata variable
when making this change.
In `@packages/typescript/openai-base/src/utils/schema-converter.ts`:
- Around line 26-66: The branch logic in makeStructuredOutputCompatible
currently recurses and assigns converted children directly, so optional fields
(wasOptional) inside object/array/anyOf branches lose nullability when
result.required is set; change the implementation to always produce a single
convertedProp for each propName (for object: call
makeStructuredOutputCompatible(prop, prop.required||[] ) and assign to
convertedProp; for array: set convertedProp = { ...prop, items:
makeStructuredOutputCompatible(prop.items, prop.items.required||[]) }; for
anyOf: convertedProp = makeStructuredOutputCompatible(prop, prop.required||[]) ;
for oneOf keep throwing) and only after that apply the wasOptional null-widening
transformation to convertedProp (if prop.type exists and not array -> type =
[type,'null']; if prop.type is array and missing 'null' -> append 'null');
finally set properties[propName] = convertedProp and then set
result.properties/result.required/additionalProperties as before.
- Around line 12-24: The function makeStructuredOutputCompatible currently
treats all fields as optional when originalRequired defaults to [], which can
incorrectly widen originally required properties; change the optionality source
to use schema.required when originalRequired is not supplied—inside
makeStructuredOutputCompatible, derive a working required list (e.g., const
requiredSource = originalRequired.length ? originalRequired : (schema.required
?? [])) and use requiredSource instead of originalRequired when computing
wasOptional for each propName, ensuring originally required fields are not
widened to include null before result.required is rewritten.
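The fallback described above can be sketched like this (the surrounding schema shape is simplified; only the `required`-resolution logic is the point):

```typescript
interface SchemaLike {
  required?: Array<string>
}

// When callers pass no explicit required list, fall back to schema.required
// so originally-required fields are not widened to include null.
function resolveRequired(
  schema: SchemaLike,
  originalRequired: Array<string> = [],
): Array<string> {
  return originalRequired.length ? originalRequired : (schema.required ?? [])
}
```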
---
Outside diff comments:
In `@packages/typescript/ai-openai/src/adapters/image.ts`:
- Around line 100-103: JSDoc lists env lookup order incorrectly; update the
comment for getOpenAIApiKeyFromEnv to state it checks window.env (Browser
injected env) first, then process.env (Node.js), and apply the same change to
the matching OpenAI factory helper docs so both accurately reflect the current
lookup order.
---
Minor comments:
In `@packages/typescript/ai-grok/package.json`:
- Around line 42-52: Remove the duplicate runtime packages from devDependencies
in package.json so `@tanstack/ai-utils` and `@tanstack/openai-base` only appear
under "dependencies"; open the package.json in the ai-grok package, delete the
two entries from the "devDependencies" block, ensure the versions remain defined
in the top-level "dependencies" block, and then reinstall (npm/yarn/pnpm) to
update the lockfile consistent with other adapter packages like ai-groq.
In `@packages/typescript/ai-groq/src/utils/client.ts`:
- Around line 1-3: ESLint import/order flags the type import position; reorder
the imports so value imports come before the type-only import. Specifically,
keep import Groq_SDK from 'groq-sdk' and move the value import "import {
generateId as _generateId, getApiKeyFromEnv } from '@tanstack/ai-utils'" before
the type import "import type { ClientOptions } from 'groq-sdk'"; ensure Groq_SDK
and the ai-utils value import appear above the ClientOptions type-only import.
In `@packages/typescript/ai-utils/package.json`:
- Around line 40-43: The package.json devDependencies block is missing "vitest"
even though the test:lib script uses `vitest run`; update the package.json
devDependencies to add "vitest" with the same version used in siblings (e.g.
"^4.0.14") so the package is portable outside the workspace and consistent with
other packages — modify the "devDependencies" entry in
packages/typescript/ai-utils/package.json accordingly.
In `@packages/typescript/ai-utils/src/id.ts`:
- Around line 1-5: The random segment in generateId can be empty because
substring(7) may exceed the length of Math.random().toString(36); update the
randomPart generation in function generateId to consistently extract characters
after the "0." (for example use Math.random().toString(36).substring(2, 10) to
get an 8-character segment) so the id always includes a predictable non-empty
random portion; ensure the change is applied to the randomPart variable in
generateId and keep the rest of the template literal intact.
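A minimal sketch of the suggested fix; the exact template in ai-utils may differ (the prefix parameter and separators here are assumptions):

```typescript
// substring(2, 10) skips the "0." prefix of Math.random().toString(36), so
// the segment comes from the actual random digits; padEnd guards the rare
// case where the base-36 expansion is shorter than 8 characters.
function generateId(prefix = 'id'): string {
  const randomPart = Math.random().toString(36).substring(2, 10).padEnd(8, '0')
  return `${prefix}_${Date.now().toString(36)}_${randomPart}`
}
```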
In `@packages/typescript/openai-base/src/adapters/image.ts`:
- Around line 120-130: validateNumberOfImages currently only checks range and
allows NaN or fractional values through; update validateNumberOfImages to reject
non-integer and NaN inputs by verifying Number.isFinite(numberOfImages) and
Number.isInteger(numberOfImages) (or equivalent) before the range check, and
throw a clear Error if the value is NaN or not an integer (include the invalid
value in the error message) so calls with n: NaN or n: 1.5 are rejected.
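The stricter guard could look like the following sketch (the 1-10 bounds are assumed to match the existing range check):

```typescript
// Number.isInteger rejects both NaN and fractional values, so one check
// covers the two invalid shapes before the range check runs.
function validateNumberOfImages(numberOfImages: number): void {
  if (!Number.isInteger(numberOfImages)) {
    throw new Error(
      `numberOfImages must be an integer, got: ${numberOfImages}`,
    )
  }
  if (numberOfImages < 1 || numberOfImages > 10) {
    throw new Error(
      `numberOfImages must be between 1 and 10, got: ${numberOfImages}`,
    )
  }
}
```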
In `@packages/typescript/openai-base/src/adapters/summarize.ts`:
- Around line 51-53: The summarize() function currently sets const id = '' so
every call returns the same empty id; change it to generate a synthetic unique
id (e.g., UUID or timestamp+random) when initializing id and keep it mutable so
that you can overwrite it later if/when a provider returns a real id; update the
declaration for id in summarize() (and the similar spot around the later block
that currently uses '' at line ~78) to be initialized to a generated unique
string and ensure later assignment from provider response replaces that value.
In `@packages/typescript/openai-base/src/adapters/transcription.ts`:
- Around line 78-85: The mapping in the segments conversion
(response.segments?.map -> TranscriptionSegment) currently uses a truthy check
(avg_logprob ? Math.exp(...) : undefined) which drops valid zero values; update
the confidence assignment in the segments mapping (inside the function that
builds TranscriptionSegment objects) to test for presence explicitly (e.g.,
seg.avg_logprob !== undefined/null or typeof seg.avg_logprob === "number") and
only then compute Math.exp(seg.avg_logprob), so avg_logprob = 0 yields
confidence = Math.exp(0) instead of undefined.
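The presence check boils down to this sketch (the segment type is simplified to the one relevant field):

```typescript
interface Segment {
  avg_logprob?: number
}

// A typeof check instead of truthiness, so avg_logprob === 0 yields
// confidence Math.exp(0) === 1 rather than being dropped.
function segmentConfidence(seg: Segment): number | undefined {
  return typeof seg.avg_logprob === 'number'
    ? Math.exp(seg.avg_logprob)
    : undefined
}
```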
In `@packages/typescript/openai-base/src/tools/code-interpreter-tool.ts`:
- Around line 22-31: The codeInterpreterTool function currently takes a
misleading parameter named container (typed CodeInterpreterTool) and stores the
whole tool object under metadata.container, which breaks
convertCodeInterpreterToolToAdapterFormat expectations; rename the parameter to
toolData (or similar) and change the metadata assignment so metadata.container =
toolData.container (i.e., store only the container value), optionally keeping
the full tool under a separate key like metadata.tool if you need the full
config; update any references to the function signature accordingly.
In `@packages/typescript/openai-base/src/tools/web-search-preview-tool.ts`:
- Around line 12-17: The returned object's type field should be hardcoded for
consistency rather than reading metadata.type; update the converter that builds
the returned object (the code using metadata: WebSearchPreviewTool and returning
{ type, search_context_size, user_location }) to set type to a fixed literal
(e.g., 'web_search_preview') and optionally fall back to that literal when
metadata.type is falsy, leaving the other fields (search_context_size and
user_location) as-is.
In `@packages/typescript/openai-base/src/types/provider-options.ts`:
- Line 7: The 'stop' property in the shared type currently uses the bracket
array syntax (stop?: string | string[]), which trips
`@typescript-eslint/array-type`; change it to use the repo's configured generic
array style by updating the property to stop?: string | Array<string> so the
union remains but the array form satisfies the lint rule and keeps the type
meaning identical.
---
Nitpick comments:
In `@packages/typescript/ai-fal/src/utils/client.ts`:
- Around line 19-22: The if (!apiKey) branch is dead because
getFalApiKeyFromEnv() throws when the key is missing; remove the redundant check
or wrap the call in a try/catch to provide a custom error message. Specifically,
update the apiKey assignment that uses config?.apiKey ?? getFalApiKeyFromEnv():
either delete the subsequent if (!apiKey) throw new Error('API key is required')
block, or change the fallback call to be try { getFalApiKeyFromEnv() } catch
(err) { throw new Error(`Fal API key missing: ${err.message}`) } (adjusting
throw text as desired) so error handling around getFalApiKeyFromEnv and the
apiKey variable is correct.
In `@packages/typescript/ai-gemini/package.json`:
- Around line 42-54: The package.json for ai-gemini lists "@tanstack/ai-utils"
in both "dependencies" and "devDependencies", creating redundancy; remove the
duplicate entry from "devDependencies" so "@tanstack/ai-utils" only appears once
(leave it in "dependencies"), then run the package manager install to refresh
the lockfile and ensure the workspace resolution is correct; locate the entries
by the symbols "dependencies", "devDependencies", and the package name
"@tanstack/ai-utils" in the package.json to make the change.
In `@packages/typescript/ai-gemini/src/utils/client.ts`:
- Around line 24-28: The fallback currently calls
getApiKeyFromEnv('GOOGLE_API_KEY') and on any error returns
getApiKeyFromEnv('GEMINI_API_KEY'), which leads to an error that only mentions
GEMINI_API_KEY when both are missing; update the logic to attempt the second
call inside its own try/catch (or capture both errors) and when both attempts
fail throw a single error that clearly lists both environment variables
(GOOGLE_API_KEY and GEMINI_API_KEY) and their failure reasons so callers see
that both keys are missing/invalid; reference getApiKeyFromEnv('GOOGLE_API_KEY')
and getApiKeyFromEnv('GEMINI_API_KEY') when implementing the combined error.
In `@packages/typescript/ai-openai/src/utils/client.ts`:
- Around line 38-41: The double-cast in toCompatibleConfig permits silent
interface drift between OpenAIClientConfig and OpenAICompatibleClientConfig; fix
by consolidating types or mapping explicitly: either import and use a single
shared config type exported from openai-base (replace
OpenAIClientConfig/OpenAICompatibleClientConfig uses and remove the cast in
toCompatibleConfig), or implement a field-by-field mapper inside
toCompatibleConfig that constructs an OpenAICompatibleClientConfig from the
OpenAIClientConfig properties (no unknown/as casts) so the boundary is explicit;
update all callers of toCompatibleConfig accordingly (function name
toCompatibleConfig is the change point).
In `@packages/typescript/ai-utils/tests/env.test.ts`:
- Around line 23-27: Add a regression test in the existing env.test.ts that
verifies browser (window.env) precedence for the isomorphic helper: set
process.env.MY_PROVIDER_API_KEY to one value and set global.window = { env: {
MY_PROVIDER_API_KEY: 'browser_value' } } then assert
getApiKeyFromEnv('MY_PROVIDER_API_KEY') returns 'browser_value'; finally clean
up/reset global.window and process.env to avoid test pollution. Use the
getApiKeyFromEnv symbol and window.env/process.env references so the test
prevents regressions where process.env accidentally overrides browser env.
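The precedence under test can be pinned down with a stand-in like the one below. The real getApiKeyFromEnv lives in @tanstack/ai-utils and reads the actual globals; this stub only mirrors the documented lookup order (window.env first, then process.env):

```typescript
// Stand-in mirroring the isomorphic lookup order: a browser-injected
// window.env value wins over the Node process.env value.
function lookupApiKey(
  name: string,
  windowEnv: Record<string, string | undefined> | undefined,
  processEnv: Record<string, string | undefined>,
): string | undefined {
  return windowEnv?.[name] ?? processEnv[name]
}
```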
In `@packages/typescript/ai-utils/tests/model-meta.test.ts`:
- Around line 5-106: Tests are missing coverage for the positivity check on
max_output_tokens and the validation that rejects negative pricing.input.cached
in defineModelMeta; add two new test cases in model-meta.test.ts that call
defineModelMeta with (1) max_output_tokens set to 0 (or negative) and assert it
throws 'max_output_tokens', and (2) pricing.input.cached set to a negative value
and assert it throws 'pricing' (referencing the defineModelMeta function and the
max_output_tokens and pricing.input.cached fields to locate the validation
branches).
In `@packages/typescript/ai-utils/tests/transforms.test.ts`:
- Around line 1-51: Add a test case to verify how transformNullsToUndefined
treats direct null entries inside arrays (e.g., items: [1, null, 3]) by calling
transformNullsToUndefined with an object or array containing such an array and
asserting the expected behavior (whether nulls are removed, converted to
undefined, or preserved). Place the new spec alongside existing cases in
transforms.test.ts and reference transformNullsToUndefined in the test name and
assertions so reviewers can see the intended outcome clearly.
In `@packages/typescript/openai-base/src/adapters/responses-tool-converter.ts`:
- Around line 46-49: The code mutates the object returned by
schemaConverter(inputSchema, ...) by setting jsonSchema.additionalProperties =
false which can leak state; instead create a new object (shallow or deep clone
as appropriate) from the schemaConverter result and set
additionalProperties=false on that new object before returning/using it. Locate
the usage of schemaConverter and the jsonSchema variable in
responses-tool-converter.ts and replace the in-place mutation with constructing
a new schema object (e.g., via object spread or a deep clone helper) then set
additionalProperties on that new object so the original converter output is not
modified.
In `@packages/typescript/openai-base/src/tools/computer-use-tool.ts`:
- Around line 23-31: Function signature inconsistent: change the parameter of
computerUseTool from ComputerUseTool to Omit<ComputerUseTool, 'type'> so callers
don't have to pass type. Update the factory function computerUseTool to accept
toolData: Omit<ComputerUseTool, 'type'>, keep returning the same object with
name: 'computer_use_preview' and metadata: { ...toolData }, and ensure any call
sites that previously passed type no longer do so (or remove the redundant type
property).
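The signature change amounts to this sketch; the display_* fields are illustrative stand-ins for whatever ComputerUseTool actually carries besides type:

```typescript
interface ComputerUseTool {
  type: 'computer_use_preview'
  display_width: number
  display_height: number
}

// Omit<ComputerUseTool, 'type'> means callers never pass `type`; the
// factory pins the name, and metadata stays a plain spread of the input.
function computerUseTool(toolData: Omit<ComputerUseTool, 'type'>) {
  return {
    name: 'computer_use_preview' as const,
    metadata: { ...toolData },
  }
}
```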
In `@packages/typescript/openai-base/src/tools/custom-tool.ts`:
- Around line 9-17: The convertCustomToolToAdapterFormat function currently
casts tool.metadata to CustomTool without runtime checks; add a Zod schema for
the CustomTool metadata, validate tool.metadata at the start of
convertCustomToolToAdapterFormat (using schema.parse or safeParse), and handle
validation failures by throwing or logging a descriptive error so malformed
metadata doesn't propagate; reference the convertCustomToolToAdapterFormat
function, the metadata variable and the CustomTool shape when adding the Zod
schema and validation call.
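The comment asks for a Zod schema; the dependency-free guard below sketches the same fail-fast idea without the library, and the CustomTool fields shown are assumptions about its shape:

```typescript
interface CustomTool {
  name: string
  description?: string
}

// Validate at the boundary instead of casting, so malformed metadata
// produces a descriptive error rather than silent undefined fields.
function parseCustomToolMetadata(metadata: unknown): CustomTool {
  if (
    typeof metadata !== 'object' ||
    metadata === null ||
    typeof (metadata as { name?: unknown }).name !== 'string'
  ) {
    throw new Error(
      'custom tool metadata is malformed: expected { name: string, ... }',
    )
  }
  return metadata as CustomTool
}
```

With Zod the same check would be `schema.safeParse(tool.metadata)` followed by a throw on failure.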
In `@packages/typescript/openai-base/src/tools/file-search-tool.ts`:
- Around line 15-26: convertFileSearchToolToAdapterFormat currently casts
tool.metadata to OpenAI.Responses.FileSearchTool without validation; add a
runtime type guard or validation at the start of
convertFileSearchToolToAdapterFormat to verify required fields (e.g.,
vector_store_ids, max_num_results, ranking_options, filters) and that
max_num_results meets the same constraints enforced by fileSearchTool, and throw
or return a clear error if validation fails so malformed metadata cannot produce
invalid adapter requests.
In `@packages/typescript/openai-base/src/tools/tool-converter.ts`:
- Around line 16-28: The SPECIAL_TOOL_NAMES set and the switch in the tool
conversion logic must be kept in sync to avoid falling back to
functionToolConverter incorrectly; add a defensive exhaustiveness guard by
updating the switch that branches on tool names (the same switch that currently
falls through to functionToolConverter) to include a default branch that throws
or logs an error referencing the unexpected tool name, and also add a runtime
check that every entry in SPECIAL_TOOL_NAMES is covered by a corresponding
switch case (or vice versa) so any drift fails fast during development.
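A minimal sketch of the two-part guard described above; the tool names here are illustrative, not the real SPECIAL_TOOL_NAMES entries:

```typescript
const SPECIAL_TOOL_NAMES = new Set(['web_search', 'mcp', 'code_interpreter'])

function convertSpecialTool(name: string): string {
  switch (name) {
    case 'web_search':
      return 'web-search-converter'
    case 'mcp':
      return 'mcp-converter'
    case 'code_interpreter':
      return 'code-interpreter-converter'
    default:
      // Fail fast instead of silently falling back to the function
      // converter when a name is in the set but missing a case.
      throw new Error(`Unhandled special tool name: ${name}`)
  }
}

// Development-time drift check: every set entry must have a switch case.
function assertSpecialToolCoverage(): void {
  for (const name of SPECIAL_TOOL_NAMES) {
    convertSpecialTool(name) // throws if the switch drifted
  }
}
```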
In `@packages/typescript/openai-base/src/tools/web-search-tool.ts`:
- Around line 9-12: convertWebSearchToolToAdapterFormat currently casts
tool.metadata to WebSearchTool without validation; add defensive checks to
ensure tool.metadata exists and has the expected fields before returning it. In
convertWebSearchToolToAdapterFormat, validate that tool.metadata is not
undefined/null and that required WebSearchTool properties (e.g., any unique keys
your type requires) are present and correctly typed, and either throw a clear
error or return a safe default/converted object when validation fails; reference
the Tool type, WebSearchTool type, and the metadata variable to locate where to
add these checks.
In `@packages/typescript/openai-base/tests/schema-converter.test.ts`:
- Around line 32-45: Add a test to verify makeStructuredOutputCompatible
preserves existing nullable types: create a schema where one property already
has type ['string','null'] (e.g., nickname) and required includes only name,
call makeStructuredOutputCompatible(schema, ['name']) and assert that
nickname.type remains ['string','null'] (and name stays 'string'); reference the
existing test in schema-converter.test.ts and the makeStructuredOutputCompatible
function to place the new case next to the current optional-fields test.
```ts
try {
  const stream = await this.client.chat.completions.create({
    ...requestParams,
    stream: true,
  })

  yield* this.processStreamChunks(stream, options, aguiState)
```
🧩 Analysis chain

🏁 Scripts executed against TanStack/ai (output lengths omitted):

```shell
find packages/typescript -name "chat-completions-text.ts" -type f
wc -l packages/typescript/openai-base/src/adapters/chat-completions-text.ts
cat -n packages/typescript/openai-base/src/adapters/chat-completions-text.ts
rg -t ts "type TextOptions" packages/typescript --max-count 5
fd -t ts "\.ts$" packages/typescript -type f | xargs grep -l "export.*type TextOptions" | head -5
rg -t ts "export.*TextOptions" packages/typescript/ai/src --max-count 10
cat -n packages/typescript/ai/src/types.ts | head -150
rg -A 20 "export interface TextOptions" packages/typescript/ai/src/types.ts
rg -A 60 "export interface TextOptions" packages/typescript/ai/src/types.ts
rg "request\?" packages/typescript/ai/src/types.ts -B 2 -A 2
rg -A 20 "export interface StructuredOutputOptions" packages/typescript/ai/src
rg -A 20 "export interface StructuredOutputOptions" packages/typescript/adapters/src
rg -B 5 "request\?: Request" packages/typescript/ai/src/types.ts
rg "chat\.completions\.create" packages/typescript/openai-base/src -A 5
rg "client.chat.completions.create" packages/typescript --max-count 10 -A 3
rg "mapOptionsToRequest" packages/typescript/openai-base/src/adapters/chat-completions-text.ts -A 30 -B 2
rg "options.request\|chatOptions.request" packages/typescript/openai-base/src
rg "options\.request\|chatOptions\.request" packages/typescript --max-count 10
fd "openai" packages/typescript -type d
cat packages/typescript/openai-base/package.json | grep -A 5 -B 5 '"openai"'
find packages/typescript -name "package.json" -exec grep -l '"openai"' {} \;
cat packages/typescript/ai-groq/src/adapters/text.ts | head -200 | tail -100
rg "\.request" packages/typescript/ai/src/types.ts -B 3 -A 3
# web_search: OpenAI Node SDK chat.completions.create second argument RequestOptions headers
rg "request\?" packages/typescript/ai/src/types.ts -B 5 -A 5
```
🌐 Web query:
OpenAI Node.js SDK chat.completions.create RequestOptions second argument
💡 Result:

In the OpenAI Node.js SDK (latest versions as of 2026), the client.chat.completions.create method has the signature:

client.chat.completions.create(ChatCompletionCreateParams body, RequestOptions? options): ChatCompletion | Stream

The first argument is the ChatCompletionCreateParams object containing parameters like model, messages, temperature, etc. The second (optional) argument is RequestOptions, which allows customizing the HTTP request. It typically includes properties for advanced request configuration such as custom headers, query parameters, timeout settings, or fetch options (e.g., proxy configuration, custom fetch implementation). This is part of the SDK's low-level request customization, generated from the OpenAPI spec using Stainless. Examples from official docs show basic usage with only the first argument, but the TypeScript reference explicitly documents the two-argument form.

Usage example:

```ts
const completion = await client.chat.completions.create(
  { model: 'gpt-4o', messages: [{ role: 'user', content: 'Hello!' }] },
  {
    // RequestOptions
    headers: { 'Custom-Header': 'value' },
    // other options like timeout, etc.
  },
)
```

RequestOptions is an alias for Opts.RequestOptions in the SDK types, used across resource methods for request-level overrides.
Citations:
- 1: https://developers.openai.com/api/reference/typescript/resources/chat/subresources/completions/methods/create/
- 2: https://developers.openai.com/api/reference/typescript/resources/chat/subresources/completions
- 3: https://jsr.io/@openai/openai/doc/~/default.RequestOptions
- 4: https://github.com/openai/openai-node
- 5: https://developers.openai.com/api/reference/typescript
- 6: https://github.com/openai/openai-node/blob/master/api.md
🏁 Script executed:
rg -A 50 "protected mapOptionsToRequest" packages/typescript/openai-base/src/adapters/chat-completions-text.ts
Length of output: 1607
🏁 Script executed:
rg "options\.request\|chatOptions" packages/typescript/openai-base/src/adapters/chat-completions-text.ts
Length of output: 37
🏁 Script executed:
cat -n packages/typescript/openai-base/src/adapters/chat-completions-text.ts | sed -n '61,82p'
Length of output: 870
🏁 Script executed:
cat -n packages/typescript/openai-base/src/adapters/chat-completions-text.ts | sed -n '129,153p'
Length of output: 936
Forward request-scoped headers through both Chat Completions calls.
The OpenAI SDK's chat.completions.create() method accepts a second argument for RequestOptions, which includes custom headers and other request configuration. The TextOptions interface provides a request?: Request | RequestInit field for this purpose, but it's never extracted or passed to either the streaming call (line 76) or structured-output call (line 141). Update mapOptionsToRequest() to return both the request parameters and the request options, then pass them as the second argument to both create() invocations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/openai-base/src/adapters/chat-completions-text.ts` around
lines 75 - 81, mapOptionsToRequest currently only returns request parameters and
ignores the optional request/requestInit in TextOptions; update
mapOptionsToRequest to return a tuple or object containing both the
requestParams and requestOptions (derived from TextOptions.request), then change
both calls to this.client.chat.completions.create(...) (the streaming call that
feeds processStreamChunks and the structured-output create call) to pass the
requestOptions as the second argument to create(). Ensure requestOptions is the
same forwarded Request | RequestInit so per-request headers and other
RequestOptions are preserved.
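As an illustration of the change being requested, here is a minimal sketch. Everything below is a simplified assumption for demonstration: `TextOptions` is reduced to two fields, and `fakeCreate` stands in for the SDK's `create(body, requestOptions)` — it is not the actual @tanstack/openai-base implementation.

```typescript
interface RequestOptions {
  headers?: Record<string, string>
}

interface TextOptions {
  temperature?: number
  // Caller-supplied per-request overrides (simplified from Request | RequestInit).
  request?: { headers?: Record<string, string> }
}

function mapOptionsToRequest(
  model: string,
  options: TextOptions,
): { params: Record<string, unknown>; requestOptions?: RequestOptions } {
  const params: Record<string, unknown> = { model }
  if (options.temperature !== undefined) {
    params.temperature = options.temperature
  }
  // Extract the request-scoped options instead of silently dropping them.
  const requestOptions = options.request?.headers
    ? { headers: options.request.headers }
    : undefined
  return { params, requestOptions }
}

// Stand-in for client.chat.completions.create(body, requestOptions);
// it records its arguments so the forwarding is observable.
const calls: Array<{ body: Record<string, unknown>; opts?: RequestOptions }> = []
function fakeCreate(body: Record<string, unknown>, opts?: RequestOptions): void {
  calls.push({ body, opts })
}

const { params, requestOptions } = mapOptionsToRequest('gpt-4o-mini', {
  temperature: 0,
  request: { headers: { 'X-Request-Id': 'abc123' } },
})
fakeCreate(params, requestOptions) // per-request headers now reach the HTTP layer
```

The same `{ params, requestOptions }` pair would be passed to both the streaming and structured-output calls so the two code paths stay in sync.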
export function makeStructuredOutputCompatible(
  schema: Record<string, any>,
  originalRequired: Array<string> = [],
): Record<string, any> {
  const result = { ...schema }

  if (result.type === 'object' && result.properties) {
    const properties = { ...result.properties }
    const allPropertyNames = Object.keys(properties)

    for (const propName of allPropertyNames) {
      const prop = properties[propName]
      const wasOptional = !originalRequired.includes(propName)
Use schema.required as the default source of optionality.
With originalRequired = [], a one-arg call like makeStructuredOutputCompatible(schema) treats every root property as optional. Line 24 then widens even originally required fields to include null, which changes the schema contract before required is rewritten.
🛠️ Suggested fix
 export function makeStructuredOutputCompatible(
   schema: Record<string, any>,
-  originalRequired: Array<string> = [],
+  originalRequired?: Array<string>,
 ): Record<string, any> {
   const result = { ...schema }
+  const required =
+    originalRequired ??
+    (Array.isArray(result.required) ? result.required : [])

-      const wasOptional = !originalRequired.includes(propName)
+      const wasOptional = !required.includes(propName)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/openai-base/src/utils/schema-converter.ts` around lines
12 - 24, The function makeStructuredOutputCompatible currently treats all fields
as optional when originalRequired defaults to [], which can incorrectly widen
originally required properties; change the optionality source to use
schema.required when originalRequired is not supplied—inside
makeStructuredOutputCompatible, derive a working required list (e.g., const
requiredSource = originalRequired.length ? originalRequired : (schema.required
?? [])) and use requiredSource instead of originalRequired when computing
wasOptional for each propName, ensuring originally required fields are not
widened to include null before result.required is rewritten.
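The suggested default can be sketched as a pure function. This is an illustration of the review suggestion only, not the package's actual code — the null-widening step is reduced to its essence:

```typescript
// Derive the optionality source from schema.required when the caller does not
// pass an explicit list, so originally-required fields keep their type.
function makeStructuredOutputCompatible(
  schema: Record<string, any>,
  originalRequired?: Array<string>,
): Record<string, any> {
  const result = { ...schema }
  // Fall back to the schema's own required list for one-arg calls.
  const required =
    originalRequired ??
    (Array.isArray(result.required) ? result.required : [])

  if (result.type === 'object' && result.properties) {
    const properties = { ...result.properties }
    for (const propName of Object.keys(properties)) {
      const prop = properties[propName]
      const wasOptional = !required.includes(propName)
      // Widen only genuinely optional fields to accept null; structured
      // outputs then list every property in `required`.
      properties[propName] = wasOptional
        ? { ...prop, type: [prop.type, 'null'] }
        : prop
    }
    result.properties = properties
    result.required = Object.keys(properties)
  }
  return result
}

// One-arg call: `name` stays non-nullable because schema.required names it.
const out = makeStructuredOutputCompatible({
  type: 'object',
  properties: { name: { type: 'string' }, age: { type: 'number' } },
  required: ['name'],
})
```

With the old `= []` default, the one-arg call above would have widened `name` to `['string', 'null']` as well, changing the schema contract.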
Force-pushed from ce49ae5 to 914b883
Actionable comments posted: 3
🧹 Nitpick comments (5)
packages/typescript/openai-base/tests/mcp-tool.test.ts (1)
20-33: Align the preservation assertion with the test's stated intent. The test says it preserves metadata fields other than type, but it only asserts server_url. Add an assertion for server_description (or assert the full projected object) to avoid false confidence.
Suggested diff
 it('should preserve metadata fields other than type', () => {
 @@
   const result = convertMCPToolToAdapterFormat(tool)
   expect(result.type).toBe('mcp')
   expect(result.server_url).toBe('https://example.com/mcp')
+  expect(result.server_description).toBe('Test server')
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/tests/mcp-tool.test.ts` around lines 20 - 33, The test currently only asserts that convertMCPToolToAdapterFormat(tool) preserves result.server_url even though it's intended to preserve all metadata fields other than type; update the assertion to also check result.server_description (or assert the full projected object equals the expected projection) so the test verifies both server_url and server_description are preserved from the input tool.metadata when calling convertMCPToolToAdapterFormat.
packages/typescript/openai-base/src/types/provider-options.ts (1)
7-7: Use Array<string> per ESLint configuration. Static analysis indicates the project prefers Array<string> over string[].
✏️ Suggested fix
- stop?: string | string[]
+ stop?: string | Array<string>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/types/provider-options.ts` at line 7, The stop property is declared using the `string[]` shorthand; update its type to use `Array<string>` to satisfy the project's ESLint rule. Locate the `stop` property in the provider options type (the `stop?: string | string[]` declaration in provider-options.ts) and change the union to use `Array<string>` (e.g., `stop?: string | Array<string>`) so the signature remains identical but conforms to the Array<string> style.
packages/typescript/ai-grok/src/utils/client.ts (1)
30-33: Consider avoiding the double cast through unknown. The as unknown as OpenAICompatibleClientConfig pattern bypasses type safety. If GrokClientConfig (which extends ClientOptions) is structurally compatible with OpenAICompatibleClientConfig, consider either:
- Aligning the types so direct assignment works
- Using a type assertion with a comment explaining why it's safe
This isn't blocking since the runtime behavior is correct, but the double cast hides potential type mismatches.
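A sketch of the first option (the type shapes below are assumptions for illustration, not the real GrokClientConfig / OpenAICompatibleClientConfig definitions):

```typescript
interface OpenAICompatibleClientConfig {
  apiKey: string
  baseURL?: string
}

// If the provider config structurally extends the compatible config,
// no cast through `unknown` is needed at the return site.
interface GrokClientConfig extends OpenAICompatibleClientConfig {
  timeout?: number // provider-specific extras ride along harmlessly
}

function toCompatibleConfig(
  config: GrokClientConfig,
): OpenAICompatibleClientConfig {
  // Structural subtyping makes this assignment-safe without any assertion.
  return { ...config, baseURL: config.baseURL ?? 'https://api.x.ai/v1' }
}

const cfg = toCompatibleConfig({ apiKey: 'test-key' })
```

Because TypeScript types are structural, the extra provider-specific fields do not prevent the direct return, and the compiler still catches a genuinely incompatible shape.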
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/utils/client.ts` around lines 30 - 33, The return currently uses a double cast "as unknown as OpenAICompatibleClientConfig" which bypasses type safety; update the types or the cast so we don't go through unknown: either make GrokClientConfig (or ClientOptions) structurally extend OpenAICompatibleClientConfig so the object returned by the function can be returned directly, or replace the double-cast with a single assert "as OpenAICompatibleClientConfig" and add a short comment above the return explaining why the shape is compatible (refer to the returned object created by the spread of config and baseURL in client.ts). Ensure the change touches the return expression that builds the config object so the type assertion is safe and clear.
packages/typescript/openai-base/src/adapters/chat-completions-text.ts (1)
413-428: Use console.error for error logging consistency. Line 415 uses console.log for error logging, while the rest of the file uses console.error for errors (lines 117-122, 200-203). This inconsistency could make log filtering more difficult.
♻️ Suggested fix
 } catch (error: unknown) {
   const err = error as Error & { code?: string }
-  console.log(`[${this.name}] Stream ended with error:`, err.message)
+  console.error(`[${this.name}] Stream ended with error:`, err.message)
   // Emit AG-UI RUN_ERROR
   yield {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/adapters/chat-completions-text.ts` around lines 413 - 428, Replace the inconsistent console.log call in the catch block with console.error so error logging matches the rest of the file; specifically in the catch (error: unknown) block where you cast to const err = error as Error & { code?: string } and log `[${this.name}] Stream ended with error:`, change that call to use console.error and keep the rest of the RUN_ERROR yield (runId, model, timestamp, error.message, error.code) unchanged.
packages/typescript/ai-grok/src/adapters/text.ts (1)
27-41: Successful refactor to use shared base adapter. The adapter correctly extends OpenAICompatibleChatCompletionsTextAdapter, eliminating ~500 lines of duplicated streaming and structured output logic.
Nit: Lines 35-36 declare kind and name as class properties, but these are already set by the parent class (via constructor parameter and parent's kind declaration). Consider removing the redundant declarations to clarify that values come from the inheritance chain.
♻️ Optional: Remove redundant property declarations
 export class GrokTextAdapter<
   TModel extends (typeof GROK_CHAT_MODELS)[number],
 > extends OpenAICompatibleChatCompletionsTextAdapter<
   TModel,
   ResolveProviderOptions<TModel>,
   ResolveInputModalities<TModel>,
   GrokMessageMetadataByModality
 > {
-  readonly kind = 'text' as const
-  readonly name = 'grok' as const
+  override readonly name = 'grok' as const

   constructor(config: GrokTextConfig, model: TModel) {
     super(toCompatibleConfig(config), model, 'grok')
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/adapters/text.ts` around lines 27 - 41, Remove the redundant class properties readonly kind and readonly name from GrokTextAdapter since the parent OpenAICompatibleChatCompletionsTextAdapter already sets these via its constructor parameter (the 'grok' argument passed in the GrokTextAdapter constructor); delete the two property declarations (kind and name) so the adapter inherits them from the base class and ensure no other code in this class relies on those explicit declarations.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-groq/src/utils/client.ts`:
- Line 3: The import order violates ESLint import/order: move the concrete
import of generateId and getApiKeyFromEnv from '@tanstack/ai-utils' so it comes
before the `import type` from 'groq-sdk'; update the top of the file so the
non-type import (generateId as _generateId, getApiKeyFromEnv) appears before the
`import type { ... } from 'groq-sdk'` to satisfy the rule.
In `@packages/typescript/ai-groq/tests/schema-converter.test.ts`:
- Around line 32-64: Rename the test case title to reflect the actual behavior
being tested: update the it(...) description string in the test where
makeGroqStructuredOutputCompatible and checkNoEmptyRequired are used (currently
"'should remove empty required arrays inside oneOf variants'") to a clearer name
such as "'should not have empty required arrays anywhere in the result'"; ensure
only the descriptive string is changed and leave the test body (the schema, call
to makeGroqStructuredOutputCompatible, and the checkNoEmptyRequired helper)
untouched.
In `@packages/typescript/ai-utils/tests/env.test.ts`:
- Around line 14-16: The test is flaky because it relies on host environment
state; make the assertions deterministic by explicitly controlling process.env
within the test: save the original value of the env var, delete it (delete
process.env['NONEXISTENT_KEY']) before calling
getApiKeyFromEnv('NONEXISTENT_KEY') so the function reliably throws, and then
restore the original value after the test (or use beforeEach/afterEach). Do the
same for the other test that references 'MY_PROVIDER_API_KEY' — set
process.env['MY_PROVIDER_API_KEY'] to a known value for the success case and
restore the original afterward; reference getApiKeyFromEnv in the test file to
locate the tests to change.
---
Nitpick comments:
In `@packages/typescript/ai-grok/src/adapters/text.ts`:
- Around line 27-41: Remove the redundant class properties readonly kind and
readonly name from GrokTextAdapter since the parent
OpenAICompatibleChatCompletionsTextAdapter already sets these via its
constructor parameter (the 'grok' argument passed in the GrokTextAdapter
constructor); delete the two property declarations (kind and name) so the
adapter inherits them from the base class and ensure no other code in this class
relies on those explicit declarations.
In `@packages/typescript/ai-grok/src/utils/client.ts`:
- Around line 30-33: The return currently uses a double cast "as unknown as
OpenAICompatibleClientConfig" which bypasses type safety; update the types or
the cast so we don't go through unknown: either make GrokClientConfig (or
ClientOptions) structurally extend OpenAICompatibleClientConfig so the object
returned by the function can be returned directly, or replace the double-cast
with a single assert "as OpenAICompatibleClientConfig" and add a short comment
above the return explaining why the shape is compatible (refer to the returned
object created by the spread of config and baseURL in client.ts). Ensure the
change touches the return expression that builds the config object so the type
assertion is safe and clear.
In `@packages/typescript/openai-base/src/adapters/chat-completions-text.ts`:
- Around line 413-428: Replace the inconsistent console.log call in the catch
block with console.error so error logging matches the rest of the file;
specifically in the catch (error: unknown) block where you cast to const err =
error as Error & { code?: string } and log `[${this.name}] Stream ended with
error:`, change that call to use console.error and keep the rest of the
RUN_ERROR yield (runId, model, timestamp, error.message, error.code) unchanged.
In `@packages/typescript/openai-base/src/types/provider-options.ts`:
- Line 7: The stop property is declared using the `string[]` shorthand; update
its type to use `Array<string>` to satisfy the project's ESLint rule. Locate the
`stop` property in the provider options type (the `stop?: string | string[]`
declaration in provider-options.ts) and change the union to use `Array<string>`
(e.g., `stop?: string | Array<string>`) so the signature remains identical but
conforms to the Array<string> style.
In `@packages/typescript/openai-base/tests/mcp-tool.test.ts`:
- Around line 20-33: The test currently only asserts that
convertMCPToolToAdapterFormat(tool) preserves result.server_url even though it’s
intended to preserve all metadata fields other than type; update the assertion
to also check result.server_description (or assert the full projected object
equals the expected projection) so the test verifies both server_url and
server_description are preserved from the input tool.metadata when calling
convertMCPToolToAdapterFormat.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a438aa99-181f-4c47-8e75-5a71a52dae2b
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (104)
- .changeset/add-ai-utils-package.md
- .changeset/add-openai-base-package.md
- .changeset/refactor-providers-to-shared-packages.md
- packages/typescript/ai-anthropic/package.json
- packages/typescript/ai-anthropic/src/utils/client.ts
- packages/typescript/ai-elevenlabs/package.json
- packages/typescript/ai-elevenlabs/src/realtime/token.ts
- packages/typescript/ai-fal/package.json
- packages/typescript/ai-fal/src/utils/client.ts
- packages/typescript/ai-gemini/package.json
- packages/typescript/ai-gemini/src/utils/client.ts
- packages/typescript/ai-grok/package.json
- packages/typescript/ai-grok/src/adapters/image.ts
- packages/typescript/ai-grok/src/adapters/summarize.ts
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-grok/src/tools/function-tool.ts
- packages/typescript/ai-grok/src/tools/tool-converter.ts
- packages/typescript/ai-grok/src/utils/client.ts
- packages/typescript/ai-grok/src/utils/index.ts
- packages/typescript/ai-grok/src/utils/schema-converter.ts
- packages/typescript/ai-grok/tests/grok-adapter.test.ts
- packages/typescript/ai-groq/package.json
- packages/typescript/ai-groq/src/utils/client.ts
- packages/typescript/ai-groq/src/utils/schema-converter.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-groq/tests/schema-converter.test.ts
- packages/typescript/ai-ollama/package.json
- packages/typescript/ai-ollama/src/utils/client.ts
- packages/typescript/ai-openai/package.json
- packages/typescript/ai-openai/src/adapters/image.ts
- packages/typescript/ai-openai/src/adapters/summarize.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/src/adapters/transcription.ts
- packages/typescript/ai-openai/src/adapters/tts.ts
- packages/typescript/ai-openai/src/adapters/video.ts
- packages/typescript/ai-openai/src/tools/apply-patch-tool.ts
- packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts
- packages/typescript/ai-openai/src/tools/computer-use-tool.ts
- packages/typescript/ai-openai/src/tools/custom-tool.ts
- packages/typescript/ai-openai/src/tools/file-search-tool.ts
- packages/typescript/ai-openai/src/tools/function-tool.ts
- packages/typescript/ai-openai/src/tools/image-generation-tool.ts
- packages/typescript/ai-openai/src/tools/index.ts
- packages/typescript/ai-openai/src/tools/local-shell-tool.ts
- packages/typescript/ai-openai/src/tools/mcp-tool.ts
- packages/typescript/ai-openai/src/tools/shell-tool.ts
- packages/typescript/ai-openai/src/tools/tool-choice.ts
- packages/typescript/ai-openai/src/tools/tool-converter.ts
- packages/typescript/ai-openai/src/tools/web-search-preview-tool.ts
- packages/typescript/ai-openai/src/tools/web-search-tool.ts
- packages/typescript/ai-openai/src/utils/client.ts
- packages/typescript/ai-openai/src/utils/schema-converter.ts
- packages/typescript/ai-openrouter/package.json
- packages/typescript/ai-openrouter/src/utils/client.ts
- packages/typescript/ai-utils/package.json
- packages/typescript/ai-utils/src/env.ts
- packages/typescript/ai-utils/src/id.ts
- packages/typescript/ai-utils/src/index.ts
- packages/typescript/ai-utils/src/model-meta/define.ts
- packages/typescript/ai-utils/src/model-meta/types.ts
- packages/typescript/ai-utils/src/transforms.ts
- packages/typescript/ai-utils/tests/env.test.ts
- packages/typescript/ai-utils/tests/id.test.ts
- packages/typescript/ai-utils/tests/model-meta.test.ts
- packages/typescript/ai-utils/tests/transforms.test.ts
- packages/typescript/ai-utils/tsconfig.json
- packages/typescript/ai-utils/vite.config.ts
- packages/typescript/openai-base/package.json
- packages/typescript/openai-base/src/adapters/chat-completions-text.ts
- packages/typescript/openai-base/src/adapters/chat-completions-tool-converter.ts
- packages/typescript/openai-base/src/adapters/image.ts
- packages/typescript/openai-base/src/adapters/responses-text.ts
- packages/typescript/openai-base/src/adapters/responses-tool-converter.ts
- packages/typescript/openai-base/src/adapters/summarize.ts
- packages/typescript/openai-base/src/adapters/transcription.ts
- packages/typescript/openai-base/src/adapters/tts.ts
- packages/typescript/openai-base/src/adapters/video.ts
- packages/typescript/openai-base/src/index.ts
- packages/typescript/openai-base/src/tools/apply-patch-tool.ts
- packages/typescript/openai-base/src/tools/code-interpreter-tool.ts
- packages/typescript/openai-base/src/tools/computer-use-tool.ts
- packages/typescript/openai-base/src/tools/custom-tool.ts
- packages/typescript/openai-base/src/tools/file-search-tool.ts
- packages/typescript/openai-base/src/tools/function-tool.ts
- packages/typescript/openai-base/src/tools/image-generation-tool.ts
- packages/typescript/openai-base/src/tools/index.ts
- packages/typescript/openai-base/src/tools/local-shell-tool.ts
- packages/typescript/openai-base/src/tools/mcp-tool.ts
- packages/typescript/openai-base/src/tools/shell-tool.ts
- packages/typescript/openai-base/src/tools/tool-choice.ts
- packages/typescript/openai-base/src/tools/tool-converter.ts
- packages/typescript/openai-base/src/tools/web-search-preview-tool.ts
- packages/typescript/openai-base/src/tools/web-search-tool.ts
- packages/typescript/openai-base/src/types/config.ts
- packages/typescript/openai-base/src/types/message-metadata.ts
- packages/typescript/openai-base/src/types/provider-options.ts
- packages/typescript/openai-base/src/utils/client.ts
- packages/typescript/openai-base/src/utils/schema-converter.ts
- packages/typescript/openai-base/tests/chat-completions-text.test.ts
- packages/typescript/openai-base/tests/mcp-tool.test.ts
- packages/typescript/openai-base/tests/responses-text.test.ts
- packages/typescript/openai-base/tests/schema-converter.test.ts
- packages/typescript/openai-base/tsconfig.json
- packages/typescript/openai-base/vite.config.ts
✅ Files skipped from review due to trivial changes (36)
- packages/typescript/ai-ollama/package.json
- packages/typescript/ai-fal/package.json
- packages/typescript/ai-groq/package.json
- packages/typescript/ai-utils/tests/id.test.ts
- packages/typescript/ai-elevenlabs/package.json
- packages/typescript/ai-utils/tsconfig.json
- packages/typescript/ai-elevenlabs/src/realtime/token.ts
- packages/typescript/ai-openai/package.json
- packages/typescript/openai-base/tsconfig.json
- packages/typescript/openai-base/src/utils/client.ts
- packages/typescript/ai-anthropic/package.json
- packages/typescript/ai-grok/package.json
- packages/typescript/ai-gemini/package.json
- packages/typescript/ai-openai/src/tools/tool-choice.ts
- packages/typescript/ai-openrouter/package.json
- packages/typescript/openai-base/src/tools/web-search-preview-tool.ts
- packages/typescript/openai-base/src/types/config.ts
- packages/typescript/ai-utils/src/id.ts
- .changeset/add-ai-utils-package.md
- packages/typescript/openai-base/src/tools/local-shell-tool.ts
- packages/typescript/openai-base/src/tools/apply-patch-tool.ts
- packages/typescript/openai-base/src/tools/tool-converter.ts
- packages/typescript/ai-grok/src/utils/index.ts
- packages/typescript/ai-utils/src/index.ts
- packages/typescript/openai-base/src/tools/code-interpreter-tool.ts
- packages/typescript/ai-openai/src/tools/local-shell-tool.ts
- packages/typescript/ai-openai/src/tools/custom-tool.ts
- packages/typescript/ai-utils/tests/model-meta.test.ts
- packages/typescript/openai-base/package.json
- packages/typescript/openai-base/src/tools/shell-tool.ts
- packages/typescript/openai-base/src/tools/tool-choice.ts
- packages/typescript/ai-utils/src/model-meta/types.ts
- packages/typescript/openai-base/src/types/message-metadata.ts
- packages/typescript/openai-base/src/adapters/tts.ts
- packages/typescript/openai-base/src/index.ts
- packages/typescript/openai-base/src/adapters/responses-text.ts
🚧 Files skipped from review as they are similar to previous changes (41)
- packages/typescript/ai-ollama/src/utils/client.ts
- .changeset/add-openai-base-package.md
- packages/typescript/ai-utils/src/transforms.ts
- packages/typescript/ai-openai/src/tools/function-tool.ts
- packages/typescript/ai-openai/src/tools/index.ts
- packages/typescript/ai-utils/tests/transforms.test.ts
- packages/typescript/openai-base/src/tools/web-search-tool.ts
- packages/typescript/openai-base/src/tools/function-tool.ts
- packages/typescript/ai-grok/src/tools/tool-converter.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-openai/src/tools/apply-patch-tool.ts
- packages/typescript/ai-anthropic/src/utils/client.ts
- .changeset/refactor-providers-to-shared-packages.md
- packages/typescript/ai-grok/src/adapters/summarize.ts
- packages/typescript/ai-utils/vite.config.ts
- packages/typescript/ai-openai/src/tools/shell-tool.ts
- packages/typescript/openai-base/src/tools/image-generation-tool.ts
- packages/typescript/ai-openai/src/tools/computer-use-tool.ts
- packages/typescript/ai-openai/src/tools/mcp-tool.ts
- packages/typescript/ai-openai/src/tools/file-search-tool.ts
- packages/typescript/ai-utils/src/model-meta/define.ts
- packages/typescript/openai-base/src/tools/mcp-tool.ts
- packages/typescript/openai-base/src/tools/computer-use-tool.ts
- packages/typescript/openai-base/src/tools/custom-tool.ts
- packages/typescript/ai-utils/src/env.ts
- packages/typescript/ai-openai/src/tools/web-search-tool.ts
- packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts
- packages/typescript/openai-base/src/adapters/responses-tool-converter.ts
- packages/typescript/ai-openai/src/utils/client.ts
- packages/typescript/ai-utils/package.json
- packages/typescript/openai-base/src/tools/file-search-tool.ts
- packages/typescript/ai-openai/src/utils/schema-converter.ts
- packages/typescript/openai-base/src/adapters/chat-completions-tool-converter.ts
- packages/typescript/openai-base/src/tools/index.ts
- packages/typescript/openai-base/src/utils/schema-converter.ts
- packages/typescript/openai-base/src/adapters/summarize.ts
- packages/typescript/ai-gemini/src/utils/client.ts
- packages/typescript/openai-base/src/adapters/image.ts
- packages/typescript/ai-openrouter/src/utils/client.ts
- packages/typescript/openai-base/vite.config.ts
- packages/typescript/openai-base/tests/schema-converter.test.ts
it('should remove empty required arrays inside oneOf variants', () => {
  const schema = {
    type: 'object',
    properties: {
      data: {
        type: 'object',
        properties: {
          inner: { type: 'string' },
        },
        required: ['inner'],
      },
    },
    required: ['data'],
  }

  // First create a schema that would produce empty required after processing
  const result = makeGroqStructuredOutputCompatible(schema, ['data'])

  // Should not have empty required arrays anywhere
  const checkNoEmptyRequired = (obj: any): void => {
    if (obj && typeof obj === 'object') {
      if (Array.isArray(obj.required)) {
        expect(obj.required.length).toBeGreaterThan(0)
      }
      for (const value of Object.values(obj)) {
        if (typeof value === 'object' && value !== null) {
          checkNoEmptyRequired(value)
        }
      }
    }
  }
  checkNoEmptyRequired(result)
})
Misleading test name: schema doesn't contain oneOf.
The test is named 'should remove empty required arrays inside oneOf variants' but the schema only contains a nested object structure without any oneOf array. Consider renaming to accurately describe what's being tested (e.g., 'should not have empty required arrays anywhere in the result').
✏️ Suggested fix
- it('should remove empty required arrays inside oneOf variants', () => {
+ it('should not have any empty required arrays in nested structures', () => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-groq/tests/schema-converter.test.ts` around lines 32 -
64, Rename the test case title to reflect the actual behavior being tested:
update the it(...) description string in the test where
makeGroqStructuredOutputCompatible and checkNoEmptyRequired are used (currently
"'should remove empty required arrays inside oneOf variants'") to a clearer name
such as "'should not have empty required arrays anywhere in the result'"; ensure
only the descriptive string is changed and leave the test body (the schema, call
to makeGroqStructuredOutputCompatible, and the checkNoEmptyRequired helper)
untouched.
```ts
it('should throw if the env var is not set', () => {
  expect(() => getApiKeyFromEnv('NONEXISTENT_KEY')).toThrow('NONEXISTENT_KEY')
})
```
Make missing-env assertions deterministic to avoid host-env collisions.
These tests assume NONEXISTENT_KEY / MY_PROVIDER_API_KEY are absent in the runner environment. If either is present in CI, assertions can become flaky.
💡 Proposed patch
it('should throw if the env var is not set', () => {
- expect(() => getApiKeyFromEnv('NONEXISTENT_KEY')).toThrow('NONEXISTENT_KEY')
+ const missingKey = `__AI_UTILS_TEST_MISSING_${Date.now()}__`
+ expect(() => getApiKeyFromEnv(missingKey)).toThrow(missingKey)
})
@@
it('should include the env var name in the error message', () => {
- expect(() => getApiKeyFromEnv('MY_PROVIDER_API_KEY')).toThrow(
- 'MY_PROVIDER_API_KEY',
- )
+ const missingKey = `__AI_UTILS_TEST_MISSING_MSG_${Date.now()}__`
+ expect(() => getApiKeyFromEnv(missingKey)).toThrow(missingKey)
})

Also applies to: 23-27
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-utils/tests/env.test.ts` around lines 14 - 16, The
test is flaky because it relies on host environment state; make the assertions
deterministic by explicitly controlling process.env within the test: save the
original value of the env var, delete it (delete process.env['NONEXISTENT_KEY'])
before calling getApiKeyFromEnv('NONEXISTENT_KEY') so the function reliably
throws, and then restore the original value after the test (or use
beforeEach/afterEach). Do the same for the other test that references
'MY_PROVIDER_API_KEY' — set process.env['MY_PROVIDER_API_KEY'] to a known value
for the success case and restore the original afterward; reference
getApiKeyFromEnv in the test file to locate the tests to change.
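The save/delete/restore pattern the prompt describes can be sketched without a test framework. `getApiKeyFromEnv` below is an illustrative stand-in for the ai-utils helper, not the package source:

```typescript
// Stand-in for the real helper: throws when the variable is absent.
function getApiKeyFromEnv(name: string): string {
  const value = process.env[name]
  if (!value) throw new Error(`Missing environment variable: ${name}`)
  return value
}

// Deterministic missing-env assertion: delete the key first, restore after,
// so the test never depends on what the host environment happens to contain.
function withEnvDeleted(name: string, fn: () => void): void {
  const original = process.env[name]
  delete process.env[name]
  try {
    fn()
  } finally {
    if (original !== undefined) process.env[name] = original
  }
}

withEnvDeleted('MY_PROVIDER_API_KEY', () => {
  let threw = false
  try {
    getApiKeyFromEnv('MY_PROVIDER_API_KEY')
  } catch (err) {
    threw = (err as Error).message.includes('MY_PROVIDER_API_KEY')
  }
  console.log(threw) // true: the error names the missing variable
})
```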
Force-pushed from 9b83f1a to 012f2a1.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-groq/src/utils/schema-converter.ts`:
- Around line 82-93: The current normalization only patches a top-level `{ type:
'object' }` (variable normalised) and misses nested empty-object branches,
causing child schemas without properties to slip through
makeStructuredOutputCompatible; update schema-converter.ts to recursively walk
the schema before calling makeStructuredOutputCompatible and ensure any node
with `type === 'object'` but no `properties` gets `properties: {}` (handle
nested locations: `properties` values, `items`, `anyOf`/`oneOf`/`allOf`, and
`additionalProperties`), keep the existing call to removeEmptyRequired for final
cleanup, and add a regression test that constructs a schema like `{ type:
'object', properties: { child: { type: 'object' } }, required: ['child'] }` to
verify the nested child receives an empty properties object and passes Groq
validation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a9d14e92-f334-464f-92c8-e0413dd4905f
📒 Files selected for processing (12)
- packages/typescript/ai-grok/src/text/text-provider-options.ts
- packages/typescript/ai-grok/src/tools/function-tool.ts
- packages/typescript/ai-grok/src/tools/index.ts
- packages/typescript/ai-grok/src/tools/tool-converter.ts
- packages/typescript/ai-groq/src/utils/client.ts
- packages/typescript/ai-groq/src/utils/schema-converter.ts
- packages/typescript/ai-openai/src/utils/client.ts
- packages/typescript/openai-base/src/adapters/chat-completions-text.ts
- packages/typescript/openai-base/src/adapters/responses-text.ts
- packages/typescript/openai-base/src/types/provider-options.ts
- packages/typescript/openai-base/src/utils/schema-converter.ts
- packages/typescript/openai-base/tests/chat-completions-text.test.ts
💤 Files with no reviewable changes (4)
- packages/typescript/ai-grok/src/tools/index.ts
- packages/typescript/ai-grok/src/text/text-provider-options.ts
- packages/typescript/ai-grok/src/tools/function-tool.ts
- packages/typescript/ai-grok/src/tools/tool-converter.ts
✅ Files skipped from review due to trivial changes (1)
- packages/typescript/openai-base/src/types/provider-options.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/typescript/ai-groq/src/utils/client.ts
- packages/typescript/openai-base/src/utils/schema-converter.ts
- packages/typescript/openai-base/tests/chat-completions-text.test.ts
```diff
+  // Ensure object schemas always have properties (e.g. z.object({}) may produce
+  // { type: 'object' } without properties). openai-base's transformer skips
+  // objects without properties, so we normalise first.
+  const normalised =
+    schema.type === 'object' && !schema.properties
+      ? { ...schema, properties: {} }
+      : schema

     result.properties = properties
     // Groq rejects `required` when there are no properties, even if it's an empty array
     if (allPropertyNames.length > 0) {
       result.required = allPropertyNames
     } else {
       delete result.required
     }
     result.additionalProperties = false
   }
+  const result = makeStructuredOutputCompatible(normalised, originalRequired)

   if (result.type === 'array' && result.items) {
     result.items = makeGroqStructuredOutputCompatible(
       result.items,
       result.items.required || [],
     )
   }

-  return result
+  // Groq rejects `required` when it is an empty array
+  return removeEmptyRequired(result)
```
🧩 Analysis chain
🏁 Scripts executed (repository: TanStack/ai):

```shell
cat -n packages/typescript/ai-groq/src/utils/schema-converter.ts | head -100

# Get file size first
wc -l packages/typescript/ai-groq/src/utils/schema-converter.ts

# Look for the makeStructuredOutputCompatible function
rg -n "makeStructuredOutputCompatible" packages/typescript/ai-groq/src/utils/schema-converter.ts -A 30

# Check the test file for nested object cases
cat packages/typescript/ai-groq/tests/schema-converter.test.ts

# Find where openai-base is used and check if we can understand its behavior
find . -name "openai-base" -type d 2>/dev/null | head -5

# Search for any documentation or tests that show how makeStructuredOutputCompatible handles empty objects
rg "makeStructuredOutputCompatible" --type ts --type tsx -B 3 -A 10 | head -150

# Check package.json to see openai-base version
rg '"@tanstack/openai-base"' packages/typescript/ai-groq/package.json

# Look for any comments or documentation about the openai-base transformer
rg "openai-base" packages/typescript/ai-groq/src -B 2 -A 2

# Check if there are integration tests or real Groq schema examples
fd "schema" packages/typescript/ai-groq/tests/ -type f

# Check the openai-base transformer implementation
cat packages/typescript/openai-base/src/utils/structured-output.ts

# Alternative: search for the transformer function
rg -n "makeStructuredOutputCompatible" packages/typescript/openai-base/src -B 5 -A 30

# Look at the openai-base file structure
ls -la packages/typescript/openai-base/src/utils/

# Extract just the key part of makeStructuredOutputCompatible
cat packages/typescript/openai-base/src/utils/schema-converter.ts | sed -n '12,68p'

# Check if there are any test cases showing nested empty object behavior
rg "type.*object.*properties" packages/typescript/openai-base -A 3 -B 1
```
Normalize empty object schemas recursively before delegating to openai-base transformer.
The root-level normalization at line 85 only patches { type: 'object' } without properties at the top level. Nested branches like { type: 'object' } inside properties, items, anyOf, or additionalProperties skip normalization because makeStructuredOutputCompatible (in openai-base) only processes objects with both type === 'object' AND properties present. A schema like { type: 'object', properties: { child: { type: 'object' } }, required: ['child'] } passes through unchanged, leaving child: { type: 'object' } without properties: {}, which can fail Groq validation. Add a recursive normalization step before calling makeStructuredOutputCompatible, and include a regression test for nested empty-object branches in the test file.
💡 Fix sketch
+function normalizeObjectSchemas(schema: Record<string, any>): Record<string, any> {
+ const result =
+ schema.type === 'object' && !schema.properties
+ ? { ...schema, properties: {} }
+ : { ...schema }
+
+ if (result.properties && typeof result.properties === 'object') {
+ result.properties = Object.fromEntries(
+ Object.entries(result.properties as Record<string, any>).map(
+ ([key, value]) => [
+ key,
+ typeof value === 'object' && value !== null && !Array.isArray(value)
+ ? normalizeObjectSchemas(value)
+ : value,
+ ],
+ ),
+ )
+ }
+
+ if (result.items && typeof result.items === 'object' && !Array.isArray(result.items)) {
+ result.items = normalizeObjectSchemas(result.items)
+ }
+
+ for (const keyword of ['anyOf', 'oneOf', 'allOf'] as const) {
+ if (Array.isArray(result[keyword])) {
+ result[keyword] = result[keyword].map((entry: Record<string, any>) =>
+ normalizeObjectSchemas(entry),
+ )
+ }
+ }
+
+ if (
+ result.additionalProperties &&
+ typeof result.additionalProperties === 'object' &&
+ !Array.isArray(result.additionalProperties)
+ ) {
+ result.additionalProperties = normalizeObjectSchemas(
+ result.additionalProperties,
+ )
+ }
+
+ return result
+}
+
- const normalised =
- schema.type === 'object' && !schema.properties
- ? { ...schema, properties: {} }
- : schema
+ const normalised = normalizeObjectSchemas(schema)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-groq/src/utils/schema-converter.ts` around lines 82 -
93, The current normalization only patches a top-level `{ type: 'object' }`
(variable normalised) and misses nested empty-object branches, causing child
schemas without properties to slip through makeStructuredOutputCompatible;
update schema-converter.ts to recursively walk the schema before calling
makeStructuredOutputCompatible and ensure any node with `type === 'object'` but
no `properties` gets `properties: {}` (handle nested locations: `properties`
values, `items`, `anyOf`/`oneOf`/`allOf`, and `additionalProperties`), keep the
existing call to removeEmptyRequired for final cleanup, and add a regression
test that constructs a schema like `{ type: 'object', properties: { child: {
type: 'object' } }, required: ['child'] }` to verify the nested child receives
an empty properties object and passes Groq validation.
Introduces the @tanstack/ai-utils package providing shared, zero-dependency utilities (generateId, getApiKeyFromEnv, transformNullsToUndefined, defineModelMeta) for use across TanStack AI adapter packages.
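A rough sense of two of these helpers — the bodies below are illustrative re-implementations based on the descriptions in this PR, not the package source, and the `prefix` parameter of `generateId` is an assumption:

```typescript
// Sketch of transformNullsToUndefined: JSON Schema treats an explicit null
// differently from an absent field, so adapters recursively replace null
// values with undefined before building requests.
function transformNullsToUndefined<T>(value: T): T {
  if (value === null) return undefined as unknown as T
  if (Array.isArray(value)) {
    return value.map((item) => transformNullsToUndefined(item)) as unknown as T
  }
  if (typeof value === 'object') {
    const result: Record<string, unknown> = {}
    for (const [key, inner] of Object.entries(value as Record<string, unknown>)) {
      result[key] = transformNullsToUndefined(inner)
    }
    return result as unknown as T
  }
  return value
}

// Sketch of generateId: a timestamp plus a random base-36 suffix.
function generateId(prefix: string): string {
  return `${prefix}-${Date.now()}-${Math.random().toString(36).substring(7)}`
}

console.log(transformNullsToUndefined({ a: null, b: { c: null, d: 1 } }))
// → { a: undefined, b: { c: undefined, d: 1 } }
```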
- Fix getApiKeyFromEnv to check globalThis.window.env before process.env, matching all existing adapters
- Fix generateId to use substring(7) instead of substring(2,10) for consistent random part length
- Add JSDoc to transformNullsToUndefined explaining the null→undefined JSON Schema reason
- Add missing test cases for empty output modalities and negative output pricing
Extract the streaming Chat Completions pipeline from ai-grok into a reusable OpenAICompatibleChatCompletionsTextAdapter. Providers that use the OpenAI Chat Completions API (/v1/chat/completions) can extend this class and only need to set baseURL, lock type parameters, and override methods for provider-specific quirks. Protected override points: mapOptionsToRequest, convertMessage, convertContentPart, processStreamChunks, makeStructuredOutputCompatible, normalizeContent, extractTextContent. Also adds Chat Completions-specific tool converter (distinct from the existing Responses API tool converter).
Extract and generalize the OpenAI Responses API text adapter into OpenAICompatibleResponsesTextAdapter. This handles the full Responses API streaming pipeline (9+ event types), including reasoning/thinking tokens, tool call streaming, and structured output via text.format. Also adds responses-tool-converter for the flat Responses API tool format (distinct from Chat Completions' nested function format).
…o adapter base classes
…ai-utils

Migrate ai-openai to extend base classes from openai-base and delegate utility functions to ai-utils, eliminating ~1,800 lines of duplicated code while maintaining zero breaking changes.

Changes:
- utils/client.ts: generateId and getOpenAIApiKeyFromEnv delegate to ai-utils
- utils/schema-converter.ts: transformNullsToUndefined and makeOpenAIStructuredOutputCompatible delegate to ai-utils/openai-base
- tools/*: all 14 tool files re-export from openai-base
- adapters/text.ts: extends OpenAICompatibleResponsesTextAdapter, overrides mapOptionsToRequest for OpenAI-specific tool conversion and validation
- adapters/image.ts: extends OpenAICompatibleImageAdapter, overrides validation methods
- adapters/summarize.ts: extends OpenAICompatibleSummarizeAdapter
- adapters/tts.ts: extends OpenAICompatibleTTSAdapter, overrides validation
- adapters/transcription.ts: extends OpenAICompatibleTranscriptionAdapter
- adapters/video.ts: extends OpenAICompatibleVideoAdapter, overrides validation and request building
- openai-base config.ts: removed explicit baseURL to avoid null incompatibility

All 127 existing tests pass, types check clean, build succeeds.
Replace duplicated generateId, env-var lookup, and null-transform logic in ai-groq, ai-openrouter, and ai-ollama with imports from @tanstack/ai-utils and @tanstack/openai-base. makeGroqStructuredOutputCompatible now delegates to makeStructuredOutputCompatible and applies the Groq-specific quirk of removing empty required arrays.
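The empty-`required` quirk mentioned above can be sketched as a recursive walk. This is an illustrative re-implementation, not ai-groq's actual `removeEmptyRequired`:

```typescript
// Groq rejects `required: []`, so strip empty required arrays recursively,
// descending into nested property and item schemas.
function removeEmptyRequired(schema: Record<string, any>): Record<string, any> {
  const result = { ...schema }
  if (Array.isArray(result.required) && result.required.length === 0) {
    delete result.required
  }
  if (result.properties && typeof result.properties === 'object') {
    result.properties = Object.fromEntries(
      Object.entries(result.properties).map(([key, value]) => [
        key,
        typeof value === 'object' && value !== null
          ? removeEmptyRequired(value as Record<string, any>)
          : value,
      ]),
    )
  }
  if (result.items && typeof result.items === 'object') {
    result.items = removeEmptyRequired(result.items)
  }
  return result
}
```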
…tanstack/ai-utils Replace duplicated generateId and getXxxApiKeyFromEnv implementations in ai-anthropic, ai-gemini, ai-fal, and ai-elevenlabs with imports from @tanstack/ai-utils. All provider-specific wrapper function names preserved for backwards compatibility.
- Fix schema-converter default required parameter and null-widening for nested types
- Fix removeEmptyRequired to recurse into anyOf/oneOf/allOf/additionalProperties (groq)
- Forward modelOptions, request headers/signal in chat-completions-text adapter
- Remove stream_options leak into non-streaming structured output calls
- Use call_id instead of internal id for tool call correlation (responses-text)
- Make transcription verbose_json default provider-agnostic via protected override
- Add runtime guards for ArrayBuffer and atob in transcription adapter
- Derive TTS outputFormat from merged request after modelOptions spread
- Add downloadContent probe and fix expires_at seconds-to-milliseconds (video)
- Fix mcp-tool type ordering so metadata cannot override type: 'mcp'
- Add tests proving all fixes work
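The expires_at fix in the list above is the classic Unix-timestamp unit mismatch; the value below is hypothetical:

```typescript
// APIs commonly report expires_at in Unix seconds, while JavaScript Dates
// expect milliseconds, so the raw value must be multiplied by 1000.
const expiresAtSeconds = 1735689600 // hypothetical API value

const wrong = new Date(expiresAtSeconds) // misread as ms
console.log(wrong.getUTCFullYear()) // 1970

const right = new Date(expiresAtSeconds * 1000)
console.log(right.toISOString()) // 2025-01-01T00:00:00.000Z
```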
- Fix unnecessary type assertions in chat-completions-text and responses-text
- Fix eslint import order in ai-groq client.ts
- Fix unnecessary condition in ai-groq schema-converter combinator recursion
- Fix array-type lint error in openai-base provider-options
- Remove unused files in ai-grok (tools/index.ts, tool-converter.ts, function-tool.ts)
- Remove unused exports (createOpenAIClient, generateId, validateTextProviderOptions, InternalTextProviderOptions)
Force-pushed from 19231f2 to 857a88e.
- responses-text: remove unsafe `any` cast for call_id, fix overly restrictive item.id guard on function_call handling
- transcription: add ensureFileSupport() and decodeBase64() helpers for cross-environment safety (Node < 20, missing atob)
- schema-converter.test: rename misleading oneOf test name
- env.test: use unique generated keys for deterministic assertions
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/typescript/ai-openai/src/adapters/video.ts (1)
65-80: ⚠️ Potential issue | 🟡 Minor — Validate `modelOptions.size` before using it.

`packages/typescript/openai-base/src/adapters/video.ts:60-71` only calls `validateVideoSize(model, size)` before `buildRequest()`. When `size` is omitted and this method falls back to `modelOptions.size`, that value skips validation and can reach the API with an unsupported size.

🛠️ Suggested fix

```diff
-    if (size) {
-      request.size = size as OpenAI_SDK.Videos.VideoCreateParams['size']
-    } else if (modelOptions?.size) {
-      request.size = modelOptions.size
-    }
+    if (size) {
+      request.size = size as OpenAI_SDK.Videos.VideoCreateParams['size']
+    } else if (modelOptions?.size) {
+      this.validateVideoSize(model, modelOptions.size)
+      request.size = modelOptions.size
+    }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/adapters/video.ts` around lines 65 - 80, The buildRequest method currently assigns modelOptions.size to request.size without validation; ensure you validate modelOptions.size with the same validation used for explicit size by calling validateVideoSize(model, modelOptions.size) (or equivalent) before setting request.size in the fallback branch inside buildRequest, and only assign request.size if validation passes (or normalize/throw if invalid) so that modelOptions.size cannot reach the API unvalidated.

packages/typescript/ai-openai/src/utils/schema-converter.ts (1)
17-21: ⚠️ Potential issue | 🟠 Major — Change `originalRequired` from default `[]` to optional `undefined`.

The wrapper's default parameter `originalRequired: Array<string> = []` prevents the fallback logic in `makeStructuredOutputCompatible` from working. When callers omit the second argument, the helper receives `[]` instead of `undefined`, so it cannot fall back to `schema.required`. This silently treats all properties as optional, incorrectly widening required fields to include `null`.

While all current tests pass the second argument explicitly, the wrapper's signature invites single-argument calls that would trigger this bug. Change the default to `originalRequired?: Array<string>` to preserve the fallback behavior.

Suggested fix

```diff
 export function makeOpenAIStructuredOutputCompatible(
   schema: Record<string, any>,
-  originalRequired: Array<string> = [],
+  originalRequired?: Array<string>,
 ): Record<string, any> {
   return makeStructuredOutputCompatible(schema, originalRequired)
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-openai/src/utils/schema-converter.ts` around lines 17 - 21, The wrapper makeOpenAIStructuredOutputCompatible currently sets originalRequired: Array<string> = [] which prevents makeStructuredOutputCompatible from using its fallback (schema.required) when callers omit the second argument; change the signature to make originalRequired optional (originalRequired?: Array<string> or originalRequired: Array<string> | undefined) so undefined is passed through, preserving fallback behavior in makeStructuredOutputCompatible and keeping existing callers intact.
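The pitfall can be reproduced in isolation; `compatDefault` and `compatOptional` are hypothetical toy functions, not the real wrapper:

```typescript
// Toy illustration: a `= []` default masks "argument omitted", so a
// `??`-based fallback to schema.required can never fire.
function compatDefault(
  schema: { required?: Array<string> },
  originalRequired: Array<string> = [],
): Array<string> {
  return originalRequired ?? schema.required ?? []
}

// With an optional parameter, omission stays `undefined` and `??` falls back.
function compatOptional(
  schema: { required?: Array<string> },
  originalRequired?: Array<string>,
): Array<string> {
  return originalRequired ?? schema.required ?? []
}

const schema = { required: ['name'] }
console.log(compatDefault(schema))  // [] — fallback silently skipped
console.log(compatOptional(schema)) // [ 'name' ] — fallback works
```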
🧹 Nitpick comments (1)
packages/typescript/ai-grok/src/utils/client.ts (1)
27-34: Consider removing the double cast for better type safety.

Both `GrokClientConfig` and `OpenAICompatibleClientConfig` extend `ClientOptions` with a required `apiKey: string`, so they should be structurally compatible. The `as unknown as` pattern bypasses TypeScript's type checking entirely and could mask future type mismatches.

♻️ Suggested improvement

```diff
 export function toCompatibleConfig(
   config: GrokClientConfig,
 ): OpenAICompatibleClientConfig {
   return {
     ...config,
     baseURL: config.baseURL || 'https://api.x.ai/v1',
-  } as unknown as OpenAICompatibleClientConfig
+  }
 }
```

If TypeScript complains, a single `as OpenAICompatibleClientConfig` would be preferable to the double cast, as it still performs some structural checking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-grok/src/utils/client.ts` around lines 27 - 34, The return value in toCompatibleConfig currently uses a double-cast which bypasses TypeScript checks; change the return to produce a properly typed OpenAICompatibleClientConfig by removing "as unknown as" and using a single cast or ensuring the object shape matches OpenAICompatibleClientConfig (e.g., return { ...config, baseURL: config.baseURL || 'https://api.x.ai/v1' } as OpenAICompatibleClientConfig) so the compiler verifies structural compatibility between GrokClientConfig and OpenAICompatibleClientConfig; update the toCompatibleConfig signature if needed to help the type checker locate mismatches.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@packages/typescript/ai-openai/src/adapters/video.ts`:
- Around line 65-80: The buildRequest method currently assigns modelOptions.size
to request.size without validation; ensure you validate modelOptions.size with
the same validation used for explicit size by calling validateVideoSize(model,
modelOptions.size) (or equivalent) before setting request.size in the fallback
branch inside buildRequest, and only assign request.size if validation passes
(or normalize/throw if invalid) so that modelOptions.size cannot reach the API
unvalidated.
In `@packages/typescript/ai-openai/src/utils/schema-converter.ts`:
- Around line 17-21: The wrapper makeOpenAIStructuredOutputCompatible currently
sets originalRequired: Array<string> = [] which prevents
makeStructuredOutputCompatible from using its fallback (schema.required) when
callers omit the second argument; change the signature to make originalRequired
optional (originalRequired?: Array<string> or originalRequired: Array<string> |
undefined) so undefined is passed through, preserving fallback behavior in
makeStructuredOutputCompatible and keeping existing callers intact.
---
Nitpick comments:
In `@packages/typescript/ai-grok/src/utils/client.ts`:
- Around line 27-34: The return value in toCompatibleConfig currently uses a
double-cast which bypasses TypeScript checks; change the return to produce a
properly typed OpenAICompatibleClientConfig by removing "as unknown as" and
using a single cast or ensuring the object shape matches
OpenAICompatibleClientConfig (e.g., return { ...config, baseURL: config.baseURL
|| 'https://api.x.ai/v1' } as OpenAICompatibleClientConfig) so the compiler
verifies structural compatibility between GrokClientConfig and
OpenAICompatibleClientConfig; update the toCompatibleConfig signature if needed
to help the type checker locate mismatches.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 32f81338-3bf4-43ab-8170-9f36f9066912
📒 Files selected for processing (56)
- .changeset/add-ai-utils-package.md
- .changeset/add-openai-base-package.md
- .changeset/refactor-providers-to-shared-packages.md
- packages/typescript/ai-anthropic/package.json
- packages/typescript/ai-anthropic/src/utils/client.ts
- packages/typescript/ai-elevenlabs/package.json
- packages/typescript/ai-elevenlabs/src/realtime/token.ts
- packages/typescript/ai-fal/package.json
- packages/typescript/ai-fal/src/utils/client.ts
- packages/typescript/ai-gemini/package.json
- packages/typescript/ai-gemini/src/utils/client.ts
- packages/typescript/ai-grok/package.json
- packages/typescript/ai-grok/src/adapters/image.ts
- packages/typescript/ai-grok/src/adapters/summarize.ts
- packages/typescript/ai-grok/src/adapters/text.ts
- packages/typescript/ai-grok/src/text/text-provider-options.ts
- packages/typescript/ai-grok/src/tools/function-tool.ts
- packages/typescript/ai-grok/src/tools/index.ts
- packages/typescript/ai-grok/src/tools/tool-converter.ts
- packages/typescript/ai-grok/src/utils/client.ts
- packages/typescript/ai-grok/src/utils/index.ts
- packages/typescript/ai-grok/src/utils/schema-converter.ts
- packages/typescript/ai-grok/tests/grok-adapter.test.ts
- packages/typescript/ai-groq/package.json
- packages/typescript/ai-groq/src/utils/client.ts
- packages/typescript/ai-groq/src/utils/schema-converter.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-groq/tests/schema-converter.test.ts
- packages/typescript/ai-ollama/package.json
- packages/typescript/ai-ollama/src/utils/client.ts
- packages/typescript/ai-openai/package.json
- packages/typescript/ai-openai/src/adapters/image.ts
- packages/typescript/ai-openai/src/adapters/summarize.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/src/adapters/transcription.ts
- packages/typescript/ai-openai/src/adapters/tts.ts
- packages/typescript/ai-openai/src/adapters/video.ts
- packages/typescript/ai-openai/src/tools/apply-patch-tool.ts
- packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts
- packages/typescript/ai-openai/src/tools/computer-use-tool.ts
- packages/typescript/ai-openai/src/tools/custom-tool.ts
- packages/typescript/ai-openai/src/tools/file-search-tool.ts
- packages/typescript/ai-openai/src/tools/function-tool.ts
- packages/typescript/ai-openai/src/tools/image-generation-tool.ts
- packages/typescript/ai-openai/src/tools/index.ts
- packages/typescript/ai-openai/src/tools/local-shell-tool.ts
- packages/typescript/ai-openai/src/tools/mcp-tool.ts
- packages/typescript/ai-openai/src/tools/shell-tool.ts
- packages/typescript/ai-openai/src/tools/tool-choice.ts
- packages/typescript/ai-openai/src/tools/tool-converter.ts
- packages/typescript/ai-openai/src/tools/web-search-preview-tool.ts
- packages/typescript/ai-openai/src/tools/web-search-tool.ts
- packages/typescript/ai-openai/src/utils/client.ts
- packages/typescript/ai-openai/src/utils/schema-converter.ts
- packages/typescript/ai-openrouter/package.json
- packages/typescript/ai-openrouter/src/utils/client.ts
💤 Files with no reviewable changes (4)
- packages/typescript/ai-grok/src/tools/tool-converter.ts
- packages/typescript/ai-grok/src/text/text-provider-options.ts
- packages/typescript/ai-grok/src/tools/index.ts
- packages/typescript/ai-grok/src/tools/function-tool.ts
✅ Files skipped from review due to trivial changes (13)
- packages/typescript/ai-ollama/package.json
- packages/typescript/ai-groq/package.json
- packages/typescript/ai-fal/package.json
- packages/typescript/ai-grok/package.json
- packages/typescript/ai-gemini/package.json
- .changeset/add-ai-utils-package.md
- packages/typescript/ai-openrouter/package.json
- packages/typescript/ai-openai/package.json
- .changeset/add-openai-base-package.md
- .changeset/refactor-providers-to-shared-packages.md
- packages/typescript/ai-openrouter/src/utils/client.ts
- packages/typescript/ai-groq/tests/groq-adapter.test.ts
- packages/typescript/ai-openai/src/tools/index.ts
🚧 Files skipped from review as they are similar to previous changes (22)
- packages/typescript/ai-grok/src/utils/index.ts
- packages/typescript/ai-anthropic/package.json
- packages/typescript/ai-elevenlabs/package.json
- packages/typescript/ai-groq/src/utils/client.ts
- packages/typescript/ai-openai/src/tools/tool-choice.ts
- packages/typescript/ai-ollama/src/utils/client.ts
- packages/typescript/ai-groq/tests/schema-converter.test.ts
- packages/typescript/ai-fal/src/utils/client.ts
- packages/typescript/ai-gemini/src/utils/client.ts
- packages/typescript/ai-openai/src/tools/custom-tool.ts
- packages/typescript/ai-openai/src/tools/shell-tool.ts
- packages/typescript/ai-openai/src/tools/file-search-tool.ts
- packages/typescript/ai-openai/src/tools/computer-use-tool.ts
- packages/typescript/ai-openai/src/tools/code-interpreter-tool.ts
- packages/typescript/ai-openai/src/tools/web-search-preview-tool.ts
- packages/typescript/ai-openai/src/tools/tool-converter.ts
- packages/typescript/ai-openai/src/adapters/transcription.ts
- packages/typescript/ai-groq/src/utils/schema-converter.ts
- packages/typescript/ai-openai/src/adapters/text.ts
- packages/typescript/ai-openai/src/tools/local-shell-tool.ts
- packages/typescript/ai-openai/src/tools/apply-patch-tool.ts
- packages/typescript/ai-openai/src/utils/client.ts
- tool-choice: widen MCPToolChoice.server_label from 'deepwiki' literal to string
- gemini client: restore error message mentioning both GOOGLE_API_KEY and GEMINI_API_KEY
- chat-completions-text/responses-text: use console.error for stream errors
- chat-completions-text: spread modelOptions first so explicit fields win
- package.json: remove duplicate workspace deps from devDependencies
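The spread-order change can be shown with a toy request object (field names are illustrative, not the adapter's actual request type):

```typescript
const modelOptions = { temperature: 0.2, max_tokens: 100 }
const explicit = { temperature: 0.7 }

// Spreading modelOptions first means explicitly passed fields win on conflict,
// while modelOptions still fills in anything the caller did not set.
const request = { ...modelOptions, ...explicit }
console.log(request) // { temperature: 0.7, max_tokens: 100 }
```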
Actionable comments posted: 2
🧹 Nitpick comments (2)
packages/typescript/openai-base/src/adapters/chat-completions-text.ts (2)
435-437: Keep `TProviderOptions` on the protected request mapper.

`mapOptionsToRequest()` takes bare `TextOptions`, which erases provider-specific `modelOptions` typing at the main subclass extension point. Making this `TextOptions<TProviderOptions>` keeps provider adapters type-safe when they override request construction.

Based on learnings: "Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/adapters/chat-completions-text.ts` around lines 435 - 437, The protected mapper mapOptionsToRequest currently accepts a bare TextOptions which strips provider-specific modelOptions typing; change its signature to accept TextOptions<TProviderOptions> (i.e., mapOptionsToRequest(options: TextOptions<TProviderOptions>)) so subclasses can preserve provider-specific modelOptions types when overriding request construction—update any related generics on the adapter class or method declaration to propagate TProviderOptions through mapOptionsToRequest while keeping the same returned type OpenAI_SDK.Chat.Completions.ChatCompletionCreateParamsStreaming.
51-58: Preserve the constructor config on the base adapter.

`super({}, model)` guarantees `this.config` is always `{}` in subclasses, so any overridden hook that relies on the base adapter's protected config loses the constructor input. Pass the compatible subset through to `BaseTextAdapter` or store the original config on this class explicitly.
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/openai-base/src/adapters/chat-completions-text.ts` around lines 51 - 58, The constructor currently calls super({}, model) which forces BaseTextAdapter's protected config to be empty; preserve the incoming OpenAICompatibleClientConfig by passing the compatible subset into the BaseTextAdapter or by storing it on this instance: update the constructor for the class in chat-completions-text.ts to forward the relevant config object to super(...) instead of {} (or assign this.config = config after super if BaseTextAdapter allows), and keep creating the client with createOpenAICompatibleClient(config); reference the constructor, BaseTextAdapter, super({}, model), OpenAICompatibleClientConfig, and createOpenAICompatibleClient when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/openai-base/src/adapters/chat-completions-text.ts`:
- Around line 530-534: The current loop over contentParts drops parts that
convertContentPart() returns falsy for, allowing a message to become empty;
instead, after mapping contentParts to parts (the array filled in the loop that
uses convertContentPart), check if parts.length === 0 and throw (or return) a
clear capability error indicating unsupported ContentPart types so we fail fast;
apply the same guard wherever the code similarly builds parts/content (including
the other conversions around the blocks that reference convertContentPart and
content fallback) so you don't send an empty prompt.
- Around line 76-80: The stream handler currently drops terminal chunks that
have no choices but contain usage (due to the `if (!choice) continue` guard when
iterating responses from `this.client.chat.completions.create`), causing
RUN_FINISHED to be emitted before the iterator is fully consumed and before
token counts are available; update the loop(s) that process the streaming
response (the chat completions create stream handling and the similar blocks
around the other processing loops) to: 1) detect and handle terminal chunks that
contain `usage` even when `choices` is empty instead of skipping them, 2)
aggregate and expose usage/token counts from that final chunk to the same state
that triggers RUN_FINISHED, and 3) only emit RUN_FINISHED (and close/resolve the
iterator) after the stream is fully exhausted and that final usage chunk has
been processed. Ensure the change is applied to all analogous loops that
currently `continue` on missing `choices`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ddc97292-0225-4c8e-85ef-29bdbe352faf
📒 Files selected for processing (10)
- packages/typescript/ai-anthropic/package.json
- packages/typescript/ai-elevenlabs/package.json
- packages/typescript/ai-fal/package.json
- packages/typescript/ai-gemini/package.json
- packages/typescript/ai-gemini/src/utils/client.ts
- packages/typescript/ai-grok/package.json
- packages/typescript/ai-openai/package.json
- packages/typescript/openai-base/src/adapters/chat-completions-text.ts
- packages/typescript/openai-base/src/adapters/responses-text.ts
- packages/typescript/openai-base/src/tools/tool-choice.ts
✅ Files skipped from review due to trivial changes (5)
- packages/typescript/ai-fal/package.json
- packages/typescript/ai-anthropic/package.json
- packages/typescript/ai-gemini/package.json
- packages/typescript/ai-elevenlabs/package.json
- packages/typescript/ai-openai/package.json
🚧 Files skipped from review as they are similar to previous changes (4)
- packages/typescript/ai-grok/package.json
- packages/typescript/openai-base/src/tools/tool-choice.ts
- packages/typescript/ai-gemini/src/utils/client.ts
- packages/typescript/openai-base/src/adapters/responses-text.ts
```ts
const stream = await this.client.chat.completions.create(
  {
    ...requestParams,
    stream: true,
    stream_options: { include_usage: true },
```
🧩 Analysis chain
🌐 Web query:
In the OpenAI Chat Completions streaming API, when `stream_options.include_usage` is enabled, can the terminal chunk contain `choices: []` and only `usage`?
💡 Result:
Yes, when stream_options.include_usage is enabled in the OpenAI Chat Completions streaming API, the terminal chunk (the final usage chunk before data: [DONE]) contains choices: [] (an empty array) and the usage field with the token usage statistics for the entire request.
Citations:
- 1: https://developers.openai.com/api/reference/resources/chat/subresources/completions/streaming-events/
- 2: https://developers.openai.com/cookbook/examples/how_to_stream_completions/
- 3: https://developers.openai.com/api/reference/typescript/resources/chat/subresources/completions
- 4: https://community.openai.com/t/issue-with-token-usage-in-streaming-responses/892449
Delay RUN_FINISHED until the stream is actually exhausted.
The OpenAI Chat Completions API sends a final chunk with choices: [] and only usage when stream_options.include_usage is enabled. The current code skips any chunk without choices[0] (if (!choice) continue), so this terminal usage chunk is dropped, causing RUN_FINISHED to emit before token counts are available and before the iterator is fully consumed.
Also applies to: 249-252, 396-408
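The fix the review asks for can be sketched with a small, self-contained reducer. The `Chunk` shape below mirrors the OpenAI Chat Completions streaming wire format, but `reduceChunks` itself is illustrative, not the adapter's actual code: usage is captured from every chunk (including the terminal `choices: []` chunk), and the finish is reported only after the whole stream has been consumed.

```typescript
interface Chunk {
  choices: Array<{ delta?: { content?: string }; finish_reason?: string | null }>
  usage?: { prompt_tokens: number; completion_tokens: number }
}

// Capture usage before the `!choice` guard, so the terminal usage-only
// chunk is not dropped; emit the final result only after the loop ends.
function reduceChunks(chunks: Array<Chunk>) {
  let text = ''
  let finishReason: string | undefined
  let usage: Chunk['usage']
  for (const chunk of chunks) {
    if (chunk.usage) usage = chunk.usage
    const choice = chunk.choices[0]
    if (!choice) continue // safe now: usage was already captured above
    if (choice.delta?.content) text += choice.delta.content
    if (choice.finish_reason) finishReason = choice.finish_reason
  }
  return { text, finishReason: finishReason ?? 'stop', usage }
}
```

The live adapter does the same work per-chunk while yielding AG-UI events; the point is only the ordering of the usage capture relative to the `!choice` guard.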
```ts
for (const part of contentParts) {
  const converted = this.convertContentPart(part)
  if (converted) {
    parts.push(converted)
  }
```
Fail fast on unsupported content parts instead of sending an empty prompt.
The default path drops any ContentPart that convertContentPart() cannot map and then falls back to content: ''. A message made entirely of unsupported parts becomes an empty user message rather than a clear capability error.
Suggested guard:

```diff
 for (const part of contentParts) {
   const converted = this.convertContentPart(part)
-  if (converted) {
-    parts.push(converted)
-  }
+  if (!converted) {
+    throw new Error(
+      `Unsupported content part type for ${this.name}: ${part.type}`,
+    )
+  }
+  parts.push(converted)
 }
```

Also applies to: 537-540, 547-576
…ase-and-ai-utils (conflicts: pnpm-lock.yaml)
Resolve conflicts from main's PR #466 (ProviderTool branding + 5th TToolCapabilities generic) by threading TToolCapabilities through @tanstack/openai-base's OpenAICompatibleChatCompletionsTextAdapter and OpenAICompatibleResponsesTextAdapter. Each provider wrapper (ai-openai, ai-grok, etc.) now passes its own tool-capability map to the base class. Branded OpenAI tool types (OpenAIWebSearchTool, OpenAIMCPTool, etc.) remain in ai-openai while converters live in openai-base. openai-base's tool types keep main's Config-suffix rename with @deprecated aliases; ai-openai re-exports those and wraps factories to return branded types.
Post-merge fixes landed on top of the rebase onto main: - Thread TToolCapabilities generic through OpenAICompatibleChatCompletionsTextAdapter and OpenAICompatibleResponsesTextAdapter so ai-openai / ai-grok / ai-groq / ai-openrouter pass their own tool-capability maps while the base stays brand-agnostic. - Align AG-UI event yields and adapter signatures with main's EventType-enum-typed StreamChunk: wrap event yields through an asChunk helper; align BaseImage/TTS/ Transcription super() calls with (model, config) signature; switch GeneratedImage construction to the url-xor-b64Json discriminated shape; propagate options.logger through summarize → chatStream. - Responses API tool-call events now use item.id as toolCallId and emit both toolCallName and toolName, matching main's contract. - Prefer videos.retrieve URL for video content (retrieve-returns-url path) — aligns with main's behavior and works against providers that don't implement /content. - ai-utils: preserve the full random suffix in generateId (pre-rebase regression would have failed ai-fal's entropy test). - Recreate ai-grok/src/tools/index.ts as a thin re-export of openai-base's Chat Completions tool converter (vite entry expects it).
Force-pushed from b82e790 to 3965b88.
Resolves conflicts after main brought in: - ai-elevenlabs: official @elevenlabs/elevenlabs-js SDK migration (#504), audio/speech/transcription adapters - ai-grok: audio/speech/realtime adapters (#506) that re-export generateId from ../utils - ai-openai: audit remediation hardening of chatStream (#465) Resolutions: - packages/typescript/ai-elevenlabs/package.json: combine main's new @elevenlabs/{client,elevenlabs-js} deps with branch's @tanstack/ai-utils - packages/typescript/ai-elevenlabs/src/realtime/token.ts: take main's signed-URL flow via @elevenlabs/elevenlabs-js (the branch's getApiKeyFromEnv path was orphaned by the SDK migration) - packages/typescript/ai-openai/src/adapters/text.ts: take HEAD delegation to OpenAICompatibleResponsesTextAdapter (PR's whole point) - packages/typescript/ai-grok/src/utils/index.ts: re-export generateId from @tanstack/ai-utils so main's new transcription/tts adapters that import it from ../utils still resolve NOTE: openai-base/src/adapters/{responses-text,chat-completions-text}.ts still log raw error objects via console.error in stream catch blocks. Main's #465 already removed this from ai-openai/text.ts before the extraction; the same hygiene needs to be ported into the base classes. Tracked for the cr-loop fix phase.
CodeRabbit findings:
- ai-groq schema-converter: recursively normalise nested empty-object
schemas in properties, items, anyOf/oneOf/allOf, additionalProperties.
Top-level patch missed nested z.object({}) branches that openai-base's
transformer skipped, leading to Groq strict-mode validation failures.
Adds 3 regression tests.
- openai-base chat-completions-text: capture usage from any chunk
including the terminal usage-only chunk emitted with
stream_options.include_usage so RUN_FINISHED carries accurate token
counts. Previously the terminal chunk was skipped via
'if (!choice) continue' and RUN_FINISHED emitted earlier without usage.
Also emit a synthetic RUN_FINISHED if the stream ended without a
finish_reason chunk.
- openai-base chat-completions-text: fail fast on unsupported content
parts in convertMessage instead of silently dropping them and sending
an empty user prompt that masks a capability mismatch.
Audit-hygiene regression carried forward from main #465:
- openai-base {chat-completions-text,responses-text}: replace seven
console.error blocks across chatStream / structuredOutput /
processStreamChunks catch sites with logger.errors(...) using
toRunErrorPayload. Raw SDK errors can carry request metadata
(including auth headers); main #465 had already removed this from
ai-openai/text.ts before the extraction but the base classes lifted
the unsafe form. Tests updated to inject testLogger via
resolveDebugOption(false) on every chatStream / structuredOutput call,
matching the pattern from ai-ollama tests after #467 made logger
required on TextOptions.
Verification:
- pnpm test:lib : 31 projects pass
- pnpm test:types: 32 projects pass
- pnpm test:eslint: 31 projects pass
- pnpm build : 33 projects pass
Six-agent unbiased review surfaced behavioural and audit-hygiene
regressions across the extracted base classes. Fixes:
openai-base/adapters/chat-completions-text.ts
- Defer RUN_FINISHED until end-of-stream so the trailing usage-only
chunk that OpenAI emits AFTER finish_reason (when
stream_options.include_usage is on) is captured. Previous fix emitted
RUN_FINISHED with usage: undefined and dropped the trailing chunk.
- Forward upstream finish_reason ('length', 'content_filter', etc.)
instead of collapsing all non-tool reasons to 'stop'.
- Surface tool-args JSON parse failures via logger.errors so a model
emitting malformed JSON for tool arguments is debuggable instead of
silently invoking the tool with input: {}.
- Fix mapOptionsToRequest precedence: `{...modelOptions, temperature:
options.temperature, ...}` clobbered `modelOptions.temperature` with
undefined whenever the caller didn't set the top-level option. Now
spreads top-level fields only when defined.
- Throw on user message with zero content parts instead of silently
sending content: '' (a paid request with no input).
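The `mapOptionsToRequest` precedence fix can be sketched as a standalone function. The option names below are taken from the commit text, but the shape is a simplified illustration, not the real adapter signature:

```typescript
interface SketchOptions {
  model: string
  temperature?: number
  maxTokens?: number
  modelOptions?: Record<string, unknown>
}

// Spread modelOptions first, then copy each top-level option only when it is
// defined, so an unset top-level field never clobbers a modelOptions value
// with undefined, while an explicitly set one still wins.
function mapOptionsToRequest(options: SketchOptions): Record<string, unknown> {
  return {
    ...options.modelOptions,
    model: options.model,
    ...(options.temperature !== undefined && { temperature: options.temperature }),
    ...(options.maxTokens !== undefined && { max_tokens: options.maxTokens }),
  }
}
```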
openai-base/adapters/responses-text.ts
- Only reset accumulators on response.created. Previous code reset on
response.failed/incomplete too, which left TEXT_MESSAGE_START events
unbalanced when a response failed mid-stream. On terminal failure
events we now emit TEXT_MESSAGE_END before RUN_ERROR.
- Synthesize a terminal RUN_FINISHED if the stream ends without
response.completed (truncated upstream connection), matching the
chat-completions adapter's behavior so consumers always see a
terminal event for every started run.
- Surface tool-args JSON parse failures via logger.errors (parallel to
chat-completions fix).
- Distinguish refusals from unsupported content_part types in
handleContentPart instead of always reporting 'Unknown refusal'.
- Fix mapOptionsToRequest precedence: previously `...modelOptions` was
spread LAST, silently shadowing the canonical top-level fields. Now
matches the chat-completions adapter (modelOptions first, then
defined top-level options), so callers tuning either backend see
identical behaviour.
- extractTextFromResponse now throws a distinct refusal error instead
of returning '' and letting JSON.parse('') produce a confusing
'Failed to parse structured output as JSON. Content: ' message.
openai-base/adapters/summarize.ts
- generateId(this.name) for the result id (was hard-coded to '').
- Throw when chatStream emits RUN_ERROR instead of pretending a failed
run succeeded with summary: ''.
openai-base/adapters/{image,transcription,tts}.ts
- Wrap SDK calls in try/catch + logger.errors with toRunErrorPayload
so raw SDK errors (which can carry request metadata including auth
headers) never reach user-supplied loggers. Matches the audit
hygiene main #465 applied to ai-openai/text.ts before the
extraction.
- tts.ts: cross-runtime ArrayBuffer→base64 helper so the adapter works
in browser/edge runtimes that lack Buffer.
- transcription.ts: confidence calculation no longer treats
avg_logprob === 0 (perfect-confidence) as missing.
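The falsy-zero pitfall in the confidence calculation, in miniature. The `Math.exp` mapping here is one plausible logprob-to-confidence conversion, assumed for illustration; the real calculation may differ:

```typescript
// avg_logprob === 0 is a valid value (maximal confidence), so test for
// undefined explicitly rather than for truthiness.
function confidenceFromAvgLogprob(avgLogprob: number | undefined): number | undefined {
  if (avgLogprob === undefined) return undefined
  return Math.exp(avgLogprob) // logprob 0 maps to confidence 1
}
```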
openai-base/tools/file-search-tool.ts
- validateMaxNumResults(0) now correctly throws (was skipped because
`if (maxNumResults && ...)` short-circuited on falsy zero).
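The same falsy-zero bug in sketch form (the lower bound of 1 comes from the PR text; anything else here is illustrative):

```typescript
// Buggy form: `if (maxNumResults && maxNumResults < 1)` skips validation
// entirely when maxNumResults is 0, because 0 is falsy.
function validateMaxNumResults(maxNumResults?: number): void {
  if (maxNumResults === undefined) return // only skip when truly absent
  if (maxNumResults < 1) {
    throw new Error(`maxNumResults must be >= 1, got ${maxNumResults}`)
  }
}
```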
ai-openai/adapters/transcription.ts
- shouldDefaultToVerbose returns true ONLY for whisper-1. The
gpt-4o-transcribe* models reject 'verbose_json' with HTTP 400, so
defaulting them to verbose was a guaranteed-failure regression. The
previous logic was inverted.
ai-openai/tools/{file-search,image-generation}-tool.ts
- Drop locally duplicated validators; import from openai-base. The
local copy of validateMaxNumResults had the same falsy-zero bug.
ai-openai/tools/computer-use-tool.ts
- Document that the brand discriminator ('computer_use') intentionally
differs from the runtime tool name ('computer_use_preview'). The
brand matches the model-meta capability union; the runtime name
matches the OpenAI SDK literal that the special-tool dispatcher
switches on.
ai-utils/src/transforms.ts
- Document transformNullsToUndefined's actual behavior (object keys
are removed, top-level null becomes undefined, arrays preserve
positional null). Restrict scope to JSON-shaped values; class
instances/Date/Map/Set are not preserved by Object.entries recursion.
ai-elevenlabs/src/utils/client.ts
- Wire getElevenLabsApiKeyFromEnv and generateId through
@tanstack/ai-utils so the dependency added during the merge is
actually consumed (was unused after main's #504 SDK migration).
ai-openrouter/package.json
- Move @tanstack/ai from dependencies to devDependencies (it's
already declared as a peerDependency). Avoids dual-instance hazards
where the workspace package gets installed twice. Also bump the
peer range to workspace:^ to match every other adapter.
.changeset/refactor-providers-to-shared-packages.md
- Correct the description: only ai-openai/ai-grok/ai-groq inherit
from @tanstack/openai-base; the other six providers only consume
@tanstack/ai-utils. Previous wording implied all nine adapters
delegated to openai-base.
Verification:
- pnpm test:lib : 31 projects pass (116+ unit tests)
- pnpm test:types : 32 projects pass
- pnpm test:eslint : 31 projects pass
- pnpm build : 33 projects pass
Round 2 confirmation surfaced bucket-(a) issues across multiple
agents — many on code paths I had missed in Round 1.
ai-openai/src/adapters/text.ts
- mapOptionsToRequest: restructure to mirror the base
OpenAICompatibleResponsesTextAdapter precedence (modelOptions first,
then conditional spreads of explicit top-level options when defined).
The previous override spread `...modelOptions` LAST and wrote
`temperature: options.temperature` unconditionally, which both
silently shadowed canonical fields with modelOptions values AND
clobbered modelOptions.temperature with undefined whenever the caller
didn't set the top-level option. Five separate review agents
flagged this as a real regression vs. the base class fix.
openai-base/src/adapters/responses-text.ts
- response.failed / response.incomplete now sets runFinishedEmitted
after RUN_ERROR so the post-loop synthetic terminal block doesn't
emit a duplicate RUN_FINISHED on top of an error event. Also
coalesces the (error + incomplete_details) double-emit into a
single RUN_ERROR with merged message + code.
- response.completed now forwards `incomplete_details.reason` (length /
content_filter / etc.) as finishReason instead of collapsing to
'stop', mirroring the chat-completions adapter's preservation of
upstream finish reasons.
- Top-level error event sets runFinishedEmitted (terminal).
- structuredOutput strips streaming-only fields (stream / stream_options)
from a subclass override of mapOptionsToRequest before the
non-streaming call, parallel to chat-completions's cleanup. A
subclass returning stream_options on the streaming-shaped result
would otherwise 4xx the structured-output call.
- mapOptionsToRequest: `tools` is now a conditional spread so a caller's
modelOptions.tools survives when options.tools is undefined.
- convertContentPartToInput audio branch wraps raw base64 in a data
URL using part.source.mimeType, mirroring the image branch.
`input_file` rejects bare base64 strings.
- convertMessagesToInput throws on a user message with no content
parts instead of injecting `text: ''` (parallel to chat-completions
fail-loud behavior — paid request with no input mask intent).
openai-base/src/adapters/chat-completions-text.ts
- mapOptionsToRequest: `tools` conditional spread (same fix as
responses-text). Drop the now-redundant `as Array<...>` cast that
ESLint correctly flagged as unnecessary.
openai-base/src/adapters/summarize.ts
- TEXT_MESSAGE_CONTENT branch only appends delta when defined,
preventing a content-less chunk with an undefined delta from
concatenating literal 'undefined' to the summary.
openai-base/src/adapters/video.ts
- Replace both `Buffer.from(buffer).toString('base64')` call sites with
a cross-runtime `arrayBufferToBase64` helper (atob/btoa fallback for
browser/edge). Buffer is Node-only — the previous code threw
ReferenceError on Workers / Edge / browsers.
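A cross-runtime helper along the lines the commit describes might look like the following (a sketch; the package's actual helper may differ):

```typescript
// Cross-runtime ArrayBuffer -> base64: prefer Node's Buffer when present,
// otherwise fall back to btoa (browsers, Workers, edge runtimes). Globals
// are read off globalThis so the code type-checks without @types/node or DOM.
function arrayBufferToBase64(buffer: ArrayBuffer): string {
  const g = globalThis as {
    Buffer?: { from(data: ArrayBuffer): { toString(encoding: string): string } }
    btoa?: (s: string) => string
  }
  if (g.Buffer) return g.Buffer.from(buffer).toString('base64')
  let binary = ''
  for (const byte of new Uint8Array(buffer)) binary += String.fromCharCode(byte)
  if (!g.btoa) throw new Error('no base64 encoder available in this runtime')
  return g.btoa(binary)
}
```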
openai-base/src/tools/{image-generation,web-search,web-search-preview}-tool.ts
- Convert tool functions: spread `metadata` first, then force the
literal `type` constant last. The previous order let a hand-built
tool with the wrong metadata.type override the discriminator while
the dispatcher had already routed by tool.name, producing a payload
whose runtime type didn't match the route. Mirrors the existing
mcpTool pattern.
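The spread-order fix can be illustrated with a minimal tool factory (a hypothetical reduction of the real converters, not their actual shape):

```typescript
interface WebSearchToolConfig {
  type: 'web_search'
  [key: string]: unknown
}

// Spread caller metadata first and force the literal discriminator last, so a
// hand-built tool with the wrong metadata.type cannot override the route the
// dispatcher already chose by name.
function webSearchTool(metadata: Record<string, unknown> = {}): WebSearchToolConfig {
  return {
    ...metadata,
    type: 'web_search',
  }
}
```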
openai-base/tests/{chat-completions,responses}-text.test.ts
- Drop the extra outer `logger: testLogger` from the
'throws on invalid JSON response' structuredOutput tests; logger is
supposed to live on chatOptions only and the outer copy was a Round 1
patch-script artifact.
ai-utils/src/transforms.ts
- transformNullsToUndefined doc: clarify that array elements recurse
via this same function, so a top-level null element becomes
undefined (not null). Correct the class-instance description: native
built-ins like Date/Map/Set become {}, but arbitrary class instances
produce a plain-object snapshot of own enumerable string properties,
not a uniform empty object.
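A sketch consistent with that documented behavior (illustrative only, and restricted to JSON-shaped values as the docs note):

```typescript
// Top-level null becomes undefined; object keys with null values are removed;
// array elements recurse through this same function, so a null element becomes
// undefined while keeping its position. Native built-ins like Date collapse to
// {} because Object.entries sees no own enumerable string properties.
function transformNullsToUndefined(value: unknown): unknown {
  if (value === null) return undefined
  if (Array.isArray(value)) return value.map(transformNullsToUndefined)
  if (typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([, v]) => v !== null)
        .map(([k, v]) => [k, transformNullsToUndefined(v)]),
    )
  }
  return value
}
```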
.changeset/refactor-providers-to-shared-packages.md
- Correct the description: ai-groq does NOT inherit from
@tanstack/openai-base text adapter (it speaks groq-sdk and keeps its
own BaseTextAdapter-derived adapter); it only consumes the schema
converter and tool converters.
Verification:
- pnpm test:types : 32 projects pass
- pnpm test:lib : 31 projects pass (116+ unit tests)
- pnpm test:eslint : 31 projects pass
- pnpm build : 33 projects pass
Round 3 confirmation surfaced subtler tool-call lifecycle and
terminal-event idempotency gaps that earlier rounds missed.
openai-base/adapters/chat-completions-text.ts
- Move RUN_STARTED emission BEFORE the `if (!choice) continue` guard so
a stream that arrives entirely as usage-only chunks (no choices) still
emits RUN_STARTED. Without this the post-loop synthetic terminal block
(which gates on hasEmittedRunStarted) skipped emission entirely,
producing zero events for the run.
- Skip non-started entries in the finish_reason TOOL_CALL_END loop and
in the post-loop synthetic close. A delta with index but no id/name
populates toolCallsInProgress without ever emitting TOOL_CALL_START;
emitting TOOL_CALL_END for those entries violated AG-UI lifecycle
(END without matching START) and produced empty toolCallId payloads
consumers couldn't pair.
- Clear toolCallsInProgress after emitting TOOL_CALL_END, and gate the
synthetic finishReason on a separate pendingToolCount counter rather
than `toolCallsInProgress.size > 0`. Previously, a tool-then-clean-stop
run reported finishReason: 'tool_calls' because the map was never
cleared after the in-loop emission.
- parsedInput now coerces non-object JSON to {} (parallel to the
Responses adapter's existing guard). A bare string/number/null in the
arguments wire would otherwise propagate to tool execution as the
TOOL_CALL_END.input.
- Close any started tool calls in the post-loop synthetic block on
truncated streams (no finish_reason ever arrives) so consumers see a
matching TOOL_CALL_END for every TOOL_CALL_START.
- convertMessage: throw on a single-text user message whose content is
empty (paid request with no input). Previously the early-return
bypassed the multipart fail-loud guard.
- convertMessage: assistant tool-only messages now use `content: null`
instead of `content: ''`, matching the OpenAI Chat Completions
contract; empty-string content interacts oddly with tokenization on
some backends.
- convertContentPart image branch defaults the data-URI MIME to
`application/octet-stream` when source.mimeType is undefined, instead
of interpolating literal "undefined" into the URI.
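The parsedInput coercion described earlier in this list (non-object JSON in the tool-arguments wire coerced to `{}`) can be sketched as a standalone helper; the name and the treatment of arrays are assumptions, not the adapter's exact code:

```typescript
// Parse tool-call arguments defensively: malformed JSON and non-object values
// (bare strings, numbers, null, arrays) all coerce to an empty object so the
// tool never receives a non-object input via TOOL_CALL_END.input.
function parseToolArgs(raw: string): Record<string, unknown> {
  try {
    const parsed: unknown = JSON.parse(raw)
    if (parsed !== null && typeof parsed === 'object' && !Array.isArray(parsed)) {
      return parsed as Record<string, unknown>
    }
  } catch {
    // fall through to the empty-object default
  }
  return {}
}
```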
openai-base/adapters/responses-text.ts
- response.failed / response.incomplete now ALWAYS emit RUN_ERROR (even
when both `error` and `incomplete_details` are missing) and `return`
out of the loop. Previously, an incomplete event with no detail
silently coerced the run to RUN_FINISHED { finishReason: 'stop' } via
the post-loop synthetic block, masking the failure. The early return
also stops further chunks from being processed after a terminal event.
- response.output_item.added: emit TOOL_CALL_START only when the
metadata's started flag is false. A duplicate output_item.added for
the same item id (retried streams / replay) no longer emits a second
TOOL_CALL_START.
- response.function_call_arguments.delta: drop the `args` field. Its
`metadata ? undefined : chunk.delta` polarity was inverted (populated
only on orphan-tool-call paths), the chat-completions adapter never
emitted it, and consumers should accumulate `delta` themselves.
- response.function_call_arguments.done: skip TOOL_CALL_END (and log)
when the metadata's started flag is false, mirroring the chat-
completions adapter's started-guard. Emitting END without a matching
START broke AG-UI consumers' lifecycle pairing.
- handleContentPart RUN_ERROR is now treated as terminal: set
runFinishedEmitted = true and `return` out of the loop. Previously
the loop kept processing and the synthetic terminal block could emit
a second RUN_FINISHED after a refusal.
- Mark hasStreamedContentDeltas / hasStreamedReasoningDeltas in the
content_part.added handler so a subsequent matching content_part.done
doesn't re-emit the same text (the existing skip guard for done
events depended on these flags being true).
- handleContentPart's reasoning_text branch caches the fallback stepId
(`if (!stepId) stepId = generateId(...)`) instead of generating a
fresh id on each call. Multiple reasoning content parts arriving
without a preceding STEP_STARTED no longer emit different stepIds.
- convertContentPartToInput image and audio: default the data-URI MIME
to application/octet-stream when source.mimeType is undefined.
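The data-URI MIME fallback applied in both adapters, in miniature (a hypothetical helper; the adapters build these URIs inline):

```typescript
// Without the fallback, template interpolation of an undefined mimeType yields
// "data:undefined;base64,..." rather than a valid data URI.
function toDataUri(base64Data: string, mimeType?: string): string {
  return `data:${mimeType ?? 'application/octet-stream'};base64,${base64Data}`
}
```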
openai-base/adapters/{chat-completions,responses}-tool-converter.ts
- Shallow-copy the schemaConverter result before mutating
additionalProperties. A subclass-supplied passthrough converter
could otherwise have its caller's schema mutated by the
`jsonSchema.additionalProperties = false` assignment.
openai-base/adapters/video.ts
- getVideoUrl SDK fall-through (content/getContent/download): convert
the binary response to a data URL like the downloadContent path
already does. Previously the bottom `return { url: response.url }`
returned the API endpoint URL (or undefined for non-Response shapes)
in place of a usable video URL.
openai-base/package.json
- Drop `zod` from peerDependencies and devDependencies. Grep confirms
no zod imports anywhere in the package; the peer dep produced a
spurious peer-warning on every consumer.
Verification:
- pnpm test:types : 32 projects pass
- pnpm test:lib : 31 projects pass (116+ unit tests)
- pnpm test:eslint : 31 projects pass
- pnpm build : 33 projects pass
Deferred bucket (b/c) follow-ups (not in this commit):
- Include tests/ in ai-utils + openai-base tsconfig.json (requires
fixing ~20 pre-existing test-file TS issues: EventType enum literals
vs string literals, possibly-undefined narrowing, mock typing).
- ai-fal peer-dep workspace:* → workspace:^ (pre-existing on main).
- defineModelMeta gaps (output.cached, NaN, integer context_window).
- transformNullsToUndefined runtime guard for non-plain objects.
Update MODEL_OPTIONS with current flagship models:
- OpenAI: GPT-5.2, 5.2 Pro, 5.1, 5/Mini/Nano, 4.1
- Anthropic: Claude Opus 4.7, 4.6, Sonnet 4.6/4.5, Haiku 4.5 (cleaner non-dated form matching the model-meta canonical names)
- OpenRouter: now spans openai/, anthropic/, google/, x-ai/, and meta-llama/ slugs so the OpenRouter adapter can demonstrate routing across multiple upstream providers from one API
- Default model is now OpenAI GPT-5.2 (was GPT-4o)
tombeckenham left a comment:
This is a good move, pulling common code into a base package. In future, it would be useful to reorganise in one PR, then make changes in another PR. It took quite a long time to review, as I was comparing the base to the original and there were changes as well as reorganisation. It was a good exercise to go through, though, as it gave me a chance to properly go through all of the code.
Please take a look through the suggestions. There are some changes you should make
```ts
const env =
  typeof globalThis !== 'undefined' && (globalThis as any).window?.env
    ? (globalThis as any).window.env
    : typeof process !== 'undefined'
```
Install `@types/node` as a dev dependency to avoid the lint error.
```ts
const { model, prompt, numberOfImages, size } = options

// Validate inputs
this.validatePrompt({ prompt, model })
```
These were inside the `try` block previously, and there was a `logger.request` call.
```ts
): void {
  if (numberOfImages === undefined) return

  if (numberOfImages < 1 || numberOfImages > 10) {
```
Why bother having a max here? Won't the adapters decide?
```ts
new OpenAITextAdapter(
  config,
  model,
) as unknown as ChatStreamCapable<OpenAISummarizeProviderOptions>,
```
This cast is unnecessary and should be removed.
```ts
async summarize(options: SummarizationOptions): Promise<SummarizationResult> {
  const systemPrompt = this.buildSummarizationPrompt(options)
```
Missing `logger.request` call.
```ts
    string,
    { index: number; name: string; started: boolean }
  >,
  options: TextOptions,
```
This should be `TextOptions<TProviderOptions>`:

```diff
- options: TextOptions,
+ options: TextOptions<TProviderOptions>,
```
```ts
 * Override this in subclasses to add provider-specific options.
 */
protected mapOptionsToRequest(
  options: TextOptions,
```
Should be `TextOptions<TProviderOptions>`:

```diff
- options: TextOptions,
+ options: TextOptions<TProviderOptions>,
```
```ts
  { index: number; name: string; started: boolean }
>,
protected override mapOptionsToRequest(
  options: TextOptions,
```
Should be `TextOptions<InternalTextProviderOptions>`:

```diff
- options: TextOptions,
+ options: TextOptions<InternalTextProviderOptions>,
```
```ts
 */
private mapTextOptionsToOpenAI(options: TextOptions) {
): Omit<OpenAI_SDK.Responses.ResponseCreateParams, 'stream'> {
  const modelOptions = options.modelOptions as
```
Once you fix the options type you'll no longer need to cast.
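The three suggestions above are the same fix: thread the provider-options type parameter through `TextOptions` so subclasses read `modelOptions` without an `as` cast. A minimal self-contained sketch with hypothetical names (`BaseTextAdapter`, `OpenAIProviderOptions` are illustrations, not the package's real types):

```typescript
// Sketch of why the generic parameter matters: once the base method
// takes TextOptions<TProviderOptions>, an override in a concrete
// adapter sees modelOptions fully typed and no cast is needed.
interface TextOptions<TProviderOptions = unknown> {
  prompt: string
  modelOptions?: TProviderOptions
}

interface OpenAIProviderOptions {
  reasoningEffort?: 'low' | 'medium' | 'high'
}

class BaseTextAdapter<TProviderOptions> {
  protected mapOptionsToRequest(
    options: TextOptions<TProviderOptions>,
  ): Record<string, unknown> {
    return { input: options.prompt }
  }
}

class OpenAIAdapter extends BaseTextAdapter<OpenAIProviderOptions> {
  protected override mapOptionsToRequest(
    options: TextOptions<OpenAIProviderOptions>,
  ): Record<string, unknown> {
    // modelOptions is typed as OpenAIProviderOptions here; no `as` cast.
    const effort = options.modelOptions?.reasoningEffort ?? 'medium'
    return { ...super.mapOptionsToRequest(options), reasoning: { effort } }
  }
}
```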
```diff
@@ -0,0 +1,919 @@
+import { describe, it, expect, vi, afterEach, beforeEach } from 'vitest'
```
There are no tests for any of the media adapters in the base package.
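One low-friction shape such a test could take, shown here without the vitest harness for brevity: stub the SDK client and assert the adapter's mapping of the response. Every name below (`FakeImageAdapter`, `generate`, the response shape) is illustrative, not the real `openai-base` API:

```typescript
// Illustrative media-adapter test: inject a stubbed SDK client and
// assert that the adapter maps the raw response into its result shape.
interface ImageResult {
  images: Array<{ url: string }>
}

type StubClient = {
  generate: (p: { prompt: string }) => Promise<{ data: Array<{ url: string }> }>
}

class FakeImageAdapter {
  constructor(private client: StubClient) {}

  async generateImages(prompt: string): Promise<ImageResult> {
    const res = await this.client.generate({ prompt })
    return { images: res.data.map((d) => ({ url: d.url })) }
  }
}
```

In a real vitest file the stub would be a `vi.fn()` and the assertions `expect(...)` calls; the structure is the same.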
@Alem — flagging a question about migration scope, specifically around openrouter. On this branch, only

For groq the migration looks mechanical (Chat Completions wire format; the OpenAI SDK works fine pointed at Groq's endpoint). openrouter is the interesting one, so I wanted to flag the design question before assuming it slots in.

**Wire format: identical. SDK shape: not.**

OpenRouter's HTTP API is OpenAI-compatible Chat Completions, so on the wire the base's SSE accumulator, partial-JSON, tool-call buffers, and RUN_ERROR taxonomy all apply unchanged. But the TypeScript SDK (
Grok gets away with a 60-line subclass because xAI's "OpenAI-compatible" endpoint accepts the OpenAI SDK verbatim —

**OpenRouter-specific surface**

These are real user-facing features, not incidental:
Considered and ruled out: dropping
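To make the "applies unchanged on the wire" claim concrete, here is a hedged sketch of the tool-call buffering that a Chat Completions base has to do: argument JSON arrives as string fragments keyed by tool-call index, so an accumulator concatenates per index until the stream ends. This illustrates the technique, not the package's actual code:

```typescript
// Sketch of Chat Completions streaming tool-call accumulation.
// Each SSE delta carries a fragment of the arguments string for a
// tool call identified by `index`; the buffers are joined and parsed
// once the stream completes.
interface ToolCallDelta {
  index: number
  id?: string
  function?: { name?: string; arguments?: string }
}

function accumulateToolCalls(deltas: Array<ToolCallDelta>) {
  const buffers = new Map<number, { id: string; name: string; args: string }>()
  for (const d of deltas) {
    const buf = buffers.get(d.index) ?? { id: '', name: '', args: '' }
    if (d.id) buf.id = d.id
    if (d.function?.name) buf.name = d.function.name
    if (d.function?.arguments) buf.args += d.function.arguments
    buffers.set(d.index, buf)
  }
  return [...buffers.values()].map((b) => ({ ...b, parsed: JSON.parse(b.args) }))
}
```

Because OpenRouter emits the same delta shape, this layer would carry over unchanged; the divergence is only in the SDK types above it.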
Summary
Extracts shared logic into two new packages to eliminate code duplication across all adapter packages:

- `@tanstack/ai-utils` — Zero-dependency, provider-agnostic utilities: `generateId`, `getApiKeyFromEnv`, `transformNullsToUndefined`, `ModelMeta` types + `defineModelMeta`. Used by all 9 adapter packages.
- `@tanstack/openai-base` — OpenAI-compatible adapter base classes and utilities: Chat Completions text adapter, Responses API text adapter, image/summarize/transcription/TTS/video adapters, schema converter (`makeStructuredOutputCompatible`), 15 tool converters, shared types. Used by `ai-openai` and `ai-grok` (full adapter inheritance) and `ai-groq`/`ai-openrouter`/`ai-ollama` (selective utility imports).
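As an example of what lands in `@tanstack/ai-utils`, here is a hedged sketch of what a helper like `transformNullsToUndefined` could look like (the real implementation may differ): SDK responses often use `null` where TypeScript models use optional fields, so the helper recursively rewrites `null` to `undefined`.

```typescript
// Sketch: recursively replace `null` with `undefined` in a response
// payload so it matches optional (`field?:`) TypeScript types.
function transformNullsToUndefined<T>(value: T): T {
  if (value === null) return undefined as T
  if (Array.isArray(value)) {
    return value.map((v) => transformNullsToUndefined(v)) as T
  }
  if (typeof value === 'object') {
    const out: Record<string, unknown> = {}
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = transformNullsToUndefined(v)
    }
    return out as T
  }
  return value
}
```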
@tanstack/ai-utils— Zero-dependency, provider-agnostic utilities:generateId,getApiKeyFromEnv,transformNullsToUndefined,ModelMetatypes +defineModelMeta. Used by all 9 adapter packages.@tanstack/openai-base— OpenAI-compatible adapter base classes and utilities: Chat Completions text adapter, Responses API text adapter, image/summarize/transcription/TTS/video adapters, schema converter (makeStructuredOutputCompatible), 15 tool converters, shared types. Used byai-openaiandai-grok(full adapter inheritance) andai-groq/ai-openrouter/ai-ollama(selective utility imports).Key results
- `ai-openai`: ~1,800 lines removed, extends `OpenAICompatibleResponsesTextAdapter`
- `ai-grok`: ~500 lines removed, extends `OpenAICompatibleChatCompletionsTextAdapter`
- `ai-groq`, `ai-openrouter`, `ai-ollama`: utility dedup via imports
- `ai-anthropic`, `ai-gemini`, `ai-fal`, `ai-elevenlabs`: `generateId`/`getApiKeyFromEnv` dedup via `ai-utils`

New package architecture
Test plan
- `pnpm build` — 28/28 projects pass
- `pnpm test:lib` — all unit tests pass across all packages
- `pnpm test:types` — all type checks pass
- `pnpm test:build` — publint validates all packages
- `pnpm format` — all files formatted
New Features
Refactor
Tests