
feat: generation hooks and streaming across all frameworks#327

Open
AlemTuzlak wants to merge 12 commits into main from feat/generation-hooks-and-streaming

Conversation


@AlemTuzlak AlemTuzlak commented Mar 3, 2026

Summary

  • Adds streamGenerationResult() and streamVideoGeneration() server-side helpers to @tanstack/ai for wrapping one-shot generation tasks as SSE-compatible StreamChunk iterables
  • Adds GenerationClient and VideoGenerationClient state machines to @tanstack/ai-client with support for both streaming (ConnectionAdapter) and direct (fetcher) transport modes
  • Adds framework hooks across React, Solid, Svelte, and Vue: useGenerateImage, useGenerateSpeech, useGenerateVideo, useTranscription, useSummarize (Svelte uses create* naming)
  • Adds InferGenerationOutput conditional type for type-safe onResult transforms without overloads
  • Adds a new "Generations" overview doc with Mermaid architecture diagram showing the server-to-client flow
  • Adds "Full-Stack Usage" sections to all 4 existing generation guide docs (image, speech, transcription, video)
  • Adds 5 demo pages + 5 server API routes to the React example app with sidebar navigation and index page quick links
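
The core idea behind `streamGenerationResult()` is wrapping a single awaited result in a started/result/finished chunk sequence that can be serialized as SSE. A minimal conceptual sketch of that pattern (the chunk names and shapes here are illustrative, not the actual `StreamChunk` types from `@tanstack/ai`):

```typescript
// Illustrative chunk shape; the real StreamChunk union in @tanstack/ai differs.
type Chunk =
  | { type: 'RUN_STARTED' }
  | { type: 'RESULT'; data: unknown }
  | { type: 'RUN_ERROR'; message: string }
  | { type: 'RUN_FINISHED' }

// Wrap a one-shot async task as an SSE-friendly chunk iterable.
async function* streamResult<T>(task: () => Promise<T>): AsyncGenerator<Chunk> {
  yield { type: 'RUN_STARTED' }
  try {
    const data = await task()
    yield { type: 'RESULT', data }
    yield { type: 'RUN_FINISHED' }
  } catch (err) {
    yield { type: 'RUN_ERROR', message: err instanceof Error ? err.message : String(err) }
  }
}
```

Each yielded chunk maps to one SSE event, so a one-shot task and a token-streaming task look identical to the client transport.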

Test plan

  • All 858+ unit tests pass across ai, ai-client, ai-react, ai-solid, ai-svelte, ai-vue
  • Type checks pass for all packages (pnpm test:types)
  • Full pnpm test suite passes (31 projects)
  • Build succeeds for all 23 projects
  • Manually verified image generation demo works end-to-end in React example app

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features
    • Full generation suite: image, text‑to‑speech, transcription, summarization, and video with streaming and direct modes, result transformation, job/status lifecycle, and playback/UI controls.
  • Documentation
    • Comprehensive guides on generations pattern, streaming vs direct modes, hook API, event lifecycle, and examples.
  • Examples
    • Interactive demo pages and updated app navigation showcasing all generation workflows.
  • Tests
    • Extensive unit and integration tests covering generation flows and edge cases.
  • Chores
    • Minor config and ignore-file adjustments.

…anscription, and summarize

Adds a complete client-side generation layer across all framework integrations
(React, Solid, Svelte, Vue) for one-shot generation tasks: image generation,
text-to-speech, audio transcription, text summarization, and video generation.

Core changes:
- `streamGenerationResult()` and `streamVideoGeneration()` server helpers in `@tanstack/ai`
- `GenerationClient` and `VideoGenerationClient` state machines in `@tanstack/ai-client`
- Framework hooks: `useGenerateImage`, `useGenerateSpeech`, `useGenerateVideo`,
  `useTranscription`, `useSummarize` (and Svelte `create*` equivalents)
- `InferGenerationOutput` type for type-safe `onResult` transforms
- New "Generations" overview doc with Mermaid architecture diagram
- Full-stack usage sections added to all 4 generation guide docs
- React example app with 5 demo pages and server API routes

coderabbitai bot commented Mar 3, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds full "generations" support: typed generation clients (image, speech, transcription, summarize, video) with streaming and fetch transports; streaming-capable core activities and wrappers; framework bindings (React, Solid, Vue, Svelte); example server routes/UI; docs; .claude config; and large test suites.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Config & Tooling**<br>`.claude/settings.json`, `.gitignore`, `knip.json`, `nx.json` | Adds Claude permissions file; minor `.gitignore` whitespace edit; ignores `.claude/worktrees/**`; increases Nx parallel from 5 to 15. |
| **Documentation**<br>`docs/guides/generations.md`, `docs/guides/image-generation.md`, `docs/guides/text-to-speech.md`, `docs/guides/transcription.md`, `docs/guides/video-generation.md`, `docs/guides/*` | New "generations" guide and expanded Full-Stack Usage sections across image/TTS/transcription/video docs; front-matter order adjustments and duplicated insertions. |
| **Client Library (types & classes)**<br>`packages/typescript/ai-client/src/generation-types.ts`, `packages/typescript/ai-client/src/generation-client.ts`, `packages/typescript/ai-client/src/video-generation-client.ts`, `packages/typescript/ai-client/src/index.ts` | Adds `GenerationClient` and `VideoGenerationClient`, new generation types/events, transports (`ConnectionAdapter` vs fetcher), lifecycle callbacks, and public exports. |
| **Core Activities & Streaming**<br>`packages/typescript/ai/src/activities/*`, `packages/typescript/ai/src/activities/stream-generation-result.ts` | Makes image/speech/transcription/summarize/video activities streaming-capable; adds `streamGenerationResult` wrapper, streaming video job/polling implementation, and related type changes. |
| **Core Tests**<br>`packages/typescript/ai-client/tests/*`, `packages/typescript/ai/tests/stream-generation.test.ts` | Adds extensive tests for `GenerationClient`, `VideoGenerationClient`, and streaming activity behaviors (success, error, progress, abort, edge cases). |
| **React bindings & examples**<br>`packages/typescript/ai-react/src/*`, `packages/typescript/ai-react/tests/*`, `examples/ts-react-chat/src/routes/generations.*.tsx`, `examples/ts-react-chat/src/components/Header.tsx`, `examples/ts-react-chat/src/routeTree.gen.ts`, `examples/ts-react-chat/src/routes/*.ts` | Adds `useGeneration` and `useGenerate*` hooks, typed options/returns, tests, example pages and SSE API routes, header nav updates, and route tree additions. |
| **Solid bindings & tests**<br>`packages/typescript/ai-solid/src/*`, `packages/typescript/ai-solid/tests/*` | Adds Solid equivalents (`useGeneration`, `useGenerateImage/Speech/Video/Transcription/Summarize`) with signal/accessor APIs and tests. |
| **Vue bindings & tests**<br>`packages/typescript/ai-vue/src/*`, `packages/typescript/ai-vue/tests/*` | Adds Vue composables and types for generation flows, including `useGenerateVideo` and tests; updates index exports. |
| **Svelte bindings & tests**<br>`packages/typescript/ai-svelte/src/*`, `packages/typescript/ai-svelte/tests/*` | Adds Svelte 5 factories (`createGeneration`, `createGenerate*`), typed options/returns, and tests. |
| **Example server functions & API routes**<br>`examples/ts-react-chat/src/lib/server-fns.ts`, `examples/ts-react-chat/src/routes/api.generate.*.ts`, `examples/ts-react-chat/src/routes/api.summarize.ts`, `examples/ts-react-chat/src/routes/api.transcribe.ts` | Adds server functions with input validation and SSE API POST endpoints that invoke adapters and stream generation results; the video server function polls job status. |
| **Examples: UI & route wiring**<br>`examples/ts-react-chat/src/routes/generations.*.tsx`, `examples/ts-react-chat/src/components/Header.tsx`, `examples/ts-react-chat/src/routeTree.gen.ts`, `examples/ts-react-chat/src/routes/index.tsx` | Adds generation demo pages (image, speech, transcription, summarize, video), integrates nav links and welcome dashboard, and extends the generated route tree. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as UI Component
    participant Hook as useGeneration Hook
    participant GenClient as GenerationClient
    participant Connection as ConnectionAdapter
    participant Server as Server/API
    UI->>Hook: generate(input)
    Hook->>GenClient: generate(input)
    GenClient->>GenClient: setStatus("generating"), setIsLoading(true)
    alt streaming mode
        GenClient->>Connection: connect(input)
        Connection->>Server: open SSE
        Server-->>Connection: RUN_STARTED
        loop stream chunks
            Connection-->>GenClient: StreamChunk (onChunk)
            GenClient->>GenClient: onProgress / onChunk callbacks
            alt result chunk
                Connection-->>GenClient: CUSTOM (generation:result)
                GenClient->>GenClient: setResult(...)
            end
        end
        Server-->>Connection: RUN_FINISHED
    else fetcher mode
        GenClient->>Server: fetcher(input)
        Server-->>GenClient: Result
        GenClient->>GenClient: setResult(...)
    end
    GenClient->>GenClient: setIsLoading(false), setStatus("success")
    GenClient-->>Hook: update state
    Hook-->>UI: new result / status
```
```mermaid
sequenceDiagram
    participant UI as Video UI
    participant Hook as useGenerateVideo Hook
    participant VideoClient as VideoGenerationClient
    participant Connection as ConnectionAdapter
    participant Server as Video API
    UI->>Hook: generate(prompt)
    Hook->>VideoClient: generate(input)
    VideoClient->>VideoClient: setIsLoading(true)
    VideoClient->>Connection: connect(input)
    Connection->>Server: open SSE / create job
    Server-->>Connection: video:job:created (jobId)
    Connection-->>VideoClient: onJobCreated(jobId)
    loop status polling/events
        Server-->>Connection: video:status
        Connection-->>VideoClient: onStatusUpdate(status)
        VideoClient->>VideoClient: setVideoStatus(...)
    end
    Server-->>Connection: generation:result (url)
    Connection-->>VideoClient: RESULT -> setResult(url)
    Server-->>Connection: RUN_FINISHED
    VideoClient->>VideoClient: setIsLoading(false), setStatus("success")
    VideoClient-->>Hook: result, jobId, videoStatus
    Hook-->>UI: update UI (progress, video)
```
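
The status-polling loop in the video flow above can be sketched as follows. The `VideoStatus` shape and `getStatus` signature here are illustrative assumptions, not the PR's actual types:

```typescript
// Illustrative status shape; the real VideoStatusResult type may differ.
interface VideoStatus {
  status: 'pending' | 'processing' | 'completed' | 'failed'
  progress?: number
  error?: string
}

// Poll a status endpoint until the job reaches a terminal state,
// emitting every intermediate status to the onUpdate callback.
async function pollUntilDone(
  getStatus: () => Promise<VideoStatus>,
  onUpdate: (s: VideoStatus) => void,
  delayMs = 1000,
): Promise<VideoStatus> {
  for (;;) {
    const s = await getStatus()
    onUpdate(s)
    if (s.status === 'completed' || s.status === 'failed') return s
    await new Promise((resolve) => setTimeout(resolve, delayMs))
  }
}
```

In the streaming server helper, each `onUpdate` call would be forwarded to the client as a `video:status` chunk before the final `generation:result`.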

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 I hopped through streams of chunk and light,

Prompts turned to pixels, speech, and sight,
Jobs polled and finished, hooks danced in line—
Five frameworks hum; the generation's fine! 🥕✨



nx-cloud bot commented Mar 3, 2026

View your CI Pipeline Execution ↗ for commit aab2d43

| Command | Status | Duration | Result |
| --- | --- | --- | --- |
| `nx affected --targets=test:sherif,test:knip,tes...` | ✅ Succeeded | 8s | View ↗ |
| `nx run-many --targets=build --exclude=examples/**` | ✅ Succeeded | 2s | View ↗ |

☁️ Nx Cloud last updated this comment at 2026-03-09 15:17:50 UTC


pkg-pr-new bot commented Mar 3, 2026

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@327

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@327

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@327

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@327

@tanstack/ai-fal

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-fal@327

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@327

@tanstack/ai-grok

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-grok@327

@tanstack/ai-groq

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-groq@327

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@327

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@327

@tanstack/ai-openrouter

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openrouter@327

@tanstack/ai-preact

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-preact@327

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@327

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@327

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@327

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@327

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@327

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@327

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@327

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/preact-ai-devtools@327

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@327

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@327

commit: 90212da


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 18

🧹 Nitpick comments (6)
packages/typescript/ai/tests/stream-generation.test.ts (2)

319-329: Remove unused pollCount variable.

The variable pollCount is declared and incremented but never read. This appears to be leftover from development.

🧹 Suggested fix
   it('should propagate error message from failed status', async () => {
     const adapter = createMockVideoAdapter()
-    let pollCount = 0
     adapter.getVideoStatus = vi.fn(async () => {
-      pollCount++
       return {
         jobId: 'job-123',
         status: 'failed' as const,
         error: 'Content policy violation',
       }
     })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 319 -
329, Remove the unused pollCount variable and its increment in the test "should
propagate error message from failed status": delete the declaration "let
pollCount = 0" and the "pollCount++" inside the mocked adapter.getVideoStatus
implementation (the mock created via createMockVideoAdapter and vi.fn), as
pollCount is never read.

1-6: Sort imports alphabetically per ESLint configuration.

🔧 Suggested fix
-import { describe, it, expect, vi } from 'vitest'
+import { describe, expect, it, vi } from 'vitest'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 1 - 6,
Reorder the import statements and their specifiers to be alphabetically sorted
per ESLint: sort the three import lines by their module specifier (e.g.,
'../src/stream-generation', '../src/types', 'vitest' in alphabetical order) and
also alphabetize the named imports inside each import (ensure
streamGenerationResult comes before streamVideoGeneration and StreamChunk before
VideoStatusResult). Update the import block in the test file to reflect this
ordering so it satisfies the lint rule.
examples/ts-react-chat/src/routes/api.generate.image.ts (1)

2-6: Sort imports alphabetically per ESLint configuration.

The static analysis flags that generateImage should come after streamGenerationResult alphabetically.

🔧 Suggested fix
 import {
-  streamGenerationResult,
   generateImage,
+  streamGenerationResult,
   toServerSentEventsResponse,
 } from '@tanstack/ai'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/api.generate.image.ts` around lines 2 - 6,
Reorder the named imports in the import statement so they are alphabetized:
place generateImage after streamGenerationResult and before
toServerSentEventsResponse (i.e., import streamGenerationResult, generateImage,
toServerSentEventsResponse from '@tanstack/ai'); update the import line that
currently lists streamGenerationResult, generateImage,
toServerSentEventsResponse to match ESLint's alphabetical ordering.
examples/ts-react-chat/src/routes/generations.image.tsx (1)

91-101: Consider adding unique keys if image IDs are available.

Using array index as key works here since images are only appended and not reordered. However, if img.id or similar unique identifier is available from the API response, using that would be more robust.

💡 Optional improvement if images have unique IDs
           {result.images.map((img, i) => (
             <img
-              key={i}
+              key={img.id ?? i}
               src={img.url || `data:image/png;base64,${img.b64Json}`}
               alt={img.revisedPrompt || prompt}
               className="w-full rounded-lg border border-gray-700"
             />
           ))}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.image.tsx` around lines 91 -
101, The map uses the array index as the React key which is fragile; update the
key in the images mapping to use a unique identifier from the image object when
available (e.g., img.id or img.uuid) instead of i; locate the JSX that renders
result.images inside the component (the <img ... key={i} ... /> line) and
replace the index-based key with a conditional unique key such as key={img.id ??
img.uuid ?? i} so it falls back to index only if no unique id exists.
packages/typescript/ai-react/tests/use-generation.test.ts (1)

185-201: Avoid dangling async work in stop/cleanup tests

These tests start generate() but don’t await its completion after resolving the deferred promise. That can leave async work running past test boundaries and introduce flaky behavior.

✅ Suggested test hardening
-      act(() => {
-        result.current.generate({ prompt: 'test' })
-      })
+      let generatePromise!: Promise<void>
+      act(() => {
+        generatePromise = result.current.generate({ prompt: 'test' })
+      })
@@
-      resolvePromise!({ id: '1' })
+      resolvePromise!({ id: '1' })
+      await act(async () => {
+        await generatePromise
+      })

Also applies to: 240-254

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-react/tests/use-generation.test.ts` around lines 185 -
201, The test starts an async generation with result.current.generate({ prompt:
'test' }) and resolves its deferred promise via resolvePromise({ id: '1' }) but
does not wait for the generate() flow to finish, leaving async work dangling;
update the test to await the generation completing after calling resolvePromise
(for example, await waitFor(() => expect(result.current.isLoading).toBe(false))
or await act(async () => resolvePromise!({ id: '1' })) so that the generate/stop
lifecycle (generate(), stop(), isLoading, status) fully settles before the test
ends; ensure you apply the same change to the other occurrence around lines
240-254.
examples/ts-react-chat/src/routes/generations.video.tsx (1)

95-100: Clamp progress before applying it to CSS width

Line 99 applies videoStatus.progress directly into width. If the server emits out-of-range values, the bar can overflow/underflow visually. Clamp to [0, 100] before rendering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.video.tsx` around lines 95 -
100, The progress value from videoStatus.progress is used directly in the style
width which can overflow if the server sends values <0 or >100; clamp it to the
[0,100] range before rendering (e.g., compute const clamped = Math.min(100,
Math.max(0, videoStatus.progress ?? 0)) or create a clamp helper) and use that
clamped value in the style template string for the inner progress div so the CSS
width is always valid.
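
The clamp suggested above is small enough to show in full; a sketch of the helper (the `clampProgress` name is ours, not from the PR):

```typescript
// Clamp a possibly missing or out-of-range progress value to [0, 100]
// so it is always safe to interpolate into a CSS width.
function clampProgress(value: number | undefined): number {
  return Math.min(100, Math.max(0, value ?? 0))
}
```

The inner progress div would then use `` `${clampProgress(videoStatus.progress)}%` `` for its width.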
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/settings.json:
- Around line 7-9: The .claude/settings.json contains machine-specific allowlist
entries and a high-risk execution permission; remove entries such as
"Read(//c/Users/AlemTuzlak/.claude/**)",
"Read(//c/Users/AlemTuzlak/.agents/skills/frontend-design//**)", the absolute
repo worktree grep pattern, and any "Bash(pnpm install)" rule from the committed
file, and instead add those developer-specific paths/commands to
.claude/settings.local.json (which is in .gitignore); keep the shared
.claude/settings.json minimal and portable (only repo-relative, non-identifying
Read/Query rules) and ensure no broad Bash execution rules remain in the
committed file so it adheres to least-privilege.

In `@docs/guides/generations.md`:
- Around line 123-137: Add explicit language identifiers to the two fenced code
blocks that show the event sequences (the block containing "RUN_STARTED → ...
RUN_FINISHED" and the block containing "RUN_STARTED → ... RUN_ERROR") by
changing the opening triple-backtick to include "text" (e.g., ```text) for both
blocks so markdownlint MD040 is satisfied; update both occurrences in the
docs/guides/generations.md content where those event sequence blocks appear.

In `@docs/guides/transcription.md`:
- Around line 308-309: The example destructures the JSON root (const { audio,
language, model } = await request.json()) but the actual route handler in
api.transcribe.ts expects the payload under body.data; update the snippet to
extract these values from request.json().body.data (or adjust to read
request.json().body then access .data) so the variables audio, language, and
model come from the same shape the route implementation expects, and add a
simple existence check for body.data before using it.

In `@examples/ts-react-chat/src/routes/api.generate.video.ts`:
- Around line 9-15: Validate the incoming JSON before using request.json()
results: define a Zod schema for the expected payload (shape containing data: {
prompt: string, size?: string, duration?: number, model?: string }), use
schema.parse or safeParse to validate the parsed body, and if validation fails
return a 400 response; then destructure validated.data and pass those validated
values into openaiVideo(...) and streamVideoGeneration(...). Ensure you
reference the existing symbols (request.json(), body.data,
streamVideoGeneration, openaiVideo, and the fields model/prompt/size/duration)
so the adapter only receives validated inputs.

In `@examples/ts-react-chat/src/routes/api.transcribe.ts`:
- Around line 2-6: The import specifiers from '@tanstack/ai' are not
alphabetically ordered; reorder them to satisfy the sort-imports rule by listing
the named imports in alphabetical order: generateTranscription,
streamGenerationResult, toServerSentEventsResponse (i.e., change the import line
to import { generateTranscription, streamGenerationResult,
toServerSentEventsResponse } from '@tanstack/ai'). Ensure there are no trailing
commas or duplicate imports after the change.

In `@examples/ts-react-chat/src/routes/generations.speech.tsx`:
- Around line 43-56: The labels are not associated with their form controls
which breaks accessibility; add explicit associations by giving the textarea
used in setText a unique id (e.g., textToSpeakId) and set the corresponding Text
label's htmlFor to that id, and likewise assign ids to the Voice control(s)
rendered in the "Voice" group (or the containing select/radio input) and set the
Voice label's htmlFor to match; update references around the textarea that uses
setText and isLoading and the Voice control rendering code so screen readers and
keyboard navigation correctly map labels to controls.
- Around line 14-23: The onResult handler creates blob URLs but never revokes
them, causing memory leaks; update the logic in onResult and the
reset()/component unmount path to call URL.revokeObjectURL on any previously
created audio URL before assigning a new one, store the current URL in the
component state so reset() can revoke it, and add a useEffect cleanup that
revokes the stored URL on unmount; reference the onResult function, the
generate() flow that triggers it, and the reset() method to ensure all created
URLs are revoked.

In `@examples/ts-react-chat/src/routes/generations.transcription.tsx`:
- Around line 17-21: The current conversion uses file.arrayBuffer(), then builds
a base64 string via new Uint8Array(buffer).reduce(...) and assigns dataUrl —
this causes O(n²) string concatenation; replace that whole path with
FileReader.readAsDataURL(file) (awaiting a Promise around the onload/onerror
events) to obtain the dataUrl directly and remove buffer/base64/reduce usage,
then use the resulting dataUrl wherever dataUrl is referenced.

In `@examples/ts-react-chat/src/routes/generations.video.tsx`:
- Around line 46-54: Add an accessible label association: give the textarea an
explicit id (e.g., id="prompt") and update the <label> to use htmlFor="prompt"
so screen readers and form controls are correctly linked; keep the existing
value={prompt}, onChange={(e) => setPrompt(e.target.value)},
disabled={isLoading} and other props on the textarea (adjust the id string as
needed to avoid collisions).

In `@packages/typescript/ai-client/src/video-generation-client.ts`:
- Around line 155-167: The fetcher never gets the AbortSignal so long-running
fetches can't be cancelled; update the fetcher API and call site: change the
fetcher signature (where declared/typed) to accept an optional AbortSignal (e.g.
fetcher(input, signal?: AbortSignal)) and pass the signal from
generateWithFetcher into the call (replace this.fetcher(input) with
this.fetcher(input, signal)), then update any affected types/usages of fetcher
and handle errors/abort behavior in generateWithFetcher (check signal.aborted
before setResult and handle thrown AbortError accordingly).

In `@packages/typescript/ai-client/tests/generation-client.test.ts`:
- Line 1: Reorder the named imports from 'vitest' to satisfy the sort-imports
rule: change the import line in generation-client.test.ts (currently "import {
describe, it, expect, vi } from 'vitest'") to alphabetically ordered members
"import { describe, expect, it, vi } from 'vitest'".

In `@packages/typescript/ai-client/tests/video-generation-client.test.ts`:
- Line 1: The named imports in the top-level import statement (describe, it,
expect, vi) are not alphabetized; update the import from "vitest" so the members
are in sorted order (describe, expect, it, vi) to satisfy the sort-imports lint
rule and re-run linting/tests to confirm the fix.

In `@packages/typescript/ai-react/src/use-generate-video.ts`:
- Around line 126-165: The useMemo inside useGenerateVideo currently throws
during render and only depends on clientId, so it won't recreate the
VideoGenerationClient when options.connection or options.fetcher change;
instead, stop throwing in the memo and either (A) create a safe
no-op/placeholder client inside useMemo when neither connection nor fetcher are
present, or (B) move the validation and throwing into the generate() method so
render never throws; also update the memo dependency array to include the
transport inputs used to build the client (e.g., reference
optionsRef.current.connection and optionsRef.current.fetcher or include
options.connection/options.fetcher) so a new VideoGenerationClient<TOutput> is
created when the transport changes (identify VideoGenerationClient, useMemo in
useGenerateVideo, optionsRef, and generate when making edits).

In `@packages/typescript/ai-react/src/use-generation.ts`:
- Around line 50-53: The UseGenerationReturn interface currently widens
generate's input to Record<string, any>, losing compile-time safety; update the
interface to accept a TInput generic (e.g., UseGenerationReturn<TOutput,
TInput>) and change the generate signature to generate: (input: TInput) =>
Promise<void> so it matches the internal generate implementation (the generate
function used in the hook at line ~156) and restores correct typing for wrappers
like useSummarize/useGenerateImage without casts.

In `@packages/typescript/ai-solid/src/use-generate-video.ts`:
- Around line 133-168: The createMemo call in useGenerateVideo incorrectly
passes a dependency array ([clientId]) as the second argument; remove that
second argument so the call is just createMemo(() => { ... }) and let Solid
track dependencies by reading clientId and options inside the memo; ensure the
function that constructs VideoGenerationClient (references: createMemo,
clientId, options, VideoGenerationClient) remains unchanged otherwise.

In `@packages/typescript/ai-solid/src/use-generation.ts`:
- Around line 111-142: The createMemo call for the client variable incorrectly
passes a dependency array as a second argument; Solid's createMemo tracks
dependencies automatically and accepts an optional initial value (not a deps
array). Remove the second argument ([clientId]) from the createMemo invocation
so it only receives the factory function that builds client using clientOptions
and branches for options.connection/options.fetcher (leaving the
GenerationClient construction and error throw unchanged).

In `@packages/typescript/ai-svelte/tests/create-generation.test.ts`:
- Around line 304-308: The speech test fixtures use mockResult objects that
don't include the required TTSResult.id property; update each test fixture (the
mockResult declarations around create-generation.test.ts lines where mockResult
is defined and the other occurrences at the noted ranges) to include a valid id
string (e.g., id: 'mock-tts-id') so the mocked payload matches the full
TTSResult shape used by functions under test (ensure all three
occurrences—initial mockResult and the other two fixtures—are updated).

In `@packages/typescript/ai-vue/src/use-generation.ts`:
- Around line 51-54: Update the UseGenerationReturn interface to preserve the
TInput generic so callers keep compile-time safety: add a TInput type parameter
to UseGenerationReturn and change the generate signature from generate: (input:
Record<string, any>) => Promise<void> to generate: (input: TInput) =>
Promise<void>; then adjust the useGeneration function's return type to
UseGenerationReturn<TInput, TOutput, TParams> (or matching generic arity) and
remove the unsafe cast at the site where generate is returned so the real TInput
flows through without widening to Record<string, any>.


ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 228e0e3 and 1fc91d7.

📒 Files selected for processing (65)
  • .claude/settings.json
  • .gitignore
  • docs/guides/generations.md
  • docs/guides/image-generation.md
  • docs/guides/migration.md
  • docs/guides/text-to-speech.md
  • docs/guides/transcription.md
  • docs/guides/tree-shaking.md
  • docs/guides/video-generation.md
  • examples/ts-react-chat/src/components/Header.tsx
  • examples/ts-react-chat/src/routeTree.gen.ts
  • examples/ts-react-chat/src/routes/api.generate.image.ts
  • examples/ts-react-chat/src/routes/api.generate.speech.ts
  • examples/ts-react-chat/src/routes/api.generate.video.ts
  • examples/ts-react-chat/src/routes/api.summarize.ts
  • examples/ts-react-chat/src/routes/api.transcribe.ts
  • examples/ts-react-chat/src/routes/generations.image.tsx
  • examples/ts-react-chat/src/routes/generations.speech.tsx
  • examples/ts-react-chat/src/routes/generations.summarize.tsx
  • examples/ts-react-chat/src/routes/generations.transcription.tsx
  • examples/ts-react-chat/src/routes/generations.video.tsx
  • examples/ts-react-chat/src/routes/index.tsx
  • knip.json
  • nx.json
  • packages/typescript/ai-client/src/generation-client.ts
  • packages/typescript/ai-client/src/generation-types.ts
  • packages/typescript/ai-client/src/index.ts
  • packages/typescript/ai-client/src/video-generation-client.ts
  • packages/typescript/ai-client/tests/generation-client.test.ts
  • packages/typescript/ai-client/tests/video-generation-client.test.ts
  • packages/typescript/ai-react/src/index.ts
  • packages/typescript/ai-react/src/use-generate-image.ts
  • packages/typescript/ai-react/src/use-generate-speech.ts
  • packages/typescript/ai-react/src/use-generate-video.ts
  • packages/typescript/ai-react/src/use-generation.ts
  • packages/typescript/ai-react/src/use-summarize.ts
  • packages/typescript/ai-react/src/use-transcription.ts
  • packages/typescript/ai-react/tests/use-generation.test.ts
  • packages/typescript/ai-solid/src/index.ts
  • packages/typescript/ai-solid/src/use-generate-image.ts
  • packages/typescript/ai-solid/src/use-generate-speech.ts
  • packages/typescript/ai-solid/src/use-generate-video.ts
  • packages/typescript/ai-solid/src/use-generation.ts
  • packages/typescript/ai-solid/src/use-summarize.ts
  • packages/typescript/ai-solid/src/use-transcription.ts
  • packages/typescript/ai-solid/tests/use-generation.test.ts
  • packages/typescript/ai-svelte/src/create-generate-image.svelte.ts
  • packages/typescript/ai-svelte/src/create-generate-speech.svelte.ts
  • packages/typescript/ai-svelte/src/create-generate-video.svelte.ts
  • packages/typescript/ai-svelte/src/create-generation.svelte.ts
  • packages/typescript/ai-svelte/src/create-summarize.svelte.ts
  • packages/typescript/ai-svelte/src/create-transcription.svelte.ts
  • packages/typescript/ai-svelte/src/index.ts
  • packages/typescript/ai-svelte/tests/create-generation.test.ts
  • packages/typescript/ai-vue/src/index.ts
  • packages/typescript/ai-vue/src/use-generate-image.ts
  • packages/typescript/ai-vue/src/use-generate-speech.ts
  • packages/typescript/ai-vue/src/use-generate-video.ts
  • packages/typescript/ai-vue/src/use-generation.ts
  • packages/typescript/ai-vue/src/use-summarize.ts
  • packages/typescript/ai-vue/src/use-transcription.ts
  • packages/typescript/ai-vue/tests/use-generation.test.ts
  • packages/typescript/ai/src/index.ts
  • packages/typescript/ai/src/stream-generation.ts
  • packages/typescript/ai/tests/stream-generation.test.ts

Comment on lines +308 to +309
```ts
const { audio, language, model } = await request.json()
```


⚠️ Potential issue | 🟠 Major

Server snippet payload shape doesn’t match route implementation

Line 308 destructures the JSON root directly, but the route in examples/ts-react-chat/src/routes/api.transcribe.ts expects values under body.data. The sample will fail as written against the documented route pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/transcription.md` around lines 308 - 309, The example
destructures the JSON root (const { audio, language, model } = await
request.json()) but the actual route handler in api.transcribe.ts expects the
payload under body.data; update the snippet to extract these values from
request.json().body.data (or adjust to read request.json().body then access
.data) so the variables audio, language, and model come from the same shape the
route implementation expects, and add a simple existence check for body.data
before using it.

Comment on lines +111 to +142
```tsx
const client = createMemo(() => {
  const clientOptions: GenerationClientOptions<TInput, TResult, TOutput> = {
    id: clientId,
    body: options.body,
    onResult: (r: TResult) => options.onResult?.(r),
    onError: (e: Error) => options.onError?.(e),
    onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
    onChunk: (c: StreamChunk) => options.onChunk?.(c),
    onResultChange: setResult,
    onLoadingChange: setIsLoading,
    onErrorChange: setError,
    onStatusChange: setStatus,
  }

  if (options.connection) {
    return new GenerationClient<TInput, TResult, TOutput>({
      ...clientOptions,
      connection: options.connection,
    })
  }

  if (options.fetcher) {
    return new GenerationClient<TInput, TResult, TOutput>({
      ...clientOptions,
      fetcher: options.fetcher,
    })
  }

  throw new Error(
    'useGeneration requires either a connection or fetcher option',
  )
}, [clientId])
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check current createMemo usage in the codebase
rg -n "createMemo\(" --type ts packages/typescript/ai-solid/src/ -A 1 | head -30
```

Repository: TanStack/ai

Length of output: 625


🌐 Web query:

Solid.js createMemo API signature documentation

💡 Result:

Solid’s official docs list createMemo with this TypeScript signature:

function createMemo<T>(
  fn: (v: T) => T,
  value?: T,
  options?: { equals?: false | ((prev: T, next: T) => boolean); name?: string }
): () => T;
  • fn is called to compute the value and receives the previous memo value as its argument.
  • Returns a read-only accessor () => T. [1]

Docs page (official):

https://docs.solidjs.com/reference/basic-reactivity/create-memo

[1]

Sources: [1] Solid Docs – “createMemo” API reference.


Remove the unnecessary second argument from createMemo.

Solid.js's createMemo does not accept a dependency array. The second argument, when provided, should be an initial value for the memo (of the same type as the computed value). Dependencies are automatically tracked. The [clientId] array here is incorrect syntax for Solid.js.

🔧 Proposed fix
```diff
   const client = createMemo(() => {
     const clientOptions: GenerationClientOptions<TInput, TResult, TOutput> = {
       id: clientId,
       body: options.body,
       onResult: (r: TResult) => options.onResult?.(r),
       onError: (e: Error) => options.onError?.(e),
       onProgress: (p: number, m?: string) => options.onProgress?.(p, m),
       onChunk: (c: StreamChunk) => options.onChunk?.(c),
       onResultChange: setResult,
       onLoadingChange: setIsLoading,
       onErrorChange: setError,
       onStatusChange: setStatus,
     }

     if (options.connection) {
       return new GenerationClient<TInput, TResult, TOutput>({
         ...clientOptions,
         connection: options.connection,
       })
     }

     if (options.fetcher) {
       return new GenerationClient<TInput, TResult, TOutput>({
         ...clientOptions,
         fetcher: options.fetcher,
       })
     }

     throw new Error(
       'useGeneration requires either a connection or fetcher option',
     )
-  }, [clientId])
+  })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-solid/src/use-generation.ts` around lines 111 - 142,
The createMemo call for the client variable incorrectly passes a dependency
array as a second argument; Solid's createMemo tracks dependencies automatically
and accepts an optional initial value (not a deps array). Remove the second
argument ([clientId]) from the createMemo invocation so it only receives the
factory function that builds client using clientOptions and branches for
options.connection/options.fetcher (leaving the GenerationClient construction
and error throw unchanged).

Move streaming logic from standalone `streamGenerationResult()` and
`streamVideoGeneration()` utilities into the activity functions behind
a `stream: true/false` flag. When `stream: true`, each function returns
`AsyncIterable<StreamChunk>` instead of `Promise<Result>`, with
TypeScript narrowing the return type via conditional types.

- generateImage, generateSpeech, generateTranscription, summarize:
  use shared internal `streamGenerationResult` helper
- generateVideo: inline streaming with job creation + polling
- Delete `stream-generation.ts` and remove public exports
- Update all examples, docs, and tests
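A rough sketch of the conditional-return-type pattern this commit describes. The `StreamChunk`/`ImageResult` shapes and option names are simplified assumptions, not the library's real signatures:

```typescript
type StreamChunk = { type: string }
type ImageResult = { images: Array<string> }

// The return type narrows based on the `stream` flag.
type GenerateImageReturn<TStream extends boolean> = TStream extends true
  ? AsyncIterable<StreamChunk>
  : Promise<ImageResult>

function generateImage<TStream extends boolean = false>(options: {
  prompt: string
  stream?: TStream
}): GenerateImageReturn<TStream> {
  if (options.stream) {
    async function* chunks() {
      yield { type: 'RUN_START' }
      yield { type: 'RUN_END' }
    }
    // Casts stay inside the implementation; callers see the narrowed type.
    return chunks() as unknown as GenerateImageReturn<TStream>
  }
  return Promise.resolve({ images: [] }) as unknown as GenerateImageReturn<TStream>
}
```

With `stream: true` the caller gets an `AsyncIterable<StreamChunk>` to `for await` over; without it, a plain `Promise<ImageResult>`.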
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

♻️ Duplicate comments (1)
docs/guides/transcription.md (1)

306-316: ⚠️ Potential issue | 🟠 Major

Server example is still reading the wrong request shape.

useTranscription(...connection) and the actual example route in examples/ts-react-chat/src/routes/api.transcribe.ts read the payload from body.data at Lines 9-10. Destructuring the JSON root here will fail against the documented client flow; please update the snippet and guard body.data before using it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/transcription.md` around lines 306 - 316, The POST handler is
destructuring the JSON root but the client sends payload under body.data; update
the POST: async ({ request }) => handler to first await request.json() into a
variable (e.g., body), verify body && body.data exists, then read audio,
language, model from body.data before calling generateTranscription (which uses
openaiTranscription(...)) and returning toServerSentEventsResponse(stream); add
a defensive fallback (e.g., return 400 or default values) when body.data is
missing to avoid runtime errors.
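The `body.data` guard described above can be sketched as a small parsing helper. The field names (`audio`, `language`, `model`) follow the comment; the exact envelope shape is an assumption:

```typescript
type TranscribePayload = { audio: string; language?: string; model?: string }

// Safely pull the payload out of the { data: ... } envelope,
// returning null so the route can respond with a 400 instead of throwing.
function extractTranscribeData(body: unknown): TranscribePayload | null {
  if (typeof body !== 'object' || body === null) return null
  const data = (body as { data?: unknown }).data
  if (typeof data !== 'object' || data === null) return null
  const { audio } = data as Partial<TranscribePayload>
  if (typeof audio !== 'string') return null
  return data as TranscribePayload
}
```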
🧹 Nitpick comments (5)
packages/typescript/ai/tests/stream-generation.test.ts (1)

1-8: ESLint flags unsorted imports.

The static analysis indicates that import members should be sorted alphabetically. Consider moving expect before it in line 1, and generateSpeech and generateTranscription before generateVideo in lines 3-6.

🔧 Suggested fix
```diff
-import { describe, it, expect, vi } from 'vitest'
+import { describe, expect, it, vi } from 'vitest'
 import {
   generateImage,
+  generateSpeech,
+  generateTranscription,
   generateVideo,
-  generateSpeech,
-  generateTranscription,
 } from '../src/index'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 1 - 8,
Sort imported identifiers alphabetically to satisfy ESLint: in the first import
reorder members to "describe, expect, it, vi" and in the second import reorder
to "generateImage, generateSpeech, generateTranscription, generateVideo";
similarly ensure the type import lists "StreamChunk, VideoStatusResult" are
alphabetized if required. Update the import lines that reference these symbols
(describe, it, expect, vi; generateImage, generateSpeech, generateTranscription,
generateVideo; StreamChunk, VideoStatusResult) so the named imports are in
alphabetical order.
packages/typescript/ai/src/activities/summarize/index.ts (1)

243-244: Route the fallback through runSummarize.

Calling adapter.summarize(...) directly here duplicates the non-stream path and skips the summarize:request:* events emitted in runSummarize. Reusing the helper keeps the streaming fallback aligned with the normal execution path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/summarize/index.ts` around lines 243 -
244, The fallback currently calls adapter.summarize(summarizeOptions) directly,
which bypasses the normal event flow; change the fallback to call
runSummarize(...) so the non-stream path reuses the same helper and emits the
summarize:request:* events. Specifically, replace the
adapter.summarize(summarizeOptions) invocation inside the streamGenerationResult
fallback with a call to runSummarize(summarizeOptions) (using the same
summarizeOptions and context used elsewhere) so streamGenerationResult(() =>
runSummarize(summarizeOptions)) is used instead of calling adapter.summarize
directly.
packages/typescript/ai/src/activities/generateSpeech/index.ts (1)

154-172: Consider using async/await for consistency.

The runGenerateSpeech function uses .then() chaining (lines 154-171) while runGenerateTranscription uses await directly. Consider refactoring to async/await for consistency across activities:

♻️ Suggested refactor
```diff
 async function runGenerateSpeech<TAdapter extends TTSAdapter<string, object>>(
   options: TTSActivityOptions<TAdapter, boolean>,
 ): Promise<TTSResult> {
   const { adapter, stream: _stream, ...rest } = options
   const model = adapter.model
   const requestId = createId('speech')
   const startTime = Date.now()

   aiEventClient.emit('speech:request:started', {
     requestId,
     provider: adapter.name,
     model,
     text: rest.text,
     voice: rest.voice,
     format: rest.format,
     speed: rest.speed,
     modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
     timestamp: startTime,
   })

-  return adapter.generateSpeech({ ...rest, model }).then((result) => {
-    const duration = Date.now() - startTime
-
-    aiEventClient.emit('speech:request:completed', {
-      requestId,
-      provider: adapter.name,
-      model,
-      audio: result.audio,
-      format: result.format,
-      audioDuration: result.duration,
-      contentType: result.contentType,
-      duration,
-      modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
-      timestamp: Date.now(),
-    })
-
-    return result
-  })
+  const result = await adapter.generateSpeech({ ...rest, model })
+  const duration = Date.now() - startTime
+
+  aiEventClient.emit('speech:request:completed', {
+    requestId,
+    provider: adapter.name,
+    model,
+    audio: result.audio,
+    format: result.format,
+    audioDuration: result.duration,
+    contentType: result.contentType,
+    duration,
+    modelOptions: rest.modelOptions as Record<string, unknown> | undefined,
+    timestamp: Date.now(),
+  })
+
+  return result
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/generateSpeech/index.ts` around lines
154 - 172, Refactor runGenerateSpeech to use async/await instead of .then():
make the function async, await the call to adapter.generateSpeech(...) (the call
that currently uses .then()), assign the result to a variable, compute duration
using startTime, then call aiEventClient.emit('speech:request:completed', {...})
with the same fields (requestId, provider via adapter.name, model, result.audio,
result.format, result.duration, result.contentType, duration, modelOptions from
rest.modelOptions, timestamp) and finally return the result; keep the same
variable names (adapter.generateSpeech, result, startTime, aiEventClient.emit)
to minimize changes.
packages/typescript/ai/src/activities/generateVideo/index.ts (2)

51-53: Consider extracting shared createId utility.

The createId helper is duplicated across multiple activity files (generateSpeech, generateTranscription, generateVideo). Consider extracting it to a shared utilities module.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/generateVideo/index.ts` around lines 51
- 53, The createId function is duplicated across activity files (createId in
generateVideo and same helper in generateSpeech and generateTranscription);
extract it into a single shared utility module (e.g., export function
createId(prefix: string): string from a new utils/id or shared/utils module),
remove the local createId implementations from each activity, import the shared
createId where previously defined, and update any references in generateVideo,
generateSpeech, and generateTranscription to use the imported utility; ensure
the exported name matches existing calls and run type checks to fix any import
paths.

296-340: Consider emitting progress in intermediate status events.

The video:status event (lines 301-311) includes progress from statusResult.progress, but if the provider doesn't return progress, the client's onProgress callback won't fire. This is fine as-is since progress is optional in VideoStatusInfo, but worth noting for documentation.

Additionally, the streaming path doesn't emit telemetry events via aiEventClient like the non-streaming getVideoJobStatus does. If observability is important for streaming operations, you may want to add telemetry emissions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/generateVideo/index.ts` around lines
296 - 340, The streaming loop currently yields intermediate video:status and
generation:result events but doesn't emit telemetry via aiEventClient nor
normalize progress when providers omit it; update the loop in the function that
calls adapter.getVideoStatus/getVideoUrl to (1) include a normalized progress
value (e.g., default to 0 or null explicitly) on the video:status payload so
clients always get a consistent field, and (2) emit the same telemetry events to
aiEventClient (use the existing aiEventClient.emit or aiEventClient.track call
pattern used in getVideoJobStatus) for both status updates and the final
generation:result so streaming operations are observable; reference
adapter.getVideoStatus, adapter.getVideoUrl, aiEventClient, and the emitted
event names video:status and generation:result when making changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/guides/image-generation.md`:
- Around line 274-285: The documented POST handler destructures request.json()
directly but the client sends payload under body.data; update the POST handler
(the POST: async ({ request }) => { ... } block and any usages of generateImage)
to first await request.json() into a body variable and then pull prompt, size,
model, numberOfImages from body.data (instead of directly from request.json()),
ensuring the call to generateImage uses those values unchanged.

In `@docs/guides/text-to-speech.md`:
- Around line 320-340: The onResult example in useGenerateSpeech creates an
Audio with URL.createObjectURL(blob) but never revokes that object URL, causing
a memory leak; update the example or add a note to ensure the consumer revokes
the URL (e.g., call URL.revokeObjectURL(url) when the Audio is no longer needed
such as in audio.onended or in a cleanup/unmount handler) and mention that
responsibility in the docs so consumers of onResult (and the returned Audio)
know to free the object URL.

In `@docs/guides/video-generation.md`:
- Around line 160-173: The POST handler is reading fields from the JSON root but
the client sends payloads wrapped as { messages, data }, so update the request
parsing in the POST async handler to extract body = await request.json() and
then read prompt, size, duration, model from body.data before calling
generateVideo (the generateVideo call, adapter openaiVideo(...), and the return
via toServerSentEventsResponse remain the same); ensure any downstream
references in this handler use those variables from body.data.

In `@packages/typescript/ai/src/activities/stream-generation-result.ts`:
- Around line 51-58: The catch block in the generator that yields the RUN_ERROR
result should normalize the thrown value before reading properties; in the catch
for the stream generation (the catch that currently does `catch (error: any) {
yield { type: 'RUN_ERROR', ... } }`) replace direct access to
error.message/error.code with a normalized error object (e.g., coerce
null/undefined/primitive throws into an object with a safe message and optional
code) and use that safe object when building the yielded error payload so
accessing message/code cannot throw a secondary TypeError.
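The normalization this prompt asks for can be sketched like this; the helper name is hypothetical, and the `{ message, code }` payload shape mirrors the RUN_ERROR snippet under review:

```typescript
// Coerce any thrown value into a safe { message, code } object so the
// catch block can never throw a secondary TypeError when building the
// RUN_ERROR payload.
function normalizeError(error: unknown): { message: string; code?: string } {
  if (error instanceof Error) {
    const code = (error as Error & { code?: unknown }).code
    return {
      message: error.message || 'Generation failed',
      code: typeof code === 'string' ? code : undefined,
    }
  }
  if (typeof error === 'string' && error.length > 0) {
    return { message: error }
  }
  return { message: 'Generation failed' }
}
```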

In `@packages/typescript/ai/tests/stream-generation.test.ts`:
- Around line 383-408: The tests call chunks.find(...) to locate the RUN_ERROR
chunk but then gate assertions behind an if, which can hide missing error
chunks; update both tests (the one checking 'Content policy violation' and the
'should use default message when failed status has no error') to explicitly
assert the error chunk exists (e.g., expect(error).toBeDefined() or
expect(error).not.toBeNull()) immediately after const error = chunks.find(...),
then proceed to assert error.error.message; reference the RUN_ERROR lookup and
the generateVideo/collectChunks flow when making the change.
- Around line 310-314: The test is missing a presence assertion for the found
error before accessing its properties; update the test that finds `error` via
`const error = chunks.find((c) => c.type === 'RUN_ERROR')` to assert
`expect(error).toBeDefined()` immediately after the find, then assert
`expect(error!.error.message).toBe('Video generation timed out')`; apply the
same fix to the other two occurrences that perform the same find/assert pattern
so each checks the `error` is defined before reading `error.error.message`.

---

Duplicate comments:
In `@docs/guides/transcription.md`:
- Around line 306-316: The POST handler is destructuring the JSON root but the
client sends payload under body.data; update the POST: async ({ request }) =>
handler to first await request.json() into a variable (e.g., body), verify body
&& body.data exists, then read audio, language, model from body.data before
calling generateTranscription (which uses openaiTranscription(...)) and
returning toServerSentEventsResponse(stream); add a defensive fallback (e.g.,
return 400 or default values) when body.data is missing to avoid runtime errors.

---

Nitpick comments:
In `@packages/typescript/ai/src/activities/generateSpeech/index.ts`:
- Around line 154-172: Refactor runGenerateSpeech to use async/await instead of
.then(): make the function async, await the call to adapter.generateSpeech(...)
(the call that currently uses .then()), assign the result to a variable, compute
duration using startTime, then call
aiEventClient.emit('speech:request:completed', {...}) with the same fields
(requestId, provider via adapter.name, model, result.audio, result.format,
result.duration, result.contentType, duration, modelOptions from
rest.modelOptions, timestamp) and finally return the result; keep the same
variable names (adapter.generateSpeech, result, startTime, aiEventClient.emit)
to minimize changes.

In `@packages/typescript/ai/src/activities/generateVideo/index.ts`:
- Around line 51-53: The createId function is duplicated across activity files
(createId in generateVideo and same helper in generateSpeech and
generateTranscription); extract it into a single shared utility module (e.g.,
export function createId(prefix: string): string from a new utils/id or
shared/utils module), remove the local createId implementations from each
activity, import the shared createId where previously defined, and update any
references in generateVideo, generateSpeech, and generateTranscription to use
the imported utility; ensure the exported name matches existing calls and run
type checks to fix any import paths.
- Around line 296-340: The streaming loop currently yields intermediate
video:status and generation:result events but doesn't emit telemetry via
aiEventClient nor normalize progress when providers omit it; update the loop in
the function that calls adapter.getVideoStatus/getVideoUrl to (1) include a
normalized progress value (e.g., default to 0 or null explicitly) on the
video:status payload so clients always get a consistent field, and (2) emit the
same telemetry events to aiEventClient (use the existing aiEventClient.emit or
aiEventClient.track call pattern used in getVideoJobStatus) for both status
updates and the final generation:result so streaming operations are observable;
reference adapter.getVideoStatus, adapter.getVideoUrl, aiEventClient, and the
emitted event names video:status and generation:result when making changes.

In `@packages/typescript/ai/src/activities/summarize/index.ts`:
- Around line 243-244: The fallback currently calls
adapter.summarize(summarizeOptions) directly, which bypasses the normal event
flow; change the fallback to call runSummarize(...) so the non-stream path
reuses the same helper and emits the summarize:request:* events. Specifically,
replace the adapter.summarize(summarizeOptions) invocation inside the
streamGenerationResult fallback with a call to runSummarize(summarizeOptions)
(using the same summarizeOptions and context used elsewhere) so
streamGenerationResult(() => runSummarize(summarizeOptions)) is used instead of
calling adapter.summarize directly.

In `@packages/typescript/ai/tests/stream-generation.test.ts`:
- Around line 1-8: Sort imported identifiers alphabetically to satisfy ESLint:
in the first import reorder members to "describe, expect, it, vi" and in the
second import reorder to "generateImage, generateSpeech, generateTranscription,
generateVideo"; similarly ensure the type import lists "StreamChunk,
VideoStatusResult" are alphabetized if required. Update the import lines that
reference these symbols (describe, it, expect, vi; generateImage,
generateSpeech, generateTranscription, generateVideo; StreamChunk,
VideoStatusResult) so the named imports are in alphabetical order.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e500efe1-0757-4c84-ac14-4da8077e3c5b

📥 Commits

Reviewing files that changed from the base of the PR and between 1fc91d7 and ea42f6e.

📒 Files selected for processing (17)
  • docs/guides/generations.md
  • docs/guides/image-generation.md
  • docs/guides/text-to-speech.md
  • docs/guides/transcription.md
  • docs/guides/video-generation.md
  • examples/ts-react-chat/src/routes/api.generate.image.ts
  • examples/ts-react-chat/src/routes/api.generate.speech.ts
  • examples/ts-react-chat/src/routes/api.generate.video.ts
  • examples/ts-react-chat/src/routes/api.summarize.ts
  • examples/ts-react-chat/src/routes/api.transcribe.ts
  • packages/typescript/ai/src/activities/generateImage/index.ts
  • packages/typescript/ai/src/activities/generateSpeech/index.ts
  • packages/typescript/ai/src/activities/generateTranscription/index.ts
  • packages/typescript/ai/src/activities/generateVideo/index.ts
  • packages/typescript/ai/src/activities/stream-generation-result.ts
  • packages/typescript/ai/src/activities/summarize/index.ts
  • packages/typescript/ai/tests/stream-generation.test.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • examples/ts-react-chat/src/routes/api.generate.video.ts
  • examples/ts-react-chat/src/routes/api.generate.speech.ts
  • examples/ts-react-chat/src/routes/api.summarize.ts

Comment on lines +320 to +340
**Example: Convert base64 audio to a playable Audio element**

```tsx
import { useGenerateSpeech, fetchServerSentEvents } from '@tanstack/ai-react'

function SpeechPlayer() {
  const { generate, result, isLoading } = useGenerateSpeech({
    connection: fetchServerSentEvents('/api/generate/speech'),
    onResult: (raw) => {
      const audioData = atob(raw.audio)
      const bytes = new Uint8Array(audioData.length)
      for (let i = 0; i < audioData.length; i++) {
        bytes[i] = audioData.charCodeAt(i)
      }
      const blob = new Blob([bytes], { type: raw.contentType })
      const url = URL.createObjectURL(blob)
      return {
        audio: new Audio(url),
        duration: raw.duration,
      }
    },
```

⚠️ Potential issue | 🟡 Minor

Memory leak in Result Transform example.

The onResult callback creates an object URL via URL.createObjectURL(blob) but never revokes it. Unlike the earlier playAudio example (lines 225-237) which properly cleans up with audio.onended, this transform stores the Audio element indefinitely without cleanup.

Consider documenting the cleanup responsibility:

📝 Suggested documentation improvement
       const url = URL.createObjectURL(blob)
       return {
         audio: new Audio(url),
         duration: raw.duration,
+        // Remember to call URL.revokeObjectURL(url) when done
+        cleanup: () => URL.revokeObjectURL(url),
       }
     },
   })

Or add a note after the example explaining that consumers should revoke the URL when the audio is no longer needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/text-to-speech.md` around lines 320 - 340, The onResult example
in useGenerateSpeech creates an Audio with URL.createObjectURL(blob) but never
revokes that object URL, causing a memory leak; update the example or add a note
to ensure the consumer revokes the URL (e.g., call URL.revokeObjectURL(url) when
the Audio is no longer needed such as in audio.onended or in a cleanup/unmount
handler) and mention that responsibility in the docs so consumers of onResult
(and the returned Audio) know to free the object URL.
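The decode half of that transform (base64 string → bytes) can be factored into a small helper and checked without a browser; a minimal sketch — the `base64ToBytes` name is illustrative, not part of the library:

```typescript
// Decode a base64 string into a Uint8Array: the step the onResult
// transform performs before wrapping the bytes in a Blob.
function base64ToBytes(base64: string): Uint8Array {
  const binary = atob(base64)
  const bytes = new Uint8Array(binary.length)
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i)
  }
  return bytes
}

// In the browser, the bytes would then become a Blob + object URL:
//   const url = URL.createObjectURL(new Blob([bytes], { type: contentType }))
// ...and URL.revokeObjectURL(url) must be called once playback is done.
```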

Comment on lines +51 to +58
  } catch (error: any) {
    yield {
      type: 'RUN_ERROR',
      runId,
      error: {
        message: error.message || 'Generation failed',
        code: error.code,
      },

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd /repo && find . -type f -name "stream-generation-result.ts" | head -20

Repository: TanStack/ai

Length of output: 110


🏁 Script executed:

cat -n packages/typescript/ai/src/activities/stream-generation-result.ts

Repository: TanStack/ai

Length of output: 2145


Normalize the caught value before reading message/code.

throw null or throw undefined is legal in JavaScript, and the direct error.message access will throw a secondary TypeError instead of emitting RUN_ERROR.

💡 Proposed fix
-  } catch (error: any) {
+  } catch (error: unknown) {
+    const message =
+      error instanceof Error
+        ? error.message
+        : typeof error === 'string'
+          ? error
+          : 'Generation failed'
+
     yield {
       type: 'RUN_ERROR',
       runId,
       error: {
-        message: error.message || 'Generation failed',
-        code: error.code,
+        message,
+        code:
+          typeof error === 'object' && error != null && 'code' in error
+            ? (error as { code?: string }).code
+            : undefined,
       },
       timestamp: Date.now(),
     }
   }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  } catch (error: any) {
    yield {
      type: 'RUN_ERROR',
      runId,
      error: {
        message: error.message || 'Generation failed',
        code: error.code,
      },
  } catch (error: unknown) {
    const message =
      error instanceof Error
        ? error.message
        : typeof error === 'string'
          ? error
          : 'Generation failed'
    yield {
      type: 'RUN_ERROR',
      runId,
      error: {
        message,
        code:
          typeof error === 'object' && error != null && 'code' in error
            ? (error as { code?: string }).code
            : undefined,
      },
      timestamp: Date.now(),
    }
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/src/activities/stream-generation-result.ts` around
lines 51 - 58, The catch block in the generator that yields the RUN_ERROR result
should normalize the thrown value before reading properties; in the catch for
the stream generation (the catch that currently does `catch (error: any) { yield
{ type: 'RUN_ERROR', ... } }`) replace direct access to error.message/error.code
with a normalized error object (e.g., coerce null/undefined/primitive throws
into an object with a safe message and optional code) and use that safe object
when building the yielded error payload so accessing message/code cannot throw a
secondary TypeError.
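The normalization the fix describes can also be factored into a standalone helper so the rules are testable outside the generator; a sketch under the same assumptions (Error → its message, string → itself, anything else → a fallback; `code` only read from objects — names are illustrative):

```typescript
interface NormalizedError {
  message: string
  code?: string
}

// Coerce any thrown value (Error, string, null, undefined, plain
// object) into a shape that is always safe to read message/code from.
function normalizeError(error: unknown): NormalizedError {
  const message =
    error instanceof Error
      ? error.message
      : typeof error === 'string'
        ? error
        : 'Generation failed'
  const code =
    typeof error === 'object' &&
    error !== null &&
    'code' in error &&
    typeof (error as { code?: unknown }).code === 'string'
      ? (error as { code: string }).code
      : undefined
  return { message, code }
}
```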

AlemTuzlak and others added 5 commits March 9, 2026 13:50
Add a tab UI to each generation showcase page (image, speech, summarize,
transcription, video) allowing users to switch between streaming (SSE
connection) and direct (server function fetcher) transport modes.

- Create server-fns.ts with createServerFn + zod validators for each
  generation type
- Use library-exported hook return types (UseGenerateImageReturn, etc.)
  for component props
- React key-based remounting ensures clean hook state on tab switch

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (4)
examples/ts-react-chat/src/routes/generations.video.tsx (1)

61-70: ⚠️ Potential issue | 🟡 Minor

Bind the prompt label to the textarea.

The standalone <label> still doesn't point at the prompt field, so assistive tech won't announce it when the textarea receives focus.

♿ Suggested change
-        <label className="text-sm text-gray-400">Prompt</label>
+        <label htmlFor="video-prompt" className="text-sm text-gray-400">
+          Prompt
+        </label>
         <textarea
+          id="video-prompt"
           value={prompt}
           onChange={(e) => setPrompt(e.target.value)}
           placeholder="Describe the video you want to generate..."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.video.tsx` around lines 61 -
70, The <label> for the prompt isn't associated with the <textarea>, breaking
accessibility; add a unique id (e.g., "prompt") to the textarea and set the
label's htmlFor to that id so assistive tech announces the textarea when
focused—update the label element and the textarea element (which uses
value={prompt}, onChange={setPrompt}, rows and disabled={isLoading}) to include
the matching htmlFor/id attributes.
examples/ts-react-chat/src/routes/generations.speech.tsx (2)

12-31: ⚠️ Potential issue | 🟠 Major

Revoke old audio object URLs.

Every result allocates a fresh blob URL, but neither reset() nor unmount revokes the previous one. Re-running the demo leaks one blob per generation.

♻️ Suggested change
+// Also import `useEffect` and `useRef` from React.
+
 function SpeechGenerationUI({
   text,
   setText,
   voice,
@@
   setVoice: (v: string) => void
 }) {
+  const audioUrlRef = useRef<string | null>(null)
+
+  useEffect(() => {
+    if (result?.audioUrl && result.audioUrl !== audioUrlRef.current) {
+      if (audioUrlRef.current) {
+        URL.revokeObjectURL(audioUrlRef.current)
+      }
+      audioUrlRef.current = result.audioUrl
+    }
+  }, [result?.audioUrl])
+
+  useEffect(() => {
+    return () => {
+      if (audioUrlRef.current) {
+        URL.revokeObjectURL(audioUrlRef.current)
+      }
+    }
+  }, [])
+
   const handleGenerate = () => {
     if (!text.trim()) return
     generate({ text: text.trim(), voice })
   }
+
+  const handleClear = () => {
+    if (audioUrlRef.current) {
+      URL.revokeObjectURL(audioUrlRef.current)
+      audioUrlRef.current = null
+    }
+    reset()
+  }
@@
         {result && (
           <button
-            onClick={reset}
+            onClick={handleClear}
             className="px-6 py-2 bg-gray-700 hover:bg-gray-600 text-white rounded-lg text-sm font-medium transition-colors"
           >
             Clear

Also applies to: 73-83, 135-142

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.speech.tsx` around lines 12 -
31, The code creates new blob URLs via URL.createObjectURL(blob) in
toSpeechOutput and never revokes them, leaking blobs; modify the logic so any
existing object URL is revoked before creating a new one and also revoke on
reset() and on component unmount. Concretely: track the current audio URL (e.g.,
a module/private variable or component state like currentAudioUrl), call
URL.revokeObjectURL(currentAudioUrl) before assigning a new URL in
toSpeechOutput (where URL.createObjectURL is called), clear that tracker in
reset(), and add a cleanup in the componentWillUnmount/useEffect cleanup to
revoke any remaining URL. Ensure you reference the toSpeechOutput function, the
reset() routine, and the component unmount cleanup when making the changes.

96-123: ⚠️ Potential issue | 🟡 Minor

Give the speech controls proper accessibility semantics.

The text label is not bound to the textarea, and the voice picker is just a visual button group with no exposed selected state. Screen readers currently can't announce the field label or which voice is active.

♿ Suggested change
-        <label className="text-sm text-gray-400">Text</label>
+        <label htmlFor="speech-text" className="text-sm text-gray-400">
+          Text
+        </label>
         <textarea
+          id="speech-text"
           value={text}
           onChange={(e) => setText(e.target.value)}
           placeholder="Enter text to convert to speech..."
@@
-        <label className="text-sm text-gray-400">Voice</label>
-        <div className="flex flex-wrap gap-2">
+        <span id="speech-voice-label" className="text-sm text-gray-400">
+          Voice
+        </span>
+        <div
+          role="group"
+          aria-labelledby="speech-voice-label"
+          className="flex flex-wrap gap-2"
+        >
           {VOICES.map((v) => (
             <button
               key={v}
               onClick={() => setVoice(v)}
+              aria-pressed={voice === v}
               className={`px-4 py-2 rounded-lg text-sm font-medium transition-colors ${
                 voice === v
                   ? 'bg-orange-600 text-white'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.speech.tsx` around lines 96 -
123, Add proper accessibility semantics: give the textarea an id and attach the
label via htmlFor (use the existing state setter setText and
disabled={isLoading} as-is), and convert the voice picker into an accessible
radio group by adding role="radiogroup" to the wrapper and making each voice
button a role="radio" with aria-checked={voice === v}, an accessible name
(aria-label or by including visible text), and keyboard support (tabIndex=0 and
handling Enter/Space to call setVoice(v)). Ensure focus and selected styling
still reflect the voice state so screen readers and keyboard users can perceive
and change the selection.
examples/ts-react-chat/src/routes/generations.transcription.tsx (1)

40-44: ⚠️ Potential issue | 🟠 Major

Replace the reduce-based base64 conversion.

new Uint8Array(buffer).reduce(...) builds the binary string quadratically, so large audio uploads get disproportionately expensive in the browser. FileReader.readAsDataURL() gives you the same payload without the concatenation churn.

♻️ Suggested change
-    const buffer = await file.arrayBuffer()
-    const base64 = btoa(
-      new Uint8Array(buffer).reduce((s, b) => s + String.fromCharCode(b), ''),
-    )
-    const dataUrl = `data:${file.type};base64,${base64}`
+    const dataUrl = await new Promise<string>((resolve, reject) => {
+      const reader = new FileReader()
+      reader.onload = () => resolve(String(reader.result))
+      reader.onerror = () =>
+        reject(reader.error ?? new Error('Failed to read audio file'))
+      reader.readAsDataURL(file)
+    })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.transcription.tsx` around lines
40 - 44, The current conversion builds a binary string with new
Uint8Array(buffer).reduce(...) which is O(n^2) and will be slow for large files;
replace this with FileReader.readAsDataURL to produce dataUrl directly from the
File object. Change the logic that computes buffer/base64/dataUrl to use a
FileReader (or use file.stream() with a blob -> base64 helper) and await the
reader.onload result to set dataUrl (referencing the existing variables file and
dataUrl) instead of creating base64 via reduce.
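When FileReader is unavailable (e.g., outside the DOM), the quadratic concatenation can also be avoided by encoding in bounded chunks; a minimal sketch assuming the `btoa` global is present (the `bytesToBase64` name is illustrative):

```typescript
// Encode a Uint8Array as base64 without building the intermediate
// binary string one character at a time. Fixed-size chunks keep
// String.fromCharCode's argument count bounded and the total work
// linear in the input size.
function bytesToBase64(bytes: Uint8Array, chunkSize = 0x8000): string {
  const parts: Array<string> = []
  for (let i = 0; i < bytes.length; i += chunkSize) {
    const chunk = bytes.subarray(i, i + chunkSize)
    parts.push(String.fromCharCode(...chunk))
  }
  return btoa(parts.join(''))
}
```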
🧹 Nitpick comments (2)
examples/ts-react-chat/src/lib/server-fns.ts (2)

93-102: Same as any pattern for video size parameter.

Similar to the image function, line 100 uses as any for the size parameter. Apply the same Zod enum approach if the adapter has known valid values.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/lib/server-fns.ts` around lines 93 - 102, The
video generation call is using a loose cast for the size parameter; replace the
`as any` on the `size` passed to generateVideo by validating/typing it with a
Zod enum (same pattern used for images). Define a Zod.enum (e.g., VideoSizeEnum)
listing the adapter's allowed sizes, use VideoSizeEnum.parse or z.infer for the
input type, and pass the parsed/typed value into generateVideo (update any
related typedefs for openaiVideo/generateVideo to accept that enum type). Ensure
the validation occurs before calling generateVideo so only known enum values are
forwarded.

11-17: Fix import sorting order.

The ESLint rule flags that openaiSummarize should be sorted alphabetically within the import declaration.

🔧 Proposed fix
 import {
   openaiImage,
+  openaiSummarize,
   openaiSpeech,
   openaiTranscription,
-  openaiSummarize,
   openaiVideo,
 } from '@tanstack/ai-openai'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/lib/server-fns.ts` around lines 11 - 17, Import
specifiers are not alphabetically ordered; reorder the imported symbols so
ESLint's sort rule is satisfied. Update the import from '@tanstack/ai-openai' to
alphabetically sort the names: openaiImage, openaiSpeech, openaiSummarize,
openaiTranscription, openaiVideo (referencing the import declaration that
currently lists openaiImage, openaiSpeech, openaiTranscription, openaiSummarize,
openaiVideo) so the list is in correct alphabetical order.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/ts-react-chat/src/lib/server-fns.ts`:
- Around line 104-114: The current synchronous polling loop in the server
function (using MAX_POLLS, polls, and repeated calls to getVideoJobStatus with
jobId) can block the HTTP request for up to 10 minutes and hit upstream
timeouts; change the function to return immediately with the generated jobId and
status “pending” (or the last known status) so the client can poll or subscribe
separately, or switch to the existing SSE/streaming implementation for
long-running jobs; specifically, remove or guard the blocking while loop that
polls getVideoJobStatus and instead persist/return jobId and initial status,
then implement client-side polling or use the server-side stream endpoint to
track completion.
- Around line 27-34: The handler is using an unsafe cast for image size; replace
the generic string validator for data.size with a Zod enum of the allowed values
('1024x1024','1536x1024','1024x1536','auto') in the input schema that feeds
.handler, remove the "as any" cast in the call to
generateImage(openaiImage('gpt-image-1')), and pass data.size directly so types
line up with generateImage/openaiImage; update any associated types/imports to
use z.enum and ensure the handler signature (data.prompt, data.numberOfImages,
data.size) uses the new enum type.

In `@examples/ts-react-chat/src/routes/generations.image.tsx`:
- Around line 78-83: The onKeyDown handler currently calls handleGenerate() when
Enter is pressed, but doesn't guard against IME composition; update the handler
(the inline onKeyDown function in the input element) to also check
e.nativeEvent?.isComposing and only submit when false (i.e., require e.key ===
'Enter' && !e.shiftKey && !e.nativeEvent?.isComposing && prompt.trim()), keeping
the existing e.preventDefault() and handleGenerate() behavior otherwise.

In `@examples/ts-react-chat/src/routes/generations.summarize.tsx`:
- Around line 79-115: The textarea and style buttons are not accessible: bind
the label to the textarea and expose the selected style to assistive tech. Add a
htmlFor/id pair for the textarea (referencing the textarea that uses
value={text} and onChange={(e) => setText(e.target.value)} and the "Use sample
text" button that calls setText(SAMPLE_TEXT)), and make each style button (the
map over ['concise','bullet-points','paragraph'] that calls setStyle(s) and
checks style === s) act like a real radio group by adding appropriate ARIA/role
attributes (role="radiogroup" on the container, role="radio" and aria-checked on
each button), unique ids, and visually-hidden labels so screen readers can
announce the active style; ensure keyboard focus/activation still calls
setStyle(s).

In `@examples/ts-react-chat/src/routes/generations.transcription.tsx`:
- Around line 36-46: The handleFileUpload function currently reads and encodes
any selected file; reject files larger than the advertised 25 MB before
allocating or calling file.arrayBuffer(). Add a size check in handleFileUpload
using a MAX_BYTES constant (e.g., MAX_BYTES = 25 * 1024 * 1024) that inspects
file.size, and if file.size > MAX_BYTES immediately return (or show an error to
the user) instead of calling file.arrayBuffer(), then only proceed to create the
base64 dataUrl and call generate when the file is within the allowed size.

---

Duplicate comments:
In `@examples/ts-react-chat/src/routes/generations.speech.tsx`:
- Around line 12-31: The code creates new blob URLs via
URL.createObjectURL(blob) in toSpeechOutput and never revokes them, leaking
blobs; modify the logic so any existing object URL is revoked before creating a
new one and also revoke on reset() and on component unmount. Concretely: track
the current audio URL (e.g., a module/private variable or component state like
currentAudioUrl), call URL.revokeObjectURL(currentAudioUrl) before assigning a
new URL in toSpeechOutput (where URL.createObjectURL is called), clear that
tracker in reset(), and add a cleanup in the componentWillUnmount/useEffect
cleanup to revoke any remaining URL. Ensure you reference the toSpeechOutput
function, the reset() routine, and the component unmount cleanup when making the
changes.
- Around line 96-123: Add proper accessibility semantics: give the textarea an
id and attach the label via htmlFor (use the existing state setter setText and
disabled={isLoading} as-is), and convert the voice picker into an accessible
radio group by adding role="radiogroup" to the wrapper and making each voice
button a role="radio" with aria-checked={voice === v}, an accessible name
(aria-label or by including visible text), and keyboard support (tabIndex=0 and
handling Enter/Space to call setVoice(v)). Ensure focus and selected styling
still reflect the voice state so screen readers and keyboard users can perceive
and change the selection.

In `@examples/ts-react-chat/src/routes/generations.transcription.tsx`:
- Around line 40-44: The current conversion builds a binary string with new
Uint8Array(buffer).reduce(...) which is O(n^2) and will be slow for large files;
replace this with FileReader.readAsDataURL to produce dataUrl directly from the
File object. Change the logic that computes buffer/base64/dataUrl to use a
FileReader (or use file.stream() with a blob -> base64 helper) and await the
reader.onload result to set dataUrl (referencing the existing variables file and
dataUrl) instead of creating base64 via reduce.

In `@examples/ts-react-chat/src/routes/generations.video.tsx`:
- Around line 61-70: The <label> for the prompt isn't associated with the
<textarea>, breaking accessibility; add a unique id (e.g., "prompt") to the
textarea and set the label's htmlFor to that id so assistive tech announces the
textarea when focused—update the label element and the textarea element (which
uses value={prompt}, onChange={setPrompt}, rows and disabled={isLoading}) to
include the matching htmlFor/id attributes.

---

Nitpick comments:
In `@examples/ts-react-chat/src/lib/server-fns.ts`:
- Around line 93-102: The video generation call is using a loose cast for the
size parameter; replace the `as any` on the `size` passed to generateVideo by
validating/typing it with a Zod enum (same pattern used for images). Define a
Zod.enum (e.g., VideoSizeEnum) listing the adapter's allowed sizes, use
VideoSizeEnum.parse or z.infer for the input type, and pass the parsed/typed
value into generateVideo (update any related typedefs for
openaiVideo/generateVideo to accept that enum type). Ensure the validation
occurs before calling generateVideo so only known enum values are forwarded.
- Around line 11-17: Import specifiers are not alphabetically ordered; reorder
the imported symbols so ESLint's sort rule is satisfied. Update the import from
'@tanstack/ai-openai' to alphabetically sort the names: openaiImage,
openaiSpeech, openaiSummarize, openaiTranscription, openaiVideo (referencing the
import declaration that currently lists openaiImage, openaiSpeech,
openaiTranscription, openaiSummarize, openaiVideo) so the list is in correct
alphabetical order.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a8511bae-57b7-4744-87bd-56eea9c39ded

📥 Commits

Reviewing files that changed from the base of the PR and between ea42f6e and 8288bf0.

📒 Files selected for processing (7)
  • examples/ts-react-chat/src/lib/server-fns.ts
  • examples/ts-react-chat/src/routes/generations.image.tsx
  • examples/ts-react-chat/src/routes/generations.speech.tsx
  • examples/ts-react-chat/src/routes/generations.summarize.tsx
  • examples/ts-react-chat/src/routes/generations.transcription.tsx
  • examples/ts-react-chat/src/routes/generations.video.tsx
  • examples/ts-react-chat/src/routes/index.tsx

Comment on lines +27 to +34
  .handler(async ({ data }) => {
    return generateImage({
      adapter: openaiImage('gpt-image-1'),
      prompt: data.prompt,
      numberOfImages: data.numberOfImages,
      size: data.size as any,
    })
  })

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

What are the valid size values for OpenAI image generation API gpt-image-1 model?

💡 Result:

For the gpt-image-1 model, the size parameter accepts these values:

  • 1024x1024
  • 1536x1024
  • 1024x1536
  • auto (default) [1]

Source: OpenAI “Image generation” guide → “Size and quality options”. [1]


🏁 Script executed:

cat -n examples/ts-react-chat/src/lib/server-fns.ts

Repository: TanStack/ai

Length of output: 4194


🏁 Script executed:

find . -type f -name "*.ts" -o -name "*.tsx" | xargs grep -l "openaiImage" | head -5

Repository: TanStack/ai

Length of output: 321


🏁 Script executed:

cat -n packages/typescript/ai-openai/src/adapters/image.ts

Repository: TanStack/ai

Length of output: 6299


🏁 Script executed:

cat -n packages/typescript/ai-openai/src/image/image-provider-options.ts

Repository: TanStack/ai

Length of output: 9789


Use Zod enum for size parameter to ensure type safety.

The as any cast on line 32 bypasses type safety. The adapter expects specific size values for gpt-image-1: 1024x1024, 1536x1024, 1024x1536, or auto. Replace the generic string validator with a Zod enum to enforce this at runtime.

♻️ Example improvement
   .inputValidator(
     z.object({
       prompt: z.string(),
       numberOfImages: z.number().optional(),
-      size: z.string().optional(),
+      size: z.enum(['1024x1024', '1536x1024', '1024x1536', 'auto']).optional(),
     }),
   )
   .handler(async ({ data }) => {
     return generateImage({
       adapter: openaiImage('gpt-image-1'),
       prompt: data.prompt,
       numberOfImages: data.numberOfImages,
-      size: data.size as any,
+      size: data.size,
     })
   })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/lib/server-fns.ts` around lines 27 - 34, The
handler is using an unsafe cast for image size; replace the generic string
validator for data.size with a Zod enum of the allowed values
('1024x1024','1536x1024','1024x1536','auto') in the input schema that feeds
.handler, remove the "as any" cast in the call to
generateImage(openaiImage('gpt-image-1')), and pass data.size directly so types
line up with generateImage/openaiImage; update any associated types/imports to
use z.enum and ensure the handler signature (data.prompt, data.numberOfImages,
data.size) uses the new enum type.
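Where pulling in a schema library is not an option, the same constraint can be enforced with a plain literal-union guard over the gpt-image-1 sizes quoted above; a sketch (`parseImageSize` is an illustrative name, not part of the library):

```typescript
const IMAGE_SIZES = ['1024x1024', '1536x1024', '1024x1536', 'auto'] as const
type ImageSize = (typeof IMAGE_SIZES)[number]

// Narrow an arbitrary string to the adapter's accepted sizes,
// replacing the `as any` cast with a runtime check.
function parseImageSize(value: string | undefined): ImageSize | undefined {
  if (value === undefined) return undefined
  if ((IMAGE_SIZES as readonly string[]).includes(value)) {
    return value as ImageSize
  }
  throw new Error(`Invalid image size: ${value}`)
}
```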

Comment on lines +104 to +114
// Poll until complete (max 10 minutes)
const MAX_POLLS = 120
let polls = 0
let status = await getVideoJobStatus({ adapter, jobId })
while (status.status !== 'completed' && status.status !== 'failed') {
  if (++polls > MAX_POLLS) {
    throw new Error('Video generation timed out')
  }
  await new Promise((r) => setTimeout(r, 5000))
  status = await getVideoJobStatus({ adapter, jobId })
}

⚠️ Potential issue | 🟡 Minor

Long-blocking polling may cause HTTP timeouts.

The polling loop can block the request for up to 10 minutes (120 polls × 5 seconds). Many HTTP servers, proxies, and load balancers have default timeouts of 30–60 seconds, which would terminate this request before completion.

For a demo showing "direct" mode this may be acceptable, but consider:

  1. Documenting the timeout requirements for deployments using this function.
  2. Returning the jobId immediately and letting the client poll separately.
  3. Using the streaming SSE approach (already available in the PR) for production use cases.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/lib/server-fns.ts` around lines 104 - 114, The
current synchronous polling loop in the server function (using MAX_POLLS, polls,
and repeated calls to getVideoJobStatus with jobId) can block the HTTP request
for up to 10 minutes and hit upstream timeouts; change the function to return
immediately with the generated jobId and status “pending” (or the last known
status) so the client can poll or subscribe separately, or switch to the
existing SSE/streaming implementation for long-running jobs; specifically,
remove or guard the blocking while loop that polls getVideoJobStatus and instead
persist/return jobId and initial status, then implement client-side polling or
use the server-side stream endpoint to track completion.
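Option 2 (return the jobId immediately and poll from the client) can be sketched as a transport-agnostic loop; `JobStatus` and `fetchStatus` are illustrative names, not part of the library API:

```typescript
interface JobStatus {
  status: 'pending' | 'in_progress' | 'completed' | 'failed'
}

// Client-side polling: the server returns a jobId right away and the
// browser checks status on an interval, instead of one long-lived HTTP
// request staying open for the whole generation.
async function pollUntilDone(
  fetchStatus: () => Promise<JobStatus>,
  { intervalMs = 5000, maxPolls = 120 } = {},
): Promise<JobStatus> {
  for (let i = 0; i < maxPolls; i++) {
    const status = await fetchStatus()
    if (status.status === 'completed' || status.status === 'failed') {
      return status
    }
    await new Promise((r) => setTimeout(r, intervalMs))
  }
  throw new Error('Video generation timed out')
}
```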

Comment on lines +78 to +83
onKeyDown={(e) => {
  if (e.key === 'Enter' && !e.shiftKey && prompt.trim()) {
    e.preventDefault()
    handleGenerate()
  }
}}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n examples/ts-react-chat/src/routes/generations.image.tsx | sed -n '70,90p'

Repository: TanStack/ai

Length of output: 1078


🏁 Script executed:

# Search for any existing IME/composition handling in the codebase
rg -i "iscomposing|composition" --type ts --type tsx

Repository: TanStack/ai

Length of output: 82


🌐 Web query:

React event handler isComposing nativeEvent IME composition keyboard

💡 Result:

In React, your onKeyDown/onKeyUp handlers receive a SyntheticEvent that wraps the browser’s native KeyboardEvent as e.nativeEvent. React doesn’t “invent” IME state—IME composition is exposed by the platform via KeyboardEvent.isComposing, which is true between compositionstart and compositionend. [1]

Typical fix: ignore Enter (or other shortcuts) while composing

function onKeyDown(e: React.KeyboardEvent<HTMLInputElement>) {
  // IME in-progress: don't treat keys (esp. Enter) as "submit"/"send"
  if (e.nativeEvent.isComposing) return;

  if (e.key === "Enter") {
    // submit / send
  }
}

isComposing is a native KeyboardEvent property, so e.nativeEvent.isComposing is the right place to read it in React. [1]

Safari caveat (common IME bug)

In Safari, when the user presses Enter to commit an IME composition, keydown for Enter can see isComposing === false (even though the user is still effectively “in IME commit flow”). A common workaround is also guarding on keyCode 229 during IME-related key events. [2] This behavior/order issue is widely noted in IME handling discussions (composition vs key/input event ordering). [3]

Practical guard (more robust cross-browser):

function onKeyDown(e: React.KeyboardEvent<HTMLInputElement>) {
  const ne = e.nativeEvent;

  // Guard IME: standard flag + common IME keyCode fallback
  // (keyCode is deprecated but still used for this IME workaround)
  if (ne.isComposing || (ne as any).keyCode === 229) return;

  if (e.key === "Enter") { /* submit */ }
}

Sources:
[1] MDN: KeyboardEvent.isComposing definition and meaning (composition session flag).
[2] DEV Community: Safari IME caveat + keyCode 229 workaround discussion.
[3] Mozilla dev-platform thread on composition/input event ordering and isComposing.


Guard against submitting while IME composition is active.

Pressing Enter to confirm Japanese/Chinese/Korean input composition will currently trigger handleGenerate() and submit a half-finished prompt. Check e.nativeEvent.isComposing to prevent submission during active IME composition.

Suggested change
           onKeyDown={(e) => {
+            if (e.nativeEvent.isComposing) return
             if (e.key === 'Enter' && !e.shiftKey && prompt.trim()) {
               e.preventDefault()
               handleGenerate()
             }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
onKeyDown={(e) => {
  if (e.key === 'Enter' && !e.shiftKey && prompt.trim()) {
    e.preventDefault()
    handleGenerate()
  }
}}
onKeyDown={(e) => {
  if (e.nativeEvent.isComposing) return
  if (e.key === 'Enter' && !e.shiftKey && prompt.trim()) {
    e.preventDefault()
    handleGenerate()
  }
}}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.image.tsx` around lines 78 -
83, The onKeyDown handler currently calls handleGenerate() when Enter is
pressed, but doesn't guard against IME composition; update the handler (the
inline onKeyDown function in the input element) to also check
e.nativeEvent?.isComposing and only submit when false (i.e., require e.key ===
'Enter' && !e.shiftKey && !e.nativeEvent?.isComposing && prompt.trim()), keeping
the existing e.preventDefault() and handleGenerate() behavior otherwise.
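The suggested guard can be pulled into a pure predicate so the submit condition is testable without a DOM event; a sketch (names are illustrative):

```typescript
// Decide whether an Enter keypress should submit the prompt.
// Enter during IME composition (or Shift+Enter, or an empty/whitespace
// prompt) must not trigger generation.
function shouldSubmitPrompt(
  e: { key: string; shiftKey: boolean; isComposing: boolean },
  prompt: string,
): boolean {
  if (e.isComposing) return false
  return e.key === 'Enter' && !e.shiftKey && prompt.trim().length > 0
}
```

In the component, this would be called with `{ key: e.key, shiftKey: e.shiftKey, isComposing: e.nativeEvent.isComposing }`.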

Comment on lines +79 to +115
<div className="space-y-3">
  <div className="flex items-center justify-between">
    <label className="text-sm text-gray-400">Text to Summarize</label>
    <button
      onClick={() => setText(SAMPLE_TEXT)}
      className="text-xs text-orange-400 hover:text-orange-300 transition-colors"
    >
      Use sample text
    </button>
  </div>
  <textarea
    value={text}
    onChange={(e) => setText(e.target.value)}
    placeholder="Paste or type text to summarize..."
    className="w-full rounded-lg border border-orange-500/20 bg-gray-800/50 px-4 py-3 text-sm text-white placeholder-gray-500 focus:outline-none focus:ring-2 focus:ring-orange-500/50 resize-none"
    rows={8}
    disabled={isLoading}
  />
</div>

<div className="space-y-3">
  <label className="text-sm text-gray-400">Style</label>
  <div className="flex flex-wrap gap-2">
    {(['concise', 'bullet-points', 'paragraph'] as const).map((s) => (
      <button
        key={s}
        onClick={() => setStyle(s)}
        className={`px-4 py-2 rounded-lg text-sm font-medium transition-colors ${
          style === s
            ? 'bg-orange-600 text-white'
            : 'bg-gray-800 text-gray-300 hover:bg-gray-700'
        }`}
      >
        {s}
      </button>
    ))}
  </div>

⚠️ Potential issue | 🟡 Minor

Expose the summary inputs as real labeled controls.

The textarea label is not bound to the <textarea>, and the style picker only exposes its selected state through color. Screen-reader users currently lose both the field label and the active style.

♿ Suggested change
-        <div className="flex items-center justify-between">
-          <label className="text-sm text-gray-400">Text to Summarize</label>
+        <div className="flex items-center justify-between">
+          <label htmlFor="summary-text" className="text-sm text-gray-400">
+            Text to Summarize
+          </label>
           <button
             onClick={() => setText(SAMPLE_TEXT)}
             className="text-xs text-orange-400 hover:text-orange-300 transition-colors"
           >
             Use sample text
           </button>
         </div>
         <textarea
+          id="summary-text"
           value={text}
           onChange={(e) => setText(e.target.value)}
           placeholder="Paste or type text to summarize..."
           className="w-full rounded-lg border border-orange-500/20 bg-gray-800/50 px-4 py-3 text-sm text-white placeholder-gray-500 focus:outline-none focus:ring-2 focus:ring-orange-500/50 resize-none"
           rows={8}
           disabled={isLoading}
         />
@@
-        <label className="text-sm text-gray-400">Style</label>
-        <div className="flex flex-wrap gap-2">
+        <span id="summary-style-label" className="text-sm text-gray-400">
+          Style
+        </span>
+        <div
+          role="group"
+          aria-labelledby="summary-style-label"
+          className="flex flex-wrap gap-2"
+        >
           {(['concise', 'bullet-points', 'paragraph'] as const).map((s) => (
             <button
               key={s}
               onClick={() => setStyle(s)}
+              aria-pressed={style === s}
               className={`px-4 py-2 rounded-lg text-sm font-medium transition-colors ${
                 style === s
                   ? 'bg-orange-600 text-white'
📝 Committable suggestion


Suggested change
<div className="space-y-3">
<div className="flex items-center justify-between">
<label className="text-sm text-gray-400">Text to Summarize</label>
<button
onClick={() => setText(SAMPLE_TEXT)}
className="text-xs text-orange-400 hover:text-orange-300 transition-colors"
>
Use sample text
</button>
</div>
<textarea
value={text}
onChange={(e) => setText(e.target.value)}
placeholder="Paste or type text to summarize..."
className="w-full rounded-lg border border-orange-500/20 bg-gray-800/50 px-4 py-3 text-sm text-white placeholder-gray-500 focus:outline-none focus:ring-2 focus:ring-orange-500/50 resize-none"
rows={8}
disabled={isLoading}
/>
</div>
<div className="space-y-3">
<label className="text-sm text-gray-400">Style</label>
<div className="flex flex-wrap gap-2">
{(['concise', 'bullet-points', 'paragraph'] as const).map((s) => (
<button
key={s}
onClick={() => setStyle(s)}
className={`px-4 py-2 rounded-lg text-sm font-medium transition-colors ${
style === s
? 'bg-orange-600 text-white'
: 'bg-gray-800 text-gray-300 hover:bg-gray-700'
}`}
>
{s}
</button>
))}
</div>
<div className="space-y-3">
<div className="flex items-center justify-between">
<label htmlFor="summary-text" className="text-sm text-gray-400">
Text to Summarize
</label>
<button
onClick={() => setText(SAMPLE_TEXT)}
className="text-xs text-orange-400 hover:text-orange-300 transition-colors"
>
Use sample text
</button>
</div>
<textarea
id="summary-text"
value={text}
onChange={(e) => setText(e.target.value)}
placeholder="Paste or type text to summarize..."
className="w-full rounded-lg border border-orange-500/20 bg-gray-800/50 px-4 py-3 text-sm text-white placeholder-gray-500 focus:outline-none focus:ring-2 focus:ring-orange-500/50 resize-none"
rows={8}
disabled={isLoading}
/>
</div>
<div className="space-y-3">
<span id="summary-style-label" className="text-sm text-gray-400">
Style
</span>
<div
role="group"
aria-labelledby="summary-style-label"
className="flex flex-wrap gap-2"
>
{(['concise', 'bullet-points', 'paragraph'] as const).map((s) => (
<button
key={s}
onClick={() => setStyle(s)}
aria-pressed={style === s}
className={`px-4 py-2 rounded-lg text-sm font-medium transition-colors ${
style === s
? 'bg-orange-600 text-white'
: 'bg-gray-800 text-gray-300 hover:bg-gray-700'
}`}
>
{s}
</button>
))}
</div>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.summarize.tsx` around lines 79
- 115, The textarea and style buttons are not accessible: bind the label to the
textarea and expose the selected style to assistive tech. Add a htmlFor/id pair
for the textarea (referencing the textarea that uses value={text} and
onChange={(e) => setText(e.target.value)} and the "Use sample text" button that
calls setText(SAMPLE_TEXT)), and make each style button (the map over
['concise','bullet-points','paragraph'] that calls setStyle(s) and checks style
=== s) act like a real radio group by adding appropriate ARIA/role attributes
(role="radiogroup" on the container, role="radio" and aria-checked on each
button), unique ids, and visually-hidden labels so screen readers can announce
the active style; ensure keyboard focus/activation still calls setStyle(s).

Comment on lines +36 to +46
const handleFileUpload = async (e: React.ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0]
if (!file) return

const buffer = await file.arrayBuffer()
const base64 = btoa(
new Uint8Array(buffer).reduce((s, b) => s + String.fromCharCode(b), ''),
)
const dataUrl = `data:${file.type};base64,${base64}`

await generate({ audio: dataUrl, language: 'en' })

⚠️ Potential issue | 🟠 Major

Reject files above the advertised 25 MB limit before reading them.

Line 76 promises a 25 MB cap, but handleFileUpload will still buffer and encode anything the user selects. That can blow up memory in the tab long before the backend has a chance to reject the upload.

Also applies to: 75-76

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/ts-react-chat/src/routes/generations.transcription.tsx` around lines
36 - 46, The handleFileUpload function currently reads and encodes any selected
file; reject files larger than the advertised 25 MB before allocating or calling
file.arrayBuffer(). Add a size check in handleFileUpload using a MAX_BYTES
constant (e.g., MAX_BYTES = 25 * 1024 * 1024) that inspects file.size, and if
file.size > MAX_BYTES immediately return (or show an error to the user) instead
of calling file.arrayBuffer(), then only proceed to create the base64 dataUrl
and call generate when the file is within the allowed size.
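The guard described in this prompt can be sketched as a small helper checked before reading the file. `MAX_BYTES` and the helper name are illustrative assumptions, not part of the existing example code:

```typescript
// Hypothetical size guard for handleFileUpload; names are illustrative.
const MAX_BYTES = 25 * 1024 * 1024 // the advertised 25 MB cap

function isWithinUploadLimit(fileSize: number): boolean {
  return fileSize <= MAX_BYTES
}

// In handleFileUpload, before calling file.arrayBuffer():
//   if (!isWithinUploadLimit(file.size)) {
//     // surface an error to the user instead of buffering the file
//     return
//   }

console.log(isWithinUploadLimit(10 * 1024 * 1024)) // true
console.log(isWithinUploadLimit(26 * 1024 * 1024)) // false
```

Checking `file.size` before calling `file.arrayBuffer()` avoids allocating the full buffer for oversized selections in the first place.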


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/guides/image-generation.md`:
- Around line 375-383: Update the docs table row for the onResult option to
match the actual TypeScript signature in UseGenerateImageOptions<TOutput>:
change the entry from `(result) => void` to `(result) => TOutput | null | void`
(or explicitly `(result: ImageGenerationResult) => TOutput | null | void`) so it
reflects that the callback may transform and return a value or null/void; update
the `onResult` description only and keep the rest of the table unchanged.

In `@docs/guides/transcription.md`:
- Around line 387-396: The example's fetcher passes `input` directly to
`transcribeFn` but `transcribeFn` expects an object with an `audio: string`;
update the `fetcher` inside `AudioTranscriber` to convert/narrow the input
before calling `transcribeFn` (e.g., wrap input into an object with the `audio`
property or use a type assertion to `string`) so the call signature matches
`transcribeFn({ audio: string })`; ensure the `useTranscription` `fetcher`
parameter and `transcribeFn` invocation use consistent types.
- Around line 399-409: Update the Hook API table for useTranscription to match
the exported UseTranscriptionOptions: add rows for `id` (string | undefined) and
`body` (any | undefined) and `onChunk` ((chunk) => void) with brief
descriptions, and change the `onResult` signature to reflect it may return a
transformed value, `null` to preserve previous result, or `void` to store the
raw TranscriptionResult (e.g., `(result) => TranscriptionResult |
TransformedType | null | void`); reference the useTranscription hook and the
UseTranscriptionOptions interface to ensure names and return semantics match the
implementation.

In `@packages/typescript/ai/tests/stream-generation.test.ts`:
- Around line 252-271: The test currently only asserts that a RUN_ERROR chunk
exists and has the expected message but doesn't assert terminal stream behavior;
update the test around generateVideo/createMockVideoAdapter/collectChunks to
verify the stream ended on error by asserting either that no RUN_FINISHED chunk
is present or that the final emitted chunk's type is RUN_ERROR (and verify the
RUN_ERROR payload); apply the same change to the other video tests referencing
RUN_ERROR/RUN_FINISHED to ensure they fail if a RUN_FINISHED is emitted after an
error.
- Around line 1-8: Alphabetize the named import specifiers so ESLint
sort-imports passes: change the vitest import to order the specifiers as
describe, expect, it, vi and reorder the index exports to generateImage,
generateSpeech, generateTranscription, generateVideo (the type import
StreamChunk, VideoStatusResult can remain as-is); update the import lines that
reference describe/it/expect/vi,
generateImage/generateVideo/generateSpeech/generateTranscription accordingly.
- Around line 221-249: The test currently builds types from chunks and uses
expect(...).toContain which only checks membership; change the assertions to
verify ordering and counts explicitly: compute the ordered list (types) from
chunks and assert it equals the exact expected sequence (e.g., ['RUN_STARTED',
'CUSTOM:video:job:created', 'CUSTOM:video:status', ...,
'CUSTOM:generation:result', 'RUN_FINISHED']) or assert index ordering with
indexOf checks to ensure generation:result comes before RUN_FINISHED; also
assert the exact count of video:status events (by filtering chunks for
c.type==='CUSTOM' && c.name==='video:status' and checking its length) so
reorders or missing/extra status events will fail — update assertions around
variables/types, chunks, jobCreated, and result accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f309ac82-81e3-4b04-8a2b-ecdc0f23a6ed

📥 Commits

Reviewing files that changed from the base of the PR and between 8288bf0 and 127353e.

📒 Files selected for processing (6)
  • .claude/settings.json
  • docs/guides/image-generation.md
  • docs/guides/text-to-speech.md
  • docs/guides/transcription.md
  • docs/guides/video-generation.md
  • packages/typescript/ai/tests/stream-generation.test.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • docs/guides/video-generation.md
  • .claude/settings.json
  • docs/guides/text-to-speech.md

Comment on lines +375 to +383
| Option | Type | Description |
|--------|------|-------------|
| `connection` | `ConnectionAdapter` | Streaming transport (SSE, HTTP stream, custom) |
| `fetcher` | `(input) => Promise<ImageGenerationResult>` | Direct async function (no streaming) |
| `id` | `string` | Unique identifier for this instance |
| `body` | `Record<string, any>` | Additional body parameters (connection mode) |
| `onResult` | `(result) => void` | Callback when images are generated |
| `onError` | `(error) => void` | Callback on error |
| `onProgress` | `(progress, message?) => void` | Progress updates (0-100) |

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the shared generation-hook docs against the exported image hook interface.
sed -n '375,383p' docs/guides/image-generation.md
rg -n -A12 -B3 'export interface UseGenerateImageOptions|onResult\?' packages/typescript/ai-react/src/use-generate-image.ts

Repository: TanStack/ai

Length of output: 2652


onResult callback signature in the table is incomplete.

The documentation shows (result) => void, but the actual UseGenerateImageOptions<TOutput> interface allows the callback to return TOutput | null | void to optionally transform and store the result. Update the table row to reflect the full signature.

Proposed fix
-| `onResult` | `(result) => void` | Callback when images are generated |
+| `onResult` | `(result) => TOutput \| null \| void` | Callback when images are generated; can transform the stored result or return `null` to keep the previous one |
📝 Committable suggestion


Suggested change
| Option | Type | Description |
|--------|------|-------------|
| `connection` | `ConnectionAdapter` | Streaming transport (SSE, HTTP stream, custom) |
| `fetcher` | `(input) => Promise<ImageGenerationResult>` | Direct async function (no streaming) |
| `id` | `string` | Unique identifier for this instance |
| `body` | `Record<string, any>` | Additional body parameters (connection mode) |
| `onResult` | `(result) => void` | Callback when images are generated |
| `onError` | `(error) => void` | Callback on error |
| `onProgress` | `(progress, message?) => void` | Progress updates (0-100) |
| Option | Type | Description |
|--------|------|-------------|
| `connection` | `ConnectionAdapter` | Streaming transport (SSE, HTTP stream, custom) |
| `fetcher` | `(input) => Promise<ImageGenerationResult>` | Direct async function (no streaming) |
| `id` | `string` | Unique identifier for this instance |
| `body` | `Record<string, any>` | Additional body parameters (connection mode) |
| `onResult` | `(result) => TOutput \| null \| void` | Callback when images are generated; can transform the stored result or return `null` to keep the previous one |
| `onError` | `(error) => void` | Callback on error |
| `onProgress` | `(progress, message?) => void` | Progress updates (0-100) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/image-generation.md` around lines 375 - 383, Update the docs
table row for the onResult option to match the actual TypeScript signature in
UseGenerateImageOptions<TOutput>: change the entry from `(result) => void` to
`(result) => TOutput | null | void` (or explicitly `(result:
ImageGenerationResult) => TOutput | null | void`) so it reflects that the
callback may transform and return a value or null/void; update the `onResult`
description only and keep the rest of the table unchanged.

Comment on lines +387 to +396
```tsx
import { useTranscription } from '@tanstack/ai-react'
import { transcribeFn } from '../lib/server-functions'

function AudioTranscriber() {
const { generate, result, isLoading } = useTranscription({
fetcher: (input) => transcribeFn({ data: input }),
})
// ... same UI as above
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the documented fetcher against the real server function and example usage.
sed -n '376,396p' docs/guides/transcription.md
sed -n '53,68p' examples/ts-react-chat/src/lib/server-fns.ts
sed -n '16,25p' examples/ts-react-chat/src/routes/generations.transcription.tsx

Repository: TanStack/ai

Length of output: 1337


Direct-mode fetcher example does not match the server function's input type.

The documented fetcher passes input directly, but transcribeFn expects audio as a string. Without type narrowing, this produces a TypeScript error. The working example in the codebase applies an explicit type assertion to resolve this.

Proposed fix
 function AudioTranscriber() {
   const { generate, result, isLoading } = useTranscription({
-    fetcher: (input) => transcribeFn({ data: input }),
+    fetcher: (input) =>
+      transcribeFn({
+        data: { ...input, audio: input.audio as string },
+      }),
   })
   // ... same UI as above
 }
📝 Committable suggestion


Suggested change
```tsx
import { useTranscription } from '@tanstack/ai-react'
import { transcribeFn } from '../lib/server-functions'
function AudioTranscriber() {
const { generate, result, isLoading } = useTranscription({
fetcher: (input) => transcribeFn({ data: input }),
})
// ... same UI as above
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/transcription.md` around lines 387 - 396, The example's fetcher
passes `input` directly to `transcribeFn` but `transcribeFn` expects an object
with an `audio: string`; update the `fetcher` inside `AudioTranscriber` to
convert/narrow the input before calling `transcribeFn` (e.g., wrap input into an
object with the `audio` property or use a type assertion to `string`) so the
call signature matches `transcribeFn({ audio: string })`; ensure the
`useTranscription` `fetcher` parameter and `transcribeFn` invocation use
consistent types.

Comment on lines +399 to +409
### Hook API

The `useTranscription` hook accepts:

| Option | Type | Description |
|--------|------|-------------|
| `connection` | `ConnectionAdapter` | Streaming transport (SSE, HTTP stream, custom) |
| `fetcher` | `(input) => Promise<TranscriptionResult>` | Direct async function (no streaming) |
| `onResult` | `(result) => void` | Callback when transcription completes |
| `onError` | `(error) => void` | Callback on error |
| `onProgress` | `(progress, message?) => void` | Progress updates (0-100) |

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the documented options against the exported interface.
sed -n '399,409p' docs/guides/transcription.md
sed -n '15,38p' packages/typescript/ai-react/src/use-transcription.ts

Repository: TanStack/ai

Length of output: 1716


Hook API table is incomplete and understates onResult.

The exported UseTranscriptionOptions interface includes three additional options—id, body, and onChunk—not documented in the table. Additionally, onResult can return a transformed value, null (to preserve the previous result), or nothing (void to store the raw result); the table documents only the return type as void, hiding the transform and null-preservation semantics.

Proposed fix
 | Option | Type | Description |
 |--------|------|-------------|
 | `connection` | `ConnectionAdapter` | Streaming transport (SSE, HTTP stream, custom) |
 | `fetcher` | `(input) => Promise<TranscriptionResult>` | Direct async function (no streaming) |
+| `id` | `string` | Unique identifier for this generation instance |
+| `body` | `Record<string, any>` | Additional body parameters to send in connection mode |
-| `onResult` | `(result) => void` | Callback when transcription completes |
+| `onResult` | `(result) => TOutput \| null \| void` | Callback when transcription completes; can transform the stored result or return `null` to keep the previous one |
 | `onError` | `(error) => void` | Callback on error |
 | `onProgress` | `(progress, message?) => void` | Progress updates (0-100) |
+| `onChunk` | `(chunk: StreamChunk) => void` | Callback for each stream chunk (connection mode only) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/guides/transcription.md` around lines 399 - 409, Update the Hook API
table for useTranscription to match the exported UseTranscriptionOptions: add
rows for `id` (string | undefined) and `body` (any | undefined) and `onChunk`
((chunk) => void) with brief descriptions, and change the `onResult` signature
to reflect it may return a transformed value, `null` to preserve previous
result, or `void` to store the raw TranscriptionResult (e.g., `(result) =>
TranscriptionResult | TransformedType | null | void`); reference the
useTranscription hook and the UseTranscriptionOptions interface to ensure names
and return semantics match the implementation.

Comment on lines +1 to +8
import { describe, it, expect, vi } from 'vitest'
import {
generateImage,
generateVideo,
generateSpeech,
generateTranscription,
} from '../src/index'
import type { StreamChunk, VideoStatusResult } from '../src/types'

⚠️ Potential issue | 🟡 Minor

Fix the named import ordering so lint passes.

ESLint is already flagging sort-imports here, so this file will fail the current lint config until the specifiers are alphabetized.

Suggested fix
-import { describe, it, expect, vi } from 'vitest'
+import { describe, expect, it, vi } from 'vitest'
 import {
   generateImage,
-  generateVideo,
   generateSpeech,
   generateTranscription,
+  generateVideo,
 } from '../src/index'
📝 Committable suggestion


Suggested change
import { describe, it, expect, vi } from 'vitest'
import {
generateImage,
generateVideo,
generateSpeech,
generateTranscription,
} from '../src/index'
import type { StreamChunk, VideoStatusResult } from '../src/types'
import { describe, expect, it, vi } from 'vitest'
import {
generateImage,
generateSpeech,
generateTranscription,
generateVideo,
} from '../src/index'
import type { StreamChunk, VideoStatusResult } from '../src/types'
🧰 Tools
🪛 ESLint

[error] 1-1: Member 'expect' of the import declaration should be sorted alphabetically.

(sort-imports)


[error] 5-5: Member 'generateSpeech' of the import declaration should be sorted alphabetically.

(sort-imports)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 1 - 8,
Alphabetize the named import specifiers so ESLint sort-imports passes: change
the vitest import to order the specifiers as describe, expect, it, vi and
reorder the index exports to generateImage, generateSpeech,
generateTranscription, generateVideo (the type import StreamChunk,
VideoStatusResult can remain as-is); update the import lines that reference
describe/it/expect/vi,
generateImage/generateVideo/generateSpeech/generateTranscription accordingly.

Comment on lines +221 to +249
// RUN_STARTED, video:job:created, video:status (processing), video:status (completed), generation:result, RUN_FINISHED
const types = chunks.map((c) =>
c.type === 'CUSTOM' ? `CUSTOM:${c.name}` : c.type,
)

expect(types).toContain('RUN_STARTED')
expect(types).toContain('CUSTOM:video:job:created')
expect(types).toContain('CUSTOM:video:status')
expect(types).toContain('CUSTOM:generation:result')
expect(types).toContain('RUN_FINISHED')

// Check job created event
const jobCreated = chunks.find(
(c) => c.type === 'CUSTOM' && c.name === 'video:job:created',
)
if (jobCreated?.type === 'CUSTOM') {
expect(jobCreated.value).toEqual({ jobId: 'job-123' })
}

// Check result
const result = chunks.find(
(c) => c.type === 'CUSTOM' && c.name === 'generation:result',
)
if (result?.type === 'CUSTOM') {
const value = result.value as any
expect(value.url).toBe('https://example.com/video.mp4')
expect(value.jobId).toBe('job-123')
expect(value.status).toBe('completed')
}

⚠️ Potential issue | 🟡 Minor

Assert the video lifecycle sequence explicitly.

This test title promises an ordered lifecycle, but toContain only checks membership. A regression that reorders generation:result/RUN_FINISHED or changes the number of status events would still pass.

Suggested tightening
-    expect(types).toContain('RUN_STARTED')
-    expect(types).toContain('CUSTOM:video:job:created')
-    expect(types).toContain('CUSTOM:video:status')
-    expect(types).toContain('CUSTOM:generation:result')
-    expect(types).toContain('RUN_FINISHED')
+    expect(types).toEqual([
+      'RUN_STARTED',
+      'CUSTOM:video:job:created',
+      'CUSTOM:video:status',
+      'CUSTOM:video:status',
+      'CUSTOM:generation:result',
+      'RUN_FINISHED',
+    ])
+
+    const started = chunks[0]
+    const finished = chunks[5]
+    expect(started?.type).toBe('RUN_STARTED')
+    expect(finished?.type).toBe('RUN_FINISHED')
+    if (started?.type === 'RUN_STARTED' && finished?.type === 'RUN_FINISHED') {
+      expect(finished.runId).toBe(started.runId)
+    }
📝 Committable suggestion


Suggested change
// RUN_STARTED, video:job:created, video:status (processing), video:status (completed), generation:result, RUN_FINISHED
const types = chunks.map((c) =>
c.type === 'CUSTOM' ? `CUSTOM:${c.name}` : c.type,
)
expect(types).toContain('RUN_STARTED')
expect(types).toContain('CUSTOM:video:job:created')
expect(types).toContain('CUSTOM:video:status')
expect(types).toContain('CUSTOM:generation:result')
expect(types).toContain('RUN_FINISHED')
// Check job created event
const jobCreated = chunks.find(
(c) => c.type === 'CUSTOM' && c.name === 'video:job:created',
)
if (jobCreated?.type === 'CUSTOM') {
expect(jobCreated.value).toEqual({ jobId: 'job-123' })
}
// Check result
const result = chunks.find(
(c) => c.type === 'CUSTOM' && c.name === 'generation:result',
)
if (result?.type === 'CUSTOM') {
const value = result.value as any
expect(value.url).toBe('https://example.com/video.mp4')
expect(value.jobId).toBe('job-123')
expect(value.status).toBe('completed')
}
// RUN_STARTED, video:job:created, video:status (processing), video:status (completed), generation:result, RUN_FINISHED
const types = chunks.map((c) =>
c.type === 'CUSTOM' ? `CUSTOM:${c.name}` : c.type,
)
expect(types).toEqual([
'RUN_STARTED',
'CUSTOM:video:job:created',
'CUSTOM:video:status',
'CUSTOM:video:status',
'CUSTOM:generation:result',
'RUN_FINISHED',
])
const started = chunks[0]
const finished = chunks[5]
expect(started?.type).toBe('RUN_STARTED')
expect(finished?.type).toBe('RUN_FINISHED')
if (started?.type === 'RUN_STARTED' && finished?.type === 'RUN_FINISHED') {
expect(finished.runId).toBe(started.runId)
}
// Check job created event
const jobCreated = chunks.find(
(c) => c.type === 'CUSTOM' && c.name === 'video:job:created',
)
if (jobCreated?.type === 'CUSTOM') {
expect(jobCreated.value).toEqual({ jobId: 'job-123' })
}
// Check result
const result = chunks.find(
(c) => c.type === 'CUSTOM' && c.name === 'generation:result',
)
if (result?.type === 'CUSTOM') {
const value = result.value as any
expect(value.url).toBe('https://example.com/video.mp4')
expect(value.jobId).toBe('job-123')
expect(value.status).toBe('completed')
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 221 -
249, The test currently builds types from chunks and uses expect(...).toContain
which only checks membership; change the assertions to verify ordering and
counts explicitly: compute the ordered list (types) from chunks and assert it
equals the exact expected sequence (e.g., ['RUN_STARTED',
'CUSTOM:video:job:created', 'CUSTOM:video:status', ...,
'CUSTOM:generation:result', 'RUN_FINISHED']) or assert index ordering with
indexOf checks to ensure generation:result comes before RUN_FINISHED; also
assert the exact count of video:status events (by filtering chunks for
c.type==='CUSTOM' && c.name==='video:status' and checking its length) so
reorders or missing/extra status events will fail — update assertions around
variables/types, chunks, jobCreated, and result accordingly.

Comment on lines +252 to +271
it('should emit RUN_ERROR when video generation fails', async () => {
const adapter = createMockVideoAdapter({ failOnPoll: 1 })

const chunks = await collectChunks(
generateVideo({
adapter,
prompt: 'test',
stream: true,
pollingInterval: 10,
}),
)

const types = chunks.map((c) => c.type)
expect(types).toContain('RUN_ERROR')

const error = chunks.find((c) => c.type === 'RUN_ERROR')
if (error?.type === 'RUN_ERROR') {
expect(error.error.message).toBe('Video processing error')
}
})

⚠️ Potential issue | 🟡 Minor

Tighten the video error-path assertions to verify terminal behavior.

These cases currently prove that a RUN_ERROR exists and carries the right message, but they do not prove the stream terminated correctly. A broken implementation that emits RUN_FINISHED after the error would still pass.

Suggested tightening
     const error = chunks.find((c) => c.type === 'RUN_ERROR')
+    expect(chunks.at(-1)?.type).toBe('RUN_ERROR')
+    expect(chunks.map((c) => c.type)).not.toContain('RUN_FINISHED')
     if (error?.type === 'RUN_ERROR') {
       expect(error.error.message).toBe('Video processing error')
     }

Also applies to: 317-365, 367-412

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai/tests/stream-generation.test.ts` around lines 252 -
271, The test currently only asserts that a RUN_ERROR chunk exists and has the
expected message but doesn't assert terminal stream behavior; update the test
around generateVideo/createMockVideoAdapter/collectChunks to verify the stream
ended on error by asserting either that no RUN_FINISHED chunk is present or that
the final emitted chunk's type is RUN_ERROR (and verify the RUN_ERROR payload);
apply the same change to the other video tests referencing
RUN_ERROR/RUN_FINISHED to ensure they fail if a RUN_FINISHED is emitted after an
error.

…354)

* feat(ai-client): pass abort signal to fetcher in generation clients

The fetcher function in GenerationClient and VideoGenerationClient
now receives the AbortSignal as an optional second parameter, allowing
long-running fetcher calls to be cancelled mid-flight when stop() is
called.

Updated fetcher type signature across all framework integrations
(React, Solid, Vue, Svelte) — backwards-compatible since the options
parameter is optional.

* ci: apply automated fixes

* refactor: extract GenerationFetcher utility type to centralize fetcher signature

Replace inline fetcher type definitions across all framework hooks (React,
Solid, Vue, Svelte) with a shared GenerationFetcher<TInput, TResult> type
from ai-client. Future changes to the fetcher signature only need to update
generation-types.ts instead of ~28 files.
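
As a rough sketch (the actual definition lives in ai-client's generation-types.ts and may differ), the shared fetcher signature with the optional abort-signal parameter described above could look like:

```typescript
// Hypothetical sketch of the shared fetcher signature; the real type
// lives in @tanstack/ai-client's generation-types.ts.
type GenerationFetcher<TInput, TResult> = (
  input: TInput,
  options?: { signal?: AbortSignal },
) => Promise<TResult>

// A fetcher that honors the abort signal before doing work.
const echoFetcher: GenerationFetcher<{ prompt: string }, string> = async (
  input,
  options,
) => {
  if (options?.signal?.aborted) {
    throw new Error('generation cancelled')
  }
  return `generated: ${input.prompt}`
}
```

Because `options` is optional, existing fetchers that accept only `input` keep compiling, which matches the backwards-compatibility note above.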

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
packages/typescript/ai-solid/src/use-generation.ts (1)

112-143: ⚠️ Potential issue | 🟡 Minor

Remove the invalid dependency array from createMemo.

Solid.js's createMemo does not accept a dependency array as its second argument. Dependencies are automatically tracked. The [clientId] array here is incorrect syntax—the second argument should be an initial value (of the same type as the computed value), not a deps array.

🔧 Proposed fix
   const client = createMemo(() => {
     // ... client creation logic ...
-  }, [clientId])
+  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-solid/src/use-generation.ts` around lines 112 - 143,
The createMemo call for the client memo incorrectly passes a dependency array
“[clientId]” as a second argument; Solid tracks dependencies automatically and
the second parameter is not a deps array, so remove the second argument and call
createMemo with only the factory function that returns the GenerationClient
(i.e., update the client = createMemo(() => { ... }) invocation), leaving the
existing clientOptions/GenerationClient logic intact (references: createMemo,
client, clientId, GenerationClient).
🧹 Nitpick comments (4)
packages/typescript/ai-svelte/src/create-generation.svelte.ts (1)

50-67: CreateGenerationReturn interface widens generate input type.

Similar to the React and Vue hooks, the generate signature uses Record<string, any> instead of preserving TInput. Consider adding a TInput generic parameter for compile-time safety in direct usage scenarios.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-svelte/src/create-generation.svelte.ts` around lines
50 - 67, The CreateGenerationReturn interface currently widens generate to
Record<string, any>; add a TInput generic (e.g., CreateGenerationReturn<TInput,
TOutput>) and change generate to accept TInput instead of Record<string, any> to
preserve compile-time safety; also adjust updateBody to accept Partial<TInput>
(or another suitable TInput-derived shape) and update all call sites/usages of
CreateGenerationReturn, generate, and updateBody to pass/propagate the new
TInput generic.
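As a hedged illustration of that suggestion (field names mirror the review comment; the real interface lives in create-generation.svelte.ts and is richer than this), the narrowed return type could look like:

```typescript
// Sketch of the suggested TInput-generic return shape; everything
// beyond generate/updateBody is illustrative.
interface CreateGenerationReturn<TInput, TResult> {
  result: TResult | undefined
  generate: (input: TInput) => Promise<void>
  updateBody: (patch: Partial<TInput>) => void
}

// Tiny in-memory stand-in showing the generics in use: updateBody
// accumulates a partial body, generate merges it with the call input.
function createStub<TInput, TResult>(
  run: (input: TInput) => TResult,
): CreateGenerationReturn<TInput, TResult> {
  let body: Partial<TInput> = {}
  const api: CreateGenerationReturn<TInput, TResult> = {
    result: undefined,
    async generate(input) {
      api.result = run(Object.assign({}, body, input))
    },
    updateBody(patch) {
      body = Object.assign({}, body, patch)
    },
  }
  return api
}
```

With `TInput` preserved, a mistyped input (e.g. `{ promt: 'cat' }`) becomes a compile-time error instead of slipping through `Record<string, any>`.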
packages/typescript/ai-svelte/src/create-generate-video.svelte.ts (1)

175-178: Document cleanup requirement in the JSDoc example.

The inline comment explains that users should call video.stop() for cleanup, but this isn't reflected in the example code (lines 82-103). Consider adding cleanup guidance to the example to prevent resource leaks.

Example addition to JSDoc
 * `@example`
 * ```svelte
 * <script>
 *   import { createGenerateVideo, fetchServerSentEvents } from '@tanstack/ai-svelte'
 *   import { onDestroy } from 'svelte'
 *
 *   const video = createGenerateVideo({
 *     connection: fetchServerSentEvents('/api/generate/video'),
 *   })
 *
 *   // Clean up on component unmount
 *   onDestroy(() => video.stop())
 * </script>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-svelte/src/create-generate-video.svelte.ts` around
lines 175 - 178, Add cleanup guidance to the JSDoc example for
createGenerateVideo: update the example block that shows createGenerateVideo and
fetchServerSentEvents to include Svelte's onDestroy and call video.stop() inside
it (i.e., import onDestroy, then onDestroy(() => video.stop())). This ensures
consumers see how to properly stop the generation and avoid resource leaks;
reference the createGenerateVideo example, fetchServerSentEvents usage, and the
video.stop() method so reviewers can find and update the example.
packages/typescript/ai-client/tests/generation-client.test.ts (1)

129-143: Consider importing GENERATION_EVENTS constants instead of hardcoding event names.

The test file hardcodes event name strings like 'generation:result' and 'generation:progress'. If the constant values in generation-types.ts ever change, these hardcoded literals silently drift out of sync with the production event names, so the tests no longer exercise the real contract.

Example refactor
+import { GENERATION_EVENTS } from '../src/generation-types'
 import { GenerationClient } from '../src/generation-client'

 // Then in tests:
-          name: 'generation:result',
+          name: GENERATION_EVENTS.RESULT,

Also applies to: 220-234

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-client/tests/generation-client.test.ts` around lines
129 - 143, Replace hardcoded event name strings in the tests with the shared
constants: import GENERATION_EVENTS (or the specific named exports like
GENERATION_RESULT and GENERATION_PROGRESS) from generation-types.ts and use
those constants instead of literal 'generation:result' and 'generation:progress'
in the createMockConnection calls (e.g., the event objects in
generation-client.test.ts where the CUSTOM events are constructed, including the
blocks around lines referencing mockResult and progress events).
packages/typescript/ai-client/tests/video-generation-client.test.ts (1)

141-185: Consider importing GENERATION_EVENTS constants instead of hardcoding video event names.

The test hardcodes 'video:job:created', 'video:status', and 'generation:result' strings. Using GENERATION_EVENTS.VIDEO_JOB_CREATED, GENERATION_EVENTS.VIDEO_STATUS, and GENERATION_EVENTS.RESULT would ensure tests stay in sync with production code if event names change.

Example refactor
+import { GENERATION_EVENTS } from '../src/generation-types'

 // Then in tests:
-          name: 'video:job:created',
+          name: GENERATION_EVENTS.VIDEO_JOB_CREATED,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-client/tests/video-generation-client.test.ts` around
lines 141 - 185, Replace the hardcoded event name strings in the test's mock
connection payload (the entries passed to createMockConnection) with the shared
constants from GENERATION_EVENTS (use GENERATION_EVENTS.VIDEO_JOB_CREATED,
GENERATION_EVENTS.VIDEO_STATUS, and GENERATION_EVENTS.RESULT), and add an import
for GENERATION_EVENTS at the top of the test file so the test uses the canonical
event names used by production code.
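The canonical constants could be centralized roughly like this (string values are taken from the literals flagged above; the exact shape of GENERATION_EVENTS in generation-types.ts may differ):

```typescript
// Sketch of the shared event-name constants; the values mirror the
// strings the tests currently hardcode.
const GENERATION_EVENTS = {
  RESULT: 'generation:result',
  PROGRESS: 'generation:progress',
  VIDEO_JOB_CREATED: 'video:job:created',
  VIDEO_STATUS: 'video:status',
} as const

// Union of all event-name literals, useful for typing CUSTOM chunks.
type GenerationEventName =
  (typeof GENERATION_EVENTS)[keyof typeof GENERATION_EVENTS]
```

Tests that reference `GENERATION_EVENTS.VIDEO_JOB_CREATED` instead of the literal string fail to compile if the constant is renamed and stay in sync automatically if its value changes.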
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-vue/src/use-generate-video.ts`:
- Around line 175-179: The watcher in use-generate-video.ts must react to deep
mutations and must never forward undefined so stale fields persist; change the
watch call to a deep watch (pass { deep: true }) and in the callback pass a full
snapshot (deep clone or toRaw-then-clone) to client.updateOptions({ body:
snapshot }) and if newBody is undefined pass an empty object ({}), ensuring
removed keys are cleared; reference the existing watcher, client.updateOptions,
and the behavior in video-generation-client.ts so the client always receives the
complete current body state rather than an undefined/partial patch.

---

Duplicate comments:
In `@packages/typescript/ai-solid/src/use-generation.ts`:
- Around line 112-143: The createMemo call for the client memo incorrectly
passes a dependency array “[clientId]” as a second argument; Solid tracks
dependencies automatically and the second parameter is not a deps array, so
remove the second argument and call createMemo with only the factory function
that returns the GenerationClient (i.e., update the client = createMemo(() => {
... }) invocation), leaving the existing clientOptions/GenerationClient logic
intact (references: createMemo, client, clientId, GenerationClient).

---

Nitpick comments:
In `@packages/typescript/ai-client/tests/generation-client.test.ts`:
- Around line 129-143: Replace hardcoded event name strings in the tests with
the shared constants: import GENERATION_EVENTS (or the specific named exports
like GENERATION_RESULT and GENERATION_PROGRESS) from generation-types.ts and use
those constants instead of literal 'generation:result' and 'generation:progress'
in the createMockConnection calls (e.g., the event objects in
generation-client.test.ts where the CUSTOM events are constructed, including the
blocks around lines referencing mockResult and progress events).

In `@packages/typescript/ai-client/tests/video-generation-client.test.ts`:
- Around line 141-185: Replace the hardcoded event name strings in the test's
mock connection payload (the entries passed to createMockConnection) with the
shared constants from GENERATION_EVENTS (use
GENERATION_EVENTS.VIDEO_JOB_CREATED, GENERATION_EVENTS.VIDEO_STATUS, and
GENERATION_EVENTS.RESULT), and add an import for GENERATION_EVENTS at the top of
the test file so the test uses the canonical event names used by production
code.

In `@packages/typescript/ai-svelte/src/create-generate-video.svelte.ts`:
- Around line 175-178: Add cleanup guidance to the JSDoc example for
createGenerateVideo: update the example block that shows createGenerateVideo and
fetchServerSentEvents to include Svelte's onDestroy and call video.stop() inside
it (i.e., import onDestroy, then onDestroy(() => video.stop())). This ensures
consumers see how to properly stop the generation and avoid resource leaks;
reference the createGenerateVideo example, fetchServerSentEvents usage, and the
video.stop() method so reviewers can find and update the example.

In `@packages/typescript/ai-svelte/src/create-generation.svelte.ts`:
- Around line 50-67: The CreateGenerationReturn interface currently widens
generate to Record<string, any>; add a TInput generic (e.g.,
CreateGenerationReturn<TInput, TOutput>) and change generate to accept TInput
instead of Record<string, any> to preserve compile-time safety; also adjust
updateBody to accept Partial<TInput> (or another suitable TInput-derived shape)
and update all call sites/usages of CreateGenerationReturn, generate, and
updateBody to pass/propagate the new TInput generic.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9728012a-d789-4c31-93cb-c10d1b4c411c

📥 Commits

Reviewing files that changed from the base of the PR and between 127353e and e4288b2.

📒 Files selected for processing (30)
  • packages/typescript/ai-client/src/generation-client.ts
  • packages/typescript/ai-client/src/generation-types.ts
  • packages/typescript/ai-client/src/index.ts
  • packages/typescript/ai-client/src/video-generation-client.ts
  • packages/typescript/ai-client/tests/generation-client.test.ts
  • packages/typescript/ai-client/tests/video-generation-client.test.ts
  • packages/typescript/ai-react/src/use-generate-image.ts
  • packages/typescript/ai-react/src/use-generate-speech.ts
  • packages/typescript/ai-react/src/use-generate-video.ts
  • packages/typescript/ai-react/src/use-generation.ts
  • packages/typescript/ai-react/src/use-summarize.ts
  • packages/typescript/ai-react/src/use-transcription.ts
  • packages/typescript/ai-solid/src/use-generate-image.ts
  • packages/typescript/ai-solid/src/use-generate-speech.ts
  • packages/typescript/ai-solid/src/use-generate-video.ts
  • packages/typescript/ai-solid/src/use-generation.ts
  • packages/typescript/ai-solid/src/use-summarize.ts
  • packages/typescript/ai-solid/src/use-transcription.ts
  • packages/typescript/ai-svelte/src/create-generate-image.svelte.ts
  • packages/typescript/ai-svelte/src/create-generate-speech.svelte.ts
  • packages/typescript/ai-svelte/src/create-generate-video.svelte.ts
  • packages/typescript/ai-svelte/src/create-generation.svelte.ts
  • packages/typescript/ai-svelte/src/create-summarize.svelte.ts
  • packages/typescript/ai-svelte/src/create-transcription.svelte.ts
  • packages/typescript/ai-vue/src/use-generate-image.ts
  • packages/typescript/ai-vue/src/use-generate-speech.ts
  • packages/typescript/ai-vue/src/use-generate-video.ts
  • packages/typescript/ai-vue/src/use-generation.ts
  • packages/typescript/ai-vue/src/use-summarize.ts
  • packages/typescript/ai-vue/src/use-transcription.ts
🚧 Files skipped from review as they are similar to previous changes (7)
  • packages/typescript/ai-react/src/use-generate-speech.ts
  • packages/typescript/ai-react/src/use-transcription.ts
  • packages/typescript/ai-solid/src/use-summarize.ts
  • packages/typescript/ai-svelte/src/create-generate-speech.svelte.ts
  • packages/typescript/ai-vue/src/use-transcription.ts
  • packages/typescript/ai-client/src/generation-types.ts
  • packages/typescript/ai-client/src/generation-client.ts

Comment on lines +175 to +179
watch(
() => options.body,
(newBody) => {
client.updateOptions({ body: newBody })
},

⚠️ Potential issue | 🟠 Major

Sync body updates deeply and clear removed values.

This watcher only reacts to reference replacement, so mutating a reactive body object in place never reaches the client. It also forwards undefined into client.updateOptions(), and packages/typescript/ai-client/src/video-generation-client.ts:246-259 treats that as “leave the old body alone”, so later requests can carry stale payload fields.

🛠️ Suggested fix
   watch(
     () => options.body,
     (newBody) => {
-      client.updateOptions({ body: newBody })
+      client.updateOptions({ body: newBody ?? {} })
     },
+    { deep: true },
   )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-vue/src/use-generate-video.ts` around lines 175 - 179,
The watcher in use-generate-video.ts must react to deep mutations and must never
forward undefined so stale fields persist; change the watch call to a deep watch
(pass { deep: true }) and in the callback pass a full snapshot (deep clone or
toRaw-then-clone) to client.updateOptions({ body: snapshot }) and if newBody is
undefined pass an empty object ({}), ensuring removed keys are cleared;
reference the existing watcher, client.updateOptions, and the behavior in
video-generation-client.ts so the client always receives the complete current
body state rather than an undefined/partial patch.
