A longer warm timeout means faster responses but more compute usage. Set to `0` to suspend immediately after each turn (minimum compute cost, slight delay on the next message).
</Info>
#### Stream options
Control how `streamText` results are converted to the frontend stream via `toUIMessageStream()`. Set static defaults on the task, or override per-turn.
##### Error handling with onError
When `streamText` encounters an error mid-stream (rate limits, API failures, network errors), the `onError` callback converts it to a string that's sent to the frontend as an `{ type: "error", errorText }` chunk. The AI SDK's `useChat` receives this via its `onError` callback.
By default, the raw error message is sent to the frontend. Use `onError` to sanitize errors and avoid leaking internal details:
```ts
export const myChat = chat.task({
  id: "my-chat",
  uiMessageStreamOptions: {
    onError: (error) => {
      // Log the full error server-side for debugging
      console.error("Stream error:", error);
      // Return a sanitized message — this is what the frontend sees
      if (error instanceof Error && error.message.includes("rate limit")) {
        return "Rate limited — please wait a moment and try again.";
      }
      return "Something went wrong. Please try again.";
    },
  },
  run: async ({ messages, signal }) => {
    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
```
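Because the sanitization logic is a plain mapping from errors to strings, it can be pulled out into a standalone function and unit tested on its own. A minimal sketch (the `sanitizeError` helper name is introduced here, not part of the SDK):

```typescript
// Standalone sketch of the sanitizer logic from the handler above.
// sanitizeError is a hypothetical helper name, not an SDK export.
function sanitizeError(error: unknown): string {
  if (error instanceof Error && error.message.includes("rate limit")) {
    return "Rate limited — please wait a moment and try again.";
  }
  return "Something went wrong. Please try again.";
}
```

Keeping the mapping pure makes it easy to cover new error classes with tests before wiring it into `uiMessageStreamOptions.onError`.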
`onError` is also called for tool execution errors, so a single handler covers both LLM errors and tool failures.
On the frontend, handle the error in `useChat`:
```tsx
const { messages, sendMessage } = useChat({
  transport,
  onError: (error) => {
    // error.message contains the string returned by your onError handler
    toast.error(error.message);
  },
});
```
##### Reasoning and sources
Control which AI SDK features are forwarded to the frontend:
```ts
export const myChat = chat.task({
  id: "my-chat",
  uiMessageStreamOptions: {
    sendReasoning: true, // Forward model reasoning (default: true)
    sendSources: true, // Forward source parts (default: false)
  },
  run: async ({ messages, signal }) => {
    return streamText({ model: openai("gpt-4o"), messages, abortSignal: signal });
  },
});
```
##### Per-turn overrides
Override per-turn with `chat.setUIMessageStreamOptions()` — per-turn values merge with the static config (per-turn wins on conflicts). The override is cleared automatically after each turn.
```ts
run: async ({ messages, clientData, signal }) => {
  // Example per-turn override; merged with the static config and cleared after the turn
  chat.setUIMessageStreamOptions({ sendSources: true });
  return streamText({ model: openai(clientData.model ?? "gpt-4o"), messages, abortSignal: signal });
},
```
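The merge semantics can be illustrated with plain object spreading — a conceptual sketch only, assuming the merge is shallow (the `StreamOpts` type is introduced here for illustration):

```typescript
// Conceptual sketch of how static and per-turn options combine (assumed shallow merge).
type StreamOpts = { sendReasoning?: boolean; sendSources?: boolean };

const staticOptions: StreamOpts = { sendReasoning: true, sendSources: false };
const perTurnOptions: StreamOpts = { sendSources: true };

// Per-turn values win on conflicts; keys not overridden keep their static defaults
const effective: StreamOpts = { ...staticOptions, ...perTurnOptions };
// effective: { sendReasoning: true, sendSources: true }
```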
`chat.setUIMessageStreamOptions()` works across all abstraction levels — `chat.task()`, `chat.createSession()` / `turn.complete()`, and `chat.pipeAndCapture()`.
See [ChatUIMessageStreamOptions](/ai-chat/reference#chatuimessagestreamoptions) for the full reference.
<Note>
`onFinish` is managed internally for response capture and cannot be overridden here. Use `streamText`'s `onFinish` callback for custom finish handling, or use [raw task mode](#raw-task-with-primitives) for full control over `toUIMessageStream()`.
</Note>
### Manual mode with task()
If you need full control over task options, use the standard `task()` with `ChatTaskPayload` and `chat.pipe()`:
For full control, use a standard `task()` with the composable primitives from the `chat` namespace. You manage everything: the turn loop, stop signals, message accumulation, and turn-complete signaling.
Raw task mode also lets you call `.toUIMessageStream()` yourself with any options — including `onFinish` and `originalMessages`. This is the right choice when you need complete control over the stream conversion beyond what `chat.setUIMessageStreamOptions()` provides.
| `chat.MessageAccumulator` | Class that accumulates conversation messages across turns |
## ChatUIMessageStreamOptions
Options for customizing `toUIMessageStream()`. Set as static defaults via `uiMessageStreamOptions` on `chat.task()`, or override per-turn via `chat.setUIMessageStreamOptions()`. See [Stream options](/ai-chat/backend#stream-options) for usage examples.
Derived from the AI SDK's `UIMessageStreamOptions` with `onFinish`, `originalMessages`, and `generateMessageId` omitted (managed internally).
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `onError` | `(error: unknown) => string` | Raw error message | Called on LLM errors and tool execution errors. Return a sanitized string — sent as `{ type: "error", errorText }` to the frontend. |
| `sendReasoning` | `boolean` | `true` | Send reasoning parts to the client |
| `sendSources` | `boolean` | `false` | Send source parts to the client |
| `sendFinish` | `boolean` | `true` | Send the finish event. Set to `false` when chaining multiple `streamText` calls. |
| `sendStart` | `boolean` | `true` | Send the message start event. Set to `false` when chaining. |
| `messageMetadata` | `(options: { part }) => metadata` | — | Extract message metadata to send to the client. Called on `start` and `finish` events. |
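For instance, `messageMetadata` can attach a timestamp when a message finishes. A sketch under stated assumptions: the `part` shape (an object with a `type` field) and the `extractMetadata` name are assumptions introduced here, not part of the SDK:

```typescript
// Hedged sketch of a messageMetadata extractor; the part shape is an assumption.
type StreamPart = { type: string };

function extractMetadata({ part }: { part: StreamPart }) {
  if (part.type === "finish") {
    return { finishedAt: Date.now() }; // sent to the client with the finish event
  }
  return undefined; // no metadata for other events
}
```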
## TriggerChatTransport options
Options for the frontend transport constructor and `useTriggerChatTransport` hook.