188 changes: 94 additions & 94 deletions src/pages/docs/ai-transport/framework-guides/vercel-ai-sdk.mdx
---
title: "Vercel AI SDK"
meta_description: "Use Ably AI Transport with the Vercel AI SDK to add durable sessions, multi-device sync, and bidirectional control to your useChat application."
meta_keywords: "AI Transport, Vercel AI SDK, useChat, ChatTransport, durable sessions, streaming, realtime"
---

Vercel AI SDK with `useChat` handles model orchestration, tool calls, and React UI. By default, it delivers agent responses over HTTP streaming: a single, stateless connection that ends when the request completes. This is the fundamental constraint, not a limitation of the SDK but of the HTTP transport it relies on. Vercel recognized this and built the `ChatTransport` interface, an explicit plug-in point for a better transport.

AI Transport implements `ChatTransport`, replacing HTTP streaming with a durable session layer. Your existing `useChat` code stays; only the transport changes.

[Get started with Vercel AI SDK and AI Transport](/docs/ai-transport/getting-started/vercel-ai-sdk).

## What each layer covers <a id="layers"/>

Vercel AI SDK and AI Transport are complementary. Vercel AI SDK covers model interaction and the React UI; AI Transport covers the session layer beneath them. They meet at the `ChatTransport` interface.

| Vercel AI SDK | Ably AI Transport |
| --- | --- |
| Model orchestration (`streamText`, `generateText`) | Durable sessions (persistent, resumable, accumulated, multi-device) |
| UI hooks (`useChat`, `useCompletion`) | Reconnection and recovery (hot, warm, cold) |
| Tool calls | Agent presence and health |
| `ChatTransport` interface (pluggable; this is where AI Transport connects) | Bidirectional control (cancel, barge-in, steer) |
| HTTP streaming (default) | Human handover |
| | Shared live state |

Vercel AI SDK focuses on making model interaction simple, and it does that well. The transport layer (HTTP streaming by default) is pluggable *by design*: Vercel built the `ChatTransport` interface so teams can swap in a more capable transport when they need durability, multi-device sync, or bidirectional control. AI Transport implements `ChatTransport`.

## How they fit together <a id="architecture"/>

The architecture stacks four layers. AI Transport plugs into the `ChatTransport` interface that Vercel designed for custom transports, and is backed by Ably's global infrastructure.

<Code>
```
Vercel AI SDK UI
useChat() / useCompletion() / tool calls
|
ChatTransport interface <-- pluggable
|
Ably AI Transport
Implements ChatTransport
Adds: durable sessions, presence, recovery, control
|
Ably infrastructure
Global edge, ordering, persistence, replication
|
Every device, reliably
```
</Code>

The change to your application code is minimal. Your existing `useChat` integration, tool calls, and UI logic stay the same; only the transport configuration changes.

<Code>
```javascript
// Before: default HTTP transport
const { messages } = useChat()

// After: durable transport (everything else stays)
const transport = useClientTransport({ channelName: 'my-session' })
const chatTransport = useChatTransport(transport)
const { messages } = useChat({ transport: chatTransport })
```
</Code>

Not using React? See the [Core SDK getting started guide](/docs/ai-transport/getting-started/core-sdk) for the server-side integration path.

Client-side code uses a token endpoint for authentication, never an embedded API key. See [Authentication](/docs/ai-transport/concepts/authentication) for the recommended pattern.

## Try it <a id="try-it"/>

A working React app using `useChat` with AI Transport: see durable streaming, reconnection, and multi-device behavior in action, and interact with it in the browser.

{/* TODO: replace placeholder URL once the public live demo is published. */}
[Open live example](#live-example-placeholder).

## What AI Transport adds to your Vercel app <a id="capabilities"/>

The capabilities below sit outside what HTTP streaming can deliver, not because Vercel AI SDK is incomplete, but because HTTP is stateless and unidirectional by design. Vercel focuses on model orchestration and UI; AI Transport provides the transport layer that HTTP can't.

### Durable streaming <a id="durable-streaming"/>

Streams persist, accumulate, and resume, with full state hydration on reconnect, so users see the same response regardless of how it was interrupted.

[Learn more](/docs/ai-transport/features/reconnection-and-recovery).
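The accumulate-and-resume idea can be sketched in a few lines of plain JavaScript. This is illustrative only, with invented names; in AI Transport the log lives on Ably's infrastructure rather than in client memory.

<Code>
```javascript
// Sketch: durable accumulation and resume (illustrative, not the AI Transport API).
// The agent appends chunks to a log; a reconnecting client replays from its cursor.
class DurableStream {
  constructor() { this.log = [] }
  append(chunk) { this.log.push(chunk) }               // agent keeps writing regardless of listeners
  resumeFrom(cursor) { return this.log.slice(cursor) } // client catches up after a disconnect
}

const stream = new DurableStream()
for (const chunk of ['Hel', 'lo ', 'wor']) stream.append(chunk)

// The client read 2 chunks, then dropped offline; the agent kept going.
stream.append('ld')

// On reconnect, the client hydrates from its last cursor instead of losing the tail.
const missed = stream.resumeFrom(2)
console.log(missed.join('')) // 'world'
```
</Code>

Because the stream is decoupled from any one connection, "the client went away" and "the response was lost" stop being the same event.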

### Session continuity <a id="session-continuity"/>

Switch devices, close tabs, go offline; the session follows with full state. The same conversation is accessible on phone, laptop, and tablet, and across multiple participants in the same thread.

[Learn more](/docs/ai-transport/features/multi-device).
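A toy model of the fan-out, in plain JavaScript with invented names (the real session is an Ably channel, not an in-memory object): every subscriber sees every message, and late joiners hydrate from history first.

<Code>
```javascript
// Sketch: one session, many devices (illustrative, not the AI Transport API).
class Session {
  constructor() { this.history = []; this.subscribers = [] }
  subscribe(fn) { this.history.forEach(fn); this.subscribers.push(fn) } // late joiners replay history
  publish(msg) { this.history.push(msg); this.subscribers.forEach(fn => fn(msg)) }
}

const session = new Session()
const laptop = []
session.subscribe(m => laptop.push(m))
session.publish('Hello')

// A phone joins mid-conversation and hydrates the same state.
const phone = []
session.subscribe(m => phone.push(m))
session.publish('world')

console.log(laptop, phone) // [ 'Hello', 'world' ] [ 'Hello', 'world' ]
```
</Code>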

### Bidirectional control <a id="bidirectional-control"/>

Cancel, barge in, steer, and redirect mid-stream. Clients signal agents through the session, not through a separate channel.

[Learn more](/docs/ai-transport/features/interruption).
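The key property is that cancellation travels in-band, so stopping the agent doesn't mean tearing down the stream. A minimal sketch with a standard `AbortController` (illustrative only; AI Transport carries the signal over the session, not a local controller):

<Code>
```javascript
// Sketch: in-band cancellation (illustrative, not the AI Transport API).
function streamTokens(tokens, signal, onToken) {
  for (const token of tokens) {
    if (signal.aborted) return 'cancelled' // agent observes the control signal mid-stream
    onToken(token)
  }
  return 'completed'
}

const controller = new AbortController()
const received = []
const status = streamTokens(['a', 'b', 'c', 'd'], controller.signal, t => {
  received.push(t)
  if (t === 'b') controller.abort() // user barges in after the second token
})
console.log(status, received) // 'cancelled' [ 'a', 'b' ]
```
</Code>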

### Agent presence <a id="agent-presence"/>

Is the agent streaming, thinking, idle, or crashed? Presence is realtime, not polled; the UI reflects what the agent is actually doing.

[Learn more](/docs/ai-transport/features/agent-presence).
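One way to picture presence: the agent publishes heartbeats carrying its state, and clients treat a silent agent as crashed rather than "still thinking". A hedged sketch with invented names, not the AI Transport API:

<Code>
```javascript
// Sketch: deriving agent health from heartbeats (illustrative only).
function agentStatus(lastHeartbeat, now, staleAfterMs = 5000) {
  if (now - lastHeartbeat.at > staleAfterMs) return 'crashed' // heartbeats stopped
  return lastHeartbeat.state // e.g. 'streaming' | 'thinking' | 'idle'
}

console.log(agentStatus({ state: 'streaming', at: 1000 }, 2000))  // 'streaming'
console.log(agentStatus({ state: 'streaming', at: 1000 }, 10000)) // 'crashed'
```
</Code>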

### Human handover <a id="human-handover"/>

AI handles routine queries; a human takes the complex ones. The session stays seamless across the handoff.

[Learn more](/docs/ai-transport/features/human-in-the-loop).
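The shape of the idea, in a toy sketch (names invented, not the AI Transport API): the responder changes, but the session, its history, and its subscribers do not.

<Code>
```javascript
// Sketch: seamless handover on one session (illustrative only).
const session = { history: [], responder: 'agent' }

function respond(session, userMessage) {
  session.history.push({ from: 'user', text: userMessage })
  const reply = session.responder === 'agent' ? 'Automated answer' : 'Human answer'
  session.history.push({ from: session.responder, text: reply })
  return reply
}

respond(session, 'Reset my password')       // handled by the agent
session.responder = 'human-support'         // escalation: a person takes over the same session
respond(session, 'It is more complicated')  // same thread, new responder

console.log(session.history.length)  // 4
console.log(session.history[3].from) // 'human-support'
```
</Code>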

### Shared live state <a id="shared-live-state"/>

Continuous state sharing between agent and client: live, not per-request. Every connected participant sees the same state at the same time.

[Learn more](/docs/ai-transport/concepts/sessions).
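A minimal sketch of the pattern, with invented names (AI Transport replicates this over Ably rather than a local object): versioned state that pushes every update to every watcher as it happens.

<Code>
```javascript
// Sketch: shared live state with versioning (illustrative, not the AI Transport API).
class LiveState {
  constructor() { this.version = 0; this.value = {}; this.watchers = [] }
  set(patch) {
    this.version += 1
    this.value = { ...this.value, ...patch }
    this.watchers.forEach(fn => fn(this.value, this.version)) // push, not poll
  }
  watch(fn) { fn(this.value, this.version); this.watchers.push(fn) } // new watchers see current state
}

const state = new LiveState()
let seen
state.watch(v => { seen = v })
state.set({ step: 'planning' })
state.set({ step: 'executing', progress: 0.4 })
console.log(seen) // { step: 'executing', progress: 0.4 }
```
</Code>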

[See all features](/docs/ai-transport/features/token-streaming).

## What developers are building around <a id="building-around"/>

These are patterns teams run into when scaling Vercel AI SDK applications. They aren't bugs; they're consequences of HTTP streaming being stateless and unidirectional:

- **Reconnection.** Streams don't survive disconnects. Teams build custom Redis-backed resumption layers.
- **Multi-device.** Sessions tie to one HTTP connection. Teams build custom session stores and sync layers.
- **Agent health.** No signal whether the agent crashed or is still thinking. Teams build polling health checks.
- **Bidirectional control.** HTTP is one-way. Teams build separate WebSocket channels or queue systems for cancel, redirect, and approval flows.

All of this infrastructure has nothing to do with your AI product. AI Transport handles it at the transport layer.

## When you need AI Transport <a id="when-you-need-it"/>

Vercel AI SDK with default HTTP streaming is a great way to start. Teams typically reach for a durable transport when they hit one of the following:

- Users need to resume conversations across devices, tabs, or disconnects.
- The agent runs long tasks and the user navigates away.
- You need to show whether the agent is working, idle, or has crashed.
- Users need to interrupt, redirect, or approve agent actions mid-stream.
- Multiple users (or agents) observe or participate in the same session.
- Your session needs to reach clients beyond the React app: a mobile companion, a terminal tool, or an external observer. AI Transport's session is client-agnostic.

If your application is single-user and single-device, HTTP streaming works. Beyond that, the transport matters.

## What to read next <a id="next"/>

- [Get started](/docs/ai-transport/getting-started/vercel-ai-sdk): build with Vercel AI SDK in 5 minutes.
- [How it works](/docs/ai-transport/concepts/sessions): sessions, turns, transport, and infrastructure.
- [Browse features](/docs/ai-transport/features/token-streaming): multi-device, barge-in, presence, reconnection.
- [API reference](/docs/ai-transport/api-reference): the full API for the Vercel integration and Core SDK.