
Conversation

@grabbou (Contributor) commented Jan 22, 2026

When streaming text responses, the doStream method emitted text-start and the first text-delta synchronously, in the same callback invocation (when the first token arrived). Because the initialization triggered by text-start had not completed yet, the AI SDK missed processing that first text-delta event.

The fix matches the behavior of the apple-llm provider, which emits text-start immediately when the stream is created, before any tokens arrive.
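
A minimal sketch of the idea, assuming an AI SDK v5 LanguageModelV2-style doStream implementation; the runCompletion callback and the ids used here are placeholders, not the actual react-native-ai internals:

```ts
// Sketch only: assumes an AI SDK v5 LanguageModelV2-style doStream.
// `runCompletion` stands in for the native llama completion callback and is
// a placeholder, not the actual react-native-ai API.
import type { LanguageModelV2StreamPart } from '@ai-sdk/provider';

function buildStream(
  runCompletion: (onToken: (token: string) => void) => Promise<void>,
): ReadableStream<LanguageModelV2StreamPart> {
  return new ReadableStream<LanguageModelV2StreamPart>({
    async start(controller) {
      controller.enqueue({ type: 'stream-start', warnings: [] });

      // Fix: emit text-start up front, before the completion callback runs,
      // so there is an async boundary between text-start and the first
      // text-delta instead of both arriving in one synchronous callback.
      controller.enqueue({ type: 'text-start', id: 'text-0' });

      await runCompletion((token) => {
        // Before the fix, text-start was also enqueued here on the first
        // token, and the SDK dropped that first delta.
        controller.enqueue({ type: 'text-delta', id: 'text-0', delta: token });
      });

      controller.enqueue({ type: 'text-end', id: 'text-0' });
      controller.enqueue({
        type: 'finish',
        finishReason: 'stop',
        usage: {
          inputTokens: undefined,
          outputTokens: undefined,
          totalTokens: undefined,
        },
      });
      controller.close();
    },
  });
}
```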

Test Plan

  • Test streamText with the llama provider; the first character should no longer be missing (see the usage sketch after this list)
  • Test responses that start with reasoning blocks (<think>) to ensure they still work correctly
  • Test responses that contain reasoning in the middle of the output
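
A hypothetical consumer-side check for the first bullet; the llama provider factory, its import path, and the model id are assumptions, not the package's actual API:

```ts
// Assumed imports: `streamText` from the AI SDK, and a hypothetical `llama`
// provider factory whose import path and model id are placeholders.
import { streamText } from 'ai';
import { llama } from 'react-native-ai';

const { textStream } = streamText({
  model: llama('llama-3.2-3b'),
  prompt: 'Reply with exactly: hello world',
});

let text = '';
for await (const delta of textStream) {
  text += delta;
}

// Before the fix the leading character could be lost (e.g. "ello world");
// after the fix the full text should come through.
console.log(text);
```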

Fixes #171

fix: emit text-start before completion to prevent first char being dropped

When streaming text responses, the first character or word was being dropped
because text-start and text-delta were emitted synchronously in the same
callback invocation. This caused the AI SDK to miss processing the first
text-delta event.

The fix emits text-start immediately after stream-start, before the completion
callback begins, ensuring there's always an async boundary between text-start
and the first text-delta. This matches the behavior of the apple-llm provider.

Fixes #171
vercel bot commented Jan 22, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Review | Updated (UTC)
ai | Ready | Preview, Comment | Jan 22, 2026 7:47pm


@grabbou grabbou changed the base branch from main to feat/upgrade-to-latest-provider January 22, 2026 19:46
@grabbou grabbou merged commit 0a251e9 into feat/upgrade-to-latest-provider Jan 24, 2026
2 checks passed
grabbou added a commit that referenced this pull request Jan 24, 2026
fix: emit text-start before completion to prevent first char being dropped (#178)

* fix: emit text-start before completion to prevent first char being dropped

When streaming text responses, the first character or word was being dropped
because text-start and text-delta were emitted synchronously in the same
callback invocation. This caused the AI SDK to miss processing the first
text-delta event.

The fix emits text-start immediately after stream-start, before the completion
callback begins, ensuring there's always an async boundary between text-start
and the first text-delta. This matches the behavior of the apple-llm provider.

Fixes #171

* chore: remove comments


Development

Successfully merging this pull request may close these issues.

Always missing first character or first word when getting a response from streamText
