Reduce SDK log handler to only surface messages in --verbose mode — most SDK errors already propagate via promise rejections or state changes handled by fail()/setupChannelStateLogging(). Skip closing/closed connection states (CLI-initiated teardown). Strip hint newlines in JSON output. Migrate hint strings to single quotes for clean JSON. Add error hint for 93002 (mutableMessages).
Add --token-streaming flag that publishes an initial message then streams remaining text as appendMessage() calls over --stream-duration seconds. Includes text chunker utility for word-boundary-aware splitting and --token-size flag to control chunk granularity. Refactors publishWithRealtime to use setup*StateLogging helpers.
Add --token-streaming flag that accumulates message.append data for the same serial, displaying the growing response in-place on TTY terminals. Refactors message display into displayNormalMessage() and handleTokenStreamMessage() methods. Renames message 'event' field to 'name' in output helpers to align with SDK terminology.
Document error hint formatting conventions (single quotes, newline control) in AGENTS.md and skills. Update README with new flags and examples. Add text-chunker to Project-Structure.md.
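A word-boundary-aware chunker of the kind described above can be sketched as follows. This is an illustrative reimplementation for reference only; the `chunkText` name and signature are assumed from the PR description, and the real `src/utils/text-chunker.ts` may differ.

```typescript
// Illustrative sketch of a word-boundary-aware text chunker.
// The chunkText name/signature are assumed from the PR description
// and may not match the actual src/utils/text-chunker.ts.
function chunkText(text: string, tokenSize: number): string[] {
  if (tokenSize <= 0) throw new RangeError("tokenSize must be positive");
  const chunks: string[] = [];
  let i = 0;
  while (i < text.length) {
    let end = Math.min(i + tokenSize, text.length);
    // Prefer word boundaries: if the cut would land mid-word, extend
    // the chunk to the end of that word instead of splitting it.
    if (end < text.length && text[end] !== " " && text[end - 1] !== " ") {
      const nextSpace = text.indexOf(" ", end);
      end = nextSpace === -1 ? text.length : nextSpace;
    }
    chunks.push(text.slice(i, end));
    i = end;
  }
  return chunks;
}
```

Because chunks keep their separating spaces, joining them reproduces the original text exactly, which is what append-style streaming needs; note that a single unbroken word longer than `tokenSize` becomes one oversized chunk.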
Walkthrough

This PR introduces token-streaming support for publishing and subscribing to Ably messages, enabling chunked transmission of large message data. It adds a text-chunking utility, updates error hint documentation and handling, and refactors message display fields to use an optional `name` field.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant Publish as channels:publish
    participant Chunker as chunkText()
    participant Channel as Ably Channel
    participant Output as Output Handler
    User->>Publish: ably channels publish --token-streaming
    Publish->>Publish: prepareMessageFromInput()
    Publish->>Chunker: chunkText(messageData, tokenSize)
    Chunker-->>Publish: string[]
    Publish->>Channel: channel.publish(first chunk)
    Channel-->>Publish: { versionSerial }
    Output->>Output: emit create event
    loop for each remaining chunk
        Publish->>Channel: channel.appendMessage(serial, chunk, delay)
        Channel-->>Publish: { versionSerial }
        Output->>Output: emit append event
    end
    Publish->>Output: emit stream summary
    Output-->>User: final result
```

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant Subscribe as channels:subscribe
    participant Channel as Ably Channel
    participant Handler as Message Handler
    participant Display as Display/Output
    User->>Subscribe: ably channels subscribe --token-streaming
    Channel->>Handler: message.create (serial=1)
    Handler->>Display: output "Creating token stream"
    Channel->>Handler: message.append (serial=1)
    Handler->>Handler: accumulate data
    Display->>Display: update terminal in-place
    Channel->>Handler: message.append (serial=1)
    Handler->>Handler: accumulate + append count
    Display->>Display: refresh with appended text
    Channel->>Handler: message.create (serial=2)
    Handler->>Handler: finalize previous stream
    Handler->>Display: output summary "2 appends"
    Handler->>Handler: reset accumulation
    Display->>User: stream data + new message
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
@coderabbitai review

✅ Actions performed: Review triggered.
Pull request overview
Adds “token streaming” support to the channels publish/subscribe commands, enabling message-per-response style streaming via message.create + message.append, with supporting utilities, tests, and documentation updates.
Changes:
- Introduced `--token-streaming` mode for `channels:publish` (stream outgoing text as initial publish + appends) and `channels:subscribe` (accumulate and display appends per serial).
- Added `chunkText()` utility (and unit tests) to split text into token-like chunks.
- Updated message output naming (`Event` → `Name`) and extended friendly error hint mappings (incl. mutable messages hint), with associated doc/test updates.
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `src/commands/channels/publish.ts` | Implements token streaming publish path (`--token-streaming`, `--token-size`, `--stream-duration`) using initial publish + `appendMessage()` calls. |
| `src/commands/channels/subscribe.ts` | Adds token streaming subscribe mode that accumulates `message.append` data by serial and supports in-place terminal updates. |
| `src/utils/text-chunker.ts` | New helper to chunk text into ~N-character "tokens" with word-boundary preference. |
| `src/utils/output.ts` | Switches message display field from `event` to optional `name` and updates formatted output accordingly. |
| `src/utils/errors.ts` | Normalizes hint quoting and adds a new friendly hint for error code 93002 (mutable messages requirement). |
| `src/base-command.ts` | Adjusts SDK log handler behavior (now verbose-only) and tweaks connection state logging + JSON hint newline stripping. |
| `src/commands/channels/history.ts` | Updates history output to use `name` instead of `event`. |
| `test/unit/utils/text-chunker.test.ts` | Adds unit coverage for `chunkText()`. |
| `test/unit/commands/channels/publish.test.ts` | Adds unit coverage for token streaming publish behavior, JSON output, and failure cases. |
| `test/unit/commands/channels/subscribe.test.ts` | Adds unit coverage for token streaming subscribe accumulation/display + JSON shape changes; updates label expectations. |
| `test/unit/commands/channels/history.test.ts` | Updates expected output label (`Name:`). |
| `test/helpers/mock-ably-realtime.ts` | Extends mock channel with `appendMessage()` for streaming tests. |
| `README.md` | Documents new flags/usage/examples for token streaming publish/subscribe. |
| `docs/Project-Structure.md` | Documents new `text-chunker.ts` utility and updated errors util description. |
| `AGENTS.md` | Documents error hint formatting rules and JSON newline stripping behavior. |
| `.claude/skills/ably-review/SKILL.md` | Adds review checklist items for error hint formatting conventions. |
| `.claude/skills/ably-new-command/references/patterns.md` | Adds documented conventions for error hint formatting. |
| `.claude/skills/ably-codebase-review/SKILL.md` | Adds review checklist items for error hints and JSON newline stripping verification. |
Actionable comments posted: 3
🧹 Nitpick comments (3)
test/unit/commands/channels/publish.test.ts (1)
**547-568**: Avoid real multi-second waits in the unit suite. This test hard-codes a ~2s wall-clock delay into `test:unit`, which will slow the suite and make timing assertions noisier under CI load. Consider stubbing the stream-delay path or injecting a scheduler so the behavior is asserted without real sleeps.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/commands/channels/publish.test.ts` around lines 547 - 568, The test currently performs a real multi-second wait when invoking runCommand for the "channels:publish" flow (the token-streaming path mocked by getMockAblyRealtime), which slows CI; replace the real sleep by stubbing or injecting a fake scheduler into the token streaming path used by channels:publish so the stream-duration behavior is simulated instantly. Concretely, in the test setup mock the module/function that implements the streaming delay (the token-stream or delay helper used by the CLI command invoked via runCommand) to advance time instantly or use a fake timer (e.g., sinon.useFakeTimers or Jest timers) so you can assert that runCommand triggers the streaming logic without waiting real seconds, then restore timers/mocks after the test. Ensure you target the stream-delay implementation referenced by the channels:publish command and keep getMockAblyRealtime as-is.

test/unit/commands/channels/subscribe.test.ts (1)
**347-359**: Minor: Typo in test message ID. The message ID `"msg-nosrial"` appears to be a typo for `"msg-noserial"`. This doesn't affect test functionality but could cause confusion when debugging.

✏️ Fix typo
```diff
 mockSubscribeCallback!({
   name: "test-event",
   data: "no-serial-message",
   timestamp: Date.now(),
-  id: "msg-nosrial",
+  id: "msg-noserial",
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/commands/channels/subscribe.test.ts` around lines 347 - 359, Fix the typo in the test message ID used when invoking mockSubscribeCallback: change the id value from "msg-nosrial" to "msg-noserial" so it matches the intended naming and avoids confusion when debugging; update the literal in the mockSubscribeCallback call in the test that awaits commandPromise (the block referencing mockSubscribeCallback, commandPromise, and assertions checking stdout).

src/commands/channels/publish.ts (1)
**346**: Consider validating that data is string-coercible for streaming. If `message.data` is a complex object (e.g., from JSON input like `{"data":{"nested":"value"}}`), `String()` will produce `"[object Object]"`, which is likely not the intended behavior for token streaming.

Since token streaming is designed for text (LLM responses, etc.), consider adding validation or documentation that clarifies `--token-streaming` expects text data:

🛡️ Optional: Add validation for string data
```diff
 const text = String(message.data ?? args.message);
+if (typeof message.data === 'object' && message.data !== null) {
+  this.fail(
+    'Token streaming requires string data. Object data cannot be streamed as tokens.',
+    flags as BaseFlags,
+    'channelPublish',
+  );
+}
 const tokenSize = flags["token-size"] as number;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/channels/publish.ts` at line 346, The current coercion const text = String(message.data ?? args.message) can turn objects into "[object Object]" which breaks token streaming; update the publish flow (around variables message.data, args.message and the token-streaming handling) to validate that message.data is text-compatible when --token-streaming is enabled: if message.data is a string use it, if it is JSON-like decide whether to JSON.stringify it (and document this behavior) or reject with a clear error, and also allow Buffer/Uint8Array handling if appropriate; ensure you emit a helpful validation error referencing token-streaming when non-text payloads are provided.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/skills/ably-review/SKILL.md:
- Line 166: The current check simply greps for a double-quote character and thus
false-positives; change the logic to parse the TypeScript/JSON AST and inspect
string literal values for hint text instead of matching raw `"` characters.
Locate the quote-check routine (e.g., the function handling "hint strings" / the
"quote check" validator) and replace the grep with AST traversal (TS parser or
JSON.parse) to read each hint's string content and then assert CLI references
use single quotes; only flag cases where the parsed string actually contains a
double quote inside the hint text.
In `@src/base-command.ts`:
- Around line 922-939: The SDK logHandler currently only emits logs when
flags.verbose && level <= 2 which hides errors; change options.logHandler (the
function assigned to options.logHandler) to always surface error-level messages
(level 0–1) by calling this.logCliEvent("ablySdk", ...) for those levels
regardless of flags.verbose, and keep the existing verbose guard for
lower-priority logs (e.g., info/debug level 2) so that only non-error messages
remain gated by flags.verbose; ensure you still pass the same arguments (flags,
"ablySdk", `LogLevel-${level}`, message, logData).
In `@src/commands/channels/subscribe.ts`:
- Around line 87-89: token streaming state (tokenStreamSerial,
tokenStreamAccumulatedData, tokenStreamAppendCount) is currently global and
never initialized on an append, causing interleaved channels to stomp each other
and finalizeTokenStream() to be a no-op; change the implementation to key the
state by channel+serial (e.g., a Map<string, {serial, accumulatedData,
appendCount}>) or alternatively reject --token-streaming when more than one
channel is subscribed, ensure you create/initialize the slot both when handling
message.create and when handling message.append (set the serial when the first
append for that stream is seen), and update finalizeTokenStream() and any
append/create handlers to read/write the keyed state; apply the same keyed-state
fix to the other token-streaming block referenced (lines ~292-345) so each
channel+serial is tracked independently.
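The keyed-state fix described above can be sketched roughly like this; `StreamState`, `handleStreamMessage`, and the `channel:serial` key format are hypothetical names for illustration, not the actual `subscribe.ts` implementation.

```typescript
// Hypothetical sketch of token-stream state keyed by channel+serial,
// so interleaved channels accumulate independently. Names here are
// illustrative, not the actual src/commands/channels/subscribe.ts code.
interface StreamState {
  accumulated: string;
  appendCount: number;
}

const streams = new Map<string, StreamState>();

function handleStreamMessage(
  channel: string,
  serial: string,
  action: "create" | "append",
  data: string,
): StreamState {
  const key = `${channel}:${serial}`;
  let state = streams.get(key);
  if (!state) {
    // Initialize on either create or a first-seen append, so appends
    // arriving without a prior create still accumulate correctly.
    state = { accumulated: "", appendCount: 0 };
    streams.set(key, state);
  }
  state.accumulated += data;
  if (action === "append") state.appendCount += 1;
  return state;
}
```

Keying the map by both channel and serial also makes finalization straightforward: each entry can be summarized and deleted independently when its stream ends.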
---
Nitpick comments:
In `@src/commands/channels/publish.ts`:
- Line 346: The current coercion const text = String(message.data ??
args.message) can turn objects into "[object Object]" which breaks token
streaming; update the publish flow (around variables message.data, args.message
and the token-streaming handling) to validate that message.data is
text-compatible when --token-streaming is enabled: if message.data is a string
use it, if it is JSON-like decide whether to JSON.stringify it (and document
this behavior) or reject with a clear error, and also allow Buffer/Uint8Array
handling if appropriate; ensure you emit a helpful validation error referencing
token-streaming when non-text payloads are provided.
In `@test/unit/commands/channels/publish.test.ts`:
- Around line 547-568: The test currently performs a real multi-second wait when
invoking runCommand for the "channels:publish" flow (the token-streaming path
mocked by getMockAblyRealtime), which slows CI; replace the real sleep by
stubbing or injecting a fake scheduler into the token streaming path used by
channels:publish so the stream-duration behavior is simulated instantly.
Concretely, in the test setup mock the module/function that implements the
streaming delay (the token-stream or delay helper used by the CLI command
invoked via runCommand) to advance time instantly or use a fake timer (e.g.,
sinon.useFakeTimers or Jest timers) so you can assert that runCommand triggers
the streaming logic without waiting real seconds, then restore timers/mocks
after the test. Ensure you target the stream-delay implementation referenced by
the channels:publish command and keep getMockAblyRealtime as-is.
In `@test/unit/commands/channels/subscribe.test.ts`:
- Around line 347-359: Fix the typo in the test message ID used when invoking
mockSubscribeCallback: change the id value from "msg-nosrial" to "msg-noserial"
so it matches the intended naming and avoids confusion when debugging; update
the literal in the mockSubscribeCallback call in the test that awaits
commandPromise (the block referencing mockSubscribeCallback, commandPromise, and
assertions checking stdout).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 8378d54f-7124-478e-83e9-b9f8d19c4571
📒 Files selected for processing (18)
- .claude/skills/ably-codebase-review/SKILL.md
- .claude/skills/ably-new-command/references/patterns.md
- .claude/skills/ably-review/SKILL.md
- AGENTS.md
- README.md
- docs/Project-Structure.md
- src/base-command.ts
- src/commands/channels/history.ts
- src/commands/channels/publish.ts
- src/commands/channels/subscribe.ts
- src/utils/errors.ts
- src/utils/output.ts
- src/utils/text-chunker.ts
- test/helpers/mock-ably-realtime.ts
- test/unit/commands/channels/history.test.ts
- test/unit/commands/channels/publish.test.ts
- test/unit/commands/channels/subscribe.test.ts
- test/unit/utils/text-chunker.test.ts
Summary by CodeRabbit
New Features
- `ably channels publish` with new flags: `--token-streaming`, `--token-size` (default 4), and `--stream-duration` (default 10) to publish chunked messages.
- `--token-streaming` flag to `ably channels subscribe` for in-place terminal display of message appends.

Documentation