- Introduced MessagePageCursorSchema and ChatMessagePageResultSchema for handling message pagination.
- Added sessionsListMessagesPageRoute to define the new API route for listing messages with pagination.
- Updated sessionsRestoreRoute to include pagination parameters and response structure.
- Enhanced IAgentImplementation and IAgentSessionPresenter interfaces to support message pagination.
- Implemented unit tests for new message pagination functionality in messageStore and sessionService.
- Updated existing tests to accommodate changes in message handling and pagination logic.
📝 Walkthrough

This PR implements SQLite schema normalization for the deepchat message storage layer by decomposing JSON-based message content into dedicated relational tables (user messages, assistant blocks, files, links, search documents, session skills/tools), introduces cursor-based paginated message retrieval with progressive loading on the renderer side, and adds a background idempotent backfill pipeline that runs asynchronously after application startup.

Changes

- SQLite Schema Normalization & Pagination
- Message Persistence & Content Reconstruction
- Presenter, Session, and Route APIs
- Renderer Pagination & Scrolling
- Tests & Validation
- Architecture & Planning Documentation
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/main/presenter/sqlitePresenter/index.ts (1)
509-527: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`clearNewAgentData()` still leaves queued pending inputs behind.

`deepchat_pending_inputs` is session-scoped state, but this reset path deletes the sessions without clearing the queue table. That can leave stale queued items orphaned after an overwrite/reset and leak them into later restores.

Suggested fix:

```diff
 this.db.exec(`
   DELETE FROM deepchat_message_search_results;
   DELETE FROM deepchat_search_documents;
   DELETE FROM deepchat_assistant_blocks;
   DELETE FROM deepchat_user_message_links;
   DELETE FROM deepchat_user_message_files;
   DELETE FROM deepchat_user_messages;
   DELETE FROM deepchat_message_traces;
+  DELETE FROM deepchat_pending_inputs;
   DELETE FROM deepchat_messages;
   DELETE FROM deepchat_usage_stats;
   DELETE FROM deepchat_sessions;
   DELETE FROM new_session_active_skills;
   DELETE FROM new_session_disabled_agent_tools;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/main/presenter/sqlitePresenter/index.ts` around lines 509 - 527, clearNewAgentData() fails to remove session-scoped queued items in deepchat_pending_inputs, leaving orphaned pending inputs after sessions are deleted; update the transaction in clearNewAgentData (method clearNewAgentData) to include a DELETE FROM deepchat_pending_inputs within the same runTransaction block (alongside the other DELETE statements such as deepchat_sessions, new_sessions, etc.) so queued pending inputs are cleared atomically with session and message data.

src/main/presenter/agentRuntimePresenter/index.ts (1)
742-790: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Return the `MessageStartResult` when the turn starts, not after the stream finishes.

Because Line 742 waits for `runStreamForMessage()` before any of these returns, callers of `sendMessage()` won't get the `requestId`/`messageId` until the whole generation has completed or failed. That defeats the new "start result" contract and can keep route/IPC callers blocked for the full model run.

Also applies to: 840-873
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/main/presenter/agentRuntimePresenter/index.ts` around lines 742 - 790, The code currently awaits runStreamForMessage() and only returns the MessageStartResult after the stream finishes; change it to return the start result immediately and handle the stream result asynchronously. Concretely: call runStreamForMessage(...) to get its promise (and extract runId if available from the start response), return {requestId: assistantMessageId, messageId: assistantMessageId} immediately, and move the existing post-stream logic (pendingQueue handling, steer handling, applyProcessResultStatus, clearActiveGeneration, settleSteerInterruptedAssistant, continueWithSteerInput, drainPendingQueueIfPossible, rollback/consumeQueuedInput) into an async continuation (e.g. an awaited promise.then or an async IIFE) that uses the runId/result when the stream completes; apply the same refactor to the analogous block around the later 840-873 code paths so callers receive the start result right away while the stream finishes in the background.

test/main/presenter/agentSessionPresenter/agentSessionPresenter.test.ts (1)
411-421: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Wait for the asynchronous enqueue before asserting.

`createSession()` now queues the first input off the main path, so this assertion can run before `queuePendingInput` fires and intermittently fail. Add the same tick/wait you already use in the neighboring create-session tests before checking the mock.

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/main/presenter/agentSessionPresenter/agentSessionPresenter.test.ts` around lines 411 - 421, The test asserts queuePendingInput too soon because createSession() enqueues asynchronously; after calling presenter.createSession(...) insert the same microtask/tick wait used by the adjacent create-session tests (e.g., the Promise.resolve()/next-tick helper used elsewhere) before asserting queuePendingInput was called with 'mock-session-id' and payload and before asserting deepChatAgent.processMessage was not called so the asynchronous enqueue has time to run.

src/renderer/src/stores/ui/message.ts (1)
241-275: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Reset stale history state before a new restore can fail.

`currentSessionId` is switched immediately, but the previous `messageIds`/`messageCache`/cursor state survives until the restore succeeds. If the new restore is slow or throws, the store can expose the old session's history under the new id, and `loadOlderMessages()` can page with the stale cursor.

Suggested direction:

```diff
 async function loadMessages(sessionId: string): Promise<SessionWithState | null> {
+  const switchingSessions = currentSessionId.value !== sessionId
   const desiredCount = currentSessionId.value === sessionId ? Math.max(messageIds.value.length, 100) : 100
   const requestId = ++latestLoadRequestId
   latestHistoryRequestId += 1
   setCurrentSessionId(sessionId)
   isLoadingHistory.value = false
+  nextCursor.value = null
+  hasMoreHistory.value = false
+  if (switchingSessions) {
+    messageIds.value = []
+    messageCache.value = new Map()
+    parsedMessageCache.clear()
+  }
   try {
     const restored = await restoreMessageWindow(sessionId, desiredCount, requestId)
     if (!restored) {
       return null
     }
@@
   } catch (e) {
+    if (isCurrentLoadRequest(requestId, sessionId)) {
+      messageIds.value = []
+      messageCache.value = new Map()
+      nextCursor.value = null
+      hasMoreHistory.value = false
+      parsedMessageCache.clear()
+    }
     console.error('Failed to load messages:', e)
     return null
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/renderer/src/stores/ui/message.ts` around lines 241 - 275, When starting loadMessages for a new session (in function loadMessages) reset any stale history state immediately after setCurrentSessionId(sessionId) and before calling restoreMessageWindow: clear parsedMessageCache, set messageCache.value to an empty Map, set messageIds.value to an empty array, set nextCursor.value to null (or undefined), set hasMoreHistory.value to false (and optionally isLoadingHistory.value = true if present) so a failed or slow restore cannot expose the previous session’s history; keep the existing request-id increments (latestLoadRequestId/latestHistoryRequestId) and only populate these stores from restored when the request succeeds and isCurrentLoadRequest(...) returns true.
🧹 Nitpick comments (6)
src/main/presenter/agentRuntimePresenter/messageStore.ts (1)
455-499: ⚡ Quick win

`cloneSentMessagesToSession` writes multiple tables without a transaction.

For each message, the loop inserts into `deepchatMessagesTable`, then optionally writes to `deepchatUserMessagesTable`/files/links or `deepchatAssistantBlocksTable`, then upserts a search document — without an enclosing transaction. A crash between any two of these writes produces a structurally incomplete clone: the head row exists but sub-table rows are absent (or vice versa). The fallback to `row.content` in `materializeContent` will still make reads work, but search documents could be silently missing.

♻️ Wrap the loop body in a single transaction:

```diff
+this.sqlitePresenter.runTransaction(() => {
   for (const record of sourceRecords) {
     const nextId = nanoid()
     this.sqlitePresenter.deepchatMessagesTable.insert({ ... })
     if (record.role === 'user') { ... } else { ... }
     this.upsertMessageSearchDocument(...)
     nextOrderSeq += 1
   }
+})
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/main/presenter/agentRuntimePresenter/messageStore.ts` around lines 455 - 499, cloneSentMessagesToSession performs multiple related writes per message (deepchatMessagesTable.insert, persistUserContent or deepchatAssistantBlocksTable.replaceForMessage, and upsertMessageSearchDocument) without a transaction, risking partial clones; wrap the per-message operations in a single DB transaction so each message insert + its sub-table writes + the search upsert are atomic. Locate cloneSentMessagesToSession and change the loop to begin a transaction (using the sqlitePresenter's transaction helper or begin/commit API), perform insert(...) then persistUserContent(...) or replaceForMessage(...), then upsertMessageSearchDocument(...), commit the transaction (rollback on error), and only increment nextOrderSeq after successful commit so partial writes cannot leave inconsistent state.

test/main/presenter/sqlitePresenter/deepchatMessagesTable.test.ts (1)
65-77: ⚡ Quick win

Add a test case for the cursor-based pagination path.

The single test only validates the initial-load (no cursor) case. The cursor path — which is the mechanism used for every "load older messages" call beyond the first — is exercised by the mock's cursor-detection logic (lines 32–45) but never tested. A broken cursor filter or incorrect row ordering would go undetected.

Suggested addition:

🧪 Proposed cursor-based test case:

```diff
+it('applies cursor filtering and returns rows older than the cursor', () => {
+  const rows = Array.from({ length: 10 }, (_, i) => createMessageRow(i + 1))
+  const db = createMockDb(rows)
+  const table = new DeepChatMessagesTable(db)
+
+  // Simulate a second page: cursor points at row with order_seq=6, id='m6'
+  const page = table.listPageBySession('s1', {
+    cursor: { orderSeq: 6, id: 'm6' },
+    limit: 5
+  })
+
+  // Rows older than cursor (order_seq < 6): order_seq 1-5, DESC → [5,4,3,2,1]
+  expect(page).toHaveLength(5)
+  expect(page[0]?.order_seq).toBe(5)
+  expect(page[4]?.order_seq).toBe(1)
+})
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/main/presenter/sqlitePresenter/deepchatMessagesTable.test.ts` around lines 65 - 77, Add a new test that exercises the cursor-based pagination path in DeepChatMessagesTable: use createMessageRow to build a larger set, createMockDb(rows), instantiate new DeepChatMessagesTable(db), then call table.listPageBySession('s1', { limit: N, cursor: <cursorValue> }) with a cursor that targets a mid-list position (e.g., based on an existing row's order_seq or id) and assert the returned page length, the ordering (first element has order_seq less than the cursor row and continues descending), and that boundary items (hasMore behavior) match expectations; place this alongside the existing test so the cursor-detection logic in the mock is exercised.

docs/specs/sqlite-mainline-normalization/spec.md (1)
1-1: ⚡ Quick win

Move to `docs/features/sqlite-mainline-normalization/spec.md` to match the documentation folder convention.

The file lives under `docs/specs/`, but the coding guidelines prescribe `docs/features/<goal>/` for new features, `docs/issues/<goal>/` for bug fixes, and `docs/architecture/<goal>/` for refactors.

As per coding guidelines: "Create specification-driven development documentation in kebab-case folders: `docs/features/<goal>/` for new features, `docs/issues/<goal>/` for bug fixes, `docs/architecture/<goal>/` for refactors"

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@docs/specs/sqlite-mainline-normalization/spec.md` at line 1, Rename and move the specification file named `sqlite-mainline-normalization/spec.md` from the current docs/specs location into the features documentation area (docs/features/sqlite-mainline-normalization/spec.md) to follow the repo convention; update any internal front-matter metadata (title/slug), all references/links (README, TOC, other docs) that point to the old path, and adjust any CI or docs build config that lists the old file so the spec is discoverable under the features/<goal> convention.

src/main/presenter/sqlitePresenter/tables/deepchatUserMessageLinks.ts (1)
19-26: ⚡ Quick win

Drop the redundant `(message_id, ordinal)` index.

The composite primary key already indexes these columns, so `idx_deepchat_user_message_links_message` only adds extra write/storage overhead for no gain on the queries in this table wrapper.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/main/presenter/sqlitePresenter/tables/deepchatUserMessageLinks.ts` around lines 19 - 26, Remove the redundant explicit index creation "CREATE INDEX IF NOT EXISTS idx_deepchat_user_message_links_message ON deepchat_user_message_links(message_id, ordinal);" because the composite PRIMARY KEY (message_id, ordinal) in deepchat_user_message_links already provides the same index; update the SQL in deepchat_user_message_links to delete that CREATE INDEX statement so you avoid extra write/storage overhead.

src/main/presenter/sqlitePresenter/tables/newSessionActiveSkills.ts (1)
19-26: ⚡ Quick win

Remove the duplicate primary-key index.

`PRIMARY KEY (session_id, ordinal)` already gives SQLite an index on these columns, so `idx_new_session_active_skills_session` just adds extra write and migration cost without improving this access pattern.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/main/presenter/sqlitePresenter/tables/newSessionActiveSkills.ts` around lines 19 - 26, The table new_session_active_skills declares a composite primary key (PRIMARY KEY (session_id, ordinal)) which already creates the necessary index, so remove the redundant CREATE INDEX statement named idx_new_session_active_skills_session; edit the migration/sql in newSessionActiveSkills to delete the "CREATE INDEX IF NOT EXISTS idx_new_session_active_skills_session ON new_session_active_skills(session_id, ordinal);" line to avoid duplicate indexes and extra write/migration cost while leaving the PRIMARY KEY intact.

docs/specs/sqlite-mainline-normalization/plan.md (1)
1-1: ⚡ Quick win

Move this plan into the documented docs hierarchy.

`docs/specs/...` sits outside the repo's approved spec-driven documentation layout, so this plan will be harder to discover and maintain alongside the rest of the design docs.

As per coding guidelines, "Create specification-driven development documentation in kebab-case folders: `docs/features/<goal>/` for new features, `docs/issues/<goal>/` for bug fixes, `docs/architecture/<goal>/` for refactors".

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@docs/specs/sqlite-mainline-normalization/plan.md` at line 1, The plan file docs/specs/sqlite-mainline-normalization/plan.md should be moved into the repository's spec-driven docs layout: pick the correct kebab-case folder under docs/features/, docs/issues/, or docs/architecture/ (e.g., docs/features/sqlite-mainline-normalization/plan.md or docs/architecture/sqlite-mainline-normalization/plan.md) and update any internal links or references to point to the new location so the plan is discoverable and maintained alongside other design docs.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@docs/specs/sqlite-mainline-normalization/tasks.md`:
- Around line 73-80: Run the missing quality-gate commands and update the
checklist: execute `pnpm test`, `pnpm run format`, `pnpm run i18n`, and `pnpm
run lint`; fix any failures (apply Oxfmt formatting with single quotes, no
semicolons, width 100, and resolve OxLint/TypeScript lint errors), commit the
fixes, then update the tasks in the
docs/specs/sqlite-mainline-normalization/tasks.md checklist to mark those four
items checked so the file reflects the passing state.
- Around line 1-7: The docs live in the wrong top-level folder; move the
sqlite-mainline-normalization spec, plan and tasks files (spec.md, plan.md,
tasks.md) out of the current specs directory into the architecture docs area by
creating a kebab-case folder for the goal under docs/architecture (i.e.,
docs/architecture/sqlite-mainline-normalization/) and relocating those three
files there; also update any internal references or links that pointed to the
old location so they now reference the new
docs/architecture/sqlite-mainline-normalization/* paths.
In `@src/main/presenter/agentSessionPresenter/index.ts`:
- Around line 2849-2865: The insert for file MIME type in
sqlitePresenter.deepchatUserMessageFilesTable.replaceForMessage uses
file.mimeType ?? file.type which can be undefined for legacy attachments and can
cause SqliteError in backfillNormalizedMessageRow; change the MIME selection to
fall back to 'application/octet-stream' (same behavior as
parseUserExportContent) so the object inserted has a non-null mimeType, e.g.
compute a mimeTypeValue = file.mimeType ?? file.type ??
'application/octet-stream' and use that in the replaceForMessage payload and in
metadataJson where type is set.
- Around line 2803-2819: The current code materializes the entire
deepchat_messages result via stmt.all(), causing OOM; change the SELECT that
builds messageRows to use the better-sqlite3 iterator (stmt.iterate()) and
iterate with for (const row of stmt.iterate()) calling
this.backfillNormalizedMessageRow(row), incrementing processedCount and keeping
the existing every-200 update to
sqlitePresenter.configTables.setAgentSetting(SQLITE_MAINLINE_NORMALIZATION_KEY,
...) and await this.yieldToEventLoop() logic; apply the same replacement (use
.iterate() and a streaming for-loop) to the new_sessions query mentioned in the
review so both message and session backfills are memory-efficient.
- Around line 762-768: The sendMessage handler currently awaits
agent.processMessage which blocks until processStream (and the full LLM
interaction) completes; to match createSession and avoid IPC hang, change
sendMessage to kick off processing asynchronously instead of awaiting: call
agent.processMessage(sessionId, normalizedInput, {...}) without await (use void
or assign the Promise and attach .catch to log errors) so sendMessage can
immediately return the MessageStartResult, and ensure createSession and
sendMessage use the same fire-and-forget pattern; reference the
agent.processMessage and its internals (processStream) and the
sendMessage/createSession call sites when making this change.
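The `.iterate()` replacement suggested above for the backfill can be sketched in isolation. The statement object below only mimics better-sqlite3's two read APIs (`.all()` materializes every row at once; `.iterate()` yields rows lazily); the checkpoint interval of 200 is taken from the review, everything else is illustrative:

```typescript
// Hypothetical statement stand-in: .all() materializes, .iterate() streams.
function makeStmt(total: number) {
  return {
    all(): number[] {
      return Array.from({ length: total }, (_, i) => i)
    },
    *iterate(): IterableIterator<number> {
      for (let i = 0; i < total; i += 1) yield i
    }
  }
}

// Streaming backfill: constant memory, periodic progress checkpoints.
function backfill(stmt: { iterate(): IterableIterator<number> }): {
  processed: number
  checkpoints: number
} {
  let processed = 0
  let checkpoints = 0
  for (const _row of stmt.iterate()) {
    processed += 1
    // Every 200 rows, a real implementation would persist a progress marker
    // and yield to the event loop, per the suggestion above.
    if (processed % 200 === 0) checkpoints += 1
  }
  return { processed, checkpoints }
}
```

The key property is that only one row is alive at a time, so a table of any size can be backfilled without the OOM risk of `.all()`.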
In `@src/main/presenter/sqlitePresenter/tables/deepchatSearchDocuments.ts`:
- Around line 220-241: The searchLike method builds likeQuery =
`%${query.trim().toLowerCase()}%` which turns blank/whitespace input into `%%`
and causes a full-table scan; update the searchLike function to trim the
incoming query and return an empty array immediately if the trimmed query is
empty (same behavior as searchFts), before constructing likeQuery or running the
DB prepare/all; ensure you reference the searchLike function and the likeQuery
check so the early-return prevents executing the SQL when query is blank.
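A minimal sketch of the `searchLike` guard described above, using an in-memory document list in place of the real SQL (names and shapes here are illustrative, not the project's actual table wrapper):

```typescript
interface SearchDoc {
  messageId: string
  text: string
}

// Blank or whitespace-only queries return [] before any "SQL" runs,
// instead of building '%%' and scanning the whole table.
function searchLike(docs: SearchDoc[], query: string): SearchDoc[] {
  const trimmed = query.trim().toLowerCase()
  if (trimmed === '') return []
  // `trimmed` stands in for the `%${trimmed}%` LIKE pattern in the real query.
  return docs.filter((d) => d.text.toLowerCase().includes(trimmed))
}
```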
In `@src/main/presenter/sqlitePresenter/tables/newSessions.ts`:
- Around line 167-169: The parent-row mutation and the normalized-table sync
calls (e.g., the call sequence that updates/creates/deletes in new_sessions
together with replaceActiveSkillsRows and replaceDisabledAgentToolRows in
newSessions.ts) must be executed inside a single database transaction; modify
the public mutation methods (the functions that perform the new_sessions
insert/update/delete) to call db.transaction(...) and invoke the parent-row
write and the two sync helpers (replaceActiveSkillsRows,
replaceDisabledAgentToolRows) inside that transaction so they are atomic; apply
the same change to the other public mutation blocks referenced around lines
338-345 and 349-351 so each parent-row change plus its corresponding
normalized-table updates run within one db.transaction.
In `@src/renderer/src/pages/ChatPage.vue`:
- Around line 286-292: The call to loadOlderMessagesAtTop() can reject and
currently has no internal error handling; wrap the implementation of
loadOlderMessagesAtTop in a try/catch (or add a .catch) so any thrown errors are
caught and handled consistently with other async ops (e.g., call the same error
reporter/logging or user notification used elsewhere in this file such as
onMessageRetry/onMessageDelete). Locate the loadOlderMessagesAtTop function and
add a catch block that logs the error (processLogger/console.error) and surfaces
a user-friendly notification or fallback behavior to avoid unhandled promise
rejections.
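The guarded shape that suggestion describes can be sketched as follows; `fetchOlderPage` is a hypothetical stand-in for the store call that may reject, and the error sink is illustrative:

```typescript
// Hypothetical loader that may reject.
async function fetchOlderPage(): Promise<number> {
  throw new Error('network down')
}

const reported: string[] = []

// Guarded variant: errors are caught, logged, and surfaced
// instead of becoming unhandled promise rejections.
async function loadOlderMessagesAtTop(): Promise<number> {
  try {
    return await fetchOlderPage()
  } catch (err) {
    reported.push(err instanceof Error ? err.message : String(err))
    return 0 // fallback: nothing loaded, UI stays consistent
  }
}
```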
In `@src/renderer/src/stores/ui/message.ts`:
- Around line 294-313: The code currently uses incomingIds.length (from
page.messages) to update lastPersistedRevision and the return value, which
counts duplicates that may already exist and weren't prepended; instead compute
which IDs are actually new by creating filteredIncomingIds =
incomingIds.filter(id => !existingIds.has(id)) (or build the incoming list by
filtering against existingIds before merging), use that filtered list when
updating messageIds, increment lastPersistedRevision by
filteredIncomingIds.length, and return filteredIncomingIds.length; keep setting
messageCache.value for each msg in page.messages as before and reference
messageCache, page.messages, incomingIds, existingIds, messageIds, and
lastPersistedRevision to locate the changes.
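The dedup-before-count fix above boils down to filtering the incoming page against the existing window before prepending. A self-contained sketch (the state shape is illustrative, not the store's actual refs):

```typescript
// Prepend an older page, counting only genuinely-new IDs so duplicates
// already in the window don't inflate the returned count or revision bump.
function prependOlderPage(
  messageIds: string[],
  incomingIds: string[]
): { messageIds: string[]; added: number } {
  const existing = new Set(messageIds)
  const freshIds = incomingIds.filter((id) => !existing.has(id))
  return { messageIds: [...freshIds, ...messageIds], added: freshIds.length }
}
```

In the real store, `added` would drive the `lastPersistedRevision` increment and the function's return value.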
---
Outside diff comments:
In `@src/main/presenter/agentRuntimePresenter/index.ts`:
- Around line 742-790: The code currently awaits runStreamForMessage() and only
returns the MessageStartResult after the stream finishes; change it to return
the start result immediately and handle the stream result asynchronously.
Concretely: call runStreamForMessage(...) to get its promise (and extract runId
if available from the start response), return {requestId: assistantMessageId,
messageId: assistantMessageId} immediately, and move the existing post-stream
logic (pendingQueue handling, steer handling, applyProcessResultStatus,
clearActiveGeneration, settleSteerInterruptedAssistant, continueWithSteerInput,
drainPendingQueueIfPossible, rollback/consumeQueuedInput) into an async
continuation (e.g. an awaited promise.then or an async IIFE) that uses the
runId/result when the stream completes; apply the same refactor to the analogous
block around the later 840-873 code paths so callers receive the start result
right away while the stream finishes in the background.
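The start-result pattern described above can be sketched like this. Names such as `runStreamForMessage`, `sendMessage`, and `MessageStartResult` come from the review text; the bodies are illustrative stand-ins, not the project's code:

```typescript
interface MessageStartResult {
  requestId: string
  messageId: string
}

// Hypothetical stand-in for the real stream runner.
async function runStreamForMessage(messageId: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 50))
  return `finished:${messageId}`
}

const settled: string[] = []

// Return the start result synchronously; settle the stream in the background.
function sendMessage(assistantMessageId: string): MessageStartResult {
  void runStreamForMessage(assistantMessageId)
    .then((result) => {
      // Post-stream bookkeeping (queue draining, status updates) goes here.
      settled.push(result)
    })
    .catch((err) => {
      console.error('stream failed', err)
    })
  return { requestId: assistantMessageId, messageId: assistantMessageId }
}
```

Callers get the `requestId`/`messageId` immediately, while the continuation runs whenever the stream finishes.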
In `@src/main/presenter/sqlitePresenter/index.ts`:
- Around line 509-527: clearNewAgentData() fails to remove session-scoped queued
items in deepchat_pending_inputs, leaving orphaned pending inputs after sessions
are deleted; update the transaction in clearNewAgentData (method
clearNewAgentData) to include a DELETE FROM deepchat_pending_inputs within the
same runTransaction block (alongside the other DELETE statements such as
deepchat_sessions, new_sessions, etc.) so queued pending inputs are cleared
atomically with session and message data.
In `@src/renderer/src/stores/ui/message.ts`:
- Around line 241-275: When starting loadMessages for a new session (in function
loadMessages) reset any stale history state immediately after
setCurrentSessionId(sessionId) and before calling restoreMessageWindow: clear
parsedMessageCache, set messageCache.value to an empty Map, set messageIds.value
to an empty array, set nextCursor.value to null (or undefined), set
hasMoreHistory.value to false (and optionally isLoadingHistory.value = true if
present) so a failed or slow restore cannot expose the previous session’s
history; keep the existing request-id increments
(latestLoadRequestId/latestHistoryRequestId) and only populate these stores from
restored when the request succeeds and isCurrentLoadRequest(...) returns true.
In `@test/main/presenter/agentSessionPresenter/agentSessionPresenter.test.ts`:
- Around line 411-421: The test asserts queuePendingInput too soon because
createSession() enqueues asynchronously; after calling
presenter.createSession(...) insert the same microtask/tick wait used by the
adjacent create-session tests (e.g., the Promise.resolve()/next-tick helper used
elsewhere) before asserting queuePendingInput was called with 'mock-session-id'
and payload and before asserting deepChatAgent.processMessage was not called so
the asynchronous enqueue has time to run.
---
Nitpick comments:
In `@docs/specs/sqlite-mainline-normalization/plan.md`:
- Line 1: The plan file docs/specs/sqlite-mainline-normalization/plan.md should
be moved into the repository's spec-driven docs layout: pick the correct
kebab-case folder under docs/features/, docs/issues/, or docs/architecture/
(e.g., docs/features/sqlite-mainline-normalization/plan.md or
docs/architecture/sqlite-mainline-normalization/plan.md) and update any internal
links or references to point to the new location so the plan is discoverable and
maintained alongside other design docs.
In `@docs/specs/sqlite-mainline-normalization/spec.md`:
- Line 1: Rename and move the specification file named
`sqlite-mainline-normalization/spec.md` from the current docs/specs location
into the features documentation area
(docs/features/sqlite-mainline-normalization/spec.md) to follow the repo
convention; update any internal front-matter metadata (title/slug), all
references/links (README, TOC, other docs) that point to the old path, and
adjust any CI or docs build config that lists the old file so the spec is
discoverable under the features/<goal> convention.
In `@src/main/presenter/agentRuntimePresenter/messageStore.ts`:
- Around line 455-499: cloneSentMessagesToSession performs multiple related
writes per message (deepchatMessagesTable.insert, persistUserContent or
deepchatAssistantBlocksTable.replaceForMessage, and upsertMessageSearchDocument)
without a transaction, risking partial clones; wrap the per-message operations
in a single DB transaction so each message insert + its sub-table writes + the
search upsert are atomic. Locate cloneSentMessagesToSession and change the loop
to begin a transaction (using the sqlitePresenter's transaction helper or
begin/commit API), perform insert(...) then persistUserContent(...) or
replaceForMessage(...), then upsertMessageSearchDocument(...), commit the
transaction (rollback on error), and only increment nextOrderSeq after
successful commit so partial writes cannot leave inconsistent state.
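To show why the transaction matters for the clone, here is a self-contained sketch with in-memory arrays standing in for the two tables; `runTransaction` and `cloneMessage` are hypothetical simplifications of the real helpers (better-sqlite3's `db.transaction()` provides the equivalent rollback):

```typescript
// In-memory stand-ins for the head-row table and the search-document table.
const messages: string[] = []
const searchDocs: string[] = []

// Minimal transaction helper: snapshot state, roll back on error.
function runTransaction(fn: () => void): void {
  const msgSnapshot = [...messages]
  const docSnapshot = [...searchDocs]
  try {
    fn()
  } catch (err) {
    messages.length = 0
    messages.push(...msgSnapshot)
    searchDocs.length = 0
    searchDocs.push(...docSnapshot)
    throw err
  }
}

// Clone one message atomically: either both writes land, or neither does.
function cloneMessage(id: string, failAfterInsert = false): void {
  runTransaction(() => {
    messages.push(id) // head row
    if (failAfterInsert) throw new Error('crash between writes')
    searchDocs.push(`doc:${id}`) // dependent search document
  })
}
```

Without the wrapper, a crash between the two pushes would leave a head row with no search document, which is exactly the partial-clone state the comment warns about.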
In `@src/main/presenter/sqlitePresenter/tables/deepchatUserMessageLinks.ts`:
- Around line 19-26: Remove the redundant explicit index creation "CREATE INDEX
IF NOT EXISTS idx_deepchat_user_message_links_message ON
deepchat_user_message_links(message_id, ordinal);" because the composite PRIMARY
KEY (message_id, ordinal) in deepchat_user_message_links already provides the
same index; update the SQL in deepchat_user_message_links to delete that CREATE
INDEX statement so you avoid extra write/storage overhead.
In `@src/main/presenter/sqlitePresenter/tables/newSessionActiveSkills.ts`:
- Around line 19-26: The table new_session_active_skills declares a composite
primary key (PRIMARY KEY (session_id, ordinal)) which already creates the
necessary index, so remove the redundant CREATE INDEX statement named
idx_new_session_active_skills_session; edit the migration/sql in
newSessionActiveSkills to delete the "CREATE INDEX IF NOT EXISTS
idx_new_session_active_skills_session ON new_session_active_skills(session_id,
ordinal);" line to avoid duplicate indexes and extra write/migration cost while
leaving the PRIMARY KEY intact.
In `@test/main/presenter/sqlitePresenter/deepchatMessagesTable.test.ts`:
- Around line 65-77: Add a new test that exercises the cursor-based pagination
path in DeepChatMessagesTable: use createMessageRow to build a larger set,
createMockDb(rows), instantiate new DeepChatMessagesTable(db), then call
table.listPageBySession('s1', { limit: N, cursor: <cursorValue> }) with a cursor
that targets a mid-list position (e.g., based on an existing row's order_seq or
id) and assert the returned page length, the ordering (first element has
order_seq less than the cursor row and continues descending), and that boundary
items (hasMore behavior) match expectations; place this alongside the existing
test so the cursor-detection logic in the mock is exercised.
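The cursor filter that test exercises is plain keyset pagination on `(order_seq, id)`. A pure-TypeScript sketch of the predicate, with field names assumed from the proposed test rather than the actual table implementation:

```typescript
interface MessageRow {
  id: string
  order_seq: number
}

interface Cursor {
  orderSeq: number
  id: string
}

// Keyset pagination: return up to `limit` rows strictly older than the
// cursor, newest first. Without a cursor, this is the initial-load page.
function listPage(rows: MessageRow[], limit: number, cursor?: Cursor): MessageRow[] {
  const older = cursor
    ? rows.filter(
        (r) =>
          r.order_seq < cursor.orderSeq ||
          (r.order_seq === cursor.orderSeq && r.id < cursor.id)
      )
    : rows
  return older
    .slice()
    .sort((a, b) => b.order_seq - a.order_seq || b.id.localeCompare(a.id))
    .slice(0, limit)
}
```

The `(order_seq, id)` tuple comparison is what makes the cursor stable even when two rows share an `order_seq`.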
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6a92f6c9-b1f2-4c54-9250-1b6c72701b58
📒 Files selected for processing (50)
- docs/FLOWS.md
- docs/architecture/agent-system.md
- docs/architecture/main-kernel-refactor/route-schema-catalog.md
- docs/architecture/session-management.md
- docs/guides/code-navigation.md
- docs/issues/chat-history-pagination-stability/plan.md
- docs/issues/chat-history-pagination-stability/spec.md
- docs/issues/chat-history-pagination-stability/tasks.md
- docs/specs/sqlite-mainline-normalization/plan.md
- docs/specs/sqlite-mainline-normalization/spec.md
- docs/specs/sqlite-mainline-normalization/tasks.md
- src/main/presenter/agentRuntimePresenter/index.ts
- src/main/presenter/agentRuntimePresenter/messageStore.ts
- src/main/presenter/agentSessionPresenter/index.ts
- src/main/presenter/agentSessionPresenter/legacyImportService.ts
- src/main/presenter/agentSessionPresenter/sessionManager.ts
- src/main/presenter/lifecyclePresenter/hooks/after-start/sqliteMainlineNormalizationHook.ts
- src/main/presenter/lifecyclePresenter/hooks/index.ts
- src/main/presenter/sqlitePresenter/index.ts
- src/main/presenter/sqlitePresenter/schemaCatalog.ts
- src/main/presenter/sqlitePresenter/tables/deepchatAssistantBlocks.ts
- src/main/presenter/sqlitePresenter/tables/deepchatMessages.ts
- src/main/presenter/sqlitePresenter/tables/deepchatSearchDocuments.ts
- src/main/presenter/sqlitePresenter/tables/deepchatUserMessageFiles.ts
- src/main/presenter/sqlitePresenter/tables/deepchatUserMessageLinks.ts
- src/main/presenter/sqlitePresenter/tables/deepchatUserMessages.ts
- src/main/presenter/sqlitePresenter/tables/newSessionActiveSkills.ts
- src/main/presenter/sqlitePresenter/tables/newSessionDisabledAgentTools.ts
- src/main/presenter/sqlitePresenter/tables/newSessions.ts
- src/main/routes/chat/chatService.ts
- src/main/routes/hotPathPorts.ts
- src/main/routes/index.ts
- src/main/routes/sessions/sessionService.ts
- src/renderer/api/SessionClient.ts
- src/renderer/src/pages/ChatPage.vue
- src/renderer/src/stores/ui/message.ts
- src/shared/contracts/common.ts
- src/shared/contracts/routes.ts
- src/shared/contracts/routes/sessions.routes.ts
- src/shared/types/agent-interface.d.ts
- src/shared/types/presenters/agent-session.presenter.d.ts
- test/main/presenter/agentRuntimePresenter/messageStore.test.ts
- test/main/presenter/agentSessionPresenter/agentSessionPresenter.test.ts
- test/main/presenter/sqlitePresenter/deepchatMessagesTable.test.ts
- test/main/routes/chatService.test.ts
- test/main/routes/sessionService.test.ts
- test/renderer/api/clients.test.ts
- test/renderer/components/ChatPage.test.ts
- test/renderer/stores/messageStore.reactivity.test.ts
- test/renderer/stores/messageStore.test.ts
# `agent.db` mainline SQLite structural normalization task checklist

## T0 Specification

- [x] Add `docs/specs/sqlite-mainline-normalization/spec.md`
- [x] Add `docs/specs/sqlite-mainline-normalization/plan.md`
- [x] Add `docs/specs/sqlite-mainline-normalization/tasks.md`
Doc directory path doesn't follow the coding guideline
The file lives under docs/specs/, which doesn't match any of the prescribed paths. Since this is a normalization/refactor initiative, the guideline points to docs/architecture/<goal>/.
As per coding guidelines: "Create specification-driven development documentation in kebab-case folders: docs/features/<goal>/ for new features, docs/issues/<goal>/ for bug fixes, docs/architecture/<goal>/ for refactors."
📁 Suggested rename
docs/specs/sqlite-mainline-normalization/ → docs/architecture/sqlite-mainline-normalization/
The sibling plan/spec files should move with it.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docs/specs/sqlite-mainline-normalization/tasks.md` around lines 1 - 7, The
docs live in the wrong top-level folder; move the sqlite-mainline-normalization
spec, plan and tasks files (spec.md, plan.md, tasks.md) out of the current specs
directory into the architecture docs area by creating a kebab-case folder for
the goal under docs/architecture (i.e.,
docs/architecture/sqlite-mainline-normalization/) and relocating those three
files there; also update any internal references or links that pointed to the
old location so they now reference the new
docs/architecture/sqlite-mainline-normalization/* paths.
- [ ] Run the full `pnpm test` suite

## T9 Quality checks

- [ ] `pnpm run format`
- [ ] `pnpm run i18n`
- [ ] `pnpm run lint`
- [x] `pnpm run typecheck`
Several quality-gate tasks remain unchecked
pnpm test, pnpm run format, pnpm run i18n, and pnpm run lint are all unchecked. The last three are required by the coding guidelines (Oxfmt formatting, OxLint linting). Please ensure these pass before merging.
As per coding guidelines: "Use Oxfmt formatting with single quotes, no semicolons, and width 100; run pnpm run format" and "Use OxLint for JavaScript/TypeScript linting with hooks running lint-staged and typecheck."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docs/specs/sqlite-mainline-normalization/tasks.md` around lines 73 - 80, Run
the missing quality-gate commands and update the checklist: execute `pnpm test`,
`pnpm run format`, `pnpm run i18n`, and `pnpm run lint`; fix any failures (apply
Oxfmt formatting with single quotes, no semicolons, width 100, and resolve
OxLint/TypeScript lint errors), commit the fixes, then update the tasks in the
docs/specs/sqlite-mainline-normalization/tasks.md checklist to mark those four
items checked so the file reflects the passing state.
const result = await agent.processMessage(sessionId, normalizedInput, {
  projectDir: session.projectDir ?? null
})
if (!hadMessages && !wasDraft) {
  void this.generateSessionTitle(sessionId, session.title, providerId, state?.modelId ?? '')
}
return result
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify whether processMessage in AgentRuntime implementations returns before LLM completes
# (i.e., it should fire-and-forget processing internally and return MessageStartResult early)
rg -n --type=ts -A 20 'async processMessage\(' \
--iglob '!**/node_modules/**' \
--iglob '!**/*.test.*' \
  --iglob '!**/*.spec.*'
Repository: ThinkInAIXYZ/deepchat
Length of output: 2166
🏁 Script executed:
#!/bin/bash
# Get the full processMessage implementation starting from line 579
sed -n '579,700p' src/main/presenter/agentRuntimePresenter/index.ts
Repository: ThinkInAIXYZ/deepchat
Length of output: 4850
🏁 Script executed:
#!/bin/bash
# Continue from line 690 to find the return statement and streaming behavior
sed -n '690,800p' src/main/presenter/agentRuntimePresenter/index.ts
Repository: ThinkInAIXYZ/deepchat
Length of output: 4252
🏁 Script executed:
#!/bin/bash
# Find and inspect runStreamForMessage implementation
rg -n --type=ts -A 40 'async runStreamForMessage\(' \
--iglob '!**/node_modules/**' \
--iglob '!**/*.test.*' \
  --iglob '!**/*.spec.*'
Repository: ThinkInAIXYZ/deepchat
Length of output: 3659
🏁 Script executed:
#!/bin/bash
# Get the full runStreamForMessage implementation
sed -n '1698,1900p' src/main/presenter/agentRuntimePresenter/index.ts
Repository: ThinkInAIXYZ/deepchat
Length of output: 7949
🏁 Script executed:
#!/bin/bash
# Find processStream implementation
rg -n --type=ts 'export.*processStream|function processStream|const processStream' \
--iglob '!**/node_modules/**' \
--iglob '!**/*.test.*' \
  --iglob '!**/*.spec.*'
Repository: ThinkInAIXYZ/deepchat
Length of output: 206
🏁 Script executed:
#!/bin/bash
# Read processStream implementation
sed -n '292,400p' src/main/presenter/agentRuntimePresenter/process.ts
Repository: ThinkInAIXYZ/deepchat
Length of output: 3461
🏁 Script executed:
#!/bin/bash
# Check if sendMessage is an IPC handler and see the current code at lines 762-768
sed -n '750,800p' src/main/presenter/agentSessionPresenter/index.ts
Repository: ThinkInAIXYZ/deepchat
Length of output: 1851
🏁 Script executed:
#!/bin/bash
# Find the sendMessage method signature and check if it's an IPC handler
rg -n --type=ts -B 5 'async sendMessage\(' src/main/presenter/agentSessionPresenter/index.ts | head -20
Repository: ThinkInAIXYZ/deepchat
Length of output: 208
🏁 Script executed:
#!/bin/bash
# Check if sendMessage is registered as an IPC handler and find createSession call
sed -n '712,770p' src/main/presenter/agentSessionPresenter/index.ts && echo "---" && rg -n -B 3 'createSession.*processMessage' src/main/presenter/agentSessionPresenter/index.ts
Repository: ThinkInAIXYZ/deepchat
Length of output: 2106
🏁 Script executed:
#!/bin/bash
# Check if sendMessage is exposed as IPC and find createSession processMessage call
rg -n 'sendMessage|createSession' src/main/presenter/agentSessionPresenter/index.ts | grep -E '(registerIPCHandler|exposeIPCHandler|addEventListener|on\(|createSession|processMessage)' | head -30
Repository: ThinkInAIXYZ/deepchat
Length of output: 268
🏁 Script executed:
#!/bin/bash
# Get the createSession implementation to see how it calls processMessage
sed -n '289,450p' src/main/presenter/agentSessionPresenter/index.ts
Repository: ThinkInAIXYZ/deepchat
Length of output: 5770
Confirm processMessage blocks until full LLM completion; await in sendMessage will hang IPC.
processMessage does not return early after message creation. Instead, it blocks for the entire LLM interaction by awaiting processStream, which consumes the complete stream in a while(true) loop that re-iterates on tool execution. This means sendMessage's await agent.processMessage(...) will block the IPC call for the duration of model response (potentially minutes), rather than returning quickly with MessageStartResult.
The asymmetry is real: createSession (line 430) still calls processMessage without awaiting, while sendMessage now awaits it. Either both should await (and accept IPC blocking) or both should fire-and-forget with background completion handling.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/main/presenter/agentSessionPresenter/index.ts` around lines 762 - 768,
The sendMessage handler currently awaits agent.processMessage which blocks until
processStream (and the full LLM interaction) completes; to match createSession
and avoid IPC hang, change sendMessage to kick off processing asynchronously
instead of awaiting: call agent.processMessage(sessionId, normalizedInput,
{...}) without await (use void or assign the Promise and attach .catch to log
errors) so sendMessage can immediately return the MessageStartResult, and ensure
createSession and sendMessage use the same fire-and-forget pattern; reference
the agent.processMessage and its internals (processStream) and the
sendMessage/createSession call sites when making this change.
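The fire-and-forget pattern the prompt describes can be sketched in isolation. This is an illustrative stand-alone example, not the real presenter code: `processMessage`, `sendMessage`, and `MessageStartResult` below are simplified stand-ins for the actual APIs discussed above.

```typescript
// Simplified stand-ins for the presenter's real types and methods.
type MessageStartResult = { messageId: string }

async function processMessage(sessionId: string): Promise<void> {
  // Stand-in for consuming the full LLM stream (potentially minutes).
  await new Promise((resolve) => setTimeout(resolve, 10))
}

function sendMessage(sessionId: string): MessageStartResult {
  // Kick off processing without awaiting, attaching .catch so a failure
  // is logged instead of becoming an unhandled promise rejection.
  processMessage(sessionId).catch((error) => {
    console.error('[sendMessage] background processing failed:', error)
  })
  // The IPC handler can now return the start result immediately.
  return { messageId: `${sessionId}-msg-1` }
}

console.log(sendMessage('session-1').messageId)
```

Applying the same shape in both `sendMessage` and `createSession` keeps the two call sites symmetric, which is the core of the finding.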
const messageRows = db
  .prepare('SELECT * FROM deepchat_messages ORDER BY created_at ASC')
  .all() as DeepChatMessageRow[]

for (const row of messageRows) {
  this.backfillNormalizedMessageRow(row)
  processedCount += 1
  if (processedCount % 200 === 0) {
    this.sqlitePresenter.configTables.setAgentSetting(SQLITE_MAINLINE_NORMALIZATION_KEY, {
      status: 'running',
      startedAt,
      finishedAt: null,
      updatedAt: Date.now()
    })
    await this.yieldToEventLoop()
  }
}
OOM risk: .all() materializes the full deepchat_messages table into memory at once.
For users with large message histories, SELECT * FROM deepchat_messages via .all() allocates a JS array holding every row's JSON content simultaneously. Use stmt.iterate() for memory-efficient processing of large result sets — similar to .all(), but instead of returning every row together, an iterator is returned so you can retrieve the rows one by one.
The existing await this.yieldToEventLoop() cadence works naturally with iterate() since better-sqlite3's synchronous iterator resumes from the correct cursor position after the await. The same concern applies to the sessions query at line 2766.
🛠️ Proposed fix — stream rows with .iterate()
- const messageRows = db
- .prepare('SELECT * FROM deepchat_messages ORDER BY created_at ASC')
- .all() as DeepChatMessageRow[]
-
- for (const row of messageRows) {
+ const messageStmt = db.prepare('SELECT * FROM deepchat_messages ORDER BY created_at ASC')
+ for (const row of messageStmt.iterate() as Iterable<DeepChatMessageRow>) {
this.backfillNormalizedMessageRow(row)
processedCount += 1
     if (processedCount % 200 === 0) {
Apply the same change to the new_sessions query at line 2766 for consistency.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/main/presenter/agentSessionPresenter/index.ts` around lines 2803 - 2819,
The current code materializes the entire deepchat_messages result via
stmt.all(), causing OOM; change the SELECT that builds messageRows to use the
better-sqlite3 iterator (stmt.iterate()) and iterate with for (const row of
stmt.iterate()) calling this.backfillNormalizedMessageRow(row), incrementing
processedCount and keeping the existing every-200 update to
sqlitePresenter.configTables.setAgentSetting(SQLITE_MAINLINE_NORMALIZATION_KEY,
...) and await this.yieldToEventLoop() logic; apply the same replacement (use
.iterate() and a streaming for-loop) to the new_sessions query mentioned in the
review so both message and session backfills are memory-efficient.
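Independent of which SQLite call is used, the batch-plus-yield cadence above can be factored into a small helper. This is a hedged sketch over a generic `Iterable` (which the iterator from `stmt.iterate()` satisfies); `backfillRow` and `checkpoint` are stand-ins for `backfillNormalizedMessageRow` and the `setAgentSetting` progress write.

```typescript
// Process rows one at a time from any Iterable (e.g. the synchronous
// iterator returned by better-sqlite3's stmt.iterate()), yielding to
// the event loop every `batchSize` rows so other main-process work
// can run between batches.
async function backfillAll<T>(
  rows: Iterable<T>,
  backfillRow: (row: T) => void,
  checkpoint: (count: number) => void,
  batchSize = 200
): Promise<number> {
  let processed = 0
  for (const row of rows) {
    backfillRow(row)
    processed += 1
    if (processed % batchSize === 0) {
      checkpoint(processed)
      await new Promise((resolve) => setImmediate(resolve))
    }
  }
  return processed
}
```

Because the underlying iterator is synchronous and resumes from its cursor position after each `await`, the helper only ever holds one row of the result set at a time.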
this.sqlitePresenter.deepchatUserMessageFilesTable.replaceForMessage(
  row.id,
  content.files.map((file) => ({
    name: file.name,
    path: file.path,
    mimeType: file.mimeType ?? file.type,
    size: file.size,
    metadataJson: JSON.stringify({
      type: file.type,
      content: file.content,
      token: file.token,
      thumbnail: file.thumbnail,
      metadata: file.metadata
    })
  }))
)
this.sqlitePresenter.deepchatUserMessageLinksTable.replaceForMessage(row.id, content.links)
Missing MIME-type fallback could abort the backfill for legacy file attachments.
file.mimeType ?? file.type can evaluate to undefined for old messages that pre-date the MIME-type field. Contrast this with parseUserExportContent (line 2648–2652), which already guards against this case with a final 'application/octet-stream' fallback. If deepchatUserMessageFilesTable has mimeType NOT NULL, inserting undefined will throw a SqliteError that propagates uncaught through backfillNormalizedMessageRow, causing the entire backfill to be marked 'failed'.
🛡️ Proposed fix — add final MIME-type fallback
- mimeType: file.mimeType ?? file.type,
+ mimeType: file.mimeType ?? file.type ?? 'application/octet-stream',
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/main/presenter/agentSessionPresenter/index.ts` around lines 2849 - 2865,
The insert for file MIME type in
sqlitePresenter.deepchatUserMessageFilesTable.replaceForMessage uses
file.mimeType ?? file.type which can be undefined for legacy attachments and can
cause SqliteError in backfillNormalizedMessageRow; change the MIME selection to
fall back to 'application/octet-stream' (same behavior as
parseUserExportContent) so the object inserted has a non-null mimeType, e.g.
compute a mimeTypeValue = file.mimeType ?? file.type ??
'application/octet-stream' and use that in the replaceForMessage payload and in
metadataJson where type is set.
searchLike(query: string, limit: number): Array<DeepChatSearchDocumentRow & { rank: number }> {
  const likeQuery = `%${query.trim().toLowerCase()}%`
  return this.db
    .prepare(
      `SELECT
        rowid,
        document_key,
        session_id,
        message_id,
        document_kind,
        role,
        title,
        content,
        updated_at,
        0 AS rank
      FROM deepchat_search_documents
      WHERE lower(title) LIKE ?
        OR lower(content) LIKE ?
      ORDER BY updated_at DESC
      LIMIT ?`
    )
    .all(likeQuery, likeQuery, limit) as Array<DeepChatSearchDocumentRow & { rank: number }>
Guard empty LIKE queries to avoid full-table scans.
When query is empty/whitespace, likeQuery becomes %%, which matches everything and can trigger an unnecessary broad scan/results set in fallback mode. Return early for blank queries, similar to searchFts.
Proposed fix
searchLike(query: string, limit: number): Array<DeepChatSearchDocumentRow & { rank: number }> {
- const likeQuery = `%${query.trim().toLowerCase()}%`
+ const normalizedQuery = query.trim().toLowerCase()
+ if (!normalizedQuery) {
+ return []
+ }
+ const likeQuery = `%${normalizedQuery}%`
return this.db
.prepare(
       `SELECT
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/main/presenter/sqlitePresenter/tables/deepchatSearchDocuments.ts` around
lines 220 - 241, The searchLike method builds likeQuery =
`%${query.trim().toLowerCase()}%` which turns blank/whitespace input into `%%`
and causes a full-table scan; update the searchLike function to trim the
incoming query and return an empty array immediately if the trimmed query is
empty (same behavior as searchFts), before constructing likeQuery or running the
DB prepare/all; ensure you reference the searchLike function and the likeQuery
check so the early-return prevents executing the SQL when query is blank.
this.replaceActiveSkillsRows(id, options?.activeSkills ?? [])
this.replaceDisabledAgentToolRows(id, options?.disabledAgentTools ?? [])
Wrap the parent-row change and normalized-table sync in one transaction.
These writes are no longer atomic: if the process fails after new_sessions is inserted/updated/deleted but before the corresponding new_session_* rows are synced, the legacy JSON columns and normalized tables can drift out of sync, or a delete can be only partially applied. Please make each public mutation a single db.transaction(...).
Also applies to: 338-345, 349-351
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/main/presenter/sqlitePresenter/tables/newSessions.ts` around lines 167 -
169, The parent-row mutation and the normalized-table sync calls (e.g., the call
sequence that updates/creates/deletes in new_sessions together with
replaceActiveSkillsRows and replaceDisabledAgentToolRows in newSessions.ts) must
be executed inside a single database transaction; modify the public mutation
methods (the functions that perform the new_sessions insert/update/delete) to
call db.transaction(...) and invoke the parent-row write and the two sync
helpers (replaceActiveSkillsRows, replaceDisabledAgentToolRows) inside that
transaction so they are atomic; apply the same change to the other public
mutation blocks referenced around lines 338-345 and 349-351 so each parent-row
change plus its corresponding normalized-table updates run within one
db.transaction.
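The atomicity asked for above can be illustrated with a tiny helper. In the real code, better-sqlite3's `db.transaction(fn)` wrapper already provides this; the `Db` interface and explicit BEGIN/COMMIT below are only a sketch of the semantics, not a replacement for that API.

```typescript
// Illustration of transactional semantics only — production code
// should use better-sqlite3's db.transaction(fn) wrapper instead.
interface Db {
  exec(sql: string): void
}

function withTransaction(db: Db, mutate: () => void): void {
  db.exec('BEGIN')
  try {
    mutate() // parent-row write + normalized-table sync together
    db.exec('COMMIT')
  } catch (error) {
    db.exec('ROLLBACK') // nothing is partially applied
    throw error
  }
}
```

Running the `new_sessions` write together with `replaceActiveSkillsRows`/`replaceDisabledAgentToolRows` inside one such unit guarantees either all rows change or none do.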
  const el = scrollContainer.value
  if (!el || el.scrollTop > TOP_HISTORY_THRESHOLD) {
    return
  }

  void loadOlderMessagesAtTop()
}
Unhandled rejection in void loadOlderMessagesAtTop()
Every other async operation in this file has a try/catch (onStop, onMessageRetry, onMessageDelete, etc.), but loadOlderMessagesAtTop failures will produce silent unhandled promise rejections. Add error handling inside the function to stay consistent.
🛡️ Proposed fix
async function loadOlderMessagesAtTop(): Promise<void> {
if (messageStore.isLoadingHistory || !messageStore.hasMoreHistory) {
return
}
const el = scrollContainer.value
if (!el) {
return
}
+ try {
const previousScrollHeight = el.scrollHeight
const previousScrollTop = el.scrollTop
const loadedCount = await messageStore.loadOlderMessages()
if (loadedCount === 0) {
return
}
await nextTick()
const nextScrollHeight = el.scrollHeight
el.scrollTop = previousScrollTop + (nextScrollHeight - previousScrollHeight)
syncScrollPosition()
+ } catch (error) {
+ console.error('[ChatPage] loadOlderMessages failed:', error)
+ }
}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/renderer/src/pages/ChatPage.vue` around lines 286 - 292, The call to
loadOlderMessagesAtTop() can reject and currently has no internal error
handling; wrap the implementation of loadOlderMessagesAtTop in a try/catch (or
add a .catch) so any thrown errors are caught and handled consistently with
other async ops (e.g., call the same error reporter/logging or user notification
used elsewhere in this file such as onMessageRetry/onMessageDelete). Locate the
loadOlderMessagesAtTop function and add a catch block that logs the error
(processLogger/console.error) and surfaces a user-friendly notification or
fallback behavior to avoid unhandled promise rejections.
const incomingIds: string[] = []
for (const msg of page.messages) {
  messageCache.value.set(msg.id, msg)
  incomingIds.push(msg.id)
}

if (incomingIds.length > 0) {
  const existingIds = new Set(messageIds.value)
  messageIds.value = [
    ...incomingIds.filter((id) => !existingIds.has(id)),
    ...messageIds.value
  ]
}

nextCursor.value = page.nextCursor
hasMoreHistory.value = page.hasMore
if (incomingIds.length > 0) {
  lastPersistedRevision.value += 1
}
return incomingIds.length
Count only messages that were actually prepended.
incomingIds.length includes duplicates that never make it into messageIds. Returning that raw count and bumping lastPersistedRevision from it can skew scroll-anchor math and trigger unnecessary UI updates when page boundaries overlap.
Suggested fix
const incomingIds: string[] = []
for (const msg of page.messages) {
messageCache.value.set(msg.id, msg)
incomingIds.push(msg.id)
}
+ const existingIds = new Set(messageIds.value)
+ const addedIds = incomingIds.filter((id) => !existingIds.has(id))
+
- if (incomingIds.length > 0) {
- const existingIds = new Set(messageIds.value)
- messageIds.value = [
- ...incomingIds.filter((id) => !existingIds.has(id)),
- ...messageIds.value
- ]
+ if (addedIds.length > 0) {
+ messageIds.value = [...addedIds, ...messageIds.value]
}
nextCursor.value = page.nextCursor
hasMoreHistory.value = page.hasMore
- if (incomingIds.length > 0) {
+ if (addedIds.length > 0) {
lastPersistedRevision.value += 1
}
- return incomingIds.length
+ return addedIds.length
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/renderer/src/stores/ui/message.ts` around lines 294 - 313, The code
currently uses incomingIds.length (from page.messages) to update
lastPersistedRevision and the return value, which counts duplicates that may
already exist and weren't prepended; instead compute which IDs are actually new
by creating filteredIncomingIds = incomingIds.filter(id => !existingIds.has(id))
(or build the incoming list by filtering against existingIds before merging),
use that filtered list when updating messageIds, increment lastPersistedRevision
by filteredIncomingIds.length, and return filteredIncomingIds.length; keep
setting messageCache.value for each msg in page.messages as before and reference
messageCache, page.messages, incomingIds, existingIds, messageIds, and
lastPersistedRevision to locate the changes.