feat(noter): wire memory detection panel + Walruscan explorer links #57
Ashwin-3cS wants to merge 3 commits into MystenLabs:dev from
Conversation
Wire up existing but unused `MemoryPanelEnhanced` and `MemoryDetectButton` components into the `NoteEditor`, completing the detect → approve → save memory workflow.

- Register `MemoryHighlightNode` + `MemoryHighlightPlugin` in editor config
- Add "Detect Memories" button to toolbar alongside existing Save button
- Render `MemoryPanelEnhanced` as right sidebar for memory approval flow
- Add fuzzy text matching in `MemoryHighlightPlugin` to handle LLM-rephrased facts that don't match editor text verbatim (longest common substring)
- Add `/api/memory/remember-one` endpoint for individual memory saves (existing `/api/memory/remember` handles bulk analyze, unchanged)
- Replace TODO Sui Explorer placeholder with direct Walruscan blob links
- Fix memory-hover-preview to use direct `<a>` link instead of toast
The hover preview was linking to Suiscan with the vector DB UUID (`memwalMemoryId`), which isn't a Sui object. Changed to the Walruscan blob URL using `memwalBlobId`, matching the panel component fix.
Fixed a missing null guard on `memwalBlobId` in `memory-hover-preview.tsx`; the link was rendering unconditionally, which would produce a `walruscan.com/.../blob/null` URL for memories without a blob ID.
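The guard described above can be sketched as a small helper that refuses to build a URL when the blob ID is absent. `buildWalruscanUrl` and its parameters are illustrative names, not the actual component code:

```typescript
// Hypothetical helper mirroring the fix: return null when no blob ID exists,
// so callers can skip rendering the <a> element entirely instead of emitting
// a walruscan.com/.../blob/null URL.
function buildWalruscanUrl(
  network: string,
  blobId: string | null | undefined,
): string | null {
  if (!blobId) return null; // null guard: no blob ID means no link
  return `https://walruscan.com/${network}/blob/${blobId}`;
}

// Usage: only render a link when a URL could be built.
const url = buildWalruscanUrl("testnet", null);
console.log(url); // null; no link rendered for memories without a blob ID
```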
Hey, raised the configurable model setup as a separate PR (#77) - would love your thoughts on it when you get a chance!
Description
While implementing the Walruscan explorer link for the `TODO: Add link to Sui Explorer` in `memory-panel-enhanced.tsx`, it was observed that `MemoryPanelEnhanced`, `MemoryDetectButton`, `MemoryHighlightPlugin`, and `MemoryHighlightNode` are all fully implemented but never wired into the `NoteEditor`. This PR connects them, enabling the detect → approve → save memory workflow.
Abstract
What this PR does
- Register `MemoryHighlightNode` in the Lexical editor config so the editor can serialize/deserialize memory highlights
- Add `MemoryHighlightPlugin` to the editor plugin list to handle highlight injection, hover previews, and status updates
- Add `MemoryDetectButton` to the toolbar (next to the existing Save button); triggers AI-powered memory detection
- Render `MemoryPanelEnhanced` as a right sidebar; shows detected memories with approve/reject/retry flow and progress stages
- Replace the `TODO: Add link to Sui Explorer` placeholder toast with a direct Walruscan link (`https://walruscan.com/{network}/blob/{blobId}`) on saved memories

Why Walruscan instead of Sui Explorer
Previously this was a TODO block, and since the API does not return Sui objects (only the blob ID), Walruscan is used instead.
This PR switches to Walruscan using `memwalBlobId`, since Walrus is the underlying blob storage layer and the blob ID represents the actual stored data. This ensures links point to real, inspectable resources and aligns behavior across the panel and hover preview components.

New API endpoint:
`/api/memory/remember-one`

The existing `/api/memory/remember` endpoint calls `memwal.analyze()`, which extracts multiple facts from a full note and stores them all at once. However, `useNoteMemorySave` (the hook powering the panel's "Approve" button) saves a single approved memory and expects an `{ id, blob_id }` response shape, which is what `memwal.remember()` returns.

Rather than changing the existing endpoint's behavior, this PR adds `/api/memory/remember-one`, which calls `rememberText()` (wrapping `memwal.remember()`) for individual memory saves. The existing bulk analyze endpoint is unchanged.
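The single-save path described above boils down to something like the following sketch. The handler shape and the `rememberText` stub are assumptions based on the description; the real route lives in the app's API layer and calls `memwal.remember()`:

```typescript
// Sketch of the single-memory save path. `rememberText` stands in for the
// wrapper around memwal.remember() described above; its return shape
// { id, blob_id } is what useNoteMemorySave expects.
interface RememberOneResponse {
  id: string;
  blob_id: string;
}

// Placeholder for the real wrapper; in the PR this calls memwal.remember().
async function rememberText(text: string): Promise<RememberOneResponse> {
  return { id: `mem-${text.length}`, blob_id: `blob-${text.length}` };
}

// Hypothetical handler body for POST /api/memory/remember-one: saves one
// approved memory, unlike /api/memory/remember, which bulk-analyzes a note.
async function rememberOne(body: { text: string }): Promise<RememberOneResponse> {
  if (!body.text?.trim()) {
    throw new Error("text is required");
  }
  return rememberText(body.text);
}
```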
Fuzzy text matching in `MemoryHighlightPlugin`

The `INJECT_MEMORY_HIGHLIGHTS_COMMAND` handler searches for exact text matches in the editor to create highlight nodes. However, the server's `FACT_EXTRACTION_PROMPT` instructs the LLM to rephrase facts into third-person statements:

Input: "i am the new f1 driver for tokyo"
LLM output: "User is the new F1 driver for Tokyo"
Since the output does not appear verbatim in the editor text, `indexOf` returns -1 and no highlight is created, leaving the panel empty.

This PR adds a two-stage fallback after the existing exact + trimmed matching:
This handles LLM prefix rephrasing ("User is/has/lives in...") by finding the best overlapping text between LLM output and editor content.
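A longest-common-substring fallback of this kind can be sketched as below. Function names, the case-insensitive comparison, and the minimum-length threshold are all illustrative; the actual plugin code may differ:

```typescript
// Illustrative longest-common-substring fallback: when the LLM-rephrased
// fact is not found verbatim, highlight the longest span of editor text
// that also appears in the fact (e.g. "the new f1 driver for tokyo"),
// subject to a minimum length so trivial overlaps are ignored.
function longestCommonSubstring(a: string, b: string): string {
  const al = a.toLowerCase();
  const bl = b.toLowerCase();
  let best = { start: 0, len: 0 };
  // prev[j] = length of the common suffix ending at al[i-2] / bl[j-1]
  let prev = new Array<number>(bl.length + 1).fill(0);
  for (let i = 1; i <= al.length; i++) {
    const cur = new Array<number>(bl.length + 1).fill(0);
    for (let j = 1; j <= bl.length; j++) {
      if (al[i - 1] === bl[j - 1]) {
        cur[j] = prev[j - 1] + 1;
        if (cur[j] > best.len) best = { start: i - cur[j], len: cur[j] };
      }
    }
    prev = cur;
  }
  return a.slice(best.start, best.start + best.len);
}

// Fallback matcher: exact (case-insensitive) match first, then LCS
// above a minimum length.
function findHighlightText(editorText: string, fact: string): string | null {
  const idx = editorText.toLowerCase().indexOf(fact.toLowerCase());
  if (idx !== -1) return editorText.slice(idx, idx + fact.length);
  const overlap = longestCommonSubstring(editorText, fact);
  return overlap.length >= 10 ? overlap : null; // threshold is illustrative
}
```

With the example above, the exact stage fails but the LCS stage recovers "the new f1 driver for tokyo" from the editor text, so a highlight node can still be created.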
This is a frontend-side mitigation. The root cause is that the `FACT_EXTRACTION_PROMPT` in `routes.rs` produces rephrased statements rather than verbatim excerpts. A server-side prompt change could eliminate the need for fuzzy matching, but that is a separate concern affecting all MemWal apps.

Test plan
branch; would love your take on whether I can raise a separate PR for this easily configurable model setup for users, as well as any changes you'd like to see on this PR.