feat(noter): wire memory detection panel + Walruscan explorer links #57

Open
Ashwin-3cS wants to merge 3 commits into MystenLabs:dev from Ashwin-3cS:feat/noter-wire-memory-panel
Conversation

@Ashwin-3cS (Contributor)

Description

While implementing the Walruscan explorer link for the TODO: Add link to Sui Explorer in memory-panel-enhanced.tsx, I noticed that MemoryPanelEnhanced, MemoryDetectButton, MemoryHighlightPlugin, and MemoryHighlightNode are all fully implemented but never wired into the NoteEditor.

This PR connects them, enabling the detect → approve → save memory workflow.


Abstract

What this PR does

  • Registers MemoryHighlightNode in the Lexical editor config so the editor can serialize/deserialize memory highlights
  • Adds MemoryHighlightPlugin to the editor plugin list to handle highlight injection, hover previews, and status updates
  • Adds MemoryDetectButton to the toolbar (next to existing Save button) — triggers AI-powered memory detection
  • Renders MemoryPanelEnhanced as a right sidebar — shows detected memories with approve/reject/retry flow and progress stages
  • Replaces the TODO: Add link to Sui Explorer placeholder toast with a direct Walruscan link (https://walruscan.com/{network}/blob/{blobId}) on saved memories
  • Adds the same Walruscan link to the memory hover preview component
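A rough sketch of the wiring (the node, plugin, and component names come from this PR; the import path, `namespace`, and surrounding editor structure are assumptions, not the actual NoteEditor code):

```typescript
// Sketch only: assumes the app's own MemoryHighlightNode module path.
import { MemoryHighlightNode } from './nodes/memory-highlight-node';

const initialConfig = {
  namespace: 'NoteEditor',
  // Registering the node lets Lexical serialize/deserialize memory highlights.
  nodes: [MemoryHighlightNode /* ...existing nodes */],
  onError: (error: Error) => console.error(error),
};

// In the editor tree (JSX, simplified):
// <LexicalComposer initialConfig={initialConfig}>
//   <Toolbar>... <MemoryDetectButton /> ...</Toolbar>  // next to the Save button
//   <MemoryHighlightPlugin />   // highlight injection, hover previews, status updates
//   <MemoryPanelEnhanced />     // right sidebar: approve/reject/retry flow
// </LexicalComposer>
```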

Why Walruscan instead of Sui Explorer

The TODO suggested linking to Sui Explorer, but the API does not return Sui objects — only the blob ID — so Walruscan is used instead.

This PR links to Walruscan using memwalBlobId: Walrus is the underlying blob storage layer, and the blob ID identifies the actual stored data. This ensures links point to real, inspectable resources and aligns behavior across the panel and hover preview components.
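As a sketch, the link construction (including the null guard later noted for the hover preview) might look like this — the helper name walruscanBlobUrl is hypothetical; only the URL shape comes from the PR:

```typescript
// Hypothetical helper: builds the Walruscan explorer URL for a stored blob.
// Returns null when no blob ID is present, so callers can skip rendering the
// link instead of producing a walruscan.com/.../blob/null URL.
function walruscanBlobUrl(
  network: string,
  blobId: string | null | undefined
): string | null {
  if (!blobId) return null; // guard against a missing memwalBlobId
  return `https://walruscan.com/${network}/blob/${blobId}`;
}
```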


New API endpoint: /api/memory/remember-one

The existing /api/memory/remember endpoint calls memwal.analyze() which extracts multiple facts from a full note and stores them all at once. However, useNoteMemorySave (the hook powering the panel's "Approve" button) saves a single approved memory and expects a { id, blob_id } response shape — which is what memwal.remember() returns.

Rather than changing the existing endpoint's behavior, this PR adds /api/memory/remember-one that calls rememberText() (wrapping memwal.remember()) for individual memory saves. The existing bulk analyze endpoint is unchanged.
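A sketch of the response shape the hook expects (the interface and guard names are illustrative; only the { id, blob_id } shape comes from the PR description):

```typescript
// Shape returned by memwal.remember(), and therefore by /api/memory/remember-one.
interface RememberOneResponse {
  id: string;      // presumably the memory ID in the vector DB
  blob_id: string; // Walrus blob ID used for the Walruscan link
}

// Illustrative runtime guard a client hook could use before trusting the payload.
function isRememberOneResponse(x: unknown): x is RememberOneResponse {
  return (
    typeof x === 'object' &&
    x !== null &&
    typeof (x as Record<string, unknown>).id === 'string' &&
    typeof (x as Record<string, unknown>).blob_id === 'string'
  );
}
```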


Fuzzy text matching in MemoryHighlightPlugin

The INJECT_MEMORY_HIGHLIGHTS_COMMAND handler searches for exact text matches in the editor to create highlight nodes. However, the server's FACT_EXTRACTION_PROMPT instructs the LLM to rephrase facts into third-person statements:

Input: "i am the new f1 driver for tokyo"
LLM output: "User is the new F1 driver for Tokyo"

Since the output does not appear verbatim in the editor text, indexOf returns -1 and no highlight is created, leaving the panel empty.

This PR adds a two-stage fallback after the existing exact + trimmed matching:

  1. Case-insensitive exact match
  2. Longest common substring (minimum 10 characters)

This handles LLM prefix rephrasing ("User is/has/lives in...") by finding the best overlapping text between LLM output and editor content.
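The fallback could look roughly like this (a sketch under assumptions — the function names are hypothetical, and the existing exact/trimmed stages plus highlight-node creation are elided):

```typescript
// Longest common substring via dynamic programming, O(n*m) time, O(m) space.
function longestCommonSubstring(a: string, b: string): string {
  let best = '';
  const dp: number[] = new Array(b.length + 1).fill(0);
  for (let i = 1; i <= a.length; i++) {
    let prevDiag = 0; // dp[i-1][j-1] from the previous row
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j];
      if (a[i - 1] === b[j - 1]) {
        dp[j] = prevDiag + 1;
        if (dp[j] > best.length) best = a.slice(i - dp[j], i);
      } else {
        dp[j] = 0;
      }
      prevDiag = tmp;
    }
  }
  return best;
}

// Hypothetical matcher: exact match, then case-insensitive, then LCS fallback.
function findHighlightRange(
  editorText: string,
  fact: string,
  minLen = 10
): { start: number; end: number } | null {
  const exact = editorText.indexOf(fact);
  if (exact >= 0) return { start: exact, end: exact + fact.length };

  const ci = editorText.toLowerCase().indexOf(fact.toLowerCase());
  if (ci >= 0) return { start: ci, end: ci + fact.length };

  const lcs = longestCommonSubstring(
    editorText.toLowerCase(),
    fact.toLowerCase()
  );
  if (lcs.length >= minLen) {
    const start = editorText.toLowerCase().indexOf(lcs);
    return { start, end: start + lcs.length };
  }
  return null;
}
```

With the example above, "User is the new F1 driver for Tokyo" fails both exact stages against "i am the new f1 driver for tokyo", but the LCS stage recovers " the new f1 driver for tokyo" and highlights that span.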

This is a frontend-side mitigation. The root cause is that the FACT_EXTRACTION_PROMPT in routes.rs produces rephrased statements rather than verbatim excerpts. A server-side prompt change could eliminate the need for fuzzy matching, but that is a separate concern affecting all MemWal apps.


Test plan

  • Write a note with factual content (e.g. "I live in Tokyo and work at Google")
  • Click "Save" — existing flow works unchanged
  • Click "Detect Memories" — right panel shows detected memory cards with highlights in editor
  • Approve a memory — progress stages (uploading → saved), blockchain data appears
  • Click Walruscan link on saved memory — opens blob on walruscan.com
  • Tested end-to-end against a self-hosted server using a free Jina embedding model and a free OpenRouter LLM for fact extraction. The server-side config changes for split model support are on a separate branch; would love your take on whether I can raise a separate PR for this easily configurable model setup, as well as any changes you'd like to see on this PR.

Wire up existing but unused MemoryPanelEnhanced and MemoryDetectButton
components into the NoteEditor, completing the detect → approve → save
memory workflow.

- Register MemoryHighlightNode + MemoryHighlightPlugin in editor config
- Add "Detect Memories" button to toolbar alongside existing Save button
- Render MemoryPanelEnhanced as right sidebar for memory approval flow
- Add fuzzy text matching in MemoryHighlightPlugin to handle LLM-rephrased
  facts that don't match editor text verbatim (longest common substring)
- Add /api/memory/remember-one endpoint for individual memory saves
  (existing /api/memory/remember handles bulk analyze, unchanged)
- Replace TODO Sui Explorer placeholder with direct Walruscan blob links
- Fix memory-hover-preview to use a direct <a> link instead of a toast

The hover preview was linking to Suiscan with the vector DB UUID
(memwalMemoryId), which isn't a Sui object. Changed to a Walruscan
blob URL using memwalBlobId, matching the panel component fix.
@Aaron1924 requested a review from ducnmm April 2, 2026 06:56
@Ashwin-3cS (Contributor, Author)

Fixed a missing null guard on memwalBlobId in memory-hover-preview.tsx; the link was rendering unconditionally, which would produce a walruscan.com/.../blob/null URL for memories without a blob ID.
Now consistent with the guard already in memory-panel-enhanced.tsx.

@Ashwin-3cS (Contributor, Author)


Hey, raised the configurable model setup as a separate PR (#77) - would love your thoughts on it when you get a chance!
