[ENG-1295 & ENG-1350 & ENG-1297] F5 import and refresh published nodes #722
Conversation
@CodeRabbit full review

Updates to Preview Branch (eng-1295-f5-import-and-refresh-published-nodes) ↗︎
Tasks are run on every commit but only new migration files are pushed.
❌ Branch Error • Sun, 08 Feb 2026 14:54:11 UTC. View logs for this Workflow Run ↗︎.
✅ Actions performed: Full review triggered.
📝 Walkthrough

This pull request adds import and refresh functionality for nodes in Obsidian. It introduces a modal UI for importing nodes from other spaces, utilities to fetch and import remote node content, mechanisms to refresh imported files, command registration for user-initiated actions, and database policy changes enabling access through resource-based permissions.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User / Obsidian
    participant Modal as ImportNodesModal
    participant Supabase as Supabase Client
    participant Vault as Obsidian Vault
    participant DB as Database

    User->>Modal: Open import modal
    Modal->>Supabase: Fetch available groups
    Supabase->>DB: Query groups
    DB-->>Supabase: Return groups
    Supabase-->>Modal: Return groups
    Modal->>Supabase: Fetch published nodes for groups
    Supabase->>DB: Query published nodes
    DB-->>Supabase: Return nodes
    Supabase-->>Modal: Return nodes
    User->>Modal: Select nodes and confirm
    Modal->>Supabase: Import selected nodes (batch)
    Supabase->>DB: Fetch node content & metadata
    DB-->>Supabase: Return content & metadata
    Supabase->>Vault: Create/update files with content
    Vault->>Vault: Write frontmatter (nodeTypeId, importedFromSpaceId)
    Vault-->>Supabase: Confirm file creation
    Supabase-->>Modal: Return import results
    Modal-->>User: Display success/failure notice
```
```mermaid
sequenceDiagram
    participant User as User / Obsidian
    participant View as DiscourseContextView
    participant Utils as refreshImportedFile
    participant Supabase as Supabase Client
    participant Vault as Obsidian Vault
    participant DB as Database

    User->>View: Click refresh button
    View->>View: Set isRefreshing = true
    View->>Utils: Call refreshImportedFile
    Utils->>Supabase: Fetch latest node content
    Supabase->>DB: Query node content
    DB-->>Supabase: Return content
    Supabase-->>Utils: Return content
    Utils->>Vault: Update file with new content
    Vault->>Vault: Update frontmatter
    Vault-->>Utils: Confirm update
    Utils-->>View: Return success/error result
    View->>View: Set isRefreshing = false
    View-->>User: Display success/failure notice
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@apps/obsidian/src/components/ImportNodesModal.tsx`:
- Line 81: Remove the debug console.log in ImportNodesModal.tsx: locate the
stray console.log("spaceNames", spaceNames); inside the ImportNodesModal
component (or the function that builds/loads spaceNames) and delete that
statement (or replace with a proper logger call if persistent logging is needed)
so no debug output remains in production code.
- Around line 315-332: The progress bar never updates because
importProgress.current is never changed; to fix quickly, replace the progress
bar UI in renderImportingStep with a simple spinner and static "Importing..."
message (remove or stop referencing
importProgress.current/importProgress.total), or alternatively implement an
onProgress callback in importSelectedNodes and wire it to update importProgress
state; update references to renderImportingStep, importProgress, and
importSelectedNodes accordingly so the component either shows a spinner while
importSelectedNodes runs or receives progress updates via the new callback.
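If the callback route is taken, one possible shape for that wiring is sketched below; the function signatures and type names here are assumptions based on the comment above, not the plugin's actual API.

```ts
// Hypothetical sketch of an onProgress callback for importSelectedNodes.
// Names and signatures are assumptions taken from the review comment, not the real plugin code.
type ImportProgress = { current: number; total: number };

async function importSelectedNodes<T>(
  nodes: T[],
  importOne: (node: T) => Promise<void>,
  onProgress?: (progress: ImportProgress) => void,
): Promise<void> {
  const total = nodes.length;
  for (let i = 0; i < total; i++) {
    await importOne(nodes[i]);
    // Report progress after each node so the modal can re-render its bar.
    onProgress?.({ current: i + 1, total });
  }
}

// In the modal, the callback would feed React state so renderImportingStep updates:
//   await importSelectedNodes(selectedNodes, importNode, (p) => setImportProgress(p));
```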
In `@apps/obsidian/src/utils/importNodes.ts`:
- Around line 611-622: The code reads frontmatter and blindly casts
frontmatter?.importedFromSpaceId and frontmatter?.nodeInstanceId before calling
getSpaceName and importSelectedNodes; add explicit validation for required
frontmatter fields (importedFromSpaceId and nodeInstanceId) before proceeding in
the refresh path: check that cache?.frontmatter exists and that
importedFromSpaceId is a finite number and nodeInstanceId is a non-empty string,
and if validation fails return a helpful error (e.g., { success: false, error:
"Missing frontmatter: importedFromSpaceId or nodeInstanceId" }) instead of
calling getSpaceName or importSelectedNodes; keep existing calls to getSpaceName
and importSelectedNodes (referenced here) unchanged otherwise.
- Around line 324-331: parseSchemaLiteralContent currently calls JSON.parse on
literalContent when it's a string which will throw on malformed JSON; wrap the
JSON.parse call in a try/catch inside parseSchemaLiteralContent (and when
literalContent is a string) so parse errors are caught, fallback to an empty
object (or safe default), and optionally surface/log the error for debugging;
ensure the rest of the function continues to use that safe obj when JSON parsing
fails while keeping the existing fallbackName behavior.
- Around line 284-300: The current array parsing only checks
valueStr.startsWith("-") so fields like publishedToGroups: with an empty value
are parsed incorrectly; in the importNodes parsing function update the logic
around valueStr/currentLine/nextLineIndex to also treat an empty valueStr as the
start of a YAML block array by checking the subsequent line(s) for lines that
trim-start with "-" and collecting those items into arrayValues; use the same
collection loop (currentLine, nextLineIndex, arrayValues, itemValue) to gather
each "- item" line and then set value = arrayValues (falling back to
[trimmed.slice(1).trim()] only if no block items were found) so
publishedToGroups and other multi-line arrays from processFrontMatter parse as
string[].
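For the refresh-path validation described above, a minimal sketch of the intended guard follows; the field names come from the comment, while the result shape and helper name are assumptions.

```ts
// Hypothetical guard for the refresh path; field names follow the review comment,
// the RefreshResult shape and function name are assumptions.
type RefreshResult = { success: true } | { success: false; error: string };

function validateImportFrontmatter(
  frontmatter: Record<string, unknown> | undefined,
): RefreshResult {
  const spaceId = frontmatter?.importedFromSpaceId;
  const nodeInstanceId = frontmatter?.nodeInstanceId;

  if (typeof spaceId !== "number" || !Number.isFinite(spaceId)) {
    return {
      success: false,
      error: "Missing frontmatter: importedFromSpaceId or nodeInstanceId",
    };
  }
  if (typeof nodeInstanceId !== "string" || nodeInstanceId.trim() === "") {
    return {
      success: false,
      error: "Missing frontmatter: importedFromSpaceId or nodeInstanceId",
    };
  }
  return { success: true };
}
```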
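For the parseSchemaLiteralContent comment, a short sketch of the guarded parse; the helper name is made up for illustration and the real function does more than this.

```ts
// Hypothetical guarded parse; only illustrates the try/catch fallback the comment asks for.
function parseLiteralContentSafely(literalContent: unknown): Record<string, unknown> {
  if (typeof literalContent !== "string") {
    return (literalContent as Record<string, unknown>) ?? {};
  }
  try {
    return JSON.parse(literalContent) as Record<string, unknown>;
  } catch (error) {
    // Malformed JSON should not abort the import; fall back to a safe default.
    console.warn("Failed to parse literalContent, using empty object", error);
    return {};
  }
}
```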
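And for the block-array comment, a self-contained sketch of how an empty value followed by "- item" lines could be collected; this is a simplified stand-in, not the plugin's actual parser.

```ts
// Simplified stand-in for the frontmatter parsing described above; the real parser
// handles more cases, this only shows the empty-value -> block-array branch.
function parseFrontmatterLines(lines: string[]): Record<string, string | string[]> {
  const result: Record<string, string | string[]> = {};
  let i = 0;
  while (i < lines.length) {
    const line = lines[i];
    const colonIndex = line.indexOf(":");
    if (colonIndex === -1) {
      i++;
      continue;
    }
    const key = line.slice(0, colonIndex).trim();
    const valueStr = line.slice(colonIndex + 1).trim();

    // An empty value (e.g. "publishedToGroups:") may introduce a YAML block array:
    // collect every following line that starts with "- " as one array item.
    if (valueStr === "") {
      const arrayValues: string[] = [];
      let nextLineIndex = i + 1;
      while (
        nextLineIndex < lines.length &&
        lines[nextLineIndex].trim().startsWith("- ")
      ) {
        arrayValues.push(lines[nextLineIndex].trim().slice(2).trim());
        nextLineIndex++;
      }
      if (arrayValues.length > 0) {
        result[key] = arrayValues;
        i = nextLineIndex;
        continue;
      }
    }
    result[key] = valueStr;
    i++;
  }
  return result;
}
```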
In `@packages/database/supabase/migrations/20260123135938_allow_space_view_via_resource_access.sql`:
- Around line 5-10: The comment incorrectly states column-level security for
ResourceAccess users while the RLS policy's USING clause grants full-row access;
update the comment to state that the policy controls row visibility only (both
SpaceAccess and ResourceAccess can SELECT all columns) or, if you truly need
column-level restriction for ResourceAccess, replace the current approach by
creating a restricted view (e.g., space_name_view) exposing only name and grant
SELECT on that view to ResourceAccess users, or apply column-level GRANT
SELECT(name) on the base table, or expose a SECURITY DEFINER function that
returns only allowed columns; reference the RLS policy USING clause and the
ResourceAccess/SpaceAccess conditions when making the change.
🧹 Nitpick comments (3)
apps/obsidian/src/utils/importNodes.ts (2)
83-107: Consider batching Content queries to avoid an N+1 performance issue.

The current implementation makes a separate database query for each node, which could be slow with many nodes. Consider batching these queries.
♻️ Suggested optimization approach
```diff
-  for (const { spaceId, sourceLocalId } of spaceIdSourceIdPairs) {
-    const { data: contentData } = await client
-      .from("Content")
-      .select("text")
-      .eq("source_local_id", sourceLocalId)
-      .eq("space_id", spaceId)
-      .eq("variant", "direct")
-      .maybeSingle();
-
-    if (contentData?.text) {
-      const relevantRAs = resourceAccessData.filter(
-        (ra) =>
-          ra.source_local_id === sourceLocalId && ra.space_id === spaceId,
-      );
-
-      for (const ra of relevantRAs) {
-        nodes.push({
-          source_local_id: sourceLocalId,
-          space_id: spaceId,
-          text: contentData.text,
-          account_uid: ra.account_uid,
-        });
-      }
-    }
-  }
+  // Batch fetch all content in a single query using compound filters
+  const { data: allContent } = await client
+    .from("Content")
+    .select("source_local_id, space_id, text")
+    .eq("variant", "direct")
+    .in("source_local_id", spaceIdSourceIdPairs.map(p => p.sourceLocalId))
+    .in("space_id", spaceIdSourceIdPairs.map(p => p.spaceId));
+
+  const contentMap = new Map<string, string>();
+  for (const c of allContent || []) {
+    if (c.text) {
+      contentMap.set(`${c.space_id}:${c.source_local_id}`, c.text);
+    }
+  }
+
+  for (const { spaceId, sourceLocalId } of spaceIdSourceIdPairs) {
+    const text = contentMap.get(`${spaceId}:${sourceLocalId}`);
+    if (text) {
+      const relevantRAs = resourceAccessData.filter(
+        (ra) => ra.source_local_id === sourceLocalId && ra.space_id === spaceId,
+      );
+      for (const ra of relevantRAs) {
+        nodes.push({ source_local_id: sourceLocalId, space_id: spaceId, text, account_uid: ra.account_uid });
+      }
+    }
+  }
```

Note: Verify that the Supabase query with multiple `.in()` clauses produces the correct compound filter behavior.
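On that note: two stacked `.in()` filters match the cross product of the two id lists rather than exact (space_id, source_local_id) pairs, so the map lookup above, or an explicit post-filter like the sketch below, is what keeps stray rows out. The types here are assumptions for illustration, not the project's generated types.

```ts
// Hedged sketch: narrow a cross-product result set back to the requested pairs.
type Pair = { spaceId: number; sourceLocalId: string };
type ContentRow = { source_local_id: string; space_id: number; text: string | null };

function filterToRequestedPairs(rows: ContentRow[], pairs: Pair[]): ContentRow[] {
  const wanted = new Set(pairs.map((p) => `${p.spaceId}:${p.sourceLocalId}`));
  return rows.filter((row) => wanted.has(`${row.space_id}:${row.source_local_id}`));
}
```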
122-127: Simplify redundant conditional check.

The inner `if (nodeInstanceId)` check is redundant since `frontmatter?.nodeInstanceId` was already verified as truthy.

♻️ Proposed simplification
```diff
   if (frontmatter?.nodeInstanceId) {
-    const nodeInstanceId = frontmatter.nodeInstanceId as string;
-    if (nodeInstanceId) {
-      nodeInstanceIds.add(nodeInstanceId);
-    }
+    nodeInstanceIds.add(frontmatter.nodeInstanceId as string);
   }
```

apps/obsidian/src/components/ImportNodesModal.tsx (1)
68-69: Simplify redundant Set creation.
`getLocalNodeInstanceIds` already returns `Set<string>`, so creating a new Set from it is unnecessary.

♻️ Proposed simplification
```diff
   // Get local nodeInstanceIds to filter out existing nodes
   const localNodeInstanceIds = await getLocalNodeInstanceIds(plugin);
-  const localIdsSet = new Set(localNodeInstanceIds);

   // Filter out nodes that already exist locally
   const importableNodes = publishedNodes.filter(
-    (node) => !localIdsSet.has(node.source_local_id),
+    (node) => !localNodeInstanceIds.has(node.source_local_id),
   );
```
packages/database/supabase/migrations/20260123135938_allow_space_view_via_resource_access.sql
Outdated
```ts
import type { DGSupabaseClient } from "@repo/database/lib/client";

const publishSchema = async ({
  client,
```
required for ENG-1297: F11: Convert node type on import
b9a9410 to 3e5311d (Compare)
3e5311d to 469ffc2 (Compare)
b1a1f77 to 6bd9447 (Compare)
maparent left a comment:
I'm not done, but I hate the idea of you having to wait for all my comments before you can work. I'll keep going.
First: Still very happy with the overall work. Behaviour of happy path is pretty much what we want.
Still looking at edge cases.
Another question I'd like for us to think about: The "imported From" and groupId are using opaque Ids. (The latter is on me!)
It would be nice to have space name and group name in the frontMatter.
Is there a way to store both in the frontmatter while mostly showing the user-friendly name?
Of course we can store both as a map, or an array of pairs, but neither would look good.
We might store something like `<id>:name`, which is easy to parse, and at least will include the name. What do you think?
Some improvements we can delay to other PRs (I created a task for node templates) but I think we should not store wrong things in the frontmatter if possible (though I already did that with groups!)
(Making the group names available now.)
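If the combined id-plus-name value is the direction taken, a rough sketch of how it could be written through Obsidian's processFrontMatter follows; the key names, the `<id>:<name>` format, and the space/group lookups are assumptions from this thread, not what the PR currently does.

```ts
// Hedged sketch of the "store the name alongside the opaque id" idea discussed above.
// Key names and the "<id>:<name>" format are assumptions, not the plugin's current behaviour.
import { App, TFile } from "obsidian";

async function writeImportFrontmatter(
  app: App,
  file: TFile,
  space: { id: number; name: string },
  groups: { id: number; name: string }[],
): Promise<void> {
  await app.fileManager.processFrontMatter(file, (frontmatter) => {
    frontmatter.importedFromSpace = `${space.id}:${space.name}`;
    frontmatter.publishedToGroups = groups.map((g) => `${g.id}:${g.name}`);
  });
}
```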
6bd9447 to bcd7d3b (Compare)
50b4542 to cad2844 (Compare)
maparent left a comment:
Have a quick look at #722 (comment)
I think it's a different scope, but I could be wrong.
Take a decision; I approve either way.
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
https://www.loom.com/share/410fb6cdb4aa4e9ba1a6a4058a69bb19
https://www.loom.com/share/69ae6c6ba43d4856b1adc1ab8ff4061e
https://www.loom.com/share/b02693ddf5cc4a9cba0aaf97986080c3
Summary by CodeRabbit
Release Notes
New Features
Improvements
✏️ Tip: You can customize this high-level summary in your review settings.