
Vault #70

Merged
gajendraxdev merged 8 commits into zync-sh:staging from gajendraxdev:vault
May 12, 2026
Conversation


@gajendraxdev gajendraxdev commented May 11, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added Local Vault for securely storing SSH credentials (passwords and private keys)
    • Added Google Drive backup and restore for vault data
    • Added recovery key generation and management for vault access
    • Integrated vault credentials into SSH connection authentication
    • Added vault management UI with credential creation, assignment, and rotation
    • Added "Secure to Vault" workflow to migrate existing SSH credentials
  • Documentation

    • Added Vault and Sync architecture guides

Review Change Stack


coderabbitai Bot commented May 11, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a07e9cee-c002-4d74-9d28-597db3b7fbbd

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 19

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/features/connections/domain/validation.ts (1)

61-104: ⚠️ Potential issue | 🟠 Major | ⚖️ Poor tradeoff

Add vault mode validation to form-level checks.

Vault validation exists in buildConnectionSavePayload but only at save time (throws error). The form-level validateConnectionDraft function has no vault branch and ConnectionDraft doesn't include the authRef field, so users don't get immediate feedback when vault mode is selected without choosing a credential.

Consider either:

  1. Adding authRef to ConnectionDraft and validating it in validateConnectionDraft for consistency with password/key handling
  2. Or documenting why vault validation is deferred to save time

This creates an asymmetry: password and key auth show validation errors immediately in the form, but vault auth errors only appear on save attempt.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/features/connections/domain/validation.ts` around lines 61 - 104,
validateConnectionDraft currently lacks vault-mode checks, so when authMode ===
'vault' users get no immediate feedback; update the ConnectionDraft type to
include authRef and add a vault branch in validateConnectionDraft that checks
normalizeText(draft.authRef) (or presence of draft.authRef) and, on failure,
push an error message and set fieldErrors.authRef similar to the existing
password/key branches; keep behavior consistent with buildConnectionSavePayload
by using the same validation message text and ensure the returned object still
includes normalizedPort/normalizedFolder as before.
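
To make the suggested form-level check concrete, here is a minimal sketch of a vault branch. The shapes (`ConnectionDraft`, `authRef`, `fieldErrors`) and the message text are assumptions drawn from the review context, not the actual module:

```typescript
// Hypothetical sketch: ConnectionDraft, authRef, and fieldErrors are assumed
// names; the real types live in src/features/connections/domain/validation.ts.

type AuthMode = 'password' | 'key' | 'vault';

interface ConnectionDraft {
  authMode: AuthMode;
  password?: string;
  privateKeyPath?: string;
  authRef?: string; // vault credential reference, added per the suggestion
}

interface ValidationResult {
  errors: string[];
  fieldErrors: Record<string, string>;
}

// Form-level check mirroring the save-time rule in buildConnectionSavePayload,
// so vault users get immediate feedback instead of a throw on save.
function validateVaultAuth(draft: ConnectionDraft): ValidationResult {
  const errors: string[] = [];
  const fieldErrors: Record<string, string> = {};
  if (draft.authMode === 'vault' && !(draft.authRef ?? '').trim()) {
    const message = 'Select a vault credential for this connection.';
    errors.push(message);
    fieldErrors.authRef = message;
  }
  return { errors, fieldErrors };
}
```

The same branch would slot into `validateConnectionDraft` alongside the existing password/key checks, reusing the save-time message text for consistency.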
src/store/connectionSlice.ts (1)

689-696: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Default restored vault tabs to a valid profile.

openVaultTab() always guarantees DEFAULT_VAULT_PROFILE_ID, but session restore leaves legacy/invalid vaultProfileId values as undefined. That makes restored vault tabs weaker than newly opened ones and can break older persisted sessions.

Suggested fix
                     return {
                         id: s.id,
                         type: s.tabType as Tab['type'],
                         title: s.title,
                         connectionId: s.connectionId,
-                        vaultProfileId: isVaultProfileId(s.vaultProfileId)
-                            ? s.vaultProfileId
-                            : undefined,
+                        vaultProfileId: s.tabType === 'vault'
+                            ? (isVaultProfileId(s.vaultProfileId)
+                                ? s.vaultProfileId
+                                : DEFAULT_VAULT_PROFILE_ID)
+                            : undefined,
                         view,
                     };
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/store/connectionSlice.ts` around lines 689 - 696, restored vault tabs end up
with vaultProfileId undefined because the session-restore mapping
only uses isVaultProfileId(s.vaultProfileId) without falling back; update the
restore logic in connectionSlice (where the tab object is built) to mirror
openVaultTab behavior by defaulting invalid/undefined vaultProfileId to
DEFAULT_VAULT_PROFILE_ID (use isVaultProfileId to check validity), so restored
tabs receive the same DEFAULT_VAULT_PROFILE_ID as newly opened vault tabs.
🧹 Nitpick comments (18)
src/components/vault/RecoveryKeyModal.tsx (1)

47-67: 💤 Low value

Consider delaying URL revocation for robustness.

The URL.revokeObjectURL(url) is called immediately after a.click(). While this pattern generally works because browsers clone the URL internally, a brief delay would be more defensive against edge cases where the download hasn't fully initiated.

♻️ Optional: Add a small delay before revoking
 const handleDownload = () => {
   const content = [
     'Zync Vault Recovery Key',
     '=======================',
     '',
     'Store this file somewhere safe and offline.',
     'This key can unlock your vault if you forget your passphrase.',
     '',
     recoveryKey,
     '',
     `Generated: ${new Date().toISOString()}`,
   ].join('\n');

   const blob = new Blob([content], { type: 'text/plain' });
   const url = URL.createObjectURL(blob);
   const a = document.createElement('a');
   a.href = url;
   a.download = 'zync-vault-recovery-key.txt';
   a.click();
-  URL.revokeObjectURL(url);
+  setTimeout(() => URL.revokeObjectURL(url), 100);
 };
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/vault/RecoveryKeyModal.tsx` around lines 47 - 67, The
URL.revokeObjectURL call in handleDownload is executed immediately after
a.click(), which can race with download start; update handleDownload to defer
revocation (e.g., wrap URL.revokeObjectURL(url) in a short setTimeout like
100–500ms) or revoke in the anchor's onclick/onblur handler to ensure the
browser has time to start the download while keeping the rest of the logic
(building content, creating Blob, creating anchor, setting href/download,
calling a.click()) unchanged; refer to handleDownload and recoveryKey to locate
the code to change.
src-tauri/src/lib.rs (1)

43-45: ⚖️ Poor tradeoff

Single tokio::sync::Mutex<VaultService> serializes all vault IPC.

Wrapping VaultService in a single mutex means every vault command (including read-only vault_item_list / vault_item_get) acquires the same lock, which can starve interactive reads behind heavier writes (e.g., vault_import, vault_secure_to_vault). If VaultService holds a redb handle, redb already provides MVCC reads that don't need external serialization. Consider an internal RwLock (or splitting the lock between unlock state vs. data access) once the API stabilizes.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/src/lib.rs` around lines 43 - 45, The VaultService is currently
wrapped in a single tokio::sync::Mutex (see app.manage(...
VaultService::new(...))) which serializes all IPC and blocks read-only handlers
(vault_item_list, vault_item_get) behind writers (vault_import,
vault_secure_to_vault); change the coordination to avoid global serialization by
replacing the external Mutex with a concurrent-safe container (e.g.,
tokio::sync::RwLock or Arc<VaultService> with internal fine-grained locking) or
by adding an internal RwLock inside VaultService that separates read access from
write/update state; update the app.manage(...) registration to store the new
wrapper (RwLock or Arc) and adjust code that acquires the lock accordingly so
readers use shared/read locks and writers use exclusive/write locks.
src/components/vault/VaultWorkspacePanel.tsx (1)

22-35: 💤 Low value

Consider making active-profile state more semantic for assistive tech.

The chips visually distinguish the active profile but expose no programmatic state. If these are intended to convey "current scope" to screen-reader users, adding aria-current="true" (or a visually hidden label) on the active item would help.

♻️ Suggested addition
                         <span
                             key={item.id}
+                            aria-current={profileId === item.id ? 'true' : undefined}
                             className={cn(
                                 'px-2 py-1 rounded-md text-xs border border-app-border/40',
                                 profileId === item.id
                                     ? 'bg-app-surface text-app-text'
                                     : 'bg-app-surface/40 text-app-muted',
                             )}
                         >
                             {item.label}
                         </span>
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/vault/VaultWorkspacePanel.tsx` around lines 22 - 35, The
profile chips rendered from VAULT_PROFILES in VaultWorkspacePanel.tsx currently
only convey the active item visually; update the mapping where the span is
rendered (the VAULT_PROFILES.map block that checks profileId === item.id) to
expose programmatic state for assistive tech by adding aria-current="true" to
the active item (and either aria-current="false" or no aria-current on
non-active items), and consider adding a clear aria-label or role (e.g.,
role="listitem" on each span and role="list" on the container) so screen readers
can announce the current profile; locate the span that renders {item.label} and
add these attributes accordingly.
.github/workflows/ci.yml (1)

39-45: ⚡ Quick win

cargo test known_answer can silently pass if tests are renamed.

cargo test <pattern> does substring filtering and exits 0 when zero tests match. If the known-answer tests are renamed or moved, this CI step will keep passing without running anything. Consider asserting a minimum count, listing tests, or pinning exact test names.

♻️ Suggested safeguard
       - name: Run vault crypto known-answer tests
         working-directory: src-tauri
         env:
           # Required only because sync commands compile with env!("GOOGLE_CLIENT_ID").
           # The vault crypto known-answer tests do not use Google OAuth.
           GOOGLE_CLIENT_ID: ci-known-answer-test-client-id
-        run: cargo test known_answer
+        shell: bash
+        run: |
+          # Fail if the substring filter matches zero tests.
+          count=$(cargo test known_answer -- --list 2>/dev/null | grep -c ': test$' || true)
+          if [ "$count" -eq 0 ]; then
+            echo "::error::No 'known_answer' tests found"; exit 1
+          fi
+          cargo test known_answer
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In @.github/workflows/ci.yml around lines 39 - 45, The CI step "Run vault crypto
known-answer tests" uses the substring filter command `cargo test known_answer`,
which can exit successfully when no tests match; update that step so it either
(a) runs the test by exact name (use Cargo's exact test matching for the
known_answer test) instead of a substring, or (b) add a pre-check that lists
tests and fails the job if zero tests match the known_answer pattern; target the
CI job/step currently invoking `cargo test known_answer` to implement one of
these safeguards.
tests/vaultNavState.test.mjs (1)

7-15: 💤 Low value

Consider continuing after first failure for better test reporting.

runTest rethrows on first failure, so subsequent tests are not executed and their status is unknown. For a small suite this is acceptable, but accumulating failures (and exiting non-zero at the end) gives more useful CI output.

♻️ Suggested change
+let failureCount = 0;
 function runTest(name, fn) {
   try {
     fn();
     console.log(`PASS ${name}`);
   } catch (error) {
     console.error(`FAIL ${name}`);
-    throw error;
+    console.error(error);
+    failureCount += 1;
   }
 }

Then at the end of the file:

-console.log('Vault nav state tests passed.');
+if (failureCount > 0) {
+  console.error(`${failureCount} vault nav state test(s) failed.`);
+  process.exit(1);
+}
+console.log('Vault nav state tests passed.');
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/vaultNavState.test.mjs` around lines 7 - 15, The test runner function
runTest currently rethrows on the first failure which stops subsequent tests;
modify runTest to record failures instead of throwing (e.g., push failure info
into a shared failures array) and continue running remaining tests, and after
all tests have run (at file end) inspect the failures array to log a summary and
call process.exit(1) if any failures were recorded; reference the runTest
function and the file-level test execution/termination logic when making this
change.
src/components/layout/Sidebar.tsx (1)

570-583: 💤 Low value

Unnecessary fragment around single child in the else-branch.

The <>...</> wrapper around the lone <SidebarSection> in the empty-active-connections branch can be removed for symmetry/cleanliness. Trivial nit; skip if you prefer to keep the structural parity with the populated branch.

♻️ Suggested change
                     ) : (
-                        <>
-                            <SidebarSection
-                                title="All Hosts"
-                                compactMode={compactMode}
-                                onContextMenu={(event) => {
-                                    event.preventDefault();
-                                    setAllHostsContextMenu({ x: event.clientX, y: event.clientY });
-                                }}
-                            >
-                                {allHostsContent}
-                            </SidebarSection>
-                        </>
+                        <SidebarSection
+                            title="All Hosts"
+                            compactMode={compactMode}
+                            onContextMenu={(event) => {
+                                event.preventDefault();
+                                setAllHostsContextMenu({ x: event.clientX, y: event.clientY });
+                            }}
+                        >
+                            {allHostsContent}
+                        </SidebarSection>
                     )}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/layout/Sidebar.tsx` around lines 570 - 583, Remove the
unnecessary React fragment that wraps the single <SidebarSection> in the
else-branch: replace the fragment block with the <SidebarSection> node itself
(the block using SidebarSection with props title="All Hosts", compactMode,
onContextMenu which calls setAllHostsContextMenu, and children allHostsContent)
so the JSX is cleaner and symmetric with the other branch.
src/features/connections/domain/merge.ts (1)

17-36: 💤 Low value

Confirm intent: existing vault ref always wins over incoming plaintext.

When incoming has no authRef but provides password/privateKeyPath, the helper preserves existing.authRef and clears the incoming plaintext credentials. That makes vault refs sticky on import, which is likely the desired behavior, but it also means a user re-importing a config to switch back from vault to plaintext auth would have their new credentials silently dropped.

If that downgrade path should ever be supported (e.g., explicit "unlink vault" import), this helper would need an opt-in flag. Otherwise consider documenting this stickiness in the function JSDoc so future callers understand the precedence rule.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/features/connections/domain/merge.ts` around lines 17 - 36,
preserveVaultCredentialOnUpdate currently treats an existing authRef as
authoritative and silently drops incoming plaintext credentials
(password/privateKeyPath), making vault refs "sticky"; either add an explicit
opt-in boolean parameter (e.g., allowVaultUnlink or preferIncomingPlaintext) to
preserve the current behavior unless callers opt in to let
incoming.password/incoming.privateKeyPath override and clear existing.authRef,
or if you intend stickiness always, add a clear JSDoc to
preserveVaultCredentialOnUpdate documenting that existing.authRef wins and
incoming plaintext will be discarded (referencing parameters existing, incoming,
authRef, password, privateKeyPath).
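
The opt-in flag described above could look like the following sketch. The record shape and the `preferIncomingPlaintext` name are illustrative, not the real API:

```typescript
// Illustrative sketch of the opt-in variant; ConnectionRecord and
// preferIncomingPlaintext are hypothetical names, not the merge.ts API.

interface ConnectionRecord {
  authRef?: string;
  password?: string;
  privateKeyPath?: string;
}

/**
 * By default an existing vault ref wins and incoming plaintext is dropped
 * (the "sticky" behavior). Callers implementing an explicit "unlink vault"
 * import pass preferIncomingPlaintext: true to let new credentials replace it.
 */
function mergeCredentials(
  existing: ConnectionRecord,
  incoming: ConnectionRecord,
  opts: { preferIncomingPlaintext?: boolean } = {},
): ConnectionRecord {
  const incomingHasPlaintext = Boolean(incoming.password || incoming.privateKeyPath);
  if (existing.authRef && !incoming.authRef) {
    if (opts.preferIncomingPlaintext && incomingHasPlaintext) {
      // Explicit downgrade: keep incoming plaintext, drop the vault ref.
      return { password: incoming.password, privateKeyPath: incoming.privateKeyPath };
    }
    // Sticky default: vault ref survives, incoming plaintext is discarded.
    return { authRef: existing.authRef };
  }
  return { ...incoming };
}
```

Keeping the flag defaulted to `false` preserves today's behavior for all existing callers while documenting the precedence rule in the signature itself.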
src-tauri/build.rs (2)

79-82: 💤 Low value

is_valid_google_client_id accepts any non-placeholder string.

A typo'd or partially copied client ID will pass the release-time gate. Tightening to require the <digits>-<chars>.apps.googleusercontent.com shape catches more shipping mistakes for marginal effort.

♻️ Tighter validation
 fn is_valid_google_client_id(value: &str) -> bool {
     let trimmed = value.trim();
-    !trimmed.is_empty() && trimmed != "PLACEHOLDER_CLIENT_ID"
+    !trimmed.is_empty()
+        && trimmed != "PLACEHOLDER_CLIENT_ID"
+        && trimmed.ends_with(".apps.googleusercontent.com")
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/build.rs` around lines 79 - 82, The current
is_valid_google_client_id function only checks for non-empty and non-placeholder
values; replace that loose check with a stricter pattern match: require the
value to match the Google OAuth client ID format like
"<digits>-<chars>.apps.googleusercontent.com" (e.g., digits, a hyphen, an
alphanumeric/underscore/dash segment, then ".apps.googleusercontent.com") using
a regex; update is_valid_google_client_id to trim the input, test the regex, and
return true only on a full match (still rejecting the "PLACEHOLDER_CLIENT_ID"
token).

40-50: 💤 Low value

Expand the comment to explicitly document the installed-app OAuth rationale.

The approach is correct—Google's OAuth 2.0 guidance for installed apps explicitly states that client_secret is "not treated as a secret" and should be embedded in source code. Since this code implements a desktop PKCE flow (the recommended installed-app pattern), the allowlist is appropriate. However, clarifying the comment to mention "per Google's installed-app OAuth guidance" would prevent future maintainers from questioning this decision.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/build.rs` around lines 40 - 50, Update the comment above the
function should_skip_rustc_env to explicitly reference Google's installed-app
OAuth guidance and note that for desktop PKCE flows the client_secret is "not
treated as a secret" and may be embedded, so GOOGLE_CLIENT_SECRET is
intentionally allowlisted; mention this rationale (desktop PKCE / installed-app
guidance) so future maintainers understand why the key is exempt from secret
filtering.
src/features/connections/infrastructure/connectionIpc.ts (1)

80-82: 💤 Low value

IPC channel naming convention inconsistency.

The new disconnectVaultBackedIpc uses underscore snake_case ('ssh_disconnect_vault_backed'), while other ssh-related IPC channels in this file use colon-separated names ('ssh:connect', 'ssh:disconnect', 'ssh:test', etc.). Consider renaming to 'ssh:disconnectVaultBacked' to match the established convention in this file, or document if this divergence is intentional.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/features/connections/infrastructure/connectionIpc.ts` around lines 80 -
82, The IPC channel name in disconnectVaultBackedIpc uses snake_case
('ssh_disconnect_vault_backed') which is inconsistent with other SSH channels
that use colon-separated names (e.g., 'ssh:connect', 'ssh:disconnect',
'ssh:test'); update the invoked channel string in disconnectVaultBackedIpc to
'ssh:disconnectVaultBacked' (or another colon-separated variant consistent with
the file) so it matches the established IPC naming convention used by functions
like the existing ssh connect/disconnect/test invocations.
src/vault/useVaultStore.ts (1)

28-44: ⚡ Quick win

Potential state inconsistency when refreshItems fails.

If refreshItems() throws at line 34, the status has already been set to the unlocked state, but items may be empty or stale. The catch block then resets status to null and clears items, which is correct, but there's a brief window where the store shows unlocked status with no items.

Consider wrapping the items refresh in a try-catch to handle it separately, or ensure the UI can handle the transitional state gracefully.

♻️ Suggested approach
 refresh: async () => {
   set({ isLoading: true, error: null });
   try {
     const status = await vaultIpc.status();
     set({ status });
     if (status.status === 'unlocked') {
-      await get().refreshItems();
+      try {
+        await get().refreshItems();
+      } catch (itemsError) {
+        console.warn('[Vault] Failed to load items after status refresh:', itemsError);
+        set({ items: [] });
+      }
     } else {
       set({ items: [] });
     }
   } catch (e) {
     set({ status: null, items: [], error: extractErrorMessage(e) });
     throw e;
   } finally {
     set({ isLoading: false });
   }
 },
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/vault/useVaultStore.ts` around lines 28 - 44, The refresh function sets
status from vaultIpc.status() then calls get().refreshItems(), which can throw
and leave the store briefly showing unlocked with no items; to fix, wrap the
call to get().refreshItems() in its own try/catch inside refresh so failures
clear items and set error without reverting the already-determined status (or
alternatively rollback status to null on item-refresh failure), e.g. call
vaultIpc.status() and set({ status }), then try { await get().refreshItems(); }
catch (e) { set({ items: [], error: extractErrorMessage(e) }); throw e; }
finally set({ isLoading: false }); ensure unique symbols referenced: refresh,
vaultIpc.status, get().refreshItems, set, extractErrorMessage.
src/components/layout/sidebar/VaultNavSection.tsx (1)

36-40: ⚡ Quick win

Consider surfacing sync status errors to the user.

The refreshGoogleSync callback logs errors to the console but doesn't update any UI state. If Google sync status checks repeatedly fail (network issues, auth expiry), users have no visual indication of the problem.

Consider adding error state to display a warning indicator in the sidebar when sync status cannot be retrieved.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/layout/sidebar/VaultNavSection.tsx` around lines 36 - 40, The
refreshGoogleSync callback currently swallows errors to console only; add a
local error state (e.g., googleSyncError via useState) inside VaultNavSection
and update it in refreshGoogleSync: clear googleSyncError on successful
syncIpc.status(...) / setGoogleSync(result), and set googleSyncError with the
caught error in the catch block. Use this googleSyncError flag where the sidebar
renders the Google sync nav item to show a warning indicator and optionally a
tooltip/message when status cannot be retrieved. Ensure any other callers of
setGoogleSync remain unchanged.
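
A framework-free sketch of the error-surfacing shape suggested above. `syncIpc`, `setGoogleSync`, and `setGoogleSyncError` stand in for the real IPC client and React state setters, so treat the names as assumptions:

```typescript
// Sketch only: syncIpc.status, setGoogleSync, and setGoogleSyncError are
// injected stand-ins for the real IPC client and useState setters.

interface GoogleSyncStatus {
  connected: boolean;
  lastSyncedAt?: string;
}

async function refreshGoogleSync(
  syncIpc: { status: () => Promise<GoogleSyncStatus> },
  setGoogleSync: (s: GoogleSyncStatus | null) => void,
  setGoogleSyncError: (e: string | null) => void,
): Promise<void> {
  try {
    const result = await syncIpc.status();
    setGoogleSync(result);
    setGoogleSyncError(null); // clear any stale warning on success
  } catch (e) {
    // Surface the failure so the sidebar can render a warning badge
    // instead of only logging to the console.
    setGoogleSyncError(e instanceof Error ? e.message : String(e));
  }
}
```

The `googleSyncError` value can then drive a warning icon (and optional tooltip) on the Google sync nav item whenever status retrieval fails.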
src/components/vault/VaultUnlockModal.tsx (1)

67-79: 💤 Low value

Consider simplifying recovery key error messages.

The error composition ${code ?? ''} ${message} at line 77 may produce awkwardly formatted messages like " Failed to unlock" (leading space) or "code_name Error message" (unclear concatenation).

Consider using just the message or a more structured format.

♻️ Cleaner alternative
       } catch (e: unknown) {
         const { code, message } = extractError(e);
-        const raw = `${code ?? ''} ${message}`.trim();
-        setLocalError(raw || 'Failed to unlock with recovery key.');
+        setLocalError(message || 'Failed to unlock with recovery key.');
       }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/vault/VaultUnlockModal.tsx` around lines 67 - 79, The current
error formatting in the recovery branch builds `${code ?? ''} ${message}` which
can produce leading spaces or awkward concatenation; update the catch block for
unlockWithRecoveryKey to compose a cleaner message by extracting code and
message from extractError(e), then create a result by joining only non-empty
parts (e.g., [code, message].filter(Boolean).join(': ') or just message if
present), trimming the final string, and pass that to setLocalError with the
fallback 'Failed to unlock with recovery key.' Keep references to unlockMode ===
'recovery', recoveryKey, extractError, unlockWithRecoveryKey, handleClose, and
setLocalError when making the change.
src-tauri/src/sync/commands.rs (2)

637-676: 💤 Low value

Move mod tests to the end of the file.

#[cfg(test)] mod tests sits between sync_status and sync_connect, splitting the public command surface. Moving it past the last command keeps the production code contiguous.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/src/sync/commands.rs` around lines 637 - 676, The test module
#[cfg(test)] mod tests is currently inserted between the public command
functions (e.g., sync_status and sync_connect) and should be relocated to the
end of the file so production command definitions remain contiguous; move the
entire tests module block (including both tests
tokens_for_disk_strips_access_and_refresh_tokens and
token_scope_check_requires_exact_drive_scope) after the last public command in
this file so it no longer splits the public API surface.

12-13: ⚡ Quick win

Make GOOGLE_CLIENT_ID use option_env!() with a default empty string to match GOOGLE_CLIENT_SECRET.

env!("GOOGLE_CLIENT_ID") requires the environment variable at compile time, preventing contributors without OAuth credentials from building the crate. The CI workflow itself confirms this dependency (ci.yml line 44), requiring GOOGLE_CLIENT_ID to be set even though the runtime guard at line 508 is already prepared to handle an empty value. Since GOOGLE_CLIENT_SECRET already uses option_env!() with a runtime check, GOOGLE_CLIENT_ID should follow the same pattern for consistency and better developer experience.

♻️ Suggested change
-const GOOGLE_CLIENT_ID: &str = env!("GOOGLE_CLIENT_ID");
+const GOOGLE_CLIENT_ID: &str = match option_env!("GOOGLE_CLIENT_ID") {
+    Some(v) => v,
+    None => "",
+};
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/src/sync/commands.rs` around lines 12 - 13, Change the
GOOGLE_CLIENT_ID declaration to use option_env! with a fallback empty string (so
builds do not require the env var at compile time) to match
GOOGLE_CLIENT_SECRET; update the const declaration for GOOGLE_CLIENT_ID (the
symbol GOOGLE_CLIENT_ID in src-tauri/src/sync/commands.rs) to read from
option_env!("GOOGLE_CLIENT_ID") and return "" when None (e.g., replace
env!("GOOGLE_CLIENT_ID") with an option_env!-based expression that yields a &str
default ""), leaving the runtime guard that checks the value (the existing logic
that uses GOOGLE_CLIENT_ID alongside GOOGLE_CLIENT_SECRET) unchanged.
src-tauri/src/vault/secure_to_vault.rs (1)

181-197: ⚡ Quick win

Dedup key includes auto-generated label, so renamed connections won't deduplicate.

existing_by_fingerprint is keyed by (kind, label, fingerprint). The label is auto-generated from conn.name/conn.username/conn.host, so if a user runs secure-to-vault, renames the connection, and re-runs (or runs after a partial failure), the secret with identical fingerprint but a new label will create a duplicate vault item instead of relinking to the existing one. Since the (kind, fingerprint) pair is what actually identifies the secret content, consider dropping label from the lookup key (and updating the matched record's label if you want to keep names in sync).

♻️ Suggested narrowing
-    let mut existing_by_fingerprint: HashMap<(String, String, String), (String, String, u64)> =
-        HashMap::new();
+    let mut existing_by_fingerprint: HashMap<(String, String), (String, String, u64)> =
+        HashMap::new();
     for record in existing_records {
         let fingerprint = vault.secret_fingerprint(&record.secret)?;
-        let key = (record.kind.clone(), record.label.clone(), fingerprint);
+        let key = (record.kind.clone(), fingerprint);
         ...
     }
     ...
-        let lookup_key = (kind.to_string(), secure_item.label.clone(), fingerprint);
+        let lookup_key = (kind.to_string(), fingerprint);
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/src/vault/secure_to_vault.rs` around lines 181 - 197, The
deduplication currently uses existing_by_fingerprint keyed by (record.kind,
record.label, fingerprint) which prevents matching secrets when only the
auto-generated label changed; change the key to (record.kind, fingerprint) so
duplicates are detected by content, i.e., build existing_by_fingerprint:
HashMap<(String,String), (String, String, u64)> keyed by (record.kind.clone(),
fingerprint) and adjust the entry logic in the loop that references
existing_by_fingerprint, fingerprint, VaultService::record_logical_id and
record.created_at; optionally, when you match an existing record whose label
differs, update that record's label (or record the new label) so names stay in
sync.
src/components/settings/tabs/VaultTab.tsx (1)

122-149: ⚡ Quick win

Narrow the effect dependency on status to avoid redundant IPC calls.

This effect depends on the entire status object. Any time the store produces a new status reference with the same 'unlocked' state (e.g., after refresh()/refreshItems() updates itemCount), the whole block re-runs — refetching items, the secure preview, and recovery-key status on every cycle. Consider keying on the discriminator and identity instead.

♻️ Narrower dependency

```diff
-  }, [loadConnections, loadSecurePreview, refreshItems, showToast, status]);
+  }, [loadConnections, loadSecurePreview, refreshItems, showToast, status?.status, status?.status === 'unlocked' ? status.vaultId : undefined]);
```

Or memoize a derived key (unlocked:${vaultId}) and depend on that.
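The memoized-key variant can be sketched as below (the `VaultStatus` shape is assumed from the review, not taken from the actual types):

```typescript
// Discriminated status union as assumed from the review comment.
type VaultStatus =
  | { status: "locked"; vaultId: string }
  | { status: "unlocked"; vaultId: string; itemCount: number }
  | null;

// Derive a stable key that only changes when the unlocked identity changes.
function unlockedKey(status: VaultStatus): string | null {
  return status?.status === "unlocked" ? `unlocked:${status.vaultId}` : null;
}

// Two different status references with the same unlocked identity yield the
// same key, so an effect depending on the key will not re-run when only
// itemCount (or the object reference) changes.
const a = unlockedKey({ status: "unlocked", vaultId: "v1", itemCount: 3 });
const b = unlockedKey({ status: "unlocked", vaultId: "v1", itemCount: 9 });
const c = unlockedKey({ status: "locked", vaultId: "v1" });
```

In the component this would live behind `useMemo`, with the effect depending on the derived key instead of the whole `status` object.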

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/settings/tabs/VaultTab.tsx` around lines 122 - 149, The effect
is re-running whenever the whole status object reference changes; narrow its
dependency to only the discriminant and identity so it only runs when unlocking
or vaultId changes. Replace the dependency on status with either explicit
properties (status?.status and status?.vaultId) or create a memo like
unlockedKey = status?.status === 'unlocked' ? `unlocked:${status.vaultId}` :
null and use unlockedKey in the useEffect deps; keep the existing logic
(refreshItems, loadSecurePreview, vaultIpc.hasRecoveryKey,
vaultIpc.backfillConnectionRefs and backfilledVaultIdsRef) unchanged so backfill
and recovery checks only fire when the unlocked state or vaultId actually
change.
src/components/modals/useAutoVault.ts (1)

60-68: ⚡ Quick win

deleteOldAuthItem is fire-and-forget and races the new-item create.

deleteOldAuthItem returns immediately and the surrounding flow (savePastedKey, autoVaultPassword, autoVaultKeyFile) proceeds to create the new vault item and return a new authRef while the delete may still be in-flight or may fail silently. If the delete is rejected, the old vault item is orphaned (the toast asks the user to remove it manually). It would be safer to await the delete and surface failure as a non-fatal but visible warning rather than relying on a fire-and-forget .catch.

Also worth noting: the old authRef.itemId is read at delete time from the current app store snapshot. If the user reopened the modal quickly or the connection was edited concurrently, the wrong old item could be targeted. Consider capturing the prior authRef.itemId once at hook open and using it explicitly.
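One way to structure the awaited, captured-id variant is sketched below; all names (`replaceAuthItem`, the stubbed IPC callbacks) are hypothetical, not the hook's real API:

```typescript
type DeleteFn = (itemId: string) => Promise<void>;

// Capture the prior item id once (at hook/modal open), then await the delete
// before creating the replacement; a failed delete is surfaced via `warn`
// but does not abort creation.
async function replaceAuthItem(
  initialAuthItemId: string | null,
  deleteItem: DeleteFn,
  createItem: () => Promise<string>,
  warn: (msg: string) => void,
): Promise<string> {
  if (initialAuthItemId) {
    try {
      await deleteItem(initialAuthItemId);
    } catch {
      warn(`Could not remove old vault item ${initialAuthItemId}; remove it manually.`);
    }
  }
  return createItem();
}

// Stubbed usage: the delete rejects, the warning fires, creation still proceeds.
const warnings: string[] = [];
const newId = await replaceAuthItem(
  "item-old",
  async () => { throw new Error("ipc failure"); },
  async () => "item-new",
  (m) => warnings.push(m),
);
```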

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/modals/useAutoVault.ts` around lines 60 - 68,
deleteOldAuthItem currently fires-and-forgets vaultIpc.itemDelete and reads the
authRef.itemId from the live store, causing race and wrong-item risks; make it
async, capture the existing authRef.itemId once when the hook/modal opens (e.g.,
store as initialAuthItemId), then inside deleteOldAuthItem use that captured id
and await vaultIpc.itemDelete(...) and propagate failures (return false or
throw) instead of swallowing them; update callers (savePastedKey,
autoVaultPassword, autoVaultKeyFile) to await deleteOldAuthItem before creating
the new vault item and handle a non-fatal failure by calling
showToast('warning', ...) so the UI surfaces the failure while still proceeding
if desired.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@CHANGELOG.md`:
- Around line 8-20: The changelog contains unresolved commit reference tokens
([6e8dd42], [d9d3663], [8cdb20d], [1d865ed], [bdbd81b], [bf01457])—add
corresponding footer link definitions at the end of CHANGELOG.md, each mapping
the short SHA token to the full commit URL or release comparison URL (e.g.,
"[6e8dd42]: https://github.com/ORG/REPO/commit/6e8dd42"), so those bracketed
references in the entries resolve correctly; ensure each token exactly matches
the references used in the top section.

In `@docs/VAULT_AND_SYNC_ARCHITECTURE.md`:
- Around line 178-196: Update the Markdown heading levels for the subsections
under "8) Sync Strategy" so they are nested correctly: change the headings "8.1
Local-first semantics", "8.2 Per-profile state machine", "8.3 Conflict policy
(phase 1)", and "8.4 Retry policy" from level-2 (##) to level-3 (###) to match
the parent "## 8) Sync Strategy" and the rest of the document (e.g., sections
like "17.1" that use ###); ensure the text and any backticked labels (like `idle
-> syncing -> success|conflict|retrying|error`) remain unchanged.

In `@src-tauri/build.rs`:
- Around line 2-3: The build script currently only calls
println!("cargo:rerun-if-changed=.env") but reads env vars like GOOGLE_CLIENT_ID
and PROFILE (and any future allowlisted keys), so add matching cargo rerun
directives for those env-derived inputs: emit
println!("cargo:rerun-if-env-changed=GOOGLE_CLIENT_ID") and
println!("cargo:rerun-if-env-changed=PROFILE") (and any other forwarded keys) in
build.rs next to the existing rerun-if-changed line so Cargo will rebuild when
those environment variables change; update the logic that constructs
file_google_client_id / PROFILE handling to ensure each forwarded env var has a
corresponding rerun-if-env-changed call.
- Around line 26-36: The current check uses std::env::var("PROFILE") ==
"release" which misses custom Cargo profiles; update the guard so it validates
GOOGLE_CLIENT_ID for any non-debug build (e.g., treat PROFILE != "debug" as
requiring a real client ID) or switch to checking a build-time indicator like
DEBUG or OPT_LEVEL instead; locate the block using PROFILE,
file_google_client_id, env_google_client_id, is_valid_google_client_id and
replace the conditional so has_valid_client_id is enforced for all
non-debug/optimized builds rather than only when PROFILE == "release".

In `@src-tauri/src/commands.rs`:
- Around line 1107-1127: The current code removes all connection IDs
unconditionally even if teardown failed; change the loop to perform cleanup
per-ID and only remove those that succeeded: for each id in ids, call
state.pty_manager.close_by_connection(id).await and then stop tunnels for that
specific id (either call a per-connection stop function like stop_tunnel or
refactor stop_tunnels_for_connections to accept a single id and return per-id
Result), collect errors per id when either call fails, and only call
connections.remove(id) when both cleanup calls returned Ok; aggregate and return
errors as before. Ensure you reference the existing symbols close_by_connection,
stop_tunnels_for_connections (or stop_tunnel if you add/use it), and
connections.remove in the change.

In `@src-tauri/src/sync/commands.rs`:
- Around line 716-749: The tmp file path vault.redb.sync-tmp used by sync_upload
(and vault.redb.download-tmp used by sync_download) is shared across concurrent
invocations causing race/overwrite; instead generate a unique per-invocation
temp filename (e.g., include a UUID or timestamp or use a tempfile API) for
tmp_path before calling export_vault, and update calls to
tokio::fs::read/remove_file to use that unique path; also apply the same fix to
sync_download and consider moving the file read inside the scope where
export_vault runs or guarding upload/download with a process-wide mutex if you
need snapshot semantics tied to the vault lock.

In `@src-tauri/src/vault/commands.rs`:
- Around line 371-375: save_saved_connections currently writes connections.json
in-place which can truncate/corrupt the file if the process exits mid-write;
change save_saved_connections to write atomically by writing the pretty JSON to
a temporary file in the same directory (e.g., append a .tmp or .partial suffix),
fs::write that temp path, then atomically replace the target using
std::fs::rename(temp_path, path) and map any IO errors to
VaultError::InvalidData; update error messages to reference the temp-write and
rename steps and keep the function signature and serde serialization
(serde_json::to_string_pretty, save_saved_connections) unchanged.
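The temp-file-then-rename pattern described above is language-agnostic; a Node/TypeScript sketch for illustration (paths and value shape are made up):

```typescript
import { mkdtempSync, writeFileSync, renameSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Write to a sibling temp file, then replace the target via rename. A crash
// mid-write leaves the old file intact; rename within one directory is atomic
// on POSIX filesystems (Windows semantics differ for existing targets).
function writeJsonAtomic(path: string, value: unknown): void {
  const tmpPath = `${path}.tmp`;
  writeFileSync(tmpPath, JSON.stringify(value, null, 2));
  renameSync(tmpPath, path);
}

const dir = mkdtempSync(join(tmpdir(), "vault-"));
const target = join(dir, "connections.json");
writeJsonAtomic(target, [{ id: "c1", host: "example.com" }]);
const roundTrip = JSON.parse(readFileSync(target, "utf8"));
```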

In `@src-tauri/src/vault/secure_to_vault.rs`:
- Around line 137-141: Currently the backend always stores the vault secret as
JSON with {"key": key_content, "passphrase": conn.password.as_ref()}, producing
null passphrase entries; change the logic in secure_to_vault.rs where `secret`
is built so that when `conn.password` is None/empty you set `secret` to the raw
`key_content` string, and only serialize the JSON object when a passphrase
exists (use `conn.password` presence check). This will align with the renderer's
convention (raw when no passphrase, JSON when passphrase) and still remain
compatible with `parse_key_secret`.

In `@src-tauri/src/vault/store.rs`:
- Around line 333-334: The insert into LOGICAL_IDS (via the local variable
logical_ids and call to logical_ids.insert using
Self::record_logical_id(&record)) can silently overwrite an existing logical-id
mapping and break item_get_by_logical_id()/connection relinking; change this to
first check for an existing mapping by querying logical_ids.get(key) where key =
Self::record_logical_id(&record). If a mapping exists and its value differs from
id.as_str(), return an error (reject duplicates) or, if you prefer upsert
semantics, explicitly perform an update/replace against that existing entry
(ensuring the previous record is cleaned up or its mapping removed) rather than
blindly inserting. Ensure write_txn and caller semantics handle the error path
or the explicit upsert consistently with item_get_by_logical_id().
- Around line 562-570: The has_recovery_key method currently returns
NotInitialized when self.db is None; update has_recovery_key to attempt to
initialize/open the DB when self.db is None by calling the existing try_open (or
the appropriate initialization method on the same struct) before proceeding,
propagate any errors from try_open as VaultError, then continue with the
existing read transaction logic (open_table(KEY_SLOTS) and check SLOT_RECOVERY)
so vault_has_recovery_key works when called standalone on a fresh process start;
reference the has_recovery_key method, self.db field, and try_open
initialization method in your change.
- Around line 813-828: The current validate_vault_database only checks for table
existence but must also reject imports where mandatory rows are missing or
empty; update validate_vault_database to open VAULT_META and KEY_SLOTS tables
and verify that VAULT_META contains a valid vault_id and meta row
(non-empty/parsable values) and that KEY_SLOTS contains at least the expected
passphrase slot(s) with non-empty contents before returning Ok; if any required
row is missing or malformed return a VaultError::InvalidData with a clear
message so import_vault() won’t replace the current vault and status() won’t
report Uninitialized.

In `@src-tauri/src/vault/types.rs`:
- Around line 22-28: VaultStatus serializes variant names to camelCase but not
the fields (vault_id, item_count), causing the frontend to receive undefined
vaultId/itemCount; update the enum attribute on VaultStatus to apply field
renaming (e.g., add serde(rename_all_fields = "camelCase") or equivalent) so the
struct-like variants Locked and Unlocked produce vaultId and itemCount, then
ensure variants Locked { vault_id: String } and Unlocked { vault_id: String,
item_count: u64 } remain unchanged so serde performs the field rename at
serialization.

In `@src/components/modals/AddConnectionModal.tsx`:
- Around line 115-118: The performSave function uses component state isSaving to
guard re-entrance but that leaves a race window; replace or augment the guard
with a synchronous ref (e.g., savingRef) checked and set synchronously at the
top of performSave to prevent parallel invocations: check savingRef.current and
return early if true, then immediately set savingRef.current = true before any
awaits, and ensure you clear it (and call setIsSaving(false)) in finally blocks;
update any button handlers that currently rely only on isSaving to also read
savingRef if needed and keep setIsSaving for UI state.

In `@src/components/modals/useAutoVault.ts`:
- Around line 185-189: The constants PRIVATE_KEY_BEGIN_PATTERN,
PRIVATE_KEY_END_PATTERN and the helper isValidPrivateKeyFormat are declared at
module scope but indented as if inside useAutoVault, which is misleading; fix
this by moving these three helpers above the useAutoVault declaration (or if you
prefer, actually place them inside useAutoVault above their first use) and
adjust their indentation so they clearly live at module scope (or within the
hook) to match their intended scope; reference the symbols
PRIVATE_KEY_BEGIN_PATTERN, PRIVATE_KEY_END_PATTERN, isValidPrivateKeyFormat and
the hook useAutoVault when making the change.

In `@src/components/settings/tabs/vault/VaultStatusCard.tsx`:
- Around line 26-51: Update Tailwind v3 bracketed CSS variable syntax to
Tailwind v4 parentheses shorthand in this component: replace occurrences like
'border-[var(--color-app-border)]/60', 'bg-[var(--color-app-surface)]/25',
'bg-[var(--color-app-surface)]', and 'text-[var(--color-app-muted)]' inside the
VaultStatusCard JSX (look for the className on the outer div and the nested
elements using isUnlocked, unlockedStatus, and the Shield icon) with the v4 form
using parentheses, e.g. change [var(--...)] to (--...), preserving any opacity
suffixes like /25 or /60 and keeping the rest of the class strings intact;
perform the same substitution for all className occurrences in this file (and
consider running the same find-and-replace across the repo for other files).

In `@src/components/ui/Modal.tsx`:
- Around line 113-120: The close button in Modal.tsx (the Button with props
variant="ghost" size="icon", onClick={onClose} and child <X />) is icon-only and
needs an accessible name; add an aria-label (e.g., aria-label="Close" or
aria-label={t('close')}) directly to that Button element so screen readers can
announce its purpose while preserving existing props and event handler.

In `@src/features/connections/domain/merge.ts`:
- Around line 59-69: The merge logic in merge.ts is allowing UI-only metadata
from imported connections to overwrite local fields because the spread
{...match, ...secureIncoming} merges those attributes; update the import flow to
strip UI-only fields (isFavorite, pinnedFeatures, icon, theme, folder, tags,
createdAt, lastConnected, homePath, etc.) before merging by either: (a) ensuring
toPersistedConnections excludes these fields when producing `imported`, or (b)
adding a sanitizer in preserveVaultCredentialOnUpdate (or a new helper called by
merge) that takes `incoming` and returns a copy without UI-only metadata so
mergedImported receives only allowed persisted fields; keep the existing
credential-preservation behavior and ensure status/id remain from `match`.

In `@src/vault/syncIpc.ts`:
- Around line 24-40: fallbackStatus can return stale lastKnownStatusByProvider
and status() never refreshes that cache; update the cache on successful calls
and avoid letting older cached entries override newer results: in status(),
after invoke<SyncProviderStatus>('sync_status', ...) store the returned status
into lastKnownStatusByProvider[provider] before returning; likewise ensure the
code paths that perform sync actions (e.g., handlers for 'sync_connect' and
'sync_disconnect') update lastKnownStatusByProvider on success; change
fallbackStatus(provider, fallback, error) to only return lastKnown if it is
strictly newer than fallback (compare a timestamp/version field like updatedAt
on SyncProviderStatus) otherwise return the fallback with the error — if no
timestamp/version exists, prefer the fallback over lastKnown.

In `@tests/sessionPersistence.test.mjs`:
- Around line 109-130: The test uses a hardcoded default vaultProfileId
('google') in the helper makeVaultTab() and assertions; update the test to
import and use the application's DEFAULT_VAULT_PROFILE_ID constant (from
src/vault/profileTypes) when calling makeVaultTab() without an explicit profile
and when asserting the default value for buildSessionData; keep explicit
custom-profile cases unchanged (makeVaultTab('vault-tab', 'custom-profile') and
its assertion).

---

Outside diff comments:
In `@src/features/connections/domain/validation.ts`:
- Around line 61-104: validateConnectionDraft currently lacks vault-mode checks,
so when authMode === 'vault' users get no immediate feedback; update the
ConnectionDraft type to include authRef and add a vault branch in
validateConnectionDraft that checks normalizeText(draft.authRef) (or presence of
draft.authRef) and, on failure, push an error message and set
fieldErrors.authRef similar to the existing password/key branches; keep behavior
consistent with buildConnectionSavePayload by using the same validation message
text and ensure the returned object still includes
normalizedPort/normalizedFolder as before.
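The vault branch suggested for validateConnectionDraft would mirror the existing password/key branches; a standalone sketch (types and messages are hypothetical, not the project's real ones):

```typescript
interface ConnectionDraft {
  authMode: "password" | "key" | "vault";
  password?: string;
  privateKeyPath?: string;
  authRef?: string;
}

interface ValidationResult {
  errors: string[];
  fieldErrors: Record<string, string>;
}

const normalizeText = (v: string | undefined): string => (v ?? "").trim();

// Minimal validator showing only the vault branch the review asks for.
function validateAuth(draft: ConnectionDraft): ValidationResult {
  const errors: string[] = [];
  const fieldErrors: Record<string, string> = {};
  if (draft.authMode === "vault" && !normalizeText(draft.authRef)) {
    errors.push("Select a vault credential for this connection.");
    fieldErrors.authRef = "Vault credential is required.";
  }
  return { errors, fieldErrors };
}

const bad = validateAuth({ authMode: "vault" });
const ok = validateAuth({ authMode: "vault", authRef: "item-123" });
```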

In `@src/store/connectionSlice.ts`:
- Around line 689-696: Restored vault tabs are getting vaultProfileId left as
undefined because the session-restore mapping only uses
isVaultProfileId(s.vaultProfileId) without falling back; update the restore
logic in connectionSlice (where the tab object is built) to mirror openVaultTab
behavior by defaulting invalid/undefined vaultProfileId to
DEFAULT_VAULT_PROFILE_ID (use isVaultProfileId to check validity), so restored
tabs receive the same DEFAULT_VAULT_PROFILE_ID as newly opened vault tabs.

---

Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 39-45: The CI step "Run vault crypto known-answer tests" uses the
substring filter command `cargo test known_answer`, which can exit successfully
when no tests match; update that step so it either (a) runs the test by exact
name (use Cargo's exact test matching for the known_answer test) instead of a
substring, or (b) add a pre-check that lists tests and fails the job if zero
tests match the known_answer pattern; target the CI job/step currently invoking
`cargo test known_answer` to implement one of these safeguards.

In `@src-tauri/build.rs`:
- Around line 79-82: The current is_valid_google_client_id function only checks
for non-empty and non-placeholder values; replace that loose check with a
stricter pattern match: require the value to match the Google OAuth client ID
format like "<digits>-<chars>.apps.googleusercontent.com" (e.g., digits, a
hyphen, an alphanumeric/underscore/dash segment, then
".apps.googleusercontent.com") using a regex; update is_valid_google_client_id
to trim the input, test the regex, and return true only on a full match (still
rejecting the "PLACEHOLDER_CLIENT_ID" token).
- Around line 40-50: Update the comment above the function should_skip_rustc_env
to explicitly reference Google's installed-app OAuth guidance and note that for
desktop PKCE flows the client_secret is "not treated as a secret" and may be
embedded, so GOOGLE_CLIENT_SECRET is intentionally allowlisted; mention this
rationale (desktop PKCE / installed-app guidance) so future maintainers
understand why the key is exempt from secret filtering.

In `@src-tauri/src/lib.rs`:
- Around line 43-45: The VaultService is currently wrapped in a single
tokio::sync::Mutex (see app.manage(... VaultService::new(...))) which serializes
all IPC and blocks read-only handlers (vault_item_list, vault_item_get) behind
writers (vault_import, vault_secure_to_vault); change the coordination to avoid
global serialization by replacing the external Mutex with a concurrent-safe
container (e.g., tokio::sync::RwLock or Arc<VaultService> with internal
fine-grained locking) or by adding an internal RwLock inside VaultService that
separates read access from write/update state; update the app.manage(...)
registration to store the new wrapper (RwLock or Arc) and adjust code that
acquires the lock accordingly so readers use shared/read locks and writers use
exclusive/write locks.

In `@src-tauri/src/sync/commands.rs`:
- Around line 637-676: The test module #[cfg(test)] mod tests is currently
inserted between the public command functions (e.g., sync_status and
sync_connect) and should be relocated to the end of the file so production
command definitions remain contiguous; move the entire tests module block
(including both tests tokens_for_disk_strips_access_and_refresh_tokens and
token_scope_check_requires_exact_drive_scope) after the last public command in
this file so it no longer splits the public API surface.
- Around line 12-13: Change the GOOGLE_CLIENT_ID declaration to use option_env!
with a fallback empty string (so builds do not require the env var at compile
time) to match GOOGLE_CLIENT_SECRET; update the const declaration for
GOOGLE_CLIENT_ID (the symbol GOOGLE_CLIENT_ID in src-tauri/src/sync/commands.rs)
to read from option_env!("GOOGLE_CLIENT_ID") and return "" when None (e.g.,
replace env!("GOOGLE_CLIENT_ID") with an option_env!-based expression that
yields a &str default ""), leaving the runtime guard that checks the value (the
existing logic that uses GOOGLE_CLIENT_ID alongside GOOGLE_CLIENT_SECRET)
unchanged.

In `@src-tauri/src/vault/secure_to_vault.rs`:
- Around line 181-197: The deduplication currently uses existing_by_fingerprint
keyed by (record.kind, record.label, fingerprint) which prevents matching
secrets when only the auto-generated label changed; change the key to
(record.kind, fingerprint) so duplicates are detected by content, i.e., build
existing_by_fingerprint: HashMap<(String,String), (String, String, u64)> keyed
by (record.kind.clone(), fingerprint) and adjust the entry logic in the loop
that references existing_by_fingerprint, fingerprint,
VaultService::record_logical_id and record.created_at; optionally, when you
match an existing record whose label differs, update that record's label (or
record the new label) so names stay in sync.

In `@src/components/layout/Sidebar.tsx`:
- Around line 570-583: Remove the unnecessary React fragment that wraps the
single <SidebarSection> in the else-branch: replace the fragment block with the
<SidebarSection> node itself (the block using SidebarSection with props
title="All Hosts", compactMode, onContextMenu which calls
setAllHostsContextMenu, and children allHostsContent) so the JSX is cleaner and
symmetric with the other branch.

In `@src/components/layout/sidebar/VaultNavSection.tsx`:
- Around line 36-40: The refreshGoogleSync callback currently swallows errors to
console only; add a local error state (e.g., googleSyncError via useState)
inside VaultNavSection and update it in refreshGoogleSync: clear googleSyncError
on successful syncIpc.status(...) / setGoogleSync(result), and set
googleSyncError with the caught error in the catch block. Use this
googleSyncError flag where the sidebar renders the Google sync nav item to show
a warning indicator and optionally a tooltip/message when status cannot be
retrieved. Ensure any other callers of setGoogleSync remain unchanged.

In `@src/components/modals/useAutoVault.ts`:
- Around line 60-68: deleteOldAuthItem currently fires-and-forgets
vaultIpc.itemDelete and reads the authRef.itemId from the live store, causing
race and wrong-item risks; make it async, capture the existing authRef.itemId
once when the hook/modal opens (e.g., store as initialAuthItemId), then inside
deleteOldAuthItem use that captured id and await vaultIpc.itemDelete(...) and
propagate failures (return false or throw) instead of swallowing them; update
callers (savePastedKey, autoVaultPassword, autoVaultKeyFile) to await
deleteOldAuthItem before creating the new vault item and handle a non-fatal
failure by calling showToast('warning', ...) so the UI surfaces the failure
while still proceeding if desired.

In `@src/components/settings/tabs/VaultTab.tsx`:
- Around line 122-149: The effect is re-running whenever the whole status object
reference changes; narrow its dependency to only the discriminant and identity
so it only runs when unlocking or vaultId changes. Replace the dependency on
status with either explicit properties (status?.status and status?.vaultId) or
create a memo like unlockedKey = status?.status === 'unlocked' ?
`unlocked:${status.vaultId}` : null and use unlockedKey in the useEffect deps;
keep the existing logic (refreshItems, loadSecurePreview,
vaultIpc.hasRecoveryKey, vaultIpc.backfillConnectionRefs and
backfilledVaultIdsRef) unchanged so backfill and recovery checks only fire when
the unlocked state or vaultId actually change.

In `@src/components/vault/RecoveryKeyModal.tsx`:
- Around line 47-67: The URL.revokeObjectURL call in handleDownload is executed
immediately after a.click(), which can race with download start; update
handleDownload to defer revocation (e.g., wrap URL.revokeObjectURL(url) in a
short setTimeout like 100–500ms) or revoke in the anchor's onclick/onblur
handler to ensure the browser has time to start the download while keeping the
rest of the logic (building content, creating Blob, creating anchor, setting
href/download, calling a.click()) unchanged; refer to handleDownload and
recoveryKey to locate the code to change.

In `@src/components/vault/VaultUnlockModal.tsx`:
- Around line 67-79: The current error formatting in the recovery branch builds
`${code ?? ''} ${message}` which can produce leading spaces or awkward
concatenation; update the catch block for unlockWithRecoveryKey to compose a
cleaner message by extracting code and message from extractError(e), then create
a result by joining only non-empty parts (e.g., [code,
message].filter(Boolean).join(': ') or just message if present), trimming the
final string, and pass that to setLocalError with the fallback 'Failed to unlock
with recovery key.' Keep references to unlockMode === 'recovery', recoveryKey,
extractError, unlockWithRecoveryKey, handleClose, and setLocalError when making
the change.

In `@src/components/vault/VaultWorkspacePanel.tsx`:
- Around line 22-35: The profile chips rendered from VAULT_PROFILES in
VaultWorkspacePanel.tsx currently only convey the active item visually; update
the mapping where the span is rendered (the VAULT_PROFILES.map block that checks
profileId === item.id) to expose programmatic state for assistive tech by adding
aria-current="true" to the active item (and either aria-current="false" or no
aria-current on non-active items), and consider adding a clear aria-label or
role (e.g., role="listitem" on each span and role="list" on the container) so
screen readers can announce the current profile; locate the span that renders
{item.label} and add these attributes accordingly.

In `@src/features/connections/domain/merge.ts`:
- Around line 17-36: preserveVaultCredentialOnUpdate currently treats an
existing authRef as authoritative and silently drops incoming plaintext
credentials (password/privateKeyPath), making vault refs "sticky"; either add an
explicit opt-in boolean parameter (e.g., allowVaultUnlink or
preferIncomingPlaintext) to preserve the current behavior unless callers opt in
to let incoming.password/incoming.privateKeyPath override and clear
existing.authRef, or if you intend stickiness always, add a clear JSDoc to
preserveVaultCredentialOnUpdate documenting that existing.authRef wins and
incoming plaintext will be discarded (referencing parameters existing, incoming,
authRef, password, privateKeyPath).

In `@src/features/connections/infrastructure/connectionIpc.ts`:
- Around line 80-82: The IPC channel name in disconnectVaultBackedIpc uses
snake_case ('ssh_disconnect_vault_backed') which is inconsistent with other SSH
channels that use colon-separated names (e.g., 'ssh:connect', 'ssh:disconnect',
'ssh:test'); update the invoked channel string in disconnectVaultBackedIpc to
'ssh:disconnectVaultBacked' (or another colon-separated variant consistent with
the file) so it matches the established IPC naming convention used by functions
like the existing ssh connect/disconnect/test invocations.

In `@src/vault/useVaultStore.ts`:
- Around line 28-44: The refresh function sets status from vaultIpc.status()
then calls get().refreshItems(), which can throw and leave the store briefly
showing unlocked with no items; to fix, wrap the call to get().refreshItems() in
its own try/catch inside refresh so failures clear items and set error without
reverting the already-determined status (or alternatively rollback status to
null on item-refresh failure), e.g. call vaultIpc.status() and set({ status }),
then try { await get().refreshItems(); } catch (e) { set({ items: [], error:
extractErrorMessage(e) }); throw e; } finally set({ isLoading: false }); ensure
unique symbols referenced: refresh, vaultIpc.status, get().refreshItems, set,
extractErrorMessage.

In `@tests/vaultNavState.test.mjs`:
- Around line 7-15: The test runner function runTest currently rethrows on the
first failure which stops subsequent tests; modify runTest to record failures
instead of throwing (e.g., push failure info into a shared failures array) and
continue running remaining tests, and after all tests have run (at file end)
inspect the failures array to log a summary and call process.exit(1) if any
failures were recorded; reference the runTest function and the file-level test
execution/termination logic when making this change.
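A failure-collecting runner in the style the last item suggests could look like this (sketch only; the test bodies are placeholders):

```typescript
type TestFn = () => void | Promise<void>;

const failures: { name: string; error: unknown }[] = [];

// Record failures instead of rethrowing so later tests still run.
async function runTest(name: string, fn: TestFn): Promise<void> {
  try {
    await fn();
    console.log(`ok - ${name}`);
  } catch (error) {
    failures.push({ name, error });
    console.log(`fail - ${name}`);
  }
}

await runTest("passes", () => {});
await runTest("fails", () => { throw new Error("boom"); });
await runTest("still runs after a failure", () => {});

// At file end: summarize and exit non-zero if anything failed.
if (failures.length > 0) {
  console.error(`${failures.length} test(s) failed`);
  // process.exit(1); // the real runner would exit here
}
```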
Comment thread CHANGELOG.md
Comment on lines +8 to +20
- **Global Vault Workspace**: Introduced Vault as a first-class sidebar/workspace surface with profile-based navigation for Local Vault and Google Vault Sync. Added dedicated vault workspace UI, unlock/recovery flows, and vault-focused panels. ([6e8dd42], [d9d3663])
- **Vault Core Backend Foundation**: Added backend vault modules for crypto, schema, storage, lifecycle commands, and migration scaffolding to support encrypted credential storage and recovery workflows. ([6e8dd42])
- **Google Drive Vault Sync Module**: Added sync command module and IPC surface for connect/disconnect/backup/restore flows dedicated to vault data sync. ([8cdb20d])
- **Vault Credential Identity Model**: Added durable `credentialId`-based vault credential identity, key-first credential creation/assignment flows, self-healing auth-ref relinking/backfill, and dedicated docs for future rotation/history work. ([1d865ed], [bdbd81b], [bf01457])

### Changed
- **Settings Information Architecture**: Removed Vault from Settings navigation; vault management now lives in the dedicated Vault workspace flow. Updated related UX copy from “Settings → Vault” to “Vault tab/workspace” guidance. ([d9d3663])
- **Sync Status UX Resilience**: Sync IPC status events now use canonical post-operation status refresh and last-known-state fallback behavior to avoid false disconnected states when status refresh fails. ([8cdb20d])
- **Vault Credential Lifecycle UX**: Refined vault flows around secure-to-vault naming, modular vault panels/modals, bulk host assignment/unassignment, rotation prompts, repair actions, and reset tooling for clean vault test states. ([1d865ed], [bdbd81b], [bf01457])

### Fixed
- **Command Palette Vault Icon Semantics**: Corrected icon mapping so Local Vault and Google Vault Sync entries use appropriate visual semantics. ([d9d3663])
- **Pasted Key Vaulting Consistency**: Converted pasted private-key flow to controlled state handling and centralized private-key marker validation logic to keep save/validation behavior consistent. ([d9d3663])

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add missing footer link definitions for new commit references.

Line 8 through Line 20 introduce references ([6e8dd42], [d9d3663], [8cdb20d], [1d865ed], [bdbd81b], [bf01457]) that are not defined in the link section, so these changelog links will be broken.

🛠️ Suggested footer additions
+[6e8dd42]: https://github.com/zync-sh/zync/commit/6e8dd42
+[d9d3663]: https://github.com/zync-sh/zync/commit/d9d3663
+[8cdb20d]: https://github.com/zync-sh/zync/commit/8cdb20d
+[1d865ed]: https://github.com/zync-sh/zync/commit/1d865ed
+[bdbd81b]: https://github.com/zync-sh/zync/commit/bdbd81b
+[bf01457]: https://github.com/zync-sh/zync/commit/bf01457
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@CHANGELOG.md` around lines 8 - 20, The changelog contains unresolved commit
reference tokens ([6e8dd42], [d9d3663], [8cdb20d], [1d865ed], [bdbd81b],
[bf01457])—add corresponding footer link definitions at the end of CHANGELOG.md,
each mapping the short SHA token to the full commit URL or release comparison
URL (e.g., "[6e8dd42]: https://github.com/ORG/REPO/commit/6e8dd42"), so those
bracketed references in the entries resolve correctly; ensure each token exactly
matches the references used in the top section.

Comment on lines +178 to +196
## 8.1 Local-first semantics
- Local store is immediately updated.
- Sync engine asynchronously reconciles with each enabled profile.

## 8.2 Per-profile state machine
- `idle -> syncing -> success|conflict|retrying|error`

## 8.3 Conflict policy (phase 1)
- No cross-provider merge.
- Conflict resolution is **local vs specific provider**.
- User choices:
- Keep Local
- Keep Remote
- Duplicate as new item (optional safety path)

## 8.4 Retry policy
- Exponential backoff with jitter.
- Bounded retry budget per run.
- Persist retry reason and last failure code.

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Inconsistent heading levels in Section 8.

Subsections 8.1–8.4 use `##` (h2), the same level as the parent section header `## 8) Sync Strategy`. Other subsections in this doc (e.g., 17.1, 17.2, 17.3) correctly use `###` for the nested level. Bump these to `###` for consistent ToC/structure rendering.

♻️ Proposed fix
-## 8.1 Local-first semantics
+### 8.1 Local-first semantics
 - Local store is immediately updated.
 - Sync engine asynchronously reconciles with each enabled profile.

-## 8.2 Per-profile state machine
+### 8.2 Per-profile state machine
 - `idle -> syncing -> success|conflict|retrying|error`

-## 8.3 Conflict policy (phase 1)
+### 8.3 Conflict policy (phase 1)
 ...
-## 8.4 Retry policy
+### 8.4 Retry policy
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@docs/VAULT_AND_SYNC_ARCHITECTURE.md` around lines 178 - 196, Update the
Markdown heading levels for the subsections under "8) Sync Strategy" so they are
nested correctly: change the headings "8.1 Local-first semantics", "8.2
Per-profile state machine", "8.3 Conflict policy (phase 1)", and "8.4 Retry
policy" from level-2 (##) to level-3 (###) to match the parent "## 8) Sync
Strategy" and the rest of the document (e.g., sections like "17.1" that use
###); ensure the text and any backticked labels (like `idle -> syncing ->
success|conflict|retrying|error`) remain unchanged.
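The retry policy quoted above (exponential backoff with jitter, a bounded retry budget, and a persisted last failure code) can be sketched roughly as follows. This is a minimal illustration, not the project's actual sync engine; all names here are hypothetical.

```typescript
interface RetryOutcome {
  attempts: number;
  lastFailureCode: string | null; // persisted retry reason, per section 8.4
}

// Exponential growth capped at capMs, with full jitter in [0, exp).
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * exp);
}

// Bounded retry budget per run: give up after maxAttempts tries.
async function runWithRetry(
  op: () => Promise<void>,
  maxAttempts = 5,
  baseMs = 500,
): Promise<RetryOutcome> {
  let lastFailureCode: string | null = null;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      await op();
      return { attempts: attempt + 1, lastFailureCode: null };
    } catch (error) {
      // Record the failure reason so the status UI can surface it.
      lastFailureCode = error instanceof Error ? error.message : String(error);
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt, baseMs)));
    }
  }
  return { attempts: maxAttempts, lastFailureCode };
}
```

Full jitter (random delay in `[0, cap)`) avoids synchronized retry bursts across clients; the cap keeps a long outage from producing unbounded waits.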

Comment thread src-tauri/build.rs
Comment on lines +2 to +3
println!("cargo:rerun-if-changed=.env");
let mut file_google_client_id: Option<String> = None;

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Missing rerun-if-env-changed directives for env-derived inputs.

build.rs reads GOOGLE_CLIENT_ID (and PROFILE) from std::env, but only .env file changes trigger rebuilds. If GOOGLE_CLIENT_ID is changed in CI/shell between builds without touching .env, Cargo will reuse the cached build artifact and the stale value gets baked in via cargo:rustc-env. The same applies to any other allowlisted key forwarded from process env in the future.

🔧 Proposed fix
 fn main() {
     println!("cargo:rerun-if-changed=.env");
+    println!("cargo:rerun-if-env-changed=GOOGLE_CLIENT_ID");
+    println!("cargo:rerun-if-env-changed=GOOGLE_CLIENT_SECRET");
+    println!("cargo:rerun-if-env-changed=PROFILE");
     let mut file_google_client_id: Option<String> = None;

Also applies to: 26-32

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/build.rs` around lines 2 - 3, The build script currently only calls
println!("cargo:rerun-if-changed=.env") but reads env vars like GOOGLE_CLIENT_ID
and PROFILE (and any future allowlisted keys), so add matching cargo rerun
directives for those env-derived inputs: emit
println!("cargo:rerun-if-env-changed=GOOGLE_CLIENT_ID") and
println!("cargo:rerun-if-env-changed=PROFILE") (and any other forwarded keys) in
build.rs next to the existing rerun-if-changed line so Cargo will rebuild when
those environment variables change; update the logic that constructs
file_google_client_id / PROFILE handling to ensure each forwarded env var has a
corresponding rerun-if-env-changed call.

Comment thread src-tauri/build.rs
Comment on lines +26 to +36
if std::env::var("PROFILE").ok().as_deref() == Some("release") {
let env_google_client_id = std::env::var("GOOGLE_CLIENT_ID").ok();
let has_valid_client_id = file_google_client_id
.as_deref()
.or(env_google_client_id.as_deref())
.map(is_valid_google_client_id)
.unwrap_or(false);
if !has_valid_client_id {
panic!("GOOGLE_CLIENT_ID is missing or placeholder. Set a real client ID for release builds.");
}
}

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Release-only validation skips custom Cargo profiles.

PROFILE is set by Cargo to the profile name (debug, release, or a custom profile name like dist). If the release pipeline ever uses a custom profile (e.g., cargo build --profile dist), the placeholder check is silently skipped and a placeholder client ID can ship in a production binary. Consider whitelisting all non-debug profiles or driving this from DEBUG/OPT_LEVEL instead.

🛡️ Possible hardening
-    if std::env::var("PROFILE").ok().as_deref() == Some("release") {
+    let profile = std::env::var("PROFILE").unwrap_or_default();
+    if profile != "debug" && profile != "dev" {
         let env_google_client_id = std::env::var("GOOGLE_CLIENT_ID").ok();
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/build.rs` around lines 26 - 36, The current check uses
std::env::var("PROFILE") == "release" which misses custom Cargo profiles; update
the guard so it validates GOOGLE_CLIENT_ID for any non-debug build (e.g., treat
PROFILE != "debug" as requiring a real client ID) or switch to checking a
build-time indicator like DEBUG or OPT_LEVEL instead; locate the block using
PROFILE, file_google_client_id, env_google_client_id, is_valid_google_client_id
and replace the conditional so has_valid_client_id is enforced for all
non-debug/optimized builds rather than only when PROFILE == "release".

Comment thread src-tauri/src/commands.rs
Comment on lines +1107 to +1127
let mut errors = Vec::new();
for id in &ids {
if let Err(error) = state.pty_manager.close_by_connection(id).await {
errors.push(format!("PTY close failed for {id}: {error}"));
}
}

if let Err(error) = stop_tunnels_for_connections(&app, &state, &ids).await {
errors.push(format!("Tunnel stop failed: {error}"));
}

let mut connections = state.connections.lock().await;
for id in &ids {
connections.remove(id);
}

if errors.is_empty() {
Ok(ids)
} else {
Err(errors.join("; "))
}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep failed vault disconnects registered.

This path removes every vault-backed handle even when teardown failed. close_by_connection() errors are only accumulated, and stop_tunnels_for_connections() only reports file/read failures, not per-tunnel stop_tunnel() failures, so a live PTY/forward can be orphaned after the vault is locked. Only remove IDs whose cleanup actually succeeded.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src-tauri/src/commands.rs` around lines 1107 - 1127, The current code removes
all connection IDs unconditionally even if teardown failed; change the loop to
perform cleanup per-ID and only remove those that succeeded: for each id in ids,
call state.pty_manager.close_by_connection(id).await and then stop tunnels for
that specific id (either call a per-connection stop function like stop_tunnel or
refactor stop_tunnels_for_connections to accept a single id and return per-id
Result), collect errors per id when either call fails, and only call
connections.remove(id) when both cleanup calls returned Ok; aggregate and return
errors as before. Ensure you reference the existing symbols close_by_connection,
stop_tunnels_for_connections (or stop_tunnel if you add/use it), and
connections.remove in the change.

Comment on lines +26 to +51
<div className="rounded-xl border border-[var(--color-app-border)]/60 bg-[var(--color-app-surface)]/25 p-4">
<div className="flex items-center justify-between gap-4">
<div className="flex items-center gap-3">
<div className={cn(
'w-10 h-10 rounded-lg flex items-center justify-center shrink-0',
isUnlocked
? 'bg-emerald-500/15 text-emerald-400'
: status?.status === 'locked'
? 'bg-amber-500/15 text-amber-400'
: 'bg-[var(--color-app-surface)] text-[var(--color-app-muted)]'
)}>
<Shield size={18} />
</div>
<div>
<p className="text-sm font-semibold text-[var(--color-app-text)]">
{isUnlocked ? 'Vault Unlocked'
: status?.status === 'locked' ? 'Vault Locked'
: 'Vault Not Set Up'}
</p>
<p className="text-xs text-[var(--color-app-muted)] mt-0.5">
{isUnlocked
? `${unlockedStatus?.itemCount ?? 0} item(s) · XChaCha20-Poly1305 encrypted`
: status?.status === 'locked'
? 'Unlock to access and manage credentials'
: 'Create a vault to store SSH credentials securely'}
</p>

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for old Tailwind v3 CSS variable syntax
rg -n 'className.*\[var\(--' --type=tsx --type=ts

Repository: zync-sh/zync

Length of output: 84


🌐 Web query:

Tailwind CSS v4 CSS variables syntax documentation

💡 Result:

Tailwind CSS v4 theme/CSS variable "syntax" is documented primarily through the `@theme` directive (Tailwind theme variables), plus Tailwind's new arbitrary-value shorthand rules for referencing CSS variables.

1) Defining v4 theme variables: import Tailwind, then define theme tokens using `@theme` with normal CSS custom-property declarations, e.g. `@import "tailwindcss"; @theme { --font-display: "Satoshi", "sans-serif"; --breakpoint-3xl: 1920px; --color-avocado-100: oklch(0.99 0 0); }`. Tailwind v4 turns these theme variables into regular CSS custom properties under `:root` when it compiles your CSS, so you can reference them at runtime via `var(--...)`.

2) Theme variables are special CSS variables defined with `@theme`; they also control which utility classes/variants exist, and they must be top-level (not nested under other selectors or media queries). For regular, non-utility-linked CSS variables, use `:root` instead; use `@theme` when you want tokens to map directly to utilities.

3) To override an entire namespace in the default theme, use the asterisk syntax to unset it, e.g. `--font-*: initial`.

4) Tailwind v4 changes the syntax for CSS variables as arbitrary values, switching to parentheses rather than square brackets (to avoid ambiguity with newer CSS parsing rules): `fill-(--my-brand-color)` is shorthand for `fill-[var(--my-brand-color)]` (the `var` is added automatically).

5) Theme variables are based on CSS variables defined on `:root`; Tailwind can also apply them to `:host` for Web Component support.

6) The docs recommend using the CSS theme variables directly instead of the `theme()` function when possible.

Most authoritative sources: the "Theme variables" core-concepts page (`@theme` usage, requirements, namespaces) and the "Adding custom styles" page (arbitrary-value CSS variable shorthand).


🏁 Script executed:

#!/bin/bash
# Read the specific file to verify the code shown in the review
cat -n src/components/settings/tabs/vault/VaultStatusCard.tsx | sed -n '26,51p'

Repository: zync-sh/zync

Length of output: 1537


🏁 Script executed:

#!/bin/bash
# Search for [var(-- pattern in TypeScript/TSX files using correct rg syntax
rg '\[var\(--' -e '\.(ts|tsx)$' --max-count=20

Repository: zync-sh/zync

Length of output: 106


🏁 Script executed:

#!/bin/bash
# Search for [var(-- pattern in TypeScript/TSX files
rg '\[var\(--' --type ts --type tsx --max-count 50

Repository: zync-sh/zync

Length of output: 84


🏁 Script executed:

#!/bin/bash
# Also search more broadly for this pattern across all files
rg 'className.*\[var\(--' --max-count 50

Repository: zync-sh/zync

Length of output: 50368


Update Tailwind CSS v4 syntax for CSS variable arbitrary values.

The codebase uses the old Tailwind v3 bracket syntax [var(--...)] for CSS variables. Tailwind v4 switched to parentheses shorthand: (--variable-name).

Lines 26, 35, 40, and 45 in this file need updating:

🔧 Required fixes
-    <div className="rounded-xl border border-[var(--color-app-border)]/60 bg-[var(--color-app-surface)]/25 p-4">
+    <div className="rounded-xl border border-(--color-app-border)/60 bg-(--color-app-surface)/25 p-4">
       <div className="flex items-center justify-between gap-4">
         <div className="flex items-center gap-3">
           <div className={cn(
             'w-10 h-10 rounded-lg flex items-center justify-center shrink-0',
             isUnlocked
               ? 'bg-emerald-500/15 text-emerald-400'
               : status?.status === 'locked'
                 ? 'bg-amber-500/15 text-amber-400'
-                : 'bg-[var(--color-app-surface)] text-[var(--color-app-muted)]'
+                : 'bg-(--color-app-surface) text-(--color-app-muted)'
           )}>
             <Shield size={18} />
           </div>
           <div>
-            <p className="text-sm font-semibold text-[var(--color-app-text)]">
+            <p className="text-sm font-semibold text-(--color-app-text)">
               {isUnlocked ? 'Vault Unlocked'
                 : status?.status === 'locked' ? 'Vault Locked'
                   : 'Vault Not Set Up'}
             </p>
-            <p className="text-xs text-[var(--color-app-muted)] mt-0.5">
+            <p className="text-xs text-(--color-app-muted) mt-0.5">

Note: This pattern appears throughout the codebase (200+ occurrences across multiple files). Consider a bulk refactor using find-and-replace.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/settings/tabs/vault/VaultStatusCard.tsx` around lines 26 - 51,
Update Tailwind v3 bracketed CSS variable syntax to Tailwind v4 parentheses
shorthand in this component: replace occurrences like
'border-[var(--color-app-border)]/60', 'bg-[var(--color-app-surface)]/25',
'bg-[var(--color-app-surface)]', and 'text-[var(--color-app-muted)]' inside the
VaultStatusCard JSX (look for the className on the outer div and the nested
elements using isUnlocked, unlockedStatus, and the Shield icon) with the v4 form
using parentheses, e.g. change [var(--...)] to (--...), preserving any opacity
suffixes like /25 or /60 and keeping the rest of the class strings intact;
perform the same substitution for all className occurrences in this file (and
consider running the same find-and-replace across the repo for other files).

Comment on lines +113 to +120
<Button
variant="ghost"
size="icon"
onClick={onClose}
className="h-8 w-8 rounded-full text-app-muted hover:bg-app-accent hover:text-white transition-all hover:scale-110 active:scale-95 hover:shadow-lg hover:shadow-app-accent/20"
>
<X className="h-4 w-4" />
</Button>

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Add an accessible name to the icon-only close button.

On Line 113 through Line 120, the close control is icon-only; add aria-label so screen readers can identify it reliably.

♿ Suggested fix
                 {effectiveShowCloseButton && (
                   <Button
                     variant="ghost"
                     size="icon"
                     onClick={onClose}
+                    aria-label="Close modal"
                     className="h-8 w-8 rounded-full text-app-muted hover:bg-app-accent hover:text-white transition-all hover:scale-110 active:scale-95 hover:shadow-lg hover:shadow-app-accent/20"
                   >
                     <X className="h-4 w-4" />
                   </Button>
                 )}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/ui/Modal.tsx` around lines 113 - 120, The close button in
Modal.tsx (the Button with props variant="ghost" size="icon", onClick={onClose}
and child <X />) is icon-only and needs an accessible name; add an aria-label
(e.g., aria-label="Close" or aria-label={t('close')}) directly to that Button
element so screen readers can announce its purpose while preserving existing
props and event handler.

Comment on lines 59 to +69
for (const incoming of imported) {
const matches = existingMap.get(incoming.name);
// Duplicate names are matched in order: each imported entry consumes the
// next existing match, then preserveVaultCredentialOnUpdate keeps vault
// secrets while matchedIds/mergedImported track the update target.
const match = matches && matches.length > 0 ? matches.shift() : undefined;
if (match) {
updated += 1;
matchedIds.add(match.id);
const preservedMetadata: Partial<Connection> = {};
if (match.isFavorite !== undefined) preservedMetadata.isFavorite = match.isFavorite;
if (match.pinnedFeatures !== undefined) preservedMetadata.pinnedFeatures = match.pinnedFeatures;
if (match.icon !== undefined) preservedMetadata.icon = match.icon;
if (match.lastConnected !== undefined) preservedMetadata.lastConnected = match.lastConnected;
if (match.homePath !== undefined) preservedMetadata.homePath = match.homePath;
if (match.createdAt !== undefined) preservedMetadata.createdAt = match.createdAt;
if (match.folder !== undefined) preservedMetadata.folder = match.folder;
if (match.theme !== undefined) preservedMetadata.theme = match.theme;
if (match.tags !== undefined) preservedMetadata.tags = match.tags;
mergedImported.push({ ...incoming, ...preservedMetadata, id: match.id, status: match.status });
const secureIncoming = preserveVaultCredentialOnUpdate(match, incoming);
mergedImported.push({ ...match, ...secureIncoming, id: match.id, status: match.status });

⚠️ Potential issue | 🔴 Critical | 🏗️ Heavy lift

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Locate Connection type & the export serializer to see which fields are included.
fd -e ts types.ts --search-path src/features/connections/domain | xargs -I{} sh -c 'echo "=== {} ==="; cat "{}"'
rg -nP -C3 '\b(favorite|pinned|icon|theme|tags|folder|path|createdAt|updatedAt)\b' --type=ts -g 'src/features/connections/**'

Repository: zync-sh/zync

Length of output: 23400


🏁 Script executed:

# Find the preserveVaultCredentialOnUpdate function and export serializers
rg -nA 15 'preserveVaultCredentialOnUpdate' --type=ts src/features/connections/
fd -e ts -path 'src/features/connections/**' | xargs rg -l 'export|toJSON|serialize' | head -10

Repository: zync-sh/zync

Length of output: 3587


🏁 Script executed:

# Read merge.ts more broadly to understand context
cat -n src/features/connections/domain/merge.ts

Repository: zync-sh/zync

Length of output: 3318


🏁 Script executed:

# Find export/serialization logic for connections
rg -nB 5 -A 10 'format.*json|toJSON|serialize' --type=ts src/features/connections/ | head -100

Repository: zync-sh/zync

Length of output: 38


🏁 Script executed:

# Search for export/import formatters
rg -n 'tojson|JSON.stringify|toJSON|format.*connection' --type=ts -i src/features/connections/ | grep -i export

Repository: zync-sh/zync

Length of output: 164


🏁 Script executed:

# Search for connectionTransfer implementation which handles import/export
fd -e ts connectionTransfer src/features/connections/

Repository: zync-sh/zync

Length of output: 118


🏁 Script executed:

# Look for any serialization or formatting of Connection objects for export
rg -n -B 3 -A 10 'ConnectionExchangeExportFormat|toFile|fromFile' src/features/connections/

Repository: zync-sh/zync

Length of output: 1779


🏁 Script executed:

# Read the full connectionTransfer.ts to find serialization logic
cat -n src/features/connections/infrastructure/connectionTransfer.ts

Repository: zync-sh/zync

Length of output: 1305


🏁 Script executed:

# Search for export/serialization functions
rg -n 'export.*function|export.*const.*=' src/features/connections/infrastructure/connectionTransfer.ts

Repository: zync-sh/zync

Length of output: 164


🏁 Script executed:

# Look for other serializer files
fd -e ts 'serial|format|export' src/features/connections/

Repository: zync-sh/zync

Length of output: 38


🏁 Script executed:

# Search for the actual main-process handler that does the export serialization
rg -rn 'connections:exportToFile|connections:importFromFile' --type=ts

Repository: zync-sh/zync

Length of output: 396


🏁 Script executed:

# Look for main process files that handle connection export/import
fd -e ts 'ipc|main|electron' src/ | xargs rg -l 'exportToFile\|importFromFile' 2>/dev/null | head -5

Repository: zync-sh/zync

Length of output: 38


🏁 Script executed:

# Check if there's any field filtering/stripping before export
rg -n 'Omit|Pick|exclude|strip|filter.*field' src/features/connections/ --type=ts -B 2 -A 2

Repository: zync-sh/zync

Length of output: 2659


Verify removed metadata-preservation does not regress favorite/pin/icon/etc. on import.

Exports include all connection fields except status (per toPersistedConnections), including UI-only attributes like isFavorite, pinnedFeatures, icon, theme, folder, tags, createdAt, lastConnected, and homePath. The merge at line 69 spreads {...match, ...secureIncoming}, which means any of these fields present in the imported connection will overwrite existing local values—a behavioral change from explicit field preservation.

Confirm that exports are filtered to exclude UI-only metadata before serialization, or update preserveVaultCredentialOnUpdate (or a sibling helper) to strip these fields from incoming before the merge.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/features/connections/domain/merge.ts` around lines 59 - 69, The merge
logic in merge.ts is allowing UI-only metadata from imported connections to
overwrite local fields because the spread {...match, ...secureIncoming} merges
those attributes; update the import flow to strip UI-only fields (isFavorite,
pinnedFeatures, icon, theme, folder, tags, createdAt, lastConnected, homePath,
etc.) before merging by either: (a) ensuring toPersistedConnections excludes
these fields when producing `imported`, or (b) adding a sanitizer in
preserveVaultCredentialOnUpdate (or a new helper called by merge) that takes
`incoming` and returns a copy without UI-only metadata so mergedImported
receives only allowed persisted fields; keep the existing
credential-preservation behavior and ensure status/id remain from `match`.
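Option (b) above can be sketched as a small sanitizer applied to `incoming` before the spread. The field list comes from this review; the helper name and the reduced `Connection` shape are hypothetical, not the project's actual types.

```typescript
// Reduced stand-in for the app's Connection type, for illustration only.
type Connection = Record<string, unknown>;

// UI-only metadata that an import must never overwrite locally.
const UI_ONLY_FIELDS = [
  "isFavorite", "pinnedFeatures", "icon", "theme", "folder",
  "tags", "createdAt", "lastConnected", "homePath",
] as const;

function stripUiOnlyMetadata(incoming: Connection): Connection {
  const sanitized = { ...incoming };
  for (const field of UI_ONLY_FIELDS) delete sanitized[field];
  return sanitized;
}

// In the merge loop this would slot in before the credential pass, e.g.:
//   const secureIncoming =
//     preserveVaultCredentialOnUpdate(match, stripUiOnlyMetadata(incoming));
//   mergedImported.push({ ...match, ...secureIncoming, id: match.id, status: match.status });
```

Because `{ ...match, ...sanitizedIncoming }` only overwrites keys present in the sanitized object, the local favorite/pin/icon values survive the import unchanged.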

Comment thread src/vault/syncIpc.ts
Comment on lines +24 to +40
function fallbackStatus(
provider: SyncProvider,
fallback: SyncProviderStatus,
error: unknown,
): SyncProviderStatus {
const message = error instanceof Error ? error.message : String(error);
const lastKnown = lastKnownStatusByProvider[provider];
return lastKnown ? { ...lastKnown, error: message } : { ...fallback, error: message };
}

export const syncIpc = {
/** Get connection status for a sync provider.
* @param provider Cloud sync provider.
* @returns Current provider connection metadata.
*/
status: (provider: SyncProvider): Promise<SyncProviderStatus> =>
invoke<SyncProviderStatus>('sync_status', { provider }),

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don't let stale cache override the outcome of the latest sync action.

If sync_connect succeeds but the follow-up sync_status call fails, fallbackStatus() can still emit the old cached connected: false state. The same inversion happens for disconnects. status() also never refreshes lastKnownStatusByProvider, so that stale state lingers.

Suggested fix
 function fallbackStatus(
   provider: SyncProvider,
   fallback: SyncProviderStatus,
   error: unknown,
 ): SyncProviderStatus {
   const message = error instanceof Error ? error.message : String(error);
   const lastKnown = lastKnownStatusByProvider[provider];
-  return lastKnown ? { ...lastKnown, error: message } : { ...fallback, error: message };
+  return { ...(lastKnown ?? {}), ...fallback, error: message };
 }

 export const syncIpc = {
@@
-  status: (provider: SyncProvider): Promise<SyncProviderStatus> =>
-    invoke<SyncProviderStatus>('sync_status', { provider }),
+  status: async (provider: SyncProvider): Promise<SyncProviderStatus> => {
+    const status = await invoke<SyncProviderStatus>('sync_status', { provider });
+    lastKnownStatusByProvider[provider] = status;
+    return status;
+  },
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/vault/syncIpc.ts` around lines 24 - 40, fallbackStatus can return stale
lastKnownStatusByProvider and status() never refreshes that cache; update the
cache on successful calls and avoid letting older cached entries override newer
results: in status(), after invoke<SyncProviderStatus>('sync_status', ...) store
the returned status into lastKnownStatusByProvider[provider] before returning;
likewise ensure the code paths that perform sync actions (e.g., handlers for
'sync_connect' and 'sync_disconnect') update lastKnownStatusByProvider on
success; change fallbackStatus(provider, fallback, error) to only return
lastKnown if it is strictly newer than fallback (compare a timestamp/version
field like updatedAt on SyncProviderStatus) otherwise return the fallback with
the error — if no timestamp/version exists, prefer the fallback over lastKnown.
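The cache-refresh behavior described in the prompt above can be sketched as follows. The `updatedAt` timestamp on the status object is an assumption for illustration; the real `SyncProviderStatus` may carry a different (or no) version field.

```typescript
type SyncProvider = "google";

interface SyncProviderStatus {
  connected: boolean;
  updatedAt: number; // assumed version/timestamp field
  error?: string;
}

const lastKnownStatusByProvider: Partial<Record<SyncProvider, SyncProviderStatus>> = {};

// Called on every successful status/connect/disconnect result so the
// cache always reflects the most recent known state.
function recordStatus(provider: SyncProvider, status: SyncProviderStatus): SyncProviderStatus {
  lastKnownStatusByProvider[provider] = status;
  return status;
}

function fallbackStatus(
  provider: SyncProvider,
  fallback: SyncProviderStatus,
  error: unknown,
): SyncProviderStatus {
  const message = error instanceof Error ? error.message : String(error);
  const lastKnown = lastKnownStatusByProvider[provider];
  // Only reuse the cache when it is strictly newer than the fallback
  // produced by the just-completed operation; otherwise a stale
  // connected:false entry could mask a successful connect.
  const base = lastKnown && lastKnown.updatedAt > fallback.updatedAt ? lastKnown : fallback;
  return { ...base, error: message };
}
```

With this shape, a connect that succeeds but whose follow-up status refresh fails reports the post-connect state with the refresh error attached, rather than reverting to the pre-connect cache.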

Comment on lines +109 to +130
runTest('buildSessionData preserves vault tab profile metadata', () => {
  const defaultProfileData = buildSessionData({
    activeTabId: 'vault-tab',
    activeConnectionId: null,
    tabs: [makeVaultTab()],
    terminals: {},
    activeTerminalIds: {},
  });

  assert.equal(defaultProfileData.tabs.length, 1);
  assert.equal(defaultProfileData.tabs[0].tabType, 'vault');
  assert.equal(defaultProfileData.tabs[0].vaultProfileId, 'google');

  const customProfileData = buildSessionData({
    activeTabId: 'vault-tab',
    activeConnectionId: null,
    tabs: [makeVaultTab('vault-tab', 'custom-profile')],
    terminals: {},
    activeTerminalIds: {},
  });
  assert.equal(customProfileData.tabs[0].vaultProfileId, 'custom-profile');
});

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the default vault profile ID constant definition
rg -n "DEFAULT_VAULT_PROFILE_ID|default.*vault.*profile" --type=ts --type=tsx -C2

Repository: zync-sh/zync

Length of output: 84


🏁 Script executed:

# Search for DEFAULT_VAULT_PROFILE_ID and related constants
rg -n "DEFAULT_VAULT_PROFILE_ID" -A 2 -B 2

# Also search for the makeVaultTab function definition to see current default
rg -n "function makeVaultTab|makeVaultTab\s*=" -A 3 -B 1

# Search for 'google' as vault profile to understand context
rg -n "vaultProfileId.*google|google.*profile" -C 2

Repository: zync-sh/zync

Length of output: 6560


🏁 Script executed:

# Find VAULT_PROFILE_IDS definition to see valid values
rg -n "VAULT_PROFILE_IDS" -B 2 -A 3

# Check if 'local' and 'google' are both valid profile IDs
rg -n "local.*profile|google.*profile" --type=ts -C 2

Repository: zync-sh/zync

Length of output: 671


🏁 Script executed:

# Check imports in test files to understand build setup
head -30 tests/sessionPersistence.test.mjs

# Look for any imports from src in test files
rg -n "^import.*from.*['\"].*src/" tests/ --type=mjs

Repository: zync-sh/zync

Length of output: 970


Use application's default vault profile ID constant in test helper.

The test helper makeVaultTab() uses 'google' as the default vaultProfileId, but the application defines DEFAULT_VAULT_PROFILE_ID = 'local' in src/vault/profileTypes.ts. Import and use the constant for consistency:

+import { DEFAULT_VAULT_PROFILE_ID } from '../.tmp-agent-tests/src/vault/profileTypes.js';

-function makeVaultTab(id = 'vault-tab', profile = 'google') {
+function makeVaultTab(id = 'vault-tab', profile = DEFAULT_VAULT_PROFILE_ID) {
   return { id, type: 'vault', title: 'Vault', vaultProfileId: profile, view: 'terminal' };
 }

This also requires updating the assertion on line 120 from 'google' to match the new default.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@tests/sessionPersistence.test.mjs` around lines 109 - 130, The test uses a
hardcoded default vaultProfileId ('google') in the helper makeVaultTab() and
assertions; update the test to import and use the application's
DEFAULT_VAULT_PROFILE_ID constant (from src/vault/profileTypes) when calling
makeVaultTab() without an explicit profile and when asserting the default value
for buildSessionData; keep explicit custom-profile cases unchanged
(makeVaultTab('vault-tab', 'custom-profile') and its assertion).
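The suggested test change boils down to making the helper's default track the application constant. A minimal sketch, with the import replaced by an inline constant (the real code would import `DEFAULT_VAULT_PROFILE_ID` from `src/vault/profileTypes`, whose value is `'local'` per the finding above):

```typescript
// Stand-in for: import { DEFAULT_VAULT_PROFILE_ID } from 'src/vault/profileTypes'
const DEFAULT_VAULT_PROFILE_ID = 'local';

// Test helper: default profile now tracks the app constant instead of a
// hardcoded 'google', while explicit profiles pass through unchanged.
function makeVaultTab(id = 'vault-tab', profile = DEFAULT_VAULT_PROFILE_ID) {
  return { id, type: 'vault', title: 'Vault', vaultProfileId: profile, view: 'terminal' };
}
```

The assertion on the default case then compares against `DEFAULT_VAULT_PROFILE_ID` rather than a string literal, so the test cannot silently drift if the application default changes again.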

@gajendraxdev gajendraxdev merged commit 36e05a7 into zync-sh:staging May 12, 2026
7 checks passed