148 changes: 148 additions & 0 deletions lib/codex-manager.ts
@@ -32,6 +32,7 @@ import {
getCodexCliConfigPath,
loadCodexCliState,
} from "./codex-cli/state.js";
import { getLatestCodexCliSyncRollbackPlan } from "./codex-cli/sync.js";
import { setCodexCliActiveSelection } from "./codex-cli/writer.js";
import {
applyUiThemeFromDashboardSettings,
@@ -80,6 +81,7 @@ import {
getActionableNamedBackupRestores,
getNamedBackupsDirectoryPath,
getStoragePath,
listAccountSnapshots,
listNamedBackups,
listRotatingBackups,
loadAccounts,
@@ -3574,6 +3576,7 @@ async function runDoctor(args: string[]): Promise<number> {

setStoragePath(null);
const storagePath = getStoragePath();
const walPath = `${storagePath}.wal`;
const checks: DoctorCheck[] = [];
const addCheck = (check: DoctorCheck): void => {
checks.push(check);
@@ -3608,6 +3611,79 @@ async function runDoctor(args: string[]): Promise<number> {
}
}

addCheck({
key: "storage-journal",
severity: existsSync(walPath) ? "ok" : "warn",
Comment on lines 3611 to +3616

Double `existsSync`: TOCTOU race on Windows

`existsSync(walPath)` is evaluated twice in the same `addCheck` call: once for `severity` and once for `message`. On Windows, antivirus scanners routinely lock or delete files between consecutive syscalls, which can yield a mismatched severity/message pair (e.g. `"ok"` severity with a "journal missing" message). Cache the result in a boolean before constructing the check object.
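The pattern the comment describes can be sketched as a before/after (the path and `walExists` variable are illustrative; the real code sits inline in `runDoctor`):

```typescript
import { existsSync } from "node:fs";

// Hypothetical path for illustration only.
const walPath = "/definitely-missing-dir/storage.json.wal";

// Before: two syscalls, each a separate observation of the filesystem.
// severity and message can disagree if the file changes in between:
//   severity: existsSync(walPath) ? "ok" : "warn",
//   message:  existsSync(walPath) ? "...found" : "...missing...",

// After: one syscall, one consistent observation reused everywhere.
const walExists = existsSync(walPath);
const severity = walExists ? "ok" : "warn";
const message = walExists
  ? "Write-ahead journal found"
  : "Write-ahead journal missing; recovery will rely on backups";
```

With the cached boolean, severity and message are derived from the same snapshot, so they can never drift apart no matter what the filesystem does mid-call.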


message: existsSync(walPath)
? "Write-ahead journal found"
: "Write-ahead journal missing; recovery will rely on backups",
details: walPath,
});
Comment on lines +3614 to +3621

⚠️ Potential issue | 🟠 Major

Flip the WAL check semantics and snapshot the file state once.

lib/codex-manager.ts:3614-3621 and lib/codex-manager.ts:3673-3685 treat a missing `.wal` as degraded, but lib/storage.ts:2154-2177 deletes that journal after a successful save, so the normal healthy state will usually show a warning here. The repeated `existsSync()` calls also let `severity`, `message`, and `details` drift if the file changes between checks on Windows. Please cache the booleans once and make WAL presence the exceptional state, and add a vitest regression in test/codex-manager-cli.test.ts for healthy storage with no WAL.

Suggested fix
 	setStoragePath(null);
 	const storagePath = getStoragePath();
 	const walPath = `${storagePath}.wal`;
+	const storageExists = existsSync(storagePath);
+	const walExists = existsSync(walPath);
 	const checks: DoctorCheck[] = [];
 	const addCheck = (check: DoctorCheck): void => {
 		checks.push(check);
 	};
@@
 	addCheck({
 		key: "storage-journal",
-		severity: existsSync(walPath) ? "ok" : "warn",
-		message: existsSync(walPath)
-			? "Write-ahead journal found"
-			: "Write-ahead journal missing; recovery will rely on backups",
+		severity: walExists ? "warn" : "ok",
+		message: walExists
+			? "Write-ahead journal present; a previous write may have been interrupted"
+			: "No write-ahead journal present",
 		details: walPath,
 	});
@@
 	const hasAnyRecoveryArtifact =
-		existsSync(storagePath) ||
-		existsSync(walPath) ||
+		storageExists ||
+		walExists ||
 		validRotatingBackups.length > 0 ||
 		validSnapshots.length > 0;
@@
-		details: `storage=${existsSync(storagePath)}, wal=${existsSync(walPath)}, rotating=${validRotatingBackups.length}, snapshots=${validSnapshots.length}`,
+		details: `storage=${storageExists}, wal=${walExists}, rotating=${validRotatingBackups.length}, snapshots=${validSnapshots.length}`,
 	});
As per coding guidelines, `lib/**`: focus on auth rotation, Windows filesystem IO, and concurrency. Verify every change cites affected tests (vitest) and that new queues handle EBUSY/429 scenarios.

Also applies to: 3673-3685
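The flipped semantics from the suggested fix can be exercised in isolation. A standalone sketch (the `DoctorCheck` shape is taken from the diff; the `buildJournalCheck` helper is an assumption, since the real code is inlined in `runDoctor`):

```typescript
import { existsSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

interface DoctorCheck {
  key: string;
  severity: "ok" | "warn" | "error";
  message: string;
  details?: string;
}

// Flipped polarity: a leftover .wal means a previous save was
// interrupted, so presence is the exceptional (warn) state and
// absence is the healthy default.
function buildJournalCheck(storagePath: string): DoctorCheck {
  const walPath = `${storagePath}.wal`;
  const walExists = existsSync(walPath); // snapshot once, reuse below
  return {
    key: "storage-journal",
    severity: walExists ? "warn" : "ok",
    message: walExists
      ? "Write-ahead journal present; a previous write may have been interrupted"
      : "No write-ahead journal present",
    details: walPath,
  };
}

// Healthy storage with no journal should report ok.
const healthy = buildJournalCheck(join(tmpdir(), `accounts-${Date.now()}.json`));
```

In the repo itself the regression would drive `runDoctor` through vitest in test/codex-manager-cli.test.ts rather than a standalone helper like this one.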



const rotatingBackups = await listRotatingBackups();
const validRotatingBackups = rotatingBackups.filter((backup) => backup.valid);
const invalidRotatingBackups = rotatingBackups.filter(
(backup) => !backup.valid,
);
addCheck({
key: "rotating-backups",
severity:
validRotatingBackups.length > 0
? "ok"
: rotatingBackups.length > 0
? "error"
: "warn",
message:
validRotatingBackups.length > 0
? `${validRotatingBackups.length} rotating backup(s) available`
: rotatingBackups.length > 0
? "Rotating backups are unreadable"
: "No rotating backups found yet",
details:
invalidRotatingBackups.length > 0
? `${invalidRotatingBackups.length} invalid backup(s); recreate by saving accounts`
: dirname(storagePath),
});

const snapshotBackups = await listAccountSnapshots();
const validSnapshots = snapshotBackups.filter((snapshot) => snapshot.valid);
const invalidSnapshots = snapshotBackups.filter(
(snapshot) => !snapshot.valid,
);
addCheck({
key: "snapshot-backups",
severity:
validSnapshots.length > 0
? "ok"
: snapshotBackups.length > 0
? "error"
: "warn",
message:
validSnapshots.length > 0
? `${validSnapshots.length} recovery snapshot(s) available`
: snapshotBackups.length > 0
? "Snapshot backups are unreadable"
: "No recovery snapshots found",
details:
invalidSnapshots.length > 0
? `${invalidSnapshots.length} invalid snapshot(s); create a fresh snapshot before destructive actions`
: getNamedBackupsDirectoryPath(),
});

const hasAnyRecoveryArtifact =
existsSync(storagePath) ||
existsSync(walPath) ||
validRotatingBackups.length > 0 ||
validSnapshots.length > 0;
addCheck({
key: "recovery-chain",
severity: hasAnyRecoveryArtifact ? "ok" : "warn",
message: hasAnyRecoveryArtifact
? "Recovery artifacts present"
: "No recovery artifacts found; create a snapshot or backup before destructive actions",
details: `storage=${existsSync(storagePath)}, wal=${existsSync(walPath)}, rotating=${validRotatingBackups.length}, snapshots=${validSnapshots.length}`,
});

const codexAuthPath = getCodexCliAuthPath();
const codexConfigPath = getCodexCliConfigPath();
let codexAuthEmail: string | undefined;
@@ -3720,8 +3796,80 @@ async function runDoctor(args: string[]): Promise<number> {
details: codexCliState?.path,
});

const rollbackPlan = await getLatestCodexCliSyncRollbackPlan();
const rollbackReason = (rollbackPlan.reason ?? "").trim();
const rollbackSnapshotPath = rollbackPlan.snapshot?.path;
const rollbackSnapshotName = rollbackPlan.snapshot?.name;
const rollbackReasonLower = rollbackReason.toLowerCase();
if (rollbackPlan.status === "ready") {
const count =
rollbackPlan.accountCount ?? rollbackPlan.storage?.accounts.length;
addCheck({
key: "codex-cli-rollback-checkpoint",
severity: "ok",
message: `Latest manual Codex CLI rollback checkpoint ready (${count ?? "?"} account${count === 1 ? "" : "s"})`,
details: rollbackSnapshotPath,
});
} else if (rollbackPlan.snapshot) {
const isMissing =
rollbackReasonLower.includes("missing") ||
rollbackReasonLower.includes("not found") ||
rollbackReasonLower.includes("enoent");
const key = isMissing
? "codex-cli-rollback-checkpoint-missing"
: "codex-cli-rollback-checkpoint-invalid";
const recoveryAction = isMissing
? "Action: Recreate the rollback checkpoint by running a manual Codex CLI sync with storage backups enabled."
: "Action: Recreate the rollback checkpoint with a fresh manual Codex CLI sync before attempting rollback.";
const detailParts = [
rollbackSnapshotPath ?? rollbackSnapshotName,
rollbackReason || undefined,
recoveryAction,
].filter(Boolean);
addCheck({
key,
severity: "error",
message: isMissing
? "Latest manual Codex CLI rollback checkpoint is missing; rollback is blocked."
: "Latest manual Codex CLI rollback checkpoint cannot be restored.",
details: detailParts.join(" | ") || undefined,
});
} else {
addCheck({
key: "codex-cli-rollback-checkpoint",
severity: "warn",
message: "No manual Codex CLI rollback checkpoint has been recorded yet",
details:
"Action: Run a manual Codex CLI sync with backups enabled to capture a rollback checkpoint before applying changes.",
});
}
Comment on lines +3799 to +3845

⚠️ Potential issue | 🟠 Major

Keep the rollback checkpoint key stable across states.

lib/codex-manager.ts:3807-3844 emits different `key` values for one logical doctor check. That makes the JSON output harder to automate against, and it already conflicts with the existing vitest expectation called out for test/codex-manager-cli.test.ts. Keep one stable key and move ready/missing/invalid into `severity`, `details`, or a dedicated status field.

Suggested fix
 	} else if (rollbackPlan.snapshot) {
 		const isMissing =
 			rollbackReasonLower.includes("missing") ||
 			rollbackReasonLower.includes("not found") ||
 			rollbackReasonLower.includes("enoent");
-		const key = isMissing
-			? "codex-cli-rollback-checkpoint-missing"
-			: "codex-cli-rollback-checkpoint-invalid";
 		const recoveryAction = isMissing
 			? "Action: Recreate the rollback checkpoint by running a manual Codex CLI sync with storage backups enabled."
 			: "Action: Recreate the rollback checkpoint with a fresh manual Codex CLI sync before attempting rollback.";
 		const detailParts = [
+			`status=${isMissing ? "missing" : "invalid"}`,
 			rollbackSnapshotPath ?? rollbackSnapshotName,
 			rollbackReason || undefined,
 			recoveryAction,
 		].filter(Boolean);
 		addCheck({
-			key,
+			key: "codex-cli-rollback-checkpoint",
 			severity: "error",
 			message: isMissing
 				? "Latest manual Codex CLI rollback checkpoint is missing; rollback is blocked."
 				: "Latest manual Codex CLI rollback checkpoint cannot be restored.",
 			details: detailParts.join(" | ") || undefined,
 		});
As per coding guidelines, `lib/**`: verify every change cites affected tests (vitest).
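The stable-key shape can be sketched standalone (the `RollbackState` type and `buildRollbackCheck` helper are illustrative assumptions, not code from the PR):

```typescript
interface DoctorCheck {
  key: string;
  severity: "ok" | "warn" | "error";
  message: string;
  details?: string;
}

type RollbackState = "ready" | "missing" | "invalid" | "none";

// One stable key for the logical check; the state moves into
// severity and a status= marker inside details.
function buildRollbackCheck(state: RollbackState, reason?: string): DoctorCheck {
  const key = "codex-cli-rollback-checkpoint";
  if (state === "ready") {
    return { key, severity: "ok", message: "Rollback checkpoint ready" };
  }
  if (state === "none") {
    return {
      key,
      severity: "warn",
      message: "No rollback checkpoint recorded yet",
    };
  }
  return {
    key,
    severity: "error",
    message:
      state === "missing"
        ? "Rollback checkpoint is missing; rollback is blocked."
        : "Rollback checkpoint cannot be restored.",
    details: [`status=${state}`, reason].filter(Boolean).join(" | "),
  };
}
```

Automation can then match on `check.key === "codex-cli-rollback-checkpoint"` in every state and branch on `severity` or the `status=` marker, instead of tracking three key spellings.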


const storage = await loadAccounts();
let fixChanged = false;

const actionableNamedBackupRestores = await getActionableNamedBackupRestores({
currentStorage: storage,
});
const actionableCount = actionableNamedBackupRestores.assessments.length;
const backupRecoveryAction =
actionableCount > 0
? undefined
: `Action: Add or copy a named backup into ${getNamedBackupsDirectoryPath()} before attempting recovery.`;
addCheck({
key: "named-backup-restores",
severity: actionableCount > 0 ? "ok" : "warn",
message:
actionableCount > 0
? `Found ${actionableCount} actionable named backup restore${actionableCount === 1 ? "" : "s"}`
: "No actionable named backup restores available",
details: [
`total backups: ${actionableNamedBackupRestores.totalBackups}`,
backupRecoveryAction,
]
.filter(Boolean)
.join(" | "),
});

let fixActions: DoctorFixAction[] = [];
if (options.fix && storage && storage.accounts.length > 0) {
const fixed = applyDoctorFixes(storage);