Conversation
Attach MCP shutdown handlers before async startup, add broken-pipe and crash handling, and explicitly stop the Xcode watcher during teardown. This closes lifecycle gaps that could leave MCP processes running after clients disappear and adds concrete shutdown reasons for investigation.

Add lifecycle Sentry metrics, richer session diagnostics, and a repro script so process age, peer counts, memory, and active runtime state are visible when this issue recurs in the wild.

Fixes #273

Co-Authored-By: Codex <noreply@openai.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
        server: state.server,
      });
      state.phase = 'stopped';
    })();
Unhandled snapshot error in shutdown prevents process exit
Medium Severity
The async IIFE inside shutdown calls buildMcpLifecycleSnapshot without a try-catch. If snapshot collection throws, onShutdown (which contains process.exit()) never runs. All signal/stdin handlers invoke void coordinator.shutdown(...), discarding the rejected promise. The unhandledRejection .once() handler fires and re-calls shutdown, which returns the same already-rejected state.shutdownPromise — again discarded via void. The .once() handler is now consumed, so the process hangs indefinitely. This is the exact orphaned-process scenario the PR aims to prevent.
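One way to address the failure mode Bugbot describes is to treat snapshot collection as diagnostic-only and guarantee the exit callback runs regardless. The sketch below is hypothetical (the names `collectSnapshot` and `onShutdown` are illustrative, not the repository's actual API); it shows the `try`/`catch`/`finally` shape that keeps a throwing snapshot from blocking `process.exit()`:

```typescript
// Hypothetical sketch: never let diagnostic snapshot failures block exit.
type Snapshot = { phase: string };

async function shutdown(
  collectSnapshot: () => Promise<Snapshot>,
  onShutdown: () => void, // in the real server this would call process.exit()
): Promise<void> {
  try {
    const snapshot = await collectSnapshot();
    console.log('mcp lifecycle snapshot', snapshot);
  } catch (err) {
    // Snapshot data is nice-to-have; log and continue shutting down.
    console.error('mcp lifecycle snapshot failed', err);
  } finally {
    // Runs whether or not the snapshot threw, so the process always exits.
    onShutdown();
  }
}
```

Because the rejection is swallowed inside `shutdown` itself, the `void coordinator.shutdown(...)` call sites and the one-shot `unhandledRejection` handler no longer matter for correctness of exit.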


Harden MCP server shutdown so orphaned XcodeBuildMCP processes are easier to prevent and much easier to diagnose.
Issue #273 reports multiple long-lived MCP server processes, some growing to very high memory usage. I could not reproduce lingering processes on `main` with a plain fast-stdin-close loop on this machine, which suggests the original client-specific failure mode is sensitive to timing and environment. Even so, the server had real lifecycle blind spots: shutdown listeners were attached only after async startup had already progressed, the Xcode watcher relied on process exit instead of explicit teardown, and there was no MCP-specific telemetry to tell us how old a process was, how much memory it was using, or how many sibling MCP processes were alive when shutdown did or did not happen.

This change moves MCP lifecycle handling into an explicit coordinator that attaches shutdown handlers before async startup, adds crash and broken-pipe shutdown paths, and always stops the Xcode watcher during teardown. It also records MCP lifecycle metrics and anomaly counters in Sentry, extends the existing session-status resource with process/activity diagnostics, and adds a repro script plus focused lifecycle regression tests. I considered broader speculative signal handling, but kept the fix narrow and evidence-based.
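The coordinator pattern described above can be sketched roughly as follows. This is not the repository's actual implementation; `LifecycleCoordinator` and its `teardown` callback are illustrative names. The key properties are that handlers are registered synchronously before any async startup work, and that shutdown is idempotent so concurrent triggers share one teardown:

```typescript
// Hypothetical sketch: register shutdown handlers before async startup,
// so a client that disconnects mid-startup still triggers teardown.
class LifecycleCoordinator {
  private shutdownPromise: Promise<void> | null = null;

  constructor(private teardown: (reason: string) => Promise<void>) {
    // Attached in the constructor, i.e. before startup begins.
    process.once('SIGINT', () => void this.shutdown('SIGINT'));
    process.once('SIGTERM', () => void this.shutdown('SIGTERM'));
    process.stdin.once('end', () => void this.shutdown('stdin-closed'));
  }

  shutdown(reason: string): Promise<void> {
    // Idempotent: every caller gets the same teardown promise, and
    // teardown failures are logged rather than left as rejections.
    this.shutdownPromise ??= this.teardown(reason).catch((err) => {
      console.error('teardown failed', reason, err);
    });
    return this.shutdownPromise;
  }
}
```

Recording the `reason` string is what gives the "concrete shutdown reasons for investigation" mentioned in the commit message.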
Validation is from the final branch state, not just the first pass: `npm run typecheck`, `npm run test`, and `npm run test:smoke` all pass. I also added `npm run repro:mcp-lifecycle-leak`, which reports zero lingering MCP processes after repeated spawn/close cycles on this branch. In addition, the new lifecycle logs and Sentry metrics capture process age, RSS, active runtime state, and peer MCP process counts, so we can confirm or disprove recurrence remotely if the issue appears again.

Fixes #273
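The process age, RSS, and peer-count diagnostics mentioned above could be gathered with standard Node APIs plus a `ps` scan. This is a hypothetical sketch, not the PR's code; `collectProcessDiagnostics` and the `processNameHint` parameter are illustrative, and the `ps` invocation assumes a POSIX platform:

```typescript
// Hypothetical sketch of per-process lifecycle diagnostics suitable for
// a log line or Sentry metric. Assumes POSIX `ps` for the peer count.
import { execFileSync } from 'node:child_process';

function collectProcessDiagnostics(processNameHint: string) {
  // Count sibling processes whose command line matches the hint.
  let peerCount = 0;
  try {
    const out = execFileSync('ps', ['axo', 'pid=,command='], {
      encoding: 'utf8',
    });
    peerCount = out
      .split('\n')
      .filter((line) => line.includes(processNameHint)).length;
  } catch {
    // `ps` unavailable (e.g. Windows); leave peerCount at 0.
  }
  return {
    pid: process.pid,
    ageSeconds: Math.round(process.uptime()), // how old the process is
    rssBytes: process.memoryUsage().rss,      // resident memory
    peerCount,                                // sibling MCP processes
  };
}
```

Emitting this at startup, at shutdown, and on anomaly counters is what makes a recurrence of #273 diagnosable remotely.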