Handle config file diagnostics updates via file watching#3731
Conversation
```go
if _, ok := s.nodeModulesRealpathAliases.Load(path); ok {
	return true
}
if isWatchedConfigFile(uri, s.toPath, currentDirectoryPath, oldCustomConfigFileName, newCustomConfigFileName) {
```
I am not sure if we need this. If we have excessive file watch events, we may already have dropped some config changes, and we cannot do much about it unless we keep track of every config we pushed diagnostics for and re-do all of those from scratch.
```diff
-// Schedule a debounced diagnostics refresh
+// Schedule a debounced snapshot update and diagnostics refresh
 s.scheduleSnapshotUpdate()
```
I figured we shouldn't necessarily be waiting to get a request to update those diagnostics, so I added a scheduled update similar to how we schedule pull diagnostics refresh, etc.
```go
// Register the permanent config file watcher if not yet registered or pending retry.
```
This makes sure we always watch the workspace.
Pull request overview
Note
Copilot was unable to run its full agentic suite in this review.
Adds a durable file-watching and snapshot-update path so config-file (tsconfig/jsconfig) diagnostics keep updating even when their owning configured project is closed/unloaded.
Changes:
- Introduces a permanent “workspace” watcher for config files and debounced snapshot updates on DidChangeWatchedFiles.
- Extends snapshot cloning to treat watched config files as relevant watch events and to request processing of configured projects for config changes/creates.
- Updates push diagnostics behavior and tests to validate config diagnostics update/clear across project lifetime and deletions.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| internal/testutil/projecttestutil/projecttestutil.go | Makes WatchesFile account for UnwatchFiles calls when determining active watchers. |
| internal/project/watchtimeout_test.go | Adjusts fake-time advancement to cover sequential WatchFiles timeouts. |
| internal/project/snapshotfs.go | Treats watched config files as relevant watch changes and forwards config-file URIs into resource requests. |
| internal/project/snapshot.go | Threads config-file watch events into snapshot cloning and logs a new update reason. |
| internal/project/session.go | Adds debounced snapshot updates on file watch events, permanent workspace config watcher, and config-diagnostics clearing on deletes. |
| internal/project/projectlifetime_test.go | Adds regression test verifying the workspace remains watched across project churn. |
| internal/project/project_test.go | Expands push-diagnostics tests for closed projects, created configs, and deletion scenarios. |
| internal/project/configfileregistrybuilder.go | Extracts isConfigBaseName helper for reuse. |
```go
for uri := range change.fileChanges.Deleted.Keys() {
	deletedPath := s.toPath(uri.FileName())
	for configPath := range s.publishedDiagConfigPaths.Keys() {
		if configPath == deletedPath || deletedPath.ContainsPath(configPath) {
			s.publishProjectDiagnostics(ctx, string(configPath), nil, nil /*converters*/)
		}
	}
}
```
Do we already have something like this in the repo? @andrewbranch
We already expand directory deletions into file deletions for all known contents of those directories in snapshotFSBuilder.Finalize. Maybe that's useful?
Yeah, I saw that, but we don't expect to always have that information on disk for config files or their directories... I'm tempted to just leave this as is and optimize later if needed. We can have quite a large number of config files with errors (e.g. a monorepo with baseUrl usage), but probably not that many deletions, so I'm not sure how blocking this is.
I haven't dug into the code yet so maybe this is already there, but I think as long as a mass deletion triggers purging all published diagnostics rather than evaluating, this is fine. You may want to specifically look at how deletions of node_modules are handled, though, as some of the excessive watch event handling splits out node_modules changes from non-node_modules changes, and we would expect to normally have zero tsconfigs inside node_modules. In a giant monorepo, we wouldn't want to process 1000 tsconfigs against 50k deleted node_modules files.
Ok, we're handling excessive file changes now and only looking through file changes for deletion if we're within the threshold, so this should now be O(#configs) with a possibly 1k constant. I'm also excluding node_modules changes right away, so maybe this is good enough for now while we don't have anything trie-like in the repo.
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
```go
		}
	}
}
assert.Assert(t, hasBaseUrlDiag, "expected 'baseUrl' option removed to remain after project cleanup, got: %v", lastCall.Params.Diagnostics)
```
Well, this is kind of the opposite scenario of the issue, right? I didn't think of this until seeing the test, but I think the problem with this approach is, let's say you're working in a monorepo, and you happen to open a file in a project that you're not planning to change, and that tsconfig triggers an error. Now, you can never clear that error without actually fixing the tsconfig. It seems like with the limitations of not knowing when a tsconfig file is opened and closed, we might actually want to limit what we do here to clearing config errors on project close.
I think another scenario that has a problem with this approach is something like a git pull that changes the formatting of every tsconfig file in the workspace. If I'm reading right, assuming we're already watching the workspace and get those events, this would trigger a configured project load of every project?
Yes, both things are true, but we can't do better without handling open .json files. I do think it's also weird for you to have a tsconfig.json open and not get errors for it, so it's just a question of what's weirder.
I think, given the constraints we have, the expected behavior for #3676 would be that the baseUrl error disappears at step 3 (closing the last project file, even though the tsconfig is still invalid).
I think it’s generally more important to clear errors at the right time than to show errors, so I think just clearing errors on project close is an ok stopgap for right now. I do think it's worth looking into handling .json files so we can know when tsconfigs are open/closed. But for sure, opening every configured project on git switch and having no way to close them isn't tenable.
Fixes #3676.
Since our LS is not (yet) registered to handle .json files, we currently need to handle tsconfig changes via push diagnostics, and get updates via file watcher events.
To make tsconfig diagnostics work, we cannot assume that a closed project will never need updated diagnostics, so in this PR we handle tsconfig changes even when the owning project is currently unopened (i.e. missing from the project collection and config file registry). We do this by collecting config file changes and creations during snapshot clone and later processing configured projects for them. We exclude config files that don't belong to the current workspace or that live under node_modules, and we also ensure we always watch the current workspace for file changes.
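The two filters mentioned (workspace containment and node_modules exclusion) could be sketched as below. The function name and simplified string-prefix path handling are assumptions for illustration, not the PR's actual helpers:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldProcessConfigChange decides whether a watched config-file event
// should trigger configured-project processing: the config must live
// under the workspace root and must not sit inside node_modules.
func shouldProcessConfigChange(workspaceRoot, configPath string) bool {
	if !strings.HasPrefix(configPath, workspaceRoot+"/") {
		return false // outside the current workspace
	}
	if strings.Contains(configPath, "/node_modules/") {
		return false // tsconfigs under node_modules are not our projects
	}
	return true
}

func main() {
	fmt.Println(shouldProcessConfigChange("/repo", "/repo/pkg/tsconfig.json"))
	fmt.Println(shouldProcessConfigChange("/repo", "/repo/node_modules/x/tsconfig.json"))
	fmt.Println(shouldProcessConfigChange("/repo", "/other/tsconfig.json"))
}
```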