
Experimental: Unified LSP server in rewatch #8243

Draft
nojaf wants to merge 10 commits into rescript-lang:master from nojaf:rewatch-lsp

Conversation


@nojaf nojaf commented Feb 9, 2026

This branch explores embedding a full LSP server directly into the rescript binary (rescript lsp), replacing the current architecture where a Node.js extension mediates between the editor and separate build/analysis processes.

The core idea

Today, the ReScript editor experience involves three processes: a Node.js VS Code extension, the rescript build watcher, and the rescript-editor-analysis.exe binary. They communicate through files on disk — the editor extension launches builds, waits for artifacts, then shells out to the analysis binary for each request.

This branch collapses the build system and LSP server into a single Rust process using tower-lsp. The build state lives in memory, and analysis requests shell out to the same rescript-editor-analysis.exe but with source code passed via stdin instead of being read from disk.

No temp files — stdin everywhere

Both bsc and the analysis binary receive source code via stdin rather than through temporary files. For didChange (unsaved edits), bsc -bs-read-stdin produces diagnostics without writing anything to disk. For analysis requests (hover, completion, code actions, etc.), the analysis binary receives a JSON blob on stdin containing the source text, cursor position, and package metadata. The OCaml analysis code was refactored with FromSource variants that parse from a string rather than opening files — so everything works correctly on unsaved editor buffers.
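The stdin flow can be sketched as a small helper. `run_with_stdin` is a hypothetical name (the actual rewatch code will differ); the point is that the unsaved buffer is written straight to the child process's stdin, so no temp file ever touches disk:

```rust
use std::io::Write;
use std::process::{Command, Stdio};

/// Run `cmd` with `args`, writing `source` to its stdin and capturing stdout.
/// Sketch of how an LSP server could hand unsaved buffer contents to
/// `bsc -bs-read-stdin` or the analysis binary; the command is a parameter
/// here so any tool can stand in.
fn run_with_stdin(cmd: &str, args: &[&str], source: &str) -> std::io::Result<String> {
    let mut child = Command::new(cmd)
        .args(args)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    // Write the in-memory buffer; dropping the handle closes the pipe
    // so the child sees EOF and can finish.
    child
        .stdin
        .take()
        .expect("stdin was piped")
        .write_all(source.as_bytes())?;
    let out = child.wait_with_output()?;
    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
}
```

For the analysis requests the `source` string would be the JSON blob described above (source text, cursor position, package metadata) rather than raw ReScript code.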

Separate build profile: lib/lsp

The LSP server writes its build artifacts to lib/lsp/ instead of lib/bs/. This means it doesn't conflict with rescript build or rescript build -w running in a terminal — both can operate independently on the same project without stepping on each other's artifacts.

Initial build: typecheck only

On initialized, the server runs a full build but only goes as far as producing .cmt/.cmi files (the TypecheckOnly profile). It deliberately skips JS emission. This gets the editor operational as fast as possible — type information for hover, completion, go-to-definition etc. is all available, without paying the cost of generating JavaScript for every module upfront.
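The two sections above boil down to a profile choice that picks both the artifact directory and whether JS is emitted. A minimal sketch, with illustrative names rather than the actual rewatch types:

```rust
/// Sketch: a build profile selects the artifact directory and whether
/// JavaScript is emitted. `BuildProfile` is an illustrative name.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum BuildProfile {
    /// `rescript build` / `rescript build -w` — full build into lib/bs/
    Full,
    /// LSP server — .cmt/.cmi only, into lib/lsp/, no JS emission
    TypecheckOnly,
}

impl BuildProfile {
    /// Separate directories mean the LSP and a terminal build never
    /// step on each other's artifacts.
    fn artifact_dir(self) -> &'static str {
        match self {
            BuildProfile::Full => "lib/bs",
            BuildProfile::TypecheckOnly => "lib/lsp",
        }
    }

    fn emits_js(self) -> bool {
        matches!(self, BuildProfile::Full)
    }
}
```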

Smart incremental builds on save

When a file is saved, the server runs a two-phase incremental build:

  1. Emit JS for the dependency closure — the server computes the transitive imports of the saved file and only emits JavaScript for that file and its dependencies. Modules outside this closure are skipped entirely. So saving a module produces JS for it and any imports that haven't been compiled yet — not the entire project.

  2. Typecheck reverse dependencies — modules that transitively depend on the saved file are re-typechecked to surface errors caused by API changes (e.g. a removed export). This gives you project-wide diagnostics on save — if you rename a function, you immediately see errors in every file that uses it, even files you don't have open. No JS is emitted for these — they get their JS when they are themselves saved.
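Both phases are reachability queries over the module graph: phase 1 follows dependency edges from the saved file, phase 2 follows the same edges reversed. A sketch of the shared traversal (illustrative, not the actual rewatch implementation):

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Adjacency list: module name -> modules it points at.
type Graph<'a> = HashMap<&'a str, Vec<&'a str>>;

/// BFS from `root`, returning `root` plus everything transitively reachable.
/// With dependency edges this is the "emit JS" closure of a saved file;
/// with the edges reversed it is the set of modules to re-typecheck.
fn closure<'a>(edges: &Graph<'a>, root: &'a str) -> HashSet<&'a str> {
    let mut seen = HashSet::from([root]);
    let mut queue = VecDeque::from([root]);
    while let Some(module) = queue.pop_front() {
        for &next in edges.get(module).into_iter().flatten() {
            // `insert` returns false for already-visited modules,
            // so each module is processed at most once.
            if seen.insert(next) {
                queue.push_back(next);
            }
        }
    }
    seen
}
```

For example, if A imports B, B imports C, and D imports A, saving A emits JS for {A, B, C} (phase 1 over dependency edges) and re-typechecks {A, D} (phase 2 over reversed edges) — D gets fresh diagnostics but no JS.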

What's implemented

All standard analysis endpoints are wired up: completion (with resolve), hover, signature help, go to definition, type definition, references, rename (with prepare), document symbols, code lens, inlay hints, semantic tokens, code actions, and formatting.

Observability

Every LSP request and build operation is traced with OpenTelemetry spans, viewable in Jaeger. This makes it straightforward to profile request latency and understand what the server is doing.

Test infrastructure

Each endpoint has integration tests using vscode-languageserver-protocol that boot a real LSP server in a sandbox, send requests, and snapshot both the results and the OTEL trace structure.

What's not here yet

  • workspace/didChangeWatchedFiles — handling external file changes (git checkout, etc.)
  • Multi-workspace / monorepo support
  • createInterface and openCompiled custom commands

This is an experiment to validate the architecture. If it proves useful, individual pieces can be split into focused PRs.


pkg-pr-new bot commented Feb 9, 2026


rescript

npm i https://pkg.pr.new/rescript@8243

@rescript/darwin-arm64

npm i https://pkg.pr.new/@rescript/darwin-arm64@8243

@rescript/darwin-x64

npm i https://pkg.pr.new/@rescript/darwin-x64@8243

@rescript/linux-arm64

npm i https://pkg.pr.new/@rescript/linux-arm64@8243

@rescript/linux-x64

npm i https://pkg.pr.new/@rescript/linux-x64@8243

@rescript/runtime

npm i https://pkg.pr.new/@rescript/runtime@8243

@rescript/win32-x64

npm i https://pkg.pr.new/@rescript/win32-x64@8243

commit: bdffbeb


nojaf commented Feb 25, 2026

A bit of an update on this PR:

I'm currently working on a new side project in ReScript where an AI/LLM/Claude drives all things coding. I boss it around and it writes the ReScript code for me. While scratching that itch, I'm using the LSP server from this PR in Zed:

.zed/settings.json:

{
  "lsp": {
    "rescript-language-server": {
      "binary": {
        "path": "/Users/nojaf/.bun/bin/bun",
        "arguments": [
          "--bun",
          "/Users/nojaf/Projects/rescript/cli/rescript.js",
          "lsp"
        ],
        "env": {
          "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4707"
        }
      },
      "initialization_options": {
        "queue_debounce_ms": 50,
        "diagnostics_http": 12307
      }
    }
  }
}
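The `queue_debounce_ms` option suggests the server coalesces bursts of changes before rebuilding. A purely illustrative sketch of that idea — events less than `debounce_ms` apart land in the same batch, so rapid consecutive edits trigger one rebuild instead of many (this is not the actual server code):

```rust
/// Group change events (timestamp in ms, file) into batches: a gap of at
/// least `debounce_ms` between consecutive events closes the current batch.
fn debounce_batches<'a>(events: &[(u64, &'a str)], debounce_ms: u64) -> Vec<Vec<&'a str>> {
    let mut batches: Vec<Vec<&str>> = Vec::new();
    let mut current: Vec<&str> = Vec::new();
    let mut last_ts: Option<u64> = None;
    for &(ts, file) in events {
        if let Some(prev) = last_ts {
            // Quiet period elapsed: flush the batch accumulated so far.
            if ts.saturating_sub(prev) >= debounce_ms && !current.is_empty() {
                batches.push(std::mem::take(&mut current));
            }
        }
        current.push(file);
        last_ts = Some(ts);
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}
```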

The nice thing about this LSP server is that it has a diagnostics endpoint the LLM can call. The LLM calls it after making edits, so it has a way to clean up after itself. I wish this were more of an industry standard, but since I use ACP with Claude/Zed, I lack support for this (please upvote).

This way of working also revealed a lot of use cases to consider. LLMs will make frequent file edits and update files in a certain order, for example creating an API change in one file and updating other files a few steps later. LLMs also tend to delete and create files. These patterns can be tricky to handle.

Another great UX/DX thing in the PR is that saved files compile to JS. In practice I start vite in a shell and don't think about it anymore. The LLM makes changes and the whole thing just updates accordingly. Very productive with React Fast Refresh.

When the LSP doesn't work, I have the LLM report a problem to another endpoint on the internal HTTP server. That puts a marker in the OTel trace showing something unusual happened. I built a small custom OTel debug tool that ingests all telemetry data, saves it to a local SQLite database, and exposes it in a useful way. Another LLM can then investigate when an llm.report span id is passed. In essence you say: "hey, look at the trace, weird stuff happened here" and it investigates what happened based on the trace.

This is a very effective way to troubleshoot, but I still find genuine gaps on a weekly basis. The LSP has a lot of new scenarios Rewatch never had to account for.

In conclusion, I'm still experimenting and learning a lot with this PR. I can't say where this will end or what I'll do with it afterward. This is also why I haven't circled back to #8241. The test infra has always been a carved-out part of that PR, but overall I'm not sure I want to keep it as a separate thing. Things are still too much in flux right now.

Having a lot of fun though!

nojaf and others added 5 commits March 16, 2026 10:17
…g#8291)

* Fix rewatch panic when package.json has no "name" field

* CHANGELOG
The modulePath for child modules used the child's own name instead of
the parent's name. Since qualifiedName prepends structure.name, this
caused the child name to appear twice (e.g. "Impl.Impl" instead of
"Event-WebAPI.Impl"), making every module with an identically-named
sub-module collide on the UNIQUE constraint in rescript.db.
