feat(vllm): add delta-compressed collective refit #2444
Conversation
Adds optional delta-compressed weight transfer for non-colocated vLLM collective refit. This introduces a delta-aware packed weight transfer protocol that can send either full weights or additive deltas, with support for `dense`, `sparse_indices`, and `sparse_bitmask` delta encodings. The trainer source rank keeps a pinned CPU baseline of the last successfully synced HF-format weights, computes deltas against that baseline, and periodically sends full syncs based on `full_sync_interval`. The feature is disabled by default and only applies to non-colocated vLLM refit. Colocated CUDA IPC, vLLM FP8 weights, and ModelOpt quantized vLLM paths are rejected. Signed-off-by: Hollow Man <hollowman@opensuse.org>
Pull request overview
Adds an optional delta-compressed weight transfer protocol for non-colocated vLLM collective refit, enabling the trainer source rank to send full weights or additive deltas (dense / sparse_indices / sparse_bitmask) and apply deltas additively through existing vLLM weight loaders.
Changes:
- Introduces a delta-aware packed weight transfer protocol (`full`/`delta`/`done`) with sparse delta encodings and a trainer-side `DeltaCompressionTracker` baseline.
- Integrates the new transfer path into DTensor v1/v2 and Megatron policy workers via a shared `dispatch_packed_weight_transfer(...)` helper.
- Updates vLLM collective refit to optionally consume the new full/delta protocol and adds unit tests + example configs.
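The three delta encodings trade bandwidth against packing cost: `dense` sends every element, `sparse_indices` sends one index plus one value per changed element, and `sparse_bitmask` sends one bit per element plus the changed values. As an illustrative numpy sketch (hypothetical names, not the PR's actual packing format), choosing the cheapest encoding might look like:

```python
import numpy as np

def encode_delta(new, baseline):
    """Encode new - baseline with the cheapest of three encodings.

    Hypothetical sketch of the dense / sparse_indices / sparse_bitmask
    trade-off; the PR's real wire format is more involved.
    """
    delta = (new - baseline).ravel()
    nz = np.flatnonzero(delta)
    dense_bytes = delta.nbytes
    # sparse_indices: one int32 index + one value per nonzero element
    indices_bytes = nz.size * (4 + delta.itemsize)
    # sparse_bitmask: one bit per element + one value per nonzero element
    bitmask_bytes = (delta.size + 7) // 8 + nz.size * delta.itemsize
    if dense_bytes <= min(indices_bytes, bitmask_bytes):
        return ("dense", delta)
    if indices_bytes <= bitmask_bytes:
        return ("sparse_indices", (nz.astype(np.int32), delta[nz]))
    return ("sparse_bitmask", (np.packbits(delta != 0), delta[nz]))

def decode_delta(kind, payload, shape, dtype):
    """Reconstruct the additive delta tensor from its encoding."""
    n = int(np.prod(shape))
    if kind == "dense":
        return payload.reshape(shape)
    out = np.zeros(n, dtype=dtype)
    if kind == "sparse_indices":
        idx, vals = payload
        out[idx] = vals
    else:  # sparse_bitmask
        bitmask, vals = payload
        mask = np.unpackbits(bitmask, count=n).astype(bool)
        out[mask] = vals
    return out.reshape(shape)
```

The consumer then applies `baseline + delta` additively, which is what lets the decoded delta flow through an ordinary weight loader.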
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `tests/unit/utils/test_weight_transfer.py` | Adds unit coverage for delta tracker behavior, sparse transports, additive load context, and producer/consumer roundtrips. |
| `nemo_rl/utils/weight_transfer.py` | Implements delta tracking, sparse encodings, the packed full/delta broadcast protocol, and the additive load context. |
| `nemo_rl/utils/weight_transfer_types.py` | Defines shared literal types/constants for delta compression and transfer kinds. |
| `nemo_rl/utils/torch_dtypes.py` | Centralizes dtype string→torch.dtype mappings (canonical + aliases). |
| `nemo_rl/models/policy/workers/megatron_policy_worker.py` | Switches collective weight broadcast to the delta-aware dispatcher when enabled. |
| `nemo_rl/models/policy/workers/dtensor_policy_worker.py` | Switches collective weight broadcast to the delta-aware dispatcher when enabled. |
| `nemo_rl/models/policy/workers/dtensor_policy_worker_v2.py` | Switches collective weight broadcast to the delta-aware dispatcher when enabled. |
| `nemo_rl/models/generation/vllm/vllm_worker.py` | Determines whether to use delta transfer and forwards that flag to the vLLM worker extension. |
| `nemo_rl/models/generation/vllm/vllm_worker_async.py` | Forwards the delta-transfer enablement flag in the async `prepare_refit_info` path. |
| `nemo_rl/models/generation/vllm/vllm_backend.py` | Adds a delta-aware collective consumer path and additive-delta loading through existing loaders. |
| `nemo_rl/models/generation/vllm/config.py` | Extends vLLM generation config typing with `delta_compression` settings. |
| `nemo_rl/models/automodel/setup.py` | Reuses the canonical dtype mapping from `torch_dtypes` instead of duplicating it. |
| `examples/configs/grpo_math_1B.yaml` | Documents the new `delta_compression` config block (disabled by default). |
| `examples/configs/distillation_math.yaml` | Documents the new `delta_compression` config block (disabled by default). |
Awesome @HollowMan6! I found that delta weight transfer has its own weight transfer function, which seems duplicated compared with the full weight transfer. It is out of the scope of this PR, but is there anything blocking delta and full weight transfer from sharing the same communication function while keeping their own protocols to pack and unpack the model weights?
Thank you @ZhiyuLi-Nvidia for pointing this out. I just did some refactoring according to your suggestion, and it looks fine.
What does this PR do ?
Adds optional delta-compressed weight transfer for non-colocated vLLM collective refit.
This introduces a delta-aware packed weight transfer protocol that can send either full weights or additive deltas, with support for `dense`, `sparse_indices`, and `sparse_bitmask` delta encodings. The trainer source rank keeps a pinned CPU baseline of the last successfully synced HF-format weights, computes deltas against that baseline, and periodically sends full syncs based on `full_sync_interval`.

The feature is disabled by default and only applies to non-colocated vLLM refit. Colocated CUDA IPC, vLLM FP8 weights, and ModelOpt quantized vLLM paths are rejected.
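The baseline/full-sync bookkeeping described above can be sketched as follows. This is a hypothetical minimal version, not the PR's `DeltaCompressionTracker` (which also manages pinned CPU storage, dtypes, and per-parameter state); names and the interval semantics are illustrative assumptions.

```python
import numpy as np

class BaselineTracker:
    """Hypothetical sketch: keep the last synced weights and decide,
    per refit step, between a full sync and an additive delta."""

    def __init__(self, full_sync_interval: int):
        self.full_sync_interval = full_sync_interval
        self.step = 0
        self.baseline: dict[str, np.ndarray] = {}

    def next_transfer(self, weights: dict[str, np.ndarray]):
        """Return ("full", weights) or ("delta", {name: weights - baseline})."""
        force_full = not self.baseline or self.step % self.full_sync_interval == 0
        self.step += 1
        if force_full:
            # Full sync: reset the baseline to the current weights.
            self.baseline = {k: v.copy() for k, v in weights.items()}
            return "full", weights
        deltas = {k: v - self.baseline[k] for k, v in weights.items()}
        # The consumer applies deltas additively, so the source-side
        # baseline advances to the weights it just described.
        self.baseline = {k: v.copy() for k, v in weights.items()}
        return "delta", deltas
```

With `full_sync_interval=3`, step 0 is a full sync, steps 1 and 2 send deltas, and step 3 falls back to a full sync again, bounding drift from any lossy or failed delta transfer.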
Issues
N/A
Usage
Enable under the vLLM generation config:
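The usage snippet appears to have been lost in extraction. A sketch of the shape such a block might take, based on the settings named in this PR (`delta_compression`, `enabled`, `full_sync_interval`); the exact keys and nesting live in `examples/configs/grpo_math_1B.yaml` in this PR, and anything beyond those three names here is an assumption:

```yaml
# Illustrative only -- consult examples/configs/grpo_math_1B.yaml for the real keys.
generation:
  vllm_cfg:
    delta_compression:
      enabled: false          # disabled by default
      full_sync_interval: 10  # send full weights every N refits (value illustrative)
```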
Before your PR is "Ready for review"
Pre checks:
Additional Information
- `DeltaCompressionTracker` and delta-aware packed transfer utilities.
- Weights are streamed as `full` or `delta` chunks.
- Shared `dispatch_packed_weight_transfer(...)` helper.

E2E test results with: