[Optimization] Incremental checkpoint save for dcp on torch 2.7.x (ARM CPU optimization)#1525
Open
tina-wen wants to merge 4 commits into InternLM:main
Conversation
CyCle1024 (Contributor) reviewed on Mar 5, 2026
Pull request overview
This PR targets faster distributed checkpoint (DCP) saves on ARM CPUs (torch 2.7.1) by introducing incremental/cached planning and write-result handling, plus an optional monkeypatch to reduce finish-time overhead.
Changes:
- Add a patch_for_dcp_finish config flag to optionally monkeypatch torch DCP internals.
- Switch TrainEngine.save_dcp() to use storage_writer + planner on torch 2.7.x via the new XtunnerWriter and XtunerCacheSavePlanner.
- Introduce new engine utilities for caching save plans/metadata and write results.
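The cached-planning idea behind these changes can be sketched in plain Python. This is an illustration of the pattern only: the class and method names below are hypothetical stand-ins, not the PR's actual XtunerCacheSavePlanner (which subclasses torch's DefaultSavePlanner).

```python
# Illustrative sketch of incremental save planning: compute the (expensive)
# global plan once, then reuse it on later saves while the state dict's
# keys and shapes are unchanged. Names here are hypothetical.

class CachingSavePlanner:
    def __init__(self):
        self._cached_plan = None
        self._cached_key = None

    def _plan_key(self, state_dict):
        # Keys and tensor shapes act as a cache key: if they change
        # between saves, we must replan from scratch.
        return tuple(sorted((k, getattr(v, "shape", None))
                            for k, v in state_dict.items()))

    def create_global_plan(self, state_dict):
        key = self._plan_key(state_dict)
        if self._cached_plan is not None and key == self._cached_key:
            # Incremental save: skip the expensive replanning step.
            return self._cached_plan
        # Stand-in for DCP's real (and costly) global planning work.
        plan = [f"write:{k}" for k in sorted(state_dict)]
        self._cached_plan, self._cached_key = plan, key
        return plan
```

During steady-state training the state dict's structure is stable across checkpoints, so every save after the first hits the cache; this is the property the PR exploits.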
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| xtuner/v1/train/trainer.py | Adds patch_for_dcp_finish config/plumbing to enable a DCP finish monkeypatch. |
| xtuner/v1/patch/torch_dcp_planner.py | Adds a patched _save_state_dict implementation and a function to apply the monkeypatch. |
| xtuner/v1/patch/__init__.py | Exposes the new patch function from the patch package. |
| xtuner/v1/engine/xtuner_storage.py | New FileSystemWriter subclass that can cache write results to reduce repeated overhead. |
| xtuner/v1/engine/xtuner_cache_planner.py | New DefaultSavePlanner subclass that caches global plan/metadata to support incremental saves. |
| xtuner/v1/engine/train_engine.py | Uses the new writer/planner for torch 2.7.x DCP saves. |
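The patch_for_dcp_finish flag described above follows a standard monkeypatching pattern: conditionally replace a module-level function with an optimized wrapper at runtime, leaving library sources untouched. A minimal self-contained illustration (the module and function here are stand-ins, not torch's internal _save_state_dict):

```python
import types

# Stand-in "library" module; in the PR this role is played by torch's
# internal DCP save path.
library = types.ModuleType("library")

def _slow_finish(results):
    # Pretend this recomputes checkpoint metadata from scratch every save.
    return {"metadata": sorted(results), "recomputed": True}

library.finish = _slow_finish

def apply_patch(enabled: bool):
    """Conditionally monkeypatch library.finish, the way a config flag
    like patch_for_dcp_finish would gate the torch DCP patch."""
    if not enabled:
        return
    original = library.finish

    def _patched_finish(results):
        # Sketch: reuse cached results instead of recomputing (here we
        # just mark the output to show the patched path ran).
        out = original(results)
        out["recomputed"] = False
        return out

    library.finish = _patched_finish
```

Keeping the patch behind a flag means users on other torch versions, or anyone hitting a regression, can disable it without code changes.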
CyCle1024 approved these changes on Mar 9, 2026
Force-pushed from 42b4328 to 26e7aa7.
This reverts commit f29d8d9.
…s parameter and fix mypy errors
Description
This PR optimizes dcp.save performance on ARM CPUs by implementing incremental metadata saving for torch 2.7.1.

Implementation
- Add a patch_for_dcp_finish config flag
- Use a custom storage_writer / planner for dcp.save

Performance
Checkpoint saving performance improved by up to 85%.
Compatibility
✅ Works with existing ckpt_save
✅ No precision issues on recovery
✅ No PyTorch/PTA source changes
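Since the new writer/planner path applies only on torch 2.7.x, the engine needs a version gate. The dispatch can be sketched roughly as below (pure-Python version parsing; XtunnerWriter and XtunerCacheSavePlanner appear by name only and this is not the PR's actual TrainEngine.save_dcp() code):

```python
def use_cached_dcp_path(torch_version: str) -> bool:
    """Return True when the incremental writer/planner path should be used.
    The PR gates on torch 2.7.x; we compare only the major.minor prefix."""
    parts = torch_version.split("+")[0].split(".")  # drop local tag, e.g. "+cpu"
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) == (2, 7)

def save_dcp(state_dict, torch_version: str):
    # Hypothetical dispatch mirroring the idea in TrainEngine.save_dcp():
    # pick the caching writer/planner only on torch 2.7.x, otherwise fall
    # back to torch DCP defaults.
    if use_cached_dcp_path(torch_version):
        return "dcp.save with XtunnerWriter + XtunerCacheSavePlanner"
    return "dcp.save with default storage writer and planner"
```

Gating on the exact minor version is conservative: torch's DCP internals are private and change between releases, so the cached path should not silently apply to untested versions.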