fix(megatron): destroy NCCL process groups on training exit #8385
inzamam-iqbal wants to merge 1 commit into modelscope:main
Conversation
MoE models (e.g. GLM-4.5-Air) create many expert-parallel NCCL process groups. Without explicit cleanup, Python's uncoordinated GC teardown causes NCCL watchdog timeouts and process hangs on exit. This adds `dist.destroy_process_group()` at the end of the `finally` block in `MegatronSft.run()`, ensuring all NCCL groups are torn down cleanly after training completes (or fails). Since `MegatronPretrain` and `MegatronRLHF` inherit `run()`, this covers all Megatron training pipelines.
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses a critical issue that causes training processes to hang indefinitely when using Megatron with Mixture-of-Experts (MoE) models. By destroying NCCL process groups in a coordinated way on training exit, it ensures a clean and reliable shutdown, preventing watchdog timeouts and improving the stability of distributed training environments.
Code Review
This pull request changes swift/megatron/pipelines/train/sft.py to address a reported process hang after Megatron training. It adds a call to `torch.distributed.destroy_process_group()` in the `finally` block of `MegatronSft.run`, guarded so that it executes only when a distributed process group is initialized, and adds the corresponding `torch.distributed` import. This ensures that NCCL process groups are cleanly torn down when training completes or fails.
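For reference, a minimal sketch of what the described change looks like. The class and training body below are placeholders, not the actual ms-swift source; only the `finally`-block teardown reflects the patch as described above.

```python
import torch.distributed as dist


class MegatronSft:

    def run(self):
        try:
            # ... existing training logic (placeholder) ...
            pass
        finally:
            # ... existing cleanup (placeholder) ...
            # With no group argument, destroy_process_group() tears down the
            # default group and every subgroup (including the expert-parallel
            # groups an MoE model creates), so shutdown does not depend on
            # the order Python GC happens to collect them in.
            if dist.is_initialized():
                dist.destroy_process_group()
```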
Summary
- Added a `dist.destroy_process_group()` call at the end of `MegatronSft.run()` to cleanly tear down all NCCL process groups after training completes or fails.
- `MegatronPretrain` and `MegatronRLHF` inherit `run()` from `MegatronSft`, so this fix covers all Megatron training pipelines.
Motivation
When training MoE models with Megatron, the process hangs indefinitely after training completes. This happens because Python's GC tears down the many expert-parallel process groups in an uncoordinated order, so NCCL's internal watchdog thread detects peers that have stopped responding. The fix ensures all ranks destroy their process groups in a coordinated fashion before exiting.
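As a self-contained illustration of the failure mode and the teardown pattern (this script and its launch command are hypothetical examples, not code from this PR):

```python
# Run with e.g.: torchrun --nproc_per_node=2 teardown_demo.py  (hypothetical name)
import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend='nccl')
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    # Stand-in for the many expert-parallel subgroups an MoE model creates;
    # unused here, it exists only so there is a subgroup to clean up.
    ep_group = dist.new_group(ranks=list(range(dist.get_world_size())))

    # ... training would happen here ...

    # Without the explicit destroy below, each rank's NCCL communicators are
    # released by Python GC in an arbitrary order at interpreter exit; the
    # NCCL watchdog can then see unresponsive peers and hang the process.
    # Destroying all groups while every rank is still alive avoids that.
    dist.barrier()
    dist.destroy_process_group()


if __name__ == '__main__':
    main()
```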
Fixes #7992
Related to #4643
Test plan