
fix(megatron): destroy NCCL process groups on training exit #8385

Open

inzamam-iqbal wants to merge 1 commit into modelscope:main from inzamam-iqbal:fix/destroy-process-group-on-exit

Conversation

@inzamam-iqbal
Contributor

@inzamam-iqbal inzamam-iqbal commented Mar 20, 2026

Summary

  • Add explicit dist.destroy_process_group() call at the end of MegatronSft.run() to cleanly tear down all NCCL process groups after training completes or fails.
  • MoE models (e.g. GLM-4.5-Air, DeepSeek-MoE) create many expert-parallel NCCL groups that cause watchdog timeouts and process hangs during Python's uncoordinated GC teardown on exit.
  • Since MegatronPretrain and MegatronRLHF inherit run() from MegatronSft, this fix covers all Megatron training pipelines.

Motivation

When training MoE models with Megatron, the process hangs indefinitely after training completes. This happens because NCCL's internal watchdog thread detects that peers have stopped responding while Python's uncoordinated GC tears down the many expert-parallel process groups. The fix ensures that all ranks destroy their process groups in a coordinated manner before exiting.

Fixes #7992
Related to #4643
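
A minimal sketch of the shape of the change (the surrounding `run()` structure and the `_train()` placeholder are illustrative, not the actual contents of `swift/megatron/pipelines/train/sft.py`; only the guarded `destroy_process_group()` call at the end of the `finally` block reflects the fix):

```python
import torch.distributed as dist


class MegatronSft:

    def run(self):
        try:
            # ... existing argument/model/optimizer setup and training loop
            # (illustrative placeholder, not the real method body) ...
            self._train()
        finally:
            # ... existing cleanup steps ...
            # New: tear down the default group and all subgroups (including the
            # many expert-parallel groups created for MoE models) so no rank is
            # left waiting on peers during Python's uncoordinated GC teardown.
            if dist.is_initialized():
                dist.destroy_process_group()
```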

Test plan

  • Pre-commit checks pass (flake8, isort, yapf)
  • Verified fix resolves exit hang with MoE model training on multi-node setup

MoE models (e.g. GLM-4.5-Air) create many expert-parallel NCCL
process groups. Without explicit cleanup, Python's uncoordinated
GC teardown causes NCCL watchdog timeouts and process hangs on exit.

This adds `dist.destroy_process_group()` at the end of the `finally`
block in `MegatronSft.run()`, ensuring all NCCL groups are torn down
cleanly after training completes (or fails). Since `MegatronPretrain`
and `MegatronRLHF` inherit `run()`, this covers all Megatron training
pipelines.
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue causing training processes to hang indefinitely when using Megatron with Mixture-of-Experts (MoE) models. By introducing a coordinated destruction of NCCL process groups upon training exit, it ensures a clean and reliable shutdown, preventing watchdog timeouts and improving the stability of distributed training environments.

Highlights

  • NCCL Process Group Teardown: Implemented an explicit call to `dist.destroy_process_group()` within `MegatronSft.run()` to ensure clean termination of NCCL process groups.
  • MoE Model Hangs Resolved: Addressed an issue where Mixture-of-Experts (MoE) models would hang indefinitely after training completion due to uncoordinated teardown of numerous expert-parallel NCCL groups.
  • Broad Applicability: The fix extends to `MegatronPretrain` and `MegatronRLHF` pipelines, as they inherit the `run()` method from `MegatronSft`, ensuring comprehensive coverage for Megatron training.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a change to swift/megatron/pipelines/train/sft.py to address a reported process hang issue after Megatron training. The change adds a call to torch.distributed.destroy_process_group() within the finally block of the MegatronSft.run method. This call is conditional, executing only if a distributed process group is initialized. A corresponding import for torch.distributed is also added. The intent of this change is to ensure that NCCL process groups are cleanly torn down upon the completion or failure of the training process.
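
For reference, the guarded teardown pattern described above, shown in isolation (a standalone sketch, not the PR diff; the helper name `teardown_process_groups` is hypothetical):

```python
import torch.distributed as dist


def teardown_process_groups() -> None:
    """Hypothetical helper illustrating the guarded teardown pattern."""
    # Only destroy when a process group was actually initialized, so the call
    # is a no-op on single-process (non-distributed) runs.
    if dist.is_initialized():
        # Called with no arguments, destroy_process_group() destroys the
        # default group along with all other groups created from it.
        dist.destroy_process_group()
```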



Development

Successfully merging this pull request may close these issues.

Model training has ended, but there was an error when releasing resources.
