
Enhance rotary positional embedding version checks#3887

Merged
yaox12 merged 2 commits into NVIDIA:dev from huvunvidia:huvu/updating_rope_fused_thd
Mar 18, 2026

Conversation

@huvunvidia
Contributor

Added support for interleaved fused RoPE for TE >= 2.3.0 and adjusted version checks.
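The change described above can be illustrated with a minimal sketch of a TE version gate. The function names and the simplified version parser below are hypothetical, not the actual Megatron-LM helpers; only the threshold (interleaved fused RoPE requires TE >= 2.3.0) comes from the PR description.

```python
# Hypothetical sketch of gating interleaved fused RoPE on the installed
# Transformer Engine (TE) version. Names are illustrative, not the real
# Megatron-LM APIs.

def _parse_version(version: str) -> tuple:
    """Parse a 'major.minor.patch' string into a comparable tuple.

    Pre-release suffixes (e.g. '2.3.0.dev0') are dropped beyond the
    first three numeric components.
    """
    parts = []
    for piece in version.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def is_te_min_version(installed: str, required: str) -> bool:
    """Return True if the installed TE version meets the required minimum."""
    return _parse_version(installed) >= _parse_version(required)


def supports_interleaved_fused_rope(te_version: str) -> bool:
    # Per the PR description: interleaved fused RoPE is available
    # for TE >= 2.3.0.
    return is_te_min_version(te_version, "2.3.0")
```

For example, `supports_interleaved_fused_rope("2.2.1")` is False while `supports_interleaved_fused_rope("2.3.0")` is True, so callers can fall back to the unfused path on older TE installs.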

What does this PR do ?

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Added support for interleaved fused RoPE for TE >= 2.3.0 and adjusted version checks.
@huvunvidia huvunvidia requested review from a team as code owners March 16, 2026 17:08
@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 16, 2026
@huvunvidia
Contributor Author

/ok to test 4544b85

@yaox12 yaox12 enabled auto-merge March 16, 2026 17:22
@yaox12 yaox12 added this pull request to the merge queue Mar 16, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23159181832

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23161079797

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Mar 16, 2026
@yaox12
Member

yaox12 commented Mar 17, 2026

/ok to test d129643

@yaox12 yaox12 added this pull request to the merge queue Mar 17, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/23220483580

Merged via the queue into NVIDIA:dev with commit 51299c5 Mar 18, 2026
47 of 48 checks passed
@Phlip79
Member

Phlip79 commented Mar 24, 2026

/claude copy

@svcnvidia-nemo-ci

The changes from this PR are already present on main — the patch applied cleanly but produced no diff. No new PR was created.


4 participants