
Add RL token throughput and packing metrics#3877

Merged
tdene merged 11 commits into NVIDIA:main from tdene:tde/observability_metrics
Apr 22, 2026

Conversation

Contributor

@tdene tdene commented Mar 15, 2026

What does this PR do?

Adds RL token-throughput (toks/s, toks/s/gpu) and sequence-packing observability metrics for RL training runs.

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Co-authored-by: Jorge Albericio <jalbericiola@nvidia.com>

copy-pr-bot Bot commented Mar 15, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@tdene tdene marked this pull request as ready for review March 15, 2026 22:31
@tdene tdene requested a review from a team as a code owner March 15, 2026 22:31
@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Mar 15, 2026
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team March 15, 2026 22:31
Comment thread megatron/rl/sequence_packing_utils.py Outdated
Returns:
Total compute tokens (num_bins * bin_size) on this rank.
"""
if packing_context is None or packing_context.packed_trajs is None:
Contributor

Your typing says that PackingContext cannot be None

Contributor Author

Addressed
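For context, the helper under discussion reduces to something like the following sketch. The class and field names are assumptions based on the quoted docstring, not the actual Megatron-LM implementation; the `Optional` annotation reflects the `None` check that the reviewer flagged against the original typing.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PackingContext:
    """Hypothetical stand-in for the real packing context."""
    num_bins: int
    bin_size: int
    packed_trajs: Optional[list] = None


def get_packing_compute_tokens(packing_context: Optional[PackingContext]) -> int:
    """Total compute tokens (num_bins * bin_size) on this rank.

    Returns 0 when no packed trajectories are available yet, so callers
    can safely log the metric before the first packing pass.
    """
    if packing_context is None or packing_context.packed_trajs is None:
        return 0
    return packing_context.num_bins * packing_context.bin_size
```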

Comment thread megatron/rl/sequence_packing_utils.py
Contributor Author

tdene commented Mar 16, 2026

/claude review

Comment thread megatron/training/training.py Outdated

# Add tokens/sec to log string
log_string += f' toks/s: {tokens_per_sec:.0f} |'
log_string += f' toks/s/gpu: {tokens_per_sec_per_gpu:.0f} |'
Contributor

compute_tokens is assigned here but never used. Was this intended for something (e.g., a log line or the packing_efficiency calculation)? If not, it should be removed to avoid confusion.

Suggested change
log_string += f' toks/s/gpu: {tokens_per_sec_per_gpu:.0f} |'
actual_tokens = rl_utils.get_packing_actual_tokens(runtime_state.packing_context)

Contributor Author

Addressed
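The relationship between the two token counts in this thread can be sketched as follows (hypothetical signature: in the PR, `get_packing_actual_tokens` counts real trajectory tokens while the compute-token total is the padded bin capacity, `num_bins * bin_size`):

```python
def get_packing_efficiency(actual_tokens: int, compute_tokens: int) -> float:
    """Fraction of computed tokens that are real (non-padding) tokens.

    A value of 1.0 means the bins were perfectly packed; lower values
    mean FLOPs were spent on padding.
    """
    if compute_tokens == 0:
        return 0.0
    return actual_tokens / compute_tokens
```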

Comment thread megatron/training/training.py Outdated
packing_efficiency = rl_utils.get_packing_efficiency(runtime_state.packing_context)

# Add tokens/sec to log string
log_string += f' toks/s: {tokens_per_sec:.0f} |'
Contributor

Is this going to add this metric to the log for all training? I'm not sure we use this metric a lot in pretraining, so I'm nervous it might just be adding noise to the log.

Contributor Author

I've moved all the extra metrics in training.py into a single if-block guarded by args.perform_rl_step; does that look good?

Contributor Author

Removed from training.py altogether now.

@tdene tdene force-pushed the tde/observability_metrics branch from b215575 to 2b2a0d3 Compare March 19, 2026 21:21
Comment thread megatron/rl/rl_utils.py
self.sequences_this_iteration_on_rank = 0
self.latest_batch_num_sequences = 0
# Derived throughput metrics (set by training_log, read by RLProfiler)
self.tokens_per_sec = None
Contributor

Please add the field descriptions here.

Contributor Author

Addressed.

Comment thread megatron/rl/rl_utils.py Outdated
self.tokens_per_sec = None
self.tokens_per_sec_per_gpu = None
self.actual_tokens_per_sec = None
self.actual_tokens_per_sec_per_gpu = None
Contributor

Do we actually need _per_gpu variables here? How about we store tokens/actual_tokens and world_size and have a method that does the actual division?

Contributor Author

Addressed.
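The refactor suggested here (store the aggregate rates plus `world_size`, derive per-GPU numbers on demand instead of keeping separate `*_per_gpu` fields) might look roughly like this sketch; the class name and method are assumptions, not the merged code:

```python
from typing import Optional


class RLThroughputState:
    """Derived throughput metrics: set by training_log, read by a profiler."""

    def __init__(self, world_size: int):
        self.world_size = world_size
        # Aggregate compute-token rate across all ranks (None until first log).
        self.tokens_per_sec: Optional[float] = None
        # Aggregate non-padding token rate across all ranks.
        self.actual_tokens_per_sec: Optional[float] = None

    def per_gpu(self, rate: Optional[float]) -> Optional[float]:
        """Divide an aggregate rate by world size; None passes through."""
        if rate is None:
            return None
        return rate / self.world_size
```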

log_string += ' number of nan iterations: {:3d} |'.format(total_loss_dict[nan_iters_key])

# RL token throughput metrics.
if args.perform_rl_step:
Contributor

Should we move this to a function in the RL folder? training.py is becoming unreadable.

Contributor Author

Done!
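Factoring the logging out of training.py, as agreed above, could look roughly like this (hypothetical function name, modeled on the f-string formatting in the quoted diff):

```python
def append_rl_throughput_metrics(log_string: str,
                                 tokens_per_sec: float,
                                 tokens_per_sec_per_gpu: float) -> str:
    """Append RL token-throughput fields to the per-iteration log line."""
    log_string += f' toks/s: {tokens_per_sec:.0f} |'
    log_string += f' toks/s/gpu: {tokens_per_sec_per_gpu:.0f} |'
    return log_string
```

With this shape, training.py only calls one function when `args.perform_rl_step` is set, keeping the pretraining log path untouched.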

Contributor

@cuichenx cuichenx left a comment

approved on behalf of training-nemo

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Approved All necessary approvals have been made label Apr 18, 2026
@yaox12 yaox12 added this pull request to the merge queue Apr 20, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24648555700

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24649423843

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Apr 20, 2026
Member

yaox12 commented Apr 20, 2026

A functional test failed.

@tdene tdene enabled auto-merge April 20, 2026 17:53
@tdene tdene added this pull request to the merge queue Apr 20, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24688437315

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Apr 20, 2026
@tdene tdene added this pull request to the merge queue Apr 22, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24757321492

@tdene tdene removed this pull request from the merge queue due to a manual request Apr 22, 2026
@tdene tdene force-pushed the tde/observability_metrics branch from 24f0572 to 907a107 Compare April 22, 2026 10:10
@tdene tdene enabled auto-merge April 22, 2026 10:16
@tdene tdene added this pull request to the merge queue Apr 22, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24775413570

Merged via the queue into NVIDIA:main with commit 7597a0d Apr 22, 2026
68 checks passed
@tdene tdene deleted the tde/observability_metrics branch April 22, 2026 11:52

Labels

  • Approved: All necessary approvals have been made
  • complexity: low


10 participants