
fix: avoid variable shadowing of timestep t in compute_loss #1826

Open

Mr-Neutr0n wants to merge 1 commit into FunAudioLLM:main from Mr-Neutr0n:fix/compute-loss-variable-shadowing

Conversation

@Mr-Neutr0n

Summary

In cosyvoice/flow/flow_matching.py, the compute_loss() method has a variable shadowing issue where t (the sequence length integer from shape unpacking) is immediately overwritten by t (the random timestep tensor):

b, _, t = mu.shape          # t = sequence length (integer)
t = torch.rand([b, 1, 1], ...)  # t = random timestep (tensor) — immediately overwrites the above

The integer t from the shape unpacking is never used before being reassigned. While this doesn't cause a runtime error in the current code, it is misleading and fragile — a future developer might assume t still holds the sequence length after the unpacking line, leading to subtle bugs.
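As a purely hypothetical illustration (none of this appears in the repository), imagine a later edit that tries to normalize by the sequence length; it would silently broadcast against the random timestep tensor instead:

import torch

mu = torch.randn(2, 80, 50)        # hypothetical stand-in for the real mel input
b, _, t = mu.shape                 # t = sequence length (50)
t = torch.rand([b, 1, 1])          # t silently becomes the random timestep tensor
loss = torch.tensor(1.0)
per_frame = loss / t               # intended "loss / sequence length", but this divides
                                   # by the [b, 1, 1] timestep tensor without any error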

Fix

Replace b, _, t = mu.shape with b, _, _ = mu.shape to make it explicit that only the batch dimension b is needed from the shape, and that the subsequent t is solely the random timestep tensor.
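Sketch of the change (keeping the elided torch.rand arguments from the snippet above):

b, _, _ = mu.shape              # only the batch size b is taken from the shape
t = torch.rand([b, 1, 1], ...)  # t now unambiguously means the random timestep tensor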

Test plan

  • Verified that t (sequence length) is not referenced anywhere between the unpacking and the reassignment (a mechanical check is sketched after this list)
  • Confirmed the change is semantically equivalent — no behavioral change, only improved clarity
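The first item could also be checked mechanically; a rough sketch, assuming the class in flow_matching.py that defines compute_loss() is named ConditionalCFM:

import inspect, re
from cosyvoice.flow.flow_matching import ConditionalCFM  # class name assumed

lines = inspect.getsource(ConditionalCFM.compute_loss).splitlines()
unpack = next(i for i, l in enumerate(lines) if "mu.shape" in l)
rand = next(i for i, l in enumerate(lines) if "torch.rand" in l)
# No standalone reference to `t` should appear between the unpacking and the reassignment.
assert not any(re.search(r"\bt\b", l) for l in lines[unpack + 1 : rand])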

