Issue with calculating gradients during back propagation #679
Unanswered
RamakrishnaChaitanya asked this question in Q&A
Replies: 0 comments
Hi, I'm encountering the following error when calling loss.backward() in the train function:
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
It seems like the issue might be related to a missing optimizer.zero_grad() call before the backward pass. Could someone please help me understand the root cause and suggest how to resolve this error?
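For context, my train function follows the usual PyTorch pattern; below is a minimal sketch with placeholder names (model, optimizer, criterion, and train_step are stand-ins for illustration, not my actual code):

```python
import torch

def train_step(model, optimizer, criterion, inputs, targets):
    # Standard training step: clear old gradients, forward, backward, update.
    optimizer.zero_grad()               # reset gradients accumulated so far
    outputs = model(inputs)             # forward pass
    loss = criterion(outputs, targets)  # compute the loss
    loss.backward()                     # this is where the RuntimeError is raised
    optimizer.step()                    # update parameters
    return loss.item()
```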
I'm using the decoder like this:
However, replacing `state = torch.stack([h, c])` with `state = torch.stack([h.detach(), c.detach()])` resolved the issue. But I'm not sure whether this breaks anything!
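To make the change concrete, here is a simplified sketch of the decoder step I have in mind (the class name, shapes, and LSTMCell usage are assumptions for illustration, not my exact model), with the detach applied to the recurrent state:

```python
import torch
import torch.nn as nn

class DecoderRNN(nn.Module):
    """Simplified decoder sketch (assumed structure, not the actual model)."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lstm_cell = nn.LSTMCell(input_size, hidden_size)

    def forward(self, x, state):
        # state is a stacked tensor holding (h, c), each of shape (batch, hidden_size)
        h, c = state[0], state[1]
        h, c = self.lstm_cell(x, (h, c))
        # Detaching h and c cuts the autograd graph at this point, so a later
        # backward() cannot walk back into a graph whose saved tensors were
        # already freed by a previous backward() call.
        state = torch.stack([h.detach(), c.detach()])
        return h, state
```

My understanding is that detach() stops gradients from flowing back through earlier steps, so I'm worried this silently truncates backpropagation through the recurrent state rather than addressing the underlying graph reuse, which is why I'd like to confirm the root cause.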