
Problem with a PyTorch in-place operation: training fails #5

@zhougoodman

Description


There seems to be an issue with this code in the forward function of sam.py:

    if self.linear:
        output = self.input_layer(data, noise, adj_matrix * self.skeleton)

It appears to modify the graph, which breaks the autograd backward pass. But this is just my guess.

I think the PyTorch version may also be a problem. Could you tell me which version you used? (I could not find it anywhere in the code.)
Or maybe something is wrong elsewhere. Could you take a look at this problem? I would be grateful, thanks!
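
To illustrate the class of error I mean (this is just a standalone sketch, not code from the SAM repo): when autograd saves a tensor for the backward pass and that tensor is later modified in place, its version counter changes and backward() raises exactly this kind of RuntimeError:

    import torch

    # Standalone illustration, not from the SAM repo: sigmoid saves its
    # *output* for the backward pass (grad = grad_out * y * (1 - y)), so an
    # in-place edit of that output invalidates the saved tensor.
    x = torch.randn(4, requires_grad=True)
    y = torch.sigmoid(x)
    y.add_(1.0)            # in-place op: y's version counter goes 0 -> 1
    y.sum().backward()     # RuntimeError: ... modified by an inplace operation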

    0%| | 0/11000 [00:00<?, ?it/s, disc=0.43, gen=-.373, regul_loss=0.719, tot=-2.64]
    Traceback (most recent call last):
      File "D:\PyCharm 2021.3.2\plugins\python\helpers\pydev\pydevd.py", line 1483, in _exec
        pydev_imports.execfile(file, globals, locals)  # execute the script
      File "D:\PyCharm 2021.3.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
        exec(compile(contents+"\n", file, 'exec'), glob, loc)
      File "E:/study/pycharm_project/dzx_policy/SAM-master/est_sam.py", line 19, in <module>
        m.predict(data, nruns=1, )
      File "E:\study\pycharm_project\dzx_policy\SAM-master\sam\sam.py", line 352, in predict
        device='cuda:0' if gpus else 'cpu')
      File "E:\study\pycharm_project\dzx_policy\SAM-master\sam\sam.py", line 232, in run_SAM
        loss.backward(retain_graph=True)
      File "E:\study\pycharm_project\mental_bert\bert\lib\site-packages\torch\tensor.py", line 245, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
      File "E:\study\pycharm_project\mental_bert\bert\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
        allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [200, 1]], which is output 0 of TBackward, is at version 3; expected version 2 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
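
As the hint at the end of the traceback suggests, enabling anomaly detection should make the error also print the forward-pass traceback of the operation whose saved tensor was modified, which would narrow this down (a sketch, using the same m and data as in my est_sam.py above):

    import torch

    # With anomaly detection on, the RuntimeError above is accompanied by a
    # second traceback pointing at the forward op that produced the tensor
    # later modified in place.
    torch.autograd.set_detect_anomaly(True)
    m.predict(data, nruns=1)   # same call as in est_sam.py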
