fix: Optimize autograd backward tensor lifetime #149

Open

Chamberlain0w0 wants to merge 1 commit into master from feat/autograd_improved

Conversation

@Chamberlain0w0
Contributor

Optimize the point at which leaf parameter gradients are released during the autograd backward pass, reducing peak GPU memory usage during training.

Design doc: https://gxtctab8no8.feishu.cn/wiki/SJBKwmoCRiRQpAkAygmcXt0UnCd?from=from_copylink
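To make the idea concrete, below is a minimal hypothetical sketch of the leaf-first flush; Tensor, Edge, Node, leaf_grad, and apply() are placeholder names, not this repository's actual types. The point is only that AccumulateGrad (leaf) edges hand off their gradients before the recursion into non-leaf edges begins, so deep recursive frames no longer keep those tensors alive.

#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Placeholder types for the sketch; the real Tensor/Edge/Node differ.
struct Tensor { std::vector<float> data; };

struct Node;  // forward declaration

struct Edge {
    std::shared_ptr<Node> node;       // next function in the graph
    bool is_accumulate_grad = false;  // true if this edge writes a leaf .grad
};

struct Node {
    std::vector<Edge> next_functions_;
    Tensor* leaf_grad = nullptr;  // set on AccumulateGrad-like nodes

    // Placeholder: a real node would compute per-input gradients here.
    std::vector<Tensor> apply(Tensor grad_output) {
        return std::vector<Tensor>(next_functions_.size(), grad_output);
    }
};

void backward(Node& node, Tensor grad_output) {
    std::vector<Tensor> grad_inputs = node.apply(std::move(grad_output));

    // Pass 1: flush AccumulateGrad (leaf) edges so this frame stops holding
    // those gradient tensors before any recursion happens.
    for (std::size_t idx = 0; idx < grad_inputs.size(); ++idx) {
        Edge& edge = node.next_functions_[idx];
        if (edge.is_accumulate_grad && edge.node && edge.node->leaf_grad) {
            *edge.node->leaf_grad = std::move(grad_inputs[idx]);
            grad_inputs[idx] = Tensor{};  // release this frame's reference early
        }
    }

    // Pass 2: only now recurse into non-leaf activation edges; the call
    // stack below this point no longer keeps leaf gradients alive.
    for (std::size_t idx = 0; idx < grad_inputs.size(); ++idx) {
        Edge& edge = node.next_functions_[idx];
        if (!edge.is_accumulate_grad && edge.node) {
            backward(*edge.node, std::move(grad_inputs[idx]));
        }
    }
}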

// layers. PyTorch's non-recursive engine does not have that stack
// retention pattern, so flush AccumulateGrad edges before recursing
// into non-leaf activation edges.
for (size_t idx = 0; idx < grad_inputs.size(); ++idx) {
Contributor

Do we need to iterate twice here? If we eventually want something closer to PyTorch's queue-based autograd engine, should we add a function now that sorts next_functions_, and keep refining its logic later?

Contributor Author

That seems hard to do cleanly right now, mainly because grad_inputs[idx] and next_functions_[idx] are aligned by input index, and the input index is tied to how the module/model is constructed. Changing this would require a chain of changes elsewhere; changing only this spot would break the alignment.

If we did want sorting, the most we could do is maintain an extra list (rather than modifying next_functions_ in place), derive a new order from that rule, and then iterate over it. In any case grad_inputs currently has at most two or three entries, so iterating twice adds no meaningful overhead.
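For illustration, a minimal sketch of that "extra list" idea, assuming a hypothetical leaf_first_order helper and is_leaf_edge predicate (neither exists in this repo): the visit order is reordered leaf-first without touching next_functions_, so grad_inputs[idx] and next_functions_[idx] stay aligned.

#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Build a leaf-first visit order over the input indices instead of
// reordering next_functions_ itself.
template <typename IsLeafEdge>
std::vector<std::size_t> leaf_first_order(std::size_t num_edges,
                                          IsLeafEdge is_leaf_edge) {
    std::vector<std::size_t> order(num_edges);
    std::iota(order.begin(), order.end(), 0);
    // Stable partition keeps the relative order within each group, so the
    // traversal stays deterministic for a given graph.
    std::stable_partition(order.begin(), order.end(), is_leaf_edge);
    return order;
}

A single for (std::size_t idx : order) loop over grad_inputs would then replace the two scans; with at most two or three inputs per node, either form is cheap, which is the point made above.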
