
redefine tmp_workspace using full tensor in append_attn #6999

Open
lizhenyun01 wants to merge 8 commits into PaddlePaddle:develop from lizhenyun01:full_buffer

Conversation

@lizhenyun01
Collaborator

Motivation

Change the tmp_workspace, tmp_m, and tmp_d buffers used by the append_attn operator in the split_kv path so that they are passed in by the backend and shared across layers, instead of being allocated per layer.
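The sharing pattern can be sketched as follows. This is a minimal, hypothetical Python illustration of "allocate once in the backend, reuse across layers"; `SharedAttnWorkspace` and its method names are not the actual FastDeploy API.

```python
class SharedAttnWorkspace:
    """Hypothetical holder: allocate split-kv scratch buffers once,
    then hand the same buffers to every attention layer."""

    def __init__(self):
        self._buffers = {}

    def get(self, name, size):
        # Reuse the existing buffer if it is large enough; otherwise grow it.
        buf = self._buffers.get(name)
        if buf is None or len(buf) < size:
            buf = bytearray(size)
            self._buffers[name] = buf
        return buf


workspace = SharedAttnWorkspace()

# Each layer receives the same underlying tmp_workspace / tmp_m / tmp_d
# buffers instead of allocating its own on every call.
tmp_ws_layer0 = workspace.get("tmp_workspace", 1024)
tmp_ws_layer1 = workspace.get("tmp_workspace", 1024)
assert tmp_ws_layer0 is tmp_ws_layer1
```

The point of the change is that the buffers' lifetime moves out of the operator and into the backend, so repeated per-layer allocations are avoided.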

Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag to the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but their meaning must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests, or explain in this PR why none are needed.
  • Provide accuracy results.
  • If this PR targets a release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.


paddle-bot bot commented Mar 24, 2026

Thanks for your contribution!


@fastdeploy-bot fastdeploy-bot left a comment


AI CI Agent | skill: pr_review_agent

Review completed (parsing failed)

Jiang-Jia-Jun added a commit that referenced this pull request Mar 25, 2026
…end_attn(#6999) (#7002)

* [Cherry-Pick][BugFix][APIServer] Enable control socket disable option in API server (#6551) (#6554)

* Initial plan

* [BugFix][APIServer] Add control_socket_disable to gunicorn options (cherry-pick of #6551)

Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>

* [Cherry-Pick][BugFix] Fix AttributeError in recycle_gpu_blocks when prefix_tree_status_signal not initialized(#6531) (#6559)

* fix mtp acceptance rate decline

* [BugFix] Fix AttributeError in recycle_gpu_blocks when prefix_tree_status_signal not initialized

- Add hasattr check before accessing prefix_tree_status_signal
- The signal is only initialized in launch_cache_messager, not in __init__
- Fixes CI test failure in test_prefix_cache_manager.py

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
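The lazy-initialization guard this fix describes follows a common pattern. Below is a minimal sketch; `PrefixCacheManager` here is a hypothetical stand-in mirroring the names in the commit message, not the real class.

```python
class PrefixCacheManager:
    """Hypothetical stand-in: prefix_tree_status_signal is created
    lazily in launch_cache_messager, not in __init__."""

    def launch_cache_messager(self):
        self.prefix_tree_status_signal = "NORMAL"

    def recycle_gpu_blocks(self):
        # hasattr guard: the signal only exists after launch_cache_messager,
        # so accessing it unconditionally raises AttributeError.
        if hasattr(self, "prefix_tree_status_signal"):
            return self.prefix_tree_status_signal
        return None


mgr = PrefixCacheManager()
assert mgr.recycle_gpu_blocks() is None  # safe before initialization
mgr.launch_cache_messager()
assert mgr.recycle_gpu_blocks() == "NORMAL"
```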

* [BugFix] Reset prefix cache when model weights are updating

- Call self.reset() before setting status to NORMAL in UPDATING state
- Ensure cache consistency when model weights change
- Consistent with CLEARING state handling

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* [RL] Clear Requests status of R3 (#6569)

* [Cherry-Pick] [BugFix] fix prefix tree updating timeout (#6615)(#6617)

* [Cherry-Pick][BugFix] fix mtp_config in rl (#6595)(#6597)

Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>

* [BugFix][MTP] Skip empty_input_forward during dummy run (#6655)

When `is_dummy_run=True`, calling `empty_input_forward` can cause
unexpected behavior. Add `and not is_dummy_run` guard for both
`_propose_cuda` and `_propose_xpu` paths.

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
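The guard described above reduces to gating one call on an extra flag. A minimal hypothetical sketch (the function and return values are illustrative, not the actual `_propose_cuda`/`_propose_xpu` code):

```python
def maybe_empty_input_forward(batch_size, is_dummy_run):
    """Hypothetical guard mirroring the commit: run the empty-input
    forward only for a real (non-dummy) empty batch."""
    if batch_size == 0 and not is_dummy_run:
        return "empty_input_forward"
    return "skip"


# During a dummy run the empty-input forward is skipped.
assert maybe_empty_input_forward(0, is_dummy_run=True) == "skip"
assert maybe_empty_input_forward(0, is_dummy_run=False) == "empty_input_forward"
```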

* redefine tmp_workspace using full tensor in append_attn

* fix test

* fix pre-commit

---------

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Jiang-Jia-Jun <163579578+Jiang-Jia-Jun@users.noreply.github.com>
Co-authored-by: kevin <chengyf112@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: RAM <gstian5555@outlook.com>
Co-authored-by: Yonghua Li <39643373+liyonghua0910@users.noreply.github.com>
Co-authored-by: GoldPancake <56388518+Deleter-D@users.noreply.github.com>
Co-authored-by: YuBaoku <49938469+EmmonsCurse@users.noreply.github.com>
Co-authored-by: Yuanle Liu <yuanlehome@163.com>
@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (develop@6cff780). Learn more about missing BASE report.

Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6999   +/-   ##
==========================================
  Coverage           ?   73.96%           
==========================================
  Files              ?      399           
  Lines              ?    56060           
  Branches           ?     8850           
==========================================
  Hits               ?    41467           
  Misses             ?    11646           
  Partials           ?     2947           
Flag Coverage Δ
GPU 73.96% <100.00%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
