[TRTLLM-10308][feat] AutoTuner Cache: reorganize cache file for distributed tuning #10956
base: main
Conversation
…ibuted tuning Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
/bot run --disable-fail-fast --add-multi-gpu-test
📝 Walkthrough

Introduces distributed-aware cache partitioning in AutoTunerProfilingCache, enabling operations to be tracked with a distributed strategy and cache entries to be split between rank-specific (INDEPENDENT) and shared (BROADCAST, MERGE, PARALLEL) sections, with corresponding persistence adaptations to save and load partitioned cache data.
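As a rough sketch of the partitioning idea (the enum, function name, and cache-key layout below are assumptions for illustration, not the PR's actual code):

```python
from enum import Enum, auto

class DistributedStrategy(Enum):  # name assumed; the PR's enum may differ
    INDEPENDENT = auto()  # per-rank results; tactics may differ across ranks
    BROADCAST = auto()    # one rank's result is shared with all ranks
    MERGE = auto()        # results are merged across ranks, then shared
    PARALLEL = auto()     # ranks tune cooperatively; the result is shared

def partition_cache_by_strategy(cache: dict, op_strategy: dict):
    """Split profiling-cache entries into shared vs. rank-specific parts."""
    shared, rank_specific = {}, {}
    for key, value in cache.items():
        custom_op = key[0]  # assumes custom_op leads the cache key
        strategy = op_strategy.get(custom_op, DistributedStrategy.INDEPENDENT)
        if strategy is DistributedStrategy.INDEPENDENT:
            rank_specific[key] = value
        else:  # BROADCAST / MERGE / PARALLEL entries are identical across ranks
            shared[key] = value
    return shared, rank_specific
```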
Sequence Diagram

```mermaid
sequenceDiagram
    participant AutoTuner
    participant ProfilingCache
    participant StrategyMap as Strategy Map
    participant FileSystem as File System
    AutoTuner->>ProfilingCache: choose_one(custom_op, config)
    ProfilingCache->>StrategyMap: update_op_strategy(custom_op, strategy)
    AutoTuner->>ProfilingCache: save_cache(file_path, rank)
    ProfilingCache->>ProfilingCache: _partition_cache_by_strategy()
    Note over ProfilingCache: Separate INDEPENDENT<br/>from BROADCAST/MERGE/PARALLEL
    ProfilingCache->>ProfilingCache: _serialize_cache_data(shared_cache)<br/>_serialize_cache_data(rank_cache)
    ProfilingCache->>FileSystem: Write shared entry<br/>(SHARED_CACHE_KEY)
    ProfilingCache->>FileSystem: Write rank-specific entry<br/>(rank_N)
    AutoTuner->>ProfilingCache: load_cache(file_path, rank)
    ProfilingCache->>FileSystem: Read shared entry
    ProfilingCache->>FileSystem: Read rank-specific entry
    ProfilingCache->>ProfilingCache: _deserialize_cache_data()<br/>(merge shared + rank)
    ProfilingCache-->>AutoTuner: Unified cache restored
```
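One plausible reading of this save/load flow, assuming entries have already been serialized to JSON-compatible dicts (the file format, constant name, and use of JSON here are all assumptions, not the PR's implementation):

```python
import json

SHARED_CACHE_KEY = "shared"  # assumed name; the PR may use a different constant

def save_cache(file_path: str, rank: int, shared: dict, rank_specific: dict):
    """Persist one file holding a shared section plus per-rank sections."""
    try:
        with open(file_path) as f:
            current = json.load(f)
    except FileNotFoundError:
        current = {}
    # Shared entries are merged so ranks saving at different times coexist.
    current.setdefault(SHARED_CACHE_KEY, {}).update(shared)
    current[f"rank_{rank}"] = rank_specific
    with open(file_path, "w") as f:
        json.dump(current, f, indent=2)

def load_cache(file_path: str, rank: int) -> dict:
    """Rebuild a unified cache for this rank: shared entries plus its own."""
    with open(file_path) as f:
        current = json.load(f)
    merged = dict(current.get(SHARED_CACHE_KEY, {}))
    merged.update(current.get(f"rank_{rank}", {}))
    return merged
```

Keeping shared entries under a single key avoids duplicating identical BROADCAST/MERGE/PARALLEL results once per rank, while `rank_N` sections preserve the INDEPENDENT results that legitimately differ.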
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes

🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@tests/unittest/_torch/misc/test_autotuner.py`:
- Line 832: The test contains an assertion using `assert False, f"Rank {rank} got unknown strategy: {strategy}"`, which will be skipped under Python -O; replace that with an explicit exception by raising AssertionError with the same message (e.g., in the test function containing the line, change to `raise AssertionError(f"Rank {rank} got unknown strategy: {strategy}")`) so the failure is always raised regardless of optimization.
- Line 760: The call to tuner.choose_one uses an unnecessary f-string for the
literal "test_distributed_normal_gemm"; update the argument in tuner.choose_one
(custom_op) to use a plain string without the f prefix (i.e., replace
f"test_distributed_normal_gemm" with "test_distributed_normal_gemm") to remove
the extraneous formatter.
- Line 775: The variable selected_runner returned from tuner.choose_one is
unused; rename it to start with an underscore (e.g., _selected_runner or _ ) in
the test call to tuner.choose_one inside test_autotuner.py to mark it as
intentionally unused and avoid linter warnings; ensure any occurrence of
selected_runner in that test is updated to the new underscored name.
🧹 Nitpick comments (1)
tensorrt_llm/_torch/autotuner.py (1)
529-537: Consider consistent merge behavior for rank-specific entries.

The shared cache uses `update()` to merge entries (lines 533-534), but the rank-specific cache uses direct assignment (line 537), which overwrites previous entries. If `save_cache` is called multiple times for the same rank, earlier INDEPENDENT op entries will be lost. If incremental saves are expected, consider merging rank entries too:
♻️ Suggested fix

```diff
 # Save rank-specific cache entries (INDEPENDENT ops)
-current_cache[f"rank_{rank}"] = serialized_rank_cache
+if f"rank_{rank}" not in current_cache:
+    current_cache[f"rank_{rank}"] = {}
+current_cache[f"rank_{rank}"].update(serialized_rank_cache)
```
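A toy demonstration of the dict semantics at stake (the names and values are hypothetical, not the autotuner's real entries):

```python
# Simulated on-disk cache after a first save for rank 0.
current_cache = {"rank_0": {"op_a": "tactic_1"}}

# Direct assignment: a second save for rank 0 drops the earlier op_a entry.
current_cache["rank_0"] = {"op_b": "tactic_2"}
assert "op_a" not in current_cache["rank_0"]

# Merge via update(): both ops survive incremental saves.
current_cache["rank_0"] = {"op_a": "tactic_1"}
current_cache["rank_0"].update({"op_b": "tactic_2"})
assert current_cache["rank_0"] == {"op_a": "tactic_1", "op_b": "tactic_2"}
```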
```python
                    tuning_config=config,
                    inputs=inputs)
    # run another normal gemm with INDEPENDENT strategy
    tuner.choose_one(custom_op=f"test_distributed_normal_gemm",
```
Remove extraneous f prefix from string literal.
This string has no placeholders, so the f prefix is unnecessary.
🔧 Suggested fix

```diff
-    tuner.choose_one(custom_op=f"test_distributed_normal_gemm",
+    tuner.choose_one(custom_op="test_distributed_normal_gemm",
```

🧰 Tools
🪛 Ruff (0.14.13)
760-760: f-string without any placeholders
Remove extraneous f prefix
(F541)
🤖 Prompt for AI Agents
In `@tests/unittest/_torch/misc/test_autotuner.py` at line 760, The call to
tuner.choose_one uses an unnecessary f-string for the literal
"test_distributed_normal_gemm"; update the argument in tuner.choose_one
(custom_op) to use a plain string without the f prefix (i.e., replace
f"test_distributed_normal_gemm" with "test_distributed_normal_gemm") to remove
the extraneous formatter.
```python
    AutoTuner.get().profiling_cache.clear()
    AutoTuner.get().profiling_cache.load_cache(cache_path, rank)

    selected_runner, best_tactic = tuner.choose_one(
```
Prefix unused variable with underscore.
The selected_runner variable is never used. Prefix it with _ to indicate it's intentionally unused.
🔧 Suggested fix

```diff
-    selected_runner, best_tactic = tuner.choose_one(
+    _selected_runner, best_tactic = tuner.choose_one(
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
    _selected_runner, best_tactic = tuner.choose_one(
```
🧰 Tools
🪛 Ruff (0.14.13)
775-775: Unpacked variable selected_runner is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
🤖 Prompt for AI Agents
In `@tests/unittest/_torch/misc/test_autotuner.py` at line 775, The variable
selected_runner returned from tuner.choose_one is unused; rename it to start
with an underscore (e.g., _selected_runner or _ ) in the test call to
tuner.choose_one inside test_autotuner.py to mark it as intentionally unused and
avoid linter warnings; ensure any occurrence of selected_runner in that test is
updated to the new underscored name.
```diff
         assert best_tactic % 2 == 1, f"Rank {rank} with {strategy} should select tactic 1, got {best_tactic}"
     else:
-        assert False, f"Unknown strategy: {strategy}"
+        assert False, f"Rank {rank} got unknown strategy: {strategy}"
```
Replace assert False with raise AssertionError().
`assert False` is removed when Python runs with the -O optimization flag, which would silently pass this check. Use `raise AssertionError()` instead.
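A minimal standalone script (hypothetical, not part of the PR) makes the -O behavior visible:

```python
# demo_optimize.py
# `python demo_optimize.py`   -> prints "assert fired"
# `python -O demo_optimize.py` -> prints "assert was stripped": the assert
# statement is removed entirely, so the check never runs.
try:
    assert False, "stripped under -O"
    print("assert was stripped")
except AssertionError:
    print("assert fired")

# By contrast, `raise AssertionError(...)` is an ordinary statement and
# survives -O, which is why the fix below is always enforced.
```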
🔧 Suggested fix

```diff
-        assert False, f"Rank {rank} got unknown strategy: {strategy}"
+        raise AssertionError(f"Rank {rank} got unknown strategy: {strategy}")
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
        raise AssertionError(f"Rank {rank} got unknown strategy: {strategy}")
```
🧰 Tools
🪛 Ruff (0.14.13)
832-832: Do not assert False (python -O removes these calls), raise AssertionError()
Replace assert False
(B011)
🤖 Prompt for AI Agents
In `@tests/unittest/_torch/misc/test_autotuner.py` at line 832: The test contains an assertion using `assert False, f"Rank {rank} got unknown strategy: {strategy}"` which will be skipped under Python -O; replace that with an explicit exception by raising AssertionError with the same message (e.g., in the test function containing the line, change to `raise AssertionError(f"Rank {rank} got unknown strategy: {strategy}")`) so the failure is always raised regardless of optimization.
PR_Github #33390 [ run ] triggered by Bot. Commit:

PR_Github #33390 [ run ] completed with state
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Force-pushed from 36d96ca to 8c852af
/bot run --disable-fail-fast --add-multi-gpu-test

PR_Github #33433 [ run ] triggered by Bot. Commit:

PR_Github #33433 [ run ] completed with state
Summary by CodeRabbit
Release Notes
New Features
Tests
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
- PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
- PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
- Test cases are provided for new code paths (see test instructions).
- Any new dependencies have been scanned for license and vulnerabilities.
- CODEOWNERS updated if ownership changes.
- Documentation updated as needed.
- Update tava architecture diagram if there is a significant design change in the PR.
- The reviewers assigned automatically/manually are appropriate for the PR.
- Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provides a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail-fast on build/test/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill
`kill`

Kill all running builds associated with the pull request.
skip
`skip --comment COMMENT`

Skip testing for the latest commit on the pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline
`reuse-pipeline`

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.