
Conversation

yiqingy0 (Collaborator) commented Jan 22, 2026

Summary by CodeRabbit

  • Bug Fixes

    • Improved error handling in test scripts—placeholder processing now emits warnings instead of errors when placeholders are missing.
  • Refactor

    • Restructured test rerun workflow with modular helper functions for improved script execution, result tracking, and report generation.

✏️ Tip: You can customize this high-level summary in your review settings.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
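
As an illustration, a typical invocation that runs a single test stage with fail-fast disabled could look like the following (the stage name is an example value):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast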

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause the top of tree to break.

Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
coderabbitai bot (Contributor) commented Jan 22, 2026

📝 Walkthrough

The changes refactor test orchestration and rerun handling across Jenkins and supporting scripts. New Slurm job submission and rerun-specific helper methods are introduced in L0_Test.groovy. Error handling in slurm_run.sh is relaxed for missing placeholder values. The test_rerun.py script adds test_type parameter support for improved test categorization during reruns.

Changes

Slurm Job and Rerun Orchestration (jenkins/L0_Test.groovy)
Six new helper methods introduced: submitSlurmJobAndGetId() for job submission, generateRerunTestList() and updatePytestCommandForRerun() for rerun preparation, generateAndUploadRerunReport() for report generation, updateResultsXmlWithRerunResults() for result merging, and rerunFailedTestsForSlurm() for end-to-end rerun execution. Inline submission and rerun logic replaced with modular helpers. The results.xml download is now skipped if the file is already present.

Shell Script Error Handling (jenkins/scripts/slurm_run.sh)
In set_value_in_command(), missing placeholder detection changed from an error exit to a warning message with a command echo and exit code 0, treating missing placeholders as a no-op pass-through.

Test Rerun Classification (jenkins/scripts/test_rerun.py)
generate_rerun_tests_list() now accepts a test_type parameter, and a --test-type CLI argument was added. Early-exit logic introduced for non-rerunnable and edge-case test counts. Test categorization (rerun_2, rerun_1, rerun_0) now incorporates test_type in user-facing messages; a hedged sketch of this categorization follows below.
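
To make the classification concrete, below is a minimal sketch of how failed and errored cases from a JUnit-style results.xml could be split into rerun lists keyed by test_type. The splitting rule, file layout, and helper body are illustrative assumptions, not the actual test_rerun.py implementation; only the function name, the test_type parameter, and the rerun_N.txt naming come from the summary above.

# Hypothetical sketch; not the actual jenkins/scripts/test_rerun.py code.
# Assumption: plain failures get two rerun attempts, errored cases get one.
import xml.etree.ElementTree as ET
from pathlib import Path

def generate_rerun_tests_list(results_xml: str, output_dir: str, test_type: str = "regular") -> None:
    """Split failed/errored test cases from a JUnit results.xml into rerun lists."""
    root = ET.parse(results_xml).getroot()
    rerun_2, rerun_1 = [], []
    for case in root.iter("testcase"):
        test_id = f"{case.get('classname')}::{case.get('name')}"
        if case.find("failure") is not None:
            rerun_2.append(test_id)   # plain failures: two rerun attempts
        elif case.find("error") is not None:
            rerun_1.append(test_id)   # errored cases: one rerun attempt
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for times, tests in ((1, rerun_1), (2, rerun_2)):
        if tests:
            (out / f"rerun_{times}.txt").write_text("\n".join(tests) + "\n")
            print(f"{len(tests)} {test_type} test(s) queued for rerun_{times}")

if __name__ == "__main__":
    # Example invocation mirroring the documented --test-type argument.
    generate_rerun_tests_list("results.xml", "rerun", test_type="regular")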

Sequence Diagram(s)

sequenceDiagram
    actor Jenkins as Jenkins Pipeline
    participant Helper as Rerun Helpers
    participant Slurm as Slurm Scheduler
    participant FS as File System
    participant Python as test_rerun.py

    Jenkins->>Helper: rerunFailedTestsForSlurm(...)
    activate Helper
    
    Helper->>Slurm: submitSlurmJobAndGetId(scriptSubmit)
    activate Slurm
    Slurm-->>Helper: jobId
    deactivate Slurm
    
    Helper->>FS: Read failed test results
    FS-->>Helper: results.xml
    
    Helper->>Python: generateRerunTestList(inputFile, test_type)
    activate Python
    Python->>FS: Parse XML, categorize tests
    Python-->>Helper: rerun_test_list
    deactivate Python
    
    Helper->>Helper: updatePytestCommandForRerun(command, testList)
    
    Helper->>Slurm: Execute updated pytest command
    activate Slurm
    Slurm-->>Helper: new results
    deactivate Slurm
    
    Helper->>FS: Upload rerun report
    
    Helper->>Python: updateResultsXmlWithRerunResults(results)
    activate Python
    Python->>FS: Merge results into final XML
    deactivate Python
    
    Helper-->>Jenkins: Final results
    deactivate Helper

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): The PR description is entirely the repository template with no actual content filled in: no explanation of changes, rationale, test coverage, or checklist items addressed. Resolution: replace the template boilerplate with a substantive description explaining what the PR does, why it is needed, and what tests cover the changes, and confirm the PR checklist items.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title clearly summarizes the main change: adding rerun capability for failed tests in Slurm stages, which aligns with the substantial refactoring of test rerun logic across multiple files.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
jenkins/scripts/test_rerun.py (1)

1-4: Add the standard NVIDIA copyright header.

Per the coding guidelines, this .py file should include the repo-standard NVIDIA copyright header with the latest year of meaningful modification (2026).
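
For reference, headers of this kind typically follow an SPDX style similar to the sketch below; the exact wording and year should be copied from an existing file in this repository rather than from this example.

# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0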

🤖 Fix all issues with AI agents
In `@jenkins/L0_Test.groovy`:
- Around line 2408-2574: The rerun logic in rerunFailedTestsForSlurm uploads the
rerun test list (currentRerunTestList) after calling submitSlurmJobAndGetId,
which can start the Slurm job without the test list; move the
Utils.copyFileToRemoteHost call that uploads currentRerunTestList (destination
rerunTestListPathNode) to occur before calling submitSlurmJobAndGetId so the job
has the list when launched, keeping the rest of the flow (writing
scriptLaunchPathLocal, copying scripts, then submitSlurmJobAndGetId, then
updating scriptTrack/jobId) the same.

Comment on lines +2408 to +2574
def rerunFailedTestsForSlurm(
pipeline,
stageName,
jobWorkspace,
remote,
llmSrcLocal,
pytestCommandList,
scriptLaunchPathLocal,
scriptLaunchPathNode,
scriptTrackPathLocal,
scriptTrackPathNode,
scriptSubmitLocalPath,
scriptSubmitPathNode,
scriptLaunchPrefixPathLocal,
scriptLaunchSrunArgsPathLocal,
scriptLaunchDraftPathLocal,
scriptInstallPathNode,
scriptRunPathNode,
testListPathLocal,
scriptTrack,
scriptLaunchPrefix,
disaggMode,
srunArgs,
testType="regular"
) {
// Download results.xml file from remote node
withCredentials([usernamePassword(credentialsId: 'svc_tensorrt', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
def resultFileRemotePath = "${jobWorkspace}/results.xml"
def downloadResultSucceed = Utils.exec(pipeline, script: "sshpass -p '${remote.passwd}' scp -P ${remote.port} -r -p ${COMMON_SSH_OPTIONS} ${remote.user}@${remote.host}:${resultFileRemotePath} ${stageName}/", returnStatus: true, numRetries: 3) == 0
if (!downloadResultSucceed) {
echo "There is no results.xml file, skip the rerun step"
return true
}
}

def rerunDir = "${stageName}/rerun"
sh "mkdir -p ${rerunDir}"

// Generate rerun test lists
def exitCode = generateRerunTestList(llmSrcLocal, rerunDir, "${stageName}/results.xml", testType)
if (exitCode != 0) {
echo "Failed to generate rerun test lists."
return true
}

// Rerun tests
def isRerunFailed = false
for (times in [1, 2]) {
def currentRerunTestList = "${rerunDir}/rerun_${times}.txt"
if (!fileExists(currentRerunTestList)) {
echo "No failed tests need to be rerun ${times} time(s)"
continue
}
sh "cat ${currentRerunTestList}"

def rerunTestListPathNode = "${jobWorkspace}/rerun_${times}.txt"
def xmlFilePathNode = "${jobWorkspace}/rerun_results_${times}.xml"
def newPytestCommand = updatePytestCommandForRerun(pytestCommandList, rerunTestListPathNode, jobWorkspace, xmlFilePathNode, times)

// Match and replace the entire pytestCommand export line
scriptLaunchPrefix = scriptLaunchPrefix.replaceFirst(/export pytestCommand=.*/, """
export pytestCommand="$newPytestCommand"
""".replaceAll("(?m)^\\s*", ""))
echo "ScriptLaunchPrefix: ${scriptLaunchPrefix}"

if (disaggMode) {
pipeline.writeFile(file: scriptLaunchPrefixPathLocal, text: scriptLaunchPrefix)

// Output is the corresponding scriptLaunchPathLocal script under the disaggMode
sh """
python3 ${scriptSubmitLocalPath} \\
--run-ci \\
--llm-src ${llmSrcLocal} \\
--test-list ${testListPathLocal} \\
--draft-launch-sh ${scriptLaunchDraftPathLocal} \\
--launch-sh ${scriptLaunchPathLocal} \\
--run-sh ${scriptRunPathNode} \\
--install-sh ${scriptInstallPathNode} \\
--script-prefix ${scriptLaunchPrefixPathLocal} \\
--srun-args ${scriptLaunchSrunArgsPathLocal}
"""
} else {
def scriptContent = """
${scriptLaunchPrefix}
srun --kill-on-bad-exit=1 ${srunArgs.join(" ")} ${scriptRunPathNode}
""".replaceAll("(?m)^\\s*", "")
pipeline.writeFile(file: scriptLaunchPathLocal, text: scriptContent)
}

Utils.exec(pipeline, script: "echo \"Script for Slurm sbatch job to submit: \" && cat ${scriptLaunchPathLocal}")
Utils.copyFileToRemoteHost(
pipeline,
remote,
scriptLaunchPathLocal,
scriptLaunchPathNode,
true
)

// Submit the Slurm job
def slurmJobId = submitSlurmJobAndGetId(pipeline, remote, scriptSubmitPathNode, jobWorkspace)

// Copy rerun test list to job workspace
Utils.copyFileToRemoteHost(
pipeline,
remote,
currentRerunTestList,
rerunTestListPathNode
)

// Update the scriptTrack to track the new Slurm job
scriptTrack = scriptTrack.replaceFirst(/jobId=.*/, """
jobId="$slurmJobId"
""".replaceAll("(?m)^\\s*", ""))
echo "ScriptTrack: ${scriptTrack}"

pipeline.writeFile(file: scriptTrackPathLocal, text: scriptTrack)
Utils.exec(pipeline, script: "echo \"Script to track Slurm job and pull the log: \" && cat ${scriptTrackPathLocal}")
Utils.copyFileToRemoteHost(
pipeline,
remote,
scriptTrackPathLocal,
scriptTrackPathNode,
true
)

def error = null
try {
// Track the Slurm job
Utils.exec(
pipeline,
timeout: false,
script: Utils.sshUserCmd(
remote,
scriptTrackPathNode
),
numRetries: 3
)
} catch (InterruptedException e) {
throw e
} catch (Exception e) {
error = e
isRerunFailed = true
} finally {
// Download results.xml file from remote node
withCredentials([usernamePassword(credentialsId: 'svc_tensorrt', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
def downloadRerunResultSucceed = Utils.exec(pipeline, script: "sshpass -p '${remote.passwd}' scp -P ${remote.port} -r -p ${COMMON_SSH_OPTIONS} ${remote.user}@${remote.host}:${xmlFilePathNode} ${rerunDir}/", returnStatus: true, numRetries: 3) == 0
if (!downloadRerunResultSucceed) {
echo "The ${testType} tests crashed when rerun attempt, Failed to download rerun results.xml file."
throw error
} else if (error != null) {
echo "The ${testType} tests still failed after rerun attempt."
}
}
}
}

// Generate rerun report and upload to artifact server
def inputFiles = ["${stageName}/results.xml",
"${rerunDir}/rerun_results_1.xml",
"${rerunDir}/rerun_results_2.xml"]
generateAndUploadRerunReport(llmSrcLocal, inputFiles, stageName, rerunDir)

updateResultsXmlWithRerunResults(llmSrcLocal, inputFiles, stageName)

echo "isRerunFailed for Slurm: ${isRerunFailed}"
return isRerunFailed
}
coderabbitai bot (Contributor) commented Jan 22, 2026

⚠️ Potential issue | 🟠 Major

Copy rerun test lists before submitting the Slurm job.

Submitting the job before uploading rerun_*.txt can start the job without the list, leading to flaky reruns. Upload the list first, then submit.

🔧 Suggested reordering
-        // Submit the Slurm job
-        def slurmJobId = submitSlurmJobAndGetId(pipeline, remote, scriptSubmitPathNode, jobWorkspace)
-
-        // Copy rerun test list to job workspace
-        Utils.copyFileToRemoteHost(
-            pipeline,
-            remote,
-            currentRerunTestList,
-            rerunTestListPathNode
-        )
+        // Copy rerun test list to job workspace before submission
+        Utils.copyFileToRemoteHost(
+            pipeline,
+            remote,
+            currentRerunTestList,
+            rerunTestListPathNode
+        )
+
+        // Submit the Slurm job
+        def slurmJobId = submitSlurmJobAndGetId(pipeline, remote, scriptSubmitPathNode, jobWorkspace)

yiqingy0 (Collaborator, Author) replied:

The submit job will clean up rerun_*.txt if the test lists are uploaded before submitting the Slurm job.

coderabbitai bot (Contributor) replied:

@yiqingy0, thank you for the clarification! You're absolutely right. Looking at the scriptSubmit code that gets executed inside submitSlurmJobAndGetId, I can see it cleans up workspace files that aren't in the keep list:

find "${jobWorkspace}" -maxdepth 1 -mindepth 1 ${findKeepWhenRetryArgs} -exec rm -rf {} +

Since rerun_*.txt is not in filesToKeepWhenRetry, uploading it before calling submitSlurmJobAndGetId would indeed result in it being deleted during the cleanup phase of the submit script. The current order (submit → upload test list → track job) is correct because:

  1. The submit script cleans up old files
  2. The new job is submitted (but not yet running)
  3. The rerun test list is uploaded safely after cleanup
  4. The job starts executing with the test list available

I apologize for the confusion. The current implementation is correct as-is.
