
Update how we get test_tools and be more specific on return codes.#68

Merged
dvalinrh merged 5 commits into main from rerun on Mar 17, 2026

Conversation

@dvalinrh
Contributor

Description

Uses error_codes found in test_tools/error_codes to provide more specific error indication.

Before/After Comparison

Before: Returned only a 0 or 1, making the determination of whether to rerun very coarse.
After: Error codes are now based on test_tools/error_codes, making it easier to limit when we rerun the test.
Also changed how we load test_tools: we try curl, git, and wget in turn, to avoid a failure when one of those programs is not installed.
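The fallback chain might look roughly like this. This is a sketch under assumptions: the function body, the archive directory name `test_tools-wrappers-main`, and the default values of `TOOLS_BIN` and `tools_git` are illustrative, not verbatim from the PR.

```shell
# Placeholder defaults; the real script sets these itself.
TOOLS_BIN="${TOOLS_BIN:-$HOME/test_tools}"
tools_git="${tools_git:-https://github.com/redhat-performance/test_tools-wrappers}"

install_tools()
{
        # Try each transfer program in turn; a missing binary or a failed
        # download simply falls through to the next method.
        if [[ ! -d "$TOOLS_BIN" ]] && command -v curl >/dev/null; then
                curl -L -o main.zip "${tools_git}/archive/refs/heads/main.zip" &&
                        unzip -q main.zip &&
                        mv test_tools-wrappers-main "$TOOLS_BIN" &&
                        rm -f main.zip
        fi
        if [[ ! -d "$TOOLS_BIN" ]] && command -v git >/dev/null; then
                git clone "$tools_git" "$TOOLS_BIN"
        fi
        if [[ ! -d "$TOOLS_BIN" ]] && command -v wget >/dev/null; then
                wget -O main.zip "${tools_git}/archive/refs/heads/main.zip" &&
                        unzip -q main.zip &&
                        mv test_tools-wrappers-main "$TOOLS_BIN" &&
                        rm -f main.zip
        fi
        # Only the final directory check decides success.
        [[ -d "$TOOLS_BIN" ]] || return 1
}
```

The key design point is that no single missing tool is fatal; only the final `[[ -d "$TOOLS_BIN" ]]` check decides whether the bootstrap failed.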

Clerical Stuff

This closes #67

Relates to JIRA: RPOPC-879

Testing done

Ran the following scenario:

```yaml
global:
  ssh_key_file: replace_your_ssh_key
  terminate_cloud: 1
  os_vendor: rhel
  results_prefix: pyperf_results
  system_type: aws
  cloud_os_id: ami-035032ea878eca201
systems:
  system1:
    tests: "pyperf"
    host_config: "m5.xlarge"
```

Generated the following CSV file:
Test,Avg,Unit,Start_Date,End_Date
2to3,440.55,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
async_generators,543.88,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
async_tree_none,901.17,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
async_tree_cpu_io_mixed,1.23,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
async_tree_io,2.34,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
async_tree_memoization,1.23,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
asyncio_tcp,338.56,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
asyncio_tcp_ssl,2.91,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
asyncio_websockets,716.32,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
chameleon,13.39,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
chaos,155.55,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
comprehensions,36.63,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
bench_mp_pool,16.13,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
bench_thread_pool,1.58,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
coroutines,56.83,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
coverage,78.47,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
crypto_pyaes,157.17,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
dask,732.55,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
deepcopy,613.48,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
deepcopy_reduce,5.28,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
deepcopy_memo,75.53,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
deltablue,9.95,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
docutils,3.93,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
dulwich_log,99.45,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
fannkuch,683.33,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
float,167.18,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
create_gc_cycles,1.78,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
gc_traversal,3.86,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
generators,86.43,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
genshi_text,42.91,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
genshi_xml,85.81,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
go,354.10,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
hexiom,13.82,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
html5lib,114.23,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
json_dumps,18.00,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
json_loads,36.62,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
logging_format,12.49,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
logging_silent,272.30,ns,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
logging_simple,11.40,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
mako,23.41,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
mdp,3.75,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
meteor_contest,146.68,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
nbody,197.10,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
nqueens,136.17,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pathlib,24.25,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pickle,13.01,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pickle_dict,36.76,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pickle_list,5.88,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pickle_pure_python,613.73,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pidigits,230.02,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pprint_pformat,2.04,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
pyflate,944.85,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
python_startup,12.29,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
python_startup_no_site,7.71,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
raytrace,693.78,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
regex_compile,241.43,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
regex_dna,287.92,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
regex_effbot,4.34,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
regex_v8,32.71,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
richards,98.45,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
richards_super,118.42,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
scimark_fft,550.57,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
scimark_lu,225.72,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
scimark_monte_carlo,155.62,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
scimark_sor,282.68,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
scimark_sparse_mat_mult,7.28,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
spectral_norm,213.53,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sqlalchemy_declarative,217.13,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sqlalchemy_imperative,28.10,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sqlglot_normalize,178.22,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sqlglot_optimize,86.39,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sqlglot_parse,3.07,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sqlglot_transpile,3.55,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sqlite_synth,3.51,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sympy_expand,698.25,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sympy_integrate,32.31,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sympy_sum,242.22,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
sympy_str,425.12,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
telco,8.58,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
tomli_loads,3.93,sec,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
tornado_http,186.17,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
typing_runtime_protocols,686.43,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
unpack_sequence,95.92,ns,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
unpickle,17.30,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
unpickle_list,6.01,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
unpickle_pure_python,444.43,us,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
xml_etree_parse,210.67,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
xml_etree_iterparse,141.45,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
xml_etree_generate,128.22,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z
xml_etree_process,102.67,ms,2026-03-16T12:04:52Z,2026-03-16T13:11:26Z

@dvalinrh dvalinrh requested a review from kdvalin March 16, 2026 14:33
@github-actions

This relates to RPOPC-879

@qodo-code-review
Contributor

Review Summary by Qodo

Add semantic error codes and improve test_tools installation resilience

✨ Enhancement 🐞 Bug fix


Walkthroughs

Description
• Replace hardcoded exit codes with semantic error code constants
• Add fallback mechanisms for test_tools installation (wget, curl, git)
• Improve error handling with specific error codes from test_tools
• Remove redundant error handling logic in pyperf dump operation
Diagram

```mermaid
flowchart LR
  A["Exit Code Constants"] -->|"Replace hardcoded 1"| B["Semantic Error Codes"]
  C["Single Installation Method"] -->|"Add fallback attempts"| D["wget/curl/git attempts"]
  E["Error Handling"] -->|"Simplify logic"| F["Remove redundant checks"]
  B --> G["Better error determination"]
  D --> G
  F --> G
```


File Changes

1. pyperf/pyperf_run Enhancement, error handling +50/-24

Semantic error codes and resilient tool installation

• Replace all hardcoded exit code 1 with semantic constants like $E_USAGE and $E_GENERAL
• Add three new functions (attempt_tools_wget, attempt_tools_curl, attempt_tools_git) to
 provide fallback installation methods for test_tools
• Refactor install_tools() to attempt multiple download methods instead of single git clone
• Remove redundant error handling in pyperf dump operation that checked exit status and wrote to
 test_results_report
• Update all exit_out calls throughout the script to use specific error codes instead of hardcoded
 values
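As a rough illustration of the pattern these bullets describe: constant names like `E_GENERAL` and `E_USAGE` appear in the diff, but the numeric values below are assumptions, since the real definitions live in test_tools/error_codes.

```shell
# Hypothetical stand-ins for the constants sourced from
# test_tools/error_codes; the numeric values are assumptions.
E_SUCCESS=0
E_GENERAL=1
E_USAGE=64

# exit_out mirrors the helper named in the review: print a message,
# then exit with the supplied semantic code instead of a bare 1.
exit_out()
{
        echo "$1" >&2
        exit "$2"
}
```

A call such as `exit_out "Error: pulling git $tools_git failed." "$E_GENERAL"` then replaces a hardcoded `exit 1`, and rerun logic can branch on the specific code.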




@qodo-code-review
Contributor

qodo-code-review bot commented Mar 16, 2026

Code Review by Qodo

🐞 Bugs (1) 📘 Rule violations (0) 📎 Requirement gaps (1)



Action required

1. attempt_tools_git uses 101 📎 Requirement gap ⛯ Reliability
Description
The new wrapper-download path introduces a hardcoded exit code 101, which is inconsistent with the
move to standardized error-code variables. This makes rerun decision logic harder to keep
machine-actionable and consistent across failure modes.
Code

pyperf/pyperf_run[R131-134]

+                git clone $tools_git "$TOOLS_BIN"
+                if [ $? -ne 0 ]; then
+                        exit_out "Error: pulling git $tools_git failed." 101
+                fi
Evidence
PR Compliance ID 2 requires return codes to be standardized and used consistently; the newly-added
attempt_tools_git() exits via exit_out with a hardcoded 101 rather than a standardized
error-code constant used elsewhere in this PR.

Improve/standardize return codes (RTCs) to support automatic reruns
pyperf/pyperf_run[131-134]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`attempt_tools_git()` exits with a hardcoded return code `101`, which breaks the effort to standardize return codes for machine-actionable rerun logic.
## Issue Context
Other error paths in `pyperf_run` were updated to use standardized variables like `E_GENERAL`/`E_USAGE`, but this new code path still uses a literal `101`.
## Fix Focus Areas
- pyperf/pyperf_run[131-134]



2. Unchecked pyperf dump 🐞 Bug ✓ Correctness
Description
pyperf_run no longer checks whether python -m pyperf dump succeeded, but still calls
generate_csv_file which always reads ${pyresults}.results and divides by res_count. If the
dump fails or produces an empty .results, CSV/JSON generation can break or produce invalid output
while the script continues.
Code

pyperf/pyperf_run[R447-449]

$python_exec -m pyperf dump  ${pyresults}.json > ${pyresults}.results
-if [ $? -ne 0 ]; then
-	echo "Failed: $python_exec -m pyperf dump  ${pyresults}.json > ${pyresults}.results" 1
-	echo Failed > test_results_report
-else
-	echo Ran > test_results_report
-fi
generate_csv_file ${pyresults} ${start_time} ${end_time}
Evidence
The main flow runs pyperf dump and immediately proceeds to CSV generation without checking $?.
generate_csv_file unconditionally reads the .results file and performs division by res_count,
which will be zero if no values were parsed (e.g., missing/empty .results).

pyperf/pyperf_run[440-450]
pyperf/pyperf_run[195-206]
pyperf/pyperf_run[223-234]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`pyperf_run` runs `python -m pyperf dump ... > ...results` and then unconditionally parses the results file. When the dump fails or yields no parsable values, `generate_csv_file` can divide by zero and/or generate invalid CSV/JSON while the script continues.
### Issue Context
The PR removed the previous `if [ $? -ne 0 ]` guard around the dump step. `generate_csv_file` assumes `${1}.results` exists and contains at least one parsed `value` line.
### Fix Focus Areas
- pyperf/pyperf_run[440-450]
- pyperf/pyperf_run[185-245]

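A hedged sketch of restoring the guard the review describes; `run_dump`, `python_exec`, and `pyresults` are illustrative names echoing the snippet above, not the repository's exact code:

```shell
# Wrap the dump step so a failed or empty dump is caught before
# generate_csv_file ever runs (and divides by res_count).
run_dump()
{
        local pyresults=$1
        if ! $python_exec -m pyperf dump "${pyresults}.json" > "${pyresults}.results"; then
                echo Failed > test_results_report
                return 1
        fi
        # An empty .results file would also make the CSV step divide by zero.
        if [[ ! -s "${pyresults}.results" ]]; then
                echo Failed > test_results_report
                return 1
        fi
        echo Ran > test_results_report
}
```

Callers would then skip `generate_csv_file` when `run_dump` returns nonzero, instead of continuing into invalid CSV/JSON generation.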



Remediation recommended

3. usage() exits $E_USAGE 📎 Requirement gap ⛯ Reliability
Description
usage() now exits with $E_USAGE, but this script does not define E_USAGE locally, so the
actual exit status may be ambiguous if the sourced setup does not provide it. Ambiguous exit
statuses undermine consistent return-code signaling needed for automatic reruns.
Code

pyperf/pyperf_run[57]

+        exit $E_USAGE
Evidence
PR Compliance ID 2 requires consistent, machine-actionable return codes; the modified line uses
$E_USAGE without any local definition in this script, creating a risk of inconsistent/ambiguous
exit behavior depending on runtime environment/sourcing.

Improve/standardize return codes (RTCs) to support automatic reruns
pyperf/pyperf_run[56-57]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`usage()` exits with `$E_USAGE` but `E_USAGE` is not defined within this script, so the resulting exit code may be inconsistent.
## Issue Context
For standardized return codes, the script should guarantee that the error-code variable is defined (or fall back deterministically) before exiting.
## Fix Focus Areas
- pyperf/pyperf_run[56-57]

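One deterministic way to address this, sketched here rather than taken from the repository's fix, is to give the variable a fallback before use; 64 is EX_USAGE from sysexits, an assumed default:

```shell
# Fall back to sysexits' EX_USAGE (64, an assumption) when the sourced
# error_codes file did not define E_USAGE, so `exit $E_USAGE` can never
# expand to a bare `exit` with an empty argument.
: "${E_USAGE:=64}"

usage()
{
        echo "usage: pyperf_run [options]"
        exit "$E_USAGE"
}
```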


4. Missing results report file 🐞 Bug ⛯ Reliability
Description
pyperf_run still passes test_results_report to save_results, but the script no longer creates
test_results_report anywhere in the main execution path. As a result, the saved artifacts will not
contain the status marker that save_results is instructed to include.
Code

pyperf/pyperf_run[R447-449]

$python_exec -m pyperf dump  ${pyresults}.json > ${pyresults}.results
-if [ $? -ne 0 ]; then
-	echo "Failed: $python_exec -m pyperf dump  ${pyresults}.json > ${pyresults}.results" 1
-	echo Failed > test_results_report
-else
-	echo Ran > test_results_report
-fi
generate_csv_file ${pyresults} ${start_time} ${end_time}
Evidence
The script instructs save_results to include test_results_report in --other_files, but in the
current flow after the dump step there is no write to that file before save_results is called.

pyperf/pyperf_run[447-450]
pyperf/pyperf_run[466-470]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`save_results` is called with `--other_files "...,test_results_report"`, but `test_results_report` is never created in the current script flow.
### Issue Context
Previously, the script wrote `test_results_report` around the `pyperf dump` step. That block was removed, leaving a dangling reference.
### Fix Focus Areas
- pyperf/pyperf_run[447-453]
- pyperf/pyperf_run[466-470]



5. Unverified unzip/mv install 🐞 Bug ⛯ Reliability
Description
attempt_tools_wget/attempt_tools_curl only check the download command exit status but do not
validate unzip or mv, and they download/unzip into the current working directory. This can leave
$TOOLS_BIN unpopulated without a clear failure before later steps rely on
$TOOLS_BIN/general_setup.
Code

pyperf/pyperf_run[R104-125]

+attempt_tools_wget()
+{
+        if [[ ! -d "$TOOLS_BIN" ]]; then
+                wget ${tools_git}/archive/refs/heads/main.zip
+                if [[ $? -eq 0 ]]; then
+                        unzip -q main.zip
+                        mv test_tools-wrappers-main ${TOOLS_BIN}
+                        rm main.zip
+                fi
+        fi
+}
+
+attempt_tools_curl()
+{
+        if [[ ! -d "$TOOLS_BIN" ]]; then
+                curl -L -O ${tools_git}/archive/refs/heads/main.zip
+                if [[ $? -eq 0 ]]; then
+                        unzip -q main.zip
+                        mv test_tools-wrappers-main ${TOOLS_BIN}
+                        rm main.zip
+                fi
+        fi
Evidence
The tool bootstrap functions run unzip and mv without checking their exit codes, and do so
in-place (no temp directory). Later, the script sources $TOOLS_BIN/general_setup, so an incomplete
install can break subsequent initialization.

pyperf/pyperf_run[104-125]
pyperf/pyperf_run[325-326]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Tool bootstrap via wget/curl does not verify `unzip`/`mv` success and operates in the current directory, which can result in a silent partial install and later failures when sourcing `$TOOLS_BIN/general_setup`.
### Issue Context
The script’s subsequent steps assume a functional `$TOOLS_BIN` tree.
### Fix Focus Areas
- pyperf/pyperf_run[104-136]
- pyperf/pyperf_run[325-326]

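A sketch of the stricter variant the reviewer suggests: stage the download in a temp directory and propagate the status of every step. `TOOLS_BIN`, `tools_git`, and the archive directory name are placeholders:

```shell
# Stage the archive in a temp directory so a failed unzip or mv never
# leaves a half-populated $TOOLS_BIN behind, and every step's exit
# status feeds the function's return code.
attempt_tools_curl()
{
        [[ -d "$TOOLS_BIN" ]] && return 0
        local tmp
        tmp=$(mktemp -d) || return 1
        curl -L -o "$tmp/main.zip" "${tools_git}/archive/refs/heads/main.zip" &&
                unzip -q "$tmp/main.zip" -d "$tmp" &&
                mv "$tmp/test_tools-wrappers-main" "$TOOLS_BIN"
        local rc=$?
        rm -rf "$tmp"
        return $rc
}
```

Because the function returns nonzero on any failed step, the caller's fallback chain (git, wget) still triggers exactly as intended, but never on top of a silent partial install.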



Advisory comments

6. Usage sources inconsistent path 🐞 Bug ✓ Correctness
Description
usage() sources test_tools/general_setup --usage instead of sourcing from $TOOLS_BIN, even
though the script’s runtime setup uses $TOOLS_BIN/general_setup. This can make the help output
omit the common/general options depending on the current working directory layout.
Code

pyperf/pyperf_run[R56-57]

source test_tools/general_setup --usage
-        exit 1
+        exit $E_USAGE
Evidence
The script defines $TOOLS_BIN as $HOME/test_tools and sources $TOOLS_BIN/general_setup in the
main path, but usage() uses a different relative path (test_tools/...).

pyperf/pyperf_run[30-31]
pyperf/pyperf_run[49-58]
pyperf/pyperf_run[325-326]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`usage()` sources `test_tools/general_setup --usage` while the script otherwise uses `$TOOLS_BIN/general_setup`, leading to inconsistent help output.
### Issue Context
Tools are installed/expected under `$TOOLS_BIN`.
### Fix Focus Areas
- pyperf/pyperf_run[49-58]
- pyperf/pyperf_run[30-31]
- pyperf/pyperf_run[325-326]




@dvalinrh
Contributor Author

E_USAGE is defined in error_codes, which is part of test_tools

@dvalinrh
Contributor Author

Removed test_results_report from save_results; the file is no longer present.

@dvalinrh
Contributor Author

Unverified unzip/mv install
The failure will be caught at the end of the installation of test_tools. If they fail, we will drop into installing via git. This is the intended behavior.

@dvalinrh
Contributor Author

test_tools/general_setup has been changed to $TOOLS_BIN

@dvalinrh dvalinrh requested a review from kdvalin March 16, 2026 15:30
Member

@kdvalin kdvalin left a comment


LGTM

@dvalinrh dvalinrh merged commit ce57a6c into main Mar 17, 2026
1 check passed
@dvalinrh dvalinrh deleted the rerun branch March 17, 2026 08:52


Development

Successfully merging this pull request may close these issues.

Update test_tools upload and better rtcs

2 participants