Skip to content

docs: update README #77

Open

sayalibhavsar wants to merge 3 commits into redhat-performance:main from sayalibhavsar:add-comprehensive-readme

Conversation


@sayalibhavsar sayalibhavsar commented May 11, 2026

Description

Changes made to the pyperf wrapper documentation.

Before/After Comparison

The changes follow a standardized documentation template applied across all wrappers.

Solves issue: #76
Relates to JIRA: RPOPC-1039

@qodo-code-review
Contributor

Review Summary by Qodo

Comprehensive README with standardized wrapper template and detailed documentation

📝 Documentation


Walkthrough

Description
• Comprehensive README rewrite with standardized wrapper template
• Detailed command-line options and usage examples documentation
• Complete workflow explanation covering all 9 execution stages
• Extensive benchmark suite reference with 104 benchmarks categorized
• Result processing, virtual environment setup, and PCP metrics documentation
• Troubleshooting guide and performance optimization tips
Diagram

```mermaid
flowchart LR
  A["Old README<br/>39 lines"] -->|"Expand & Restructure"| B["New README<br/>363 lines"]
  B --> C["Description & Features"]
  B --> D["Command-Line Options"]
  B --> E["Workflow Steps 1-9"]
  B --> F["Dependencies & Setup"]
  B --> G["Benchmark Categories"]
  B --> H["Examples & Usage"]
  B --> I["Result Processing"]
  B --> J["Troubleshooting Guide"]
```


File Changes

1. README.md 📝 Documentation +353/-29

Complete README restructure with comprehensive documentation

• Expanded from 39 to 363 lines following standardized wrapper documentation template
• Added comprehensive description of pyperformance wrapper capabilities and architecture support
• Documented all command-line options with detailed parameter explanations and defaults
• Added 9-step workflow breakdown covering environment setup, Python validation, PCP setup, benchmark validation, venv creation, test execution, result processing, validation, and output
• Included complete 104-benchmark reference organized by categories (async, web, serialization, scientific, games, database, text, symbolic, startup, deep copy, crypto)
• Added result processing explanation with unit conversion and CSV output format details
• Documented virtual environment setup and setuptools compatibility fixes for pyperformance <= 1.11.0
• Added PCP metrics tracking and return codes documentation
• Included 6 practical usage examples with command syntax and expected behavior (see the sketch below)
• Added extensive troubleshooting section covering common issues and performance optimization tips
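
The README's usage examples take this general shape (an illustrative sketch, not a verbatim excerpt; the flags and benchmark names are those documented elsewhere in this PR):

```bash
# Run the full suite with the default python3 interpreter
./pyperf_run

# Run a subset of benchmarks (names from the README's example list)
./pyperf_run --pyperf_benchmarks "2to3,nbody,go"

# Run the same subset with PCP metric collection enabled
./pyperf_run --pyperf_benchmarks "2to3,nbody,go" --use_pcp
```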



qodo-code-review Bot commented May 11, 2026

Code Review by Qodo

🐞 Bugs (0) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. python_exec example fails ✓ Resolved 🐞 Bug ≡ Correctness
Description
README’s example --python_exec /usr/bin/python3.12 (and the claim that any Python 3 interpreter works) contradicts the wrapper’s dependency-file lookup, which exits unless `python_deps/<python basename>.json` exists. In this repo only `python_deps/python3.json` exists, so the documented example will error with “Unsupported python binary python3.12”.
Code

README.md[R214-218]

+### Run with a specific Python executable
+```bash
+./pyperf_run --python_exec /usr/bin/python3.12
Evidence
The README recommends --python_exec /usr/bin/python3.12, but the script constructs python_deps/python3.12.json and exits if it doesn’t exist; the repo only contains python3.json, so the example fails by default.

README.md[214-218]
README.md[326-329]
pyperf/pyperf_run[403-408]
python_deps/python3.json[1-18]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
README documents using arbitrary Python executables (e.g., `/usr/bin/python3.12`) via `--python_exec`, but `pyperf_run` requires a matching `python_deps/<basename>.json` file and exits otherwise. With only `python3.json` present, users following the README will hit a hard failure.
## Issue Context
`pyperf_run` derives the dependency config filename from `basename($python_exec)` and aborts if it does not exist.
## Fix Focus Areas
- pyperf/pyperf_run[395-408]
- python_deps/python3.json[1-18]
- README.md[214-218]
- README.md[326-329]
## Suggested fix
Choose one:
1) Update README to state that only executables with a corresponding `python_deps/<basename>.json` are supported by default, and adjust the example to use `--python_exec python3` (or add instructions to create `python_deps/python3.12.json`).
2) Update `pyperf_run` to fall back to `python_deps/python3.json` when `python_deps/<basename>.json` is missing (if that’s acceptable for your packaging assumptions); a sketch follows below.
3) Add additional `python_deps/python3.X.json` files for the versions you want to support and keep the README example as-is.
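
A minimal sketch of option 2's fallback, assuming the interpreter path is held in a `python_exec` shell variable (the actual variable names in `pyperf_run` may differ):

```bash
# Derive the dependency config from the interpreter basename, as pyperf_run does today.
python_bin=$(basename "$python_exec")
deps_file="python_deps/${python_bin}.json"

# Hypothetical fallback: try the generic python3 config before aborting.
if [ ! -f "$deps_file" ]; then
    echo "Note: ${deps_file} not found, falling back to python_deps/python3.json" >&2
    deps_file="python_deps/python3.json"
fi

if [ ! -f "$deps_file" ]; then
    echo "Unsupported python binary ${python_bin}" >&2
    exit 1
fi
```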




Remediation recommended

2. Misstated pyperf.json location ✓ Resolved 🐞 Bug ≡ Correctness
Description
README says pyperf.json is in python_results/, but the wrapper writes it to the current working directory (--output_file pyperf.json). Users following the README won’t find the validated JSON where documented.
Code

README.md[R186-193]

+The `python_results/` directory contains:
+
+- **pyperf_out_\<timestamp\>.json**: Raw pyperformance JSON output containing all benchmark runs with statistical data.
+- **pyperf_out_\<timestamp\>.results**: Human-readable pyperf dump output showing per-run values for each benchmark.
+- **pyperf_out_\<timestamp\>.csv**: Processed CSV file with averaged results (Test, Avg, Unit, Start_Date, End_Date).
+- **pyperf.json**: Final validated JSON results checked against the Pydantic schema.
+- **/tmp/pyperf.out**: Complete execution log capturing all wrapper output.
+- **PCP data** (if `--use_pcp` option used): Performance Co-Pilot monitoring data with per-benchmark metric values.
Evidence
The README lists pyperf.json under python_results/ outputs, but the wrapper’s csv_to_json invocation writes pyperf.json without a python_results/ path.

README.md[184-193]
pyperf/pyperf_run[461-463]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
README documents `pyperf.json` as being inside `python_results/`, but the wrapper currently writes `pyperf.json` to the working directory.
## Issue Context
The wrapper writes the run artifacts (`pyperf_out_<timestamp>.*`) under `python_results/`, but uses `csv_to_json ... --output_file pyperf.json` without a `python_results/` prefix.
## Fix Focus Areas
- README.md[184-193]
- pyperf/pyperf_run[425-463]
## Suggested fix
Pick one and keep docs+code consistent:
1) Change the script to write `pyperf.json` into `python_results/` (e.g., `${pyresults}.json`-adjacent or `python_results/pyperf.json`), and update any downstream references accordingly; see the sketch after this list.
2) Update README to state `pyperf.json` is written to the run directory (CWD) rather than `python_results/`.
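
If maintainers choose option 1, the smallest change is the output path passed to `csv_to_json` (a sketch; only the `--output_file` flag is confirmed by the evidence above, so other arguments are omitted):

```bash
# Hypothetical: write the validated JSON next to the other run artifacts.
# Any other csv_to_json arguments stay as they are in pyperf_run today.
csv_to_json --output_file python_results/pyperf.json

# Non-invasive alternative: relocate the file after the existing call.
mv pyperf.json python_results/pyperf.json
```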



3. Misleading --usage output ✓ Resolved 🐞 Bug ⚙ Maintainability
Description
README documents --python_exec, but the script’s --usage output advertises a different flag name (--python_exec_path). Users relying on --usage will pass the wrong option and fail argument parsing.
Code

README.md[R21-28]

+pyperf Options:
+  --pyperf_version <value>: Version of pyperformance to install and run.
+      Defaults to 1.11.0.
+  --python_exec <path>: Python executable to use for running benchmarks.
+      Defaults to python3.
+  --python_pkgs <packages>: Comma-separated list of additional Python packages to install.
+  --pyperf_benchmarks <benchmarks>: Comma-separated list of specific benchmarks to run.
+      Defaults to all benchmarks. Example: "2to3,nbody,go".
Evidence
The README’s option is --python_exec, and the script parses --python_exec, but the usage() text prints --python_exec_path, which does not exist in the getopt/case handling.

README.md[20-28]
pyperf/pyperf_run[49-55]
pyperf/pyperf_run[334-366]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The wrapper parses `--python_exec`, but `usage()` prints `--python_exec_path`, which is not a recognized option. This creates self-contradicting CLI documentation between `--usage` output and README.
## Issue Context
`usage()` is the authoritative help text many users will consult.
## Fix Focus Areas
- pyperf/pyperf_run[49-57]
- pyperf/pyperf_run[334-366]
- README.md[20-28]
## Suggested fix
Update `usage()` to print `--python_exec <path>` (and update the description accordingly) so it matches the getopt long option (`python_exec`) and the README.
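
A sketch of the corrected `usage()` entry, matching the getopt option and the README text quoted above (the wrapper's real help output and surrounding formatting may differ):

```bash
usage() {
    cat <<'EOF'
pyperf Options:
  --python_exec <path>: Python executable to use for running benchmarks.
      Defaults to python3.
EOF
}
```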




@sayalibhavsar sayalibhavsar self-assigned this May 11, 2026
@sayalibhavsar sayalibhavsar added the documentation label May 11, 2026
@sayalibhavsar sayalibhavsar requested a review from dvalinrh May 12, 2026 11:17
@sayalibhavsar sayalibhavsar requested a review from dvalinrh May 12, 2026 12:04