Benchmarks v2 #60
base: main
Conversation
2 issues found across 8 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="benchmarks2/benchmark.py">
<violation number="1" location="benchmarks2/benchmark.py:41">
P2: Baseline parsing reads the "ops/s" label instead of the numeric ops/s value, so `parse_results` will fail on valid benchmark output. Use `parts[4]` for the numeric ops/s token.</violation>
<violation number="2" location="benchmarks2/benchmark.py:113">
P3: The error return code from `main()` is ignored, so the script exits successfully even when the server is unreachable. Propagate the exit status from `main()`.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
if line.startswith('Benchmark_'):
    parts = line.split()
    name = parts[0]
    ops_s = float(parts[5])
P2: Baseline parsing reads the "ops/s" label instead of the numeric ops/s value, so parse_results will fail on valid benchmark output. Use parts[4] for the numeric ops/s token.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At benchmarks2/benchmark.py, line 41:
<comment>Baseline parsing reads the "ops/s" label instead of the numeric ops/s value, so `parse_results` will fail on valid benchmark output. Use `parts[4]` for the numeric ops/s token.</comment>
<file context>
@@ -0,0 +1,113 @@
+ if line.startswith('Benchmark_'):
+ parts = line.split()
+ name = parts[0]
+ ops_s = float(parts[5])
+ results[name] = ops_s
+ return results
</file context>
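A minimal sketch of the suggested `parts[4]` fix. The surrounding `parse_results` body and the exact benchmark line layout (`name iterations time ns/op value ops/s`) are assumptions based on the diff context above, not the full source file:

```python
def parse_results(output):
    # Hypothetical reconstruction of benchmarks2/benchmark.py's parser,
    # assuming lines shaped like:
    #   Benchmark_Get  1000  125000 ns/op  8000.0 ops/s
    results = {}
    for line in output.splitlines():
        if line.startswith('Benchmark_'):
            parts = line.split()
            name = parts[0]
            # parts[4] is the numeric ops/s value; parts[5] is the
            # literal "ops/s" label, which float() cannot parse.
            ops_s = float(parts[4])
            results[name] = ops_s
    return results
```

With `parts[5]`, `float("ops/s")` raises `ValueError` on every valid benchmark line, which is why the review flags the original indexing as a crash rather than a wrong value.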
print_comparison(baseline, results)

if __name__ == '__main__':
    main()
P3: The error return code from main() is ignored, so the script exits successfully even when the server is unreachable. Propagate the exit status from main().
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At benchmarks2/benchmark.py, line 113:
<comment>The error return code from `main()` is ignored, so the script exits successfully even when the server is unreachable. Propagate the exit status from `main()`.</comment>
<file context>
@@ -0,0 +1,113 @@
+ print_comparison(baseline, results)
+
+if __name__ == '__main__':
+ main()
</file context>
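The conventional fix is to wrap the call in `sys.exit(...)` so a nonzero return value becomes the process exit status. A sketch with a hypothetical `main()` that only simulates the unreachable-server failure (the real one runs the benchmarks):

```python
import sys

def main(server_reachable=False):
    """Hypothetical stand-in for benchmark.py's main()."""
    if not server_reachable:
        print('error: benchmark server unreachable', file=sys.stderr)
        return 1  # nonzero status signals failure to the caller
    return 0

def run_as_script():
    # The suggested fix: propagate main()'s status instead of calling it
    # bare. A bare main() discards the return value, so the process
    # always exits 0 and CI treats the run as a success.
    sys.exit(main())
```

In the script itself this becomes `if __name__ == '__main__': sys.exit(main())` in place of the bare `main()` call shown above.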
@podocarp does not have an active Tusk seat. Activate it before triggering test generation.
No description provided.