- What it does: Forces PyOB to write a failing test before writing any feature code, ensuring functional correctness instead of just "it didn't crash."
- Implementation:
  - Create a new prompt template, `UT.md` (Unit Test).
  - In Phase 3, the AI must generate a `tests/test_feature.py` file first.
  - Update `run_pipeline` to execute `pytest`. Success is only declared if the new test passes.
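The pipeline hook above could be sketched roughly as follows; `run_tests` and its default path are hypothetical names, and only the `pytest` exit-code convention (0 = all tests passed) is relied on:

```python
import subprocess
import sys

def run_tests(test_path: str = "tests/test_feature.py") -> bool:
    """Gate for run_pipeline: declare success only if the new test file passes.

    Invokes pytest in a subprocess; any nonzero exit code (test failure,
    collection error, missing file) counts as failure.
    """
    result = subprocess.run(
        [sys.executable, "-m", "pytest", test_path, "-q"],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0
```

Because the AI writes the test *before* the feature, the first run of this gate is expected to fail; the pipeline then iterates on feature code until it passes.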
- What it does: Automatically manages the project's Git lifecycle, creating branches for every feature and opening Pull Requests for you to review.
- Implementation:
  - Use `subprocess` to execute `git checkout -b`, `git commit`, and `git push`.
  - Trigger this in `execute_targeted_iteration` immediately after a successful `FINAL VERIFICATION`.
  - Optionally use the GitHub CLI (`gh pr create`) to automate the PR submission.
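A minimal sketch of that Git lifecycle, with the command runner injectable so it can be faked in tests; the function name and the `feature/` branch-naming convention are assumptions, not part of the existing codebase:

```python
import subprocess

def open_feature_pr(feature_name: str, run=subprocess.run) -> str:
    """Branch, commit, push, and open a PR for a verified feature.

    `run` defaults to subprocess.run but can be replaced with a stub
    so the git/gh calls can be tested without touching a real repo.
    """
    branch = f"feature/{feature_name}"
    run(["git", "checkout", "-b", branch], check=True)
    run(["git", "add", "-A"], check=True)
    run(["git", "commit", "-m", f"feat: {feature_name}"], check=True)
    run(["git", "push", "-u", "origin", branch], check=True)
    # GitHub CLI: --fill takes the PR title/body from the commit message
    run(["gh", "pr", "create", "--fill"], check=True)
    return branch
```

Calling this right after `FINAL VERIFICATION` keeps every AI-generated feature on its own branch, leaving the human merge decision in the PR review.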
- What it does: Uses a second, different AI model (e.g., Qwen 30B) to "sanity check" the code generated by the first model (e.g., Gemini 1.5) before it is ever applied.
- Implementation:
  - In `get_valid_edit`, once a patch is generated, send the diff to the other provider.
  - Use a prompt: "Review this diff for logical flaws or security holes. Respond 'VALID' or provide a critique."
  - If the critique is negative, the system auto-regenerates using the critique as feedback.
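The review step could look like this sketch, assuming the second provider is exposed as a simple prompt-in, text-out callable (`reviewer_llm` is a hypothetical interface, not an existing function):

```python
from typing import Callable, Tuple

def review_patch(diff: str, reviewer_llm: Callable[[str], str]) -> Tuple[bool, str]:
    """Ask a second model to vet a diff before it is applied.

    Returns (True, "") if the reviewer answers VALID, otherwise
    (False, critique) so get_valid_edit can regenerate with the
    critique appended as feedback.
    """
    prompt = (
        "Review this diff for logical flaws or security holes. "
        "Respond 'VALID' or provide a critique.\n\n" + diff
    )
    reply = reviewer_llm(prompt).strip()
    if reply.upper().startswith("VALID"):
        return True, ""
    # Negative review: hand the critique back to the generating model
    return False, reply
```

Keeping the verdict parsing this strict ("VALID" as a prefix) avoids accepting hedged replies like "this is probably fine" as approval.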
- What it does: Expands the target application's (System Monitor) capabilities to include GPU tracking, network packet monitoring, and audio visualization.
- Implementation:
- Install and integrate
GPUtil(GPU metrics),scapy(Network), andpyaudio(Sound). - Update the
SystemMetricsEngineto provide these data streams and create newDetailModeEnums ingui.pyto visualize the "pulsing" hardware data.
- Install and integrate
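One way the engine and enum additions might be shaped; the class bodies here are illustrative sketches of the proposed extensions, not the existing `SystemMetricsEngine` or `gui.py` code, and the only third-party call assumed is `GPUtil.getGPUs()` (each GPU object exposes a `.load` attribute):

```python
import enum

class DetailMode(enum.Enum):
    """Display modes for gui.py; GPU, NETWORK, and AUDIO are the new additions."""
    CPU = "cpu"
    GPU = "gpu"        # GPUtil: per-device load
    NETWORK = "net"    # scapy: live packet counts
    AUDIO = "audio"    # pyaudio: input level metering

class SystemMetricsEngine:
    """Sketch of the new data stream; degrades gracefully if the library is absent."""

    def gpu_load(self) -> list:
        """Return the utilization (0.0-1.0) of each detected GPU."""
        try:
            import GPUtil
        except ImportError:
            return []  # no GPU metrics available on this machine
        return [gpu.load for gpu in GPUtil.getGPUs()]
```

The same optional-import pattern would apply to the `scapy` and `pyaudio` streams, so a missing hardware library disables one panel rather than crashing the monitor.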