idk i did a lot so look at the changelog file #63

sixtysixx wants to merge 33 commits into ChipaDevTeam:master
Conversation
Fix concurrent test failure, PyO3 compatibility, and code quality

- Add threading.RLock() to synchronous.py for thread-safe buy() calls
- Remove unused resend_connection_messages method
- Fix chrono::Duration -> std::time::Duration for PyO3 compatibility
- Update CI workflow for stable PyO3
- Apply clippy fixes in candle.rs, subscriptions.rs, state.rs
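The `threading.RLock()` change mentioned above can be sketched roughly as follows. This is a hedged illustration only: `_RawClient` and its `buy` return value are placeholders, not the library's actual API.

```python
import threading

class _RawClient:
    """Placeholder for the underlying non-thread-safe client."""
    def buy(self, asset: str, amount: float) -> str:
        return f"trade:{asset}:{amount}"

class PocketOption:
    """Sketch of a sync wrapper that serializes calls with an RLock."""
    def __init__(self):
        self._client = _RawClient()
        # RLock (not Lock) so a method already holding the lock can call
        # another locked method on the same thread without deadlocking.
        self._lock = threading.RLock()

    def buy(self, asset: str, amount: float) -> str:
        with self._lock:
            return self._client.buy(asset, amount)
```

With this shape, concurrent `buy()` calls from multiple threads are serialized through the lock instead of interleaving on the shared client.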
📝 Walkthrough

Adds thread-safe synchronous wrappers, a compile_candles API across Rust and Python, per-subscription IDs with multi-subscriber subscriptions, a public PendingTradesHandle with extensive tests/examples, CI/workflow and build adjustments, and many docs/formatting and crate manifest updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (Python)
    participant SyncWrapper as Sync PocketOption
    participant AsyncClient as Async PocketOption
    participant WS as WebSocket / Transport
    participant State as In-memory State
    Client->>SyncWrapper: call compile_candles(asset, period, lookback)
    SyncWrapper->>AsyncClient: (under lock) compile_candles(...)
    AsyncClient->>WS: request ticks(asset, lookback)
    WS-->>AsyncClient: return ticks
    AsyncClient->>AsyncClient: compile_candles_from_tuples(ticks, period)
    AsyncClient->>State: validate asset presence (optional)
    AsyncClient-->>SyncWrapper: return candles
    SyncWrapper-->>Client: return result
```
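The compile step in the diagram amounts to bucketing ticks by a fixed period and taking open/high/low/close per bucket. A minimal sketch of that idea (the `(timestamp, price)` tick format and the function name are assumptions for illustration, not the library's actual implementation):

```python
def compile_candles_from_ticks(ticks, period):
    """Group (timestamp, price) ticks into fixed-width OHLC candles.

    `period` is the candle width in seconds. Illustrative sketch only.
    """
    candles = {}
    for ts, price in sorted(ticks):
        bucket = int(ts // period) * period  # candle start time
        c = candles.get(bucket)
        if c is None:
            candles[bucket] = {"time": bucket, "open": price, "high": price,
                               "low": price, "close": price}
        else:
            c["high"] = max(c["high"], price)
            c["low"] = min(c["low"], price)
            c["close"] = price  # ticks are sorted, so the last tick wins
    return [candles[k] for k in sorted(candles)]
```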
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
Summary of Changes

Hello @sixtysixx, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request delivers a substantial update to the BinaryOptionsTools project, primarily focusing on enhancing the core Rust library and its Python bindings. The changes introduce new functionality such as custom candlestick compilation from tick data, significant improvements to the trading framework's event handling, and a major overhaul of dependency versions for better stability and performance. The project structure has been refactored to better organize the Rust components, and comprehensive documentation updates ensure clarity for both developers and users.
Code Review
This pull request introduces a wide range of changes, including dependency updates, project structure refactoring, new features like compile_candles, and significant improvements to thread safety in the synchronous Python client. It also adds extensive docstrings and tests, which is great. However, there are a few critical issues, such as a build script that will fail to compile and the removal of pre-commit hooks, which could impact code quality. I've also noted some minor documentation and cleanup issues.
crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
docs/guides/python-pystrategy-trading-bot.md (2)
181-183: ⚠️ Potential issue | 🟠 Major

**Async call shown in a sync callback will mislead users and fail at runtime.**
Line 182 calls `ctx.client.balance()` (documented as async) from sync `on_start` without `await`. This example should use `ctx.market.balance()` or schedule an async task.

Suggested doc fix
```diff
 def on_start(self, ctx):
-    self.initial_balance = ctx.client.balance()
+    # on_start is sync; use virtual market sync API here
+    self.initial_balance = ctx.market.balance()
     self.trades = []
     self.win_count = 0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/guides/python-pystrategy-trading-bot.md` around lines 181 - 183, The on_start callback calls the async ctx.client.balance() from a synchronous function which will fail; update the example in the on_start implementation so it does not call the async client method directly — either replace ctx.client.balance() with the synchronous helper ctx.market.balance() (preferred) or convert on_start to schedule an async task and await ctx.client.balance() inside that async task; ensure you reference the on_start function and use ctx.market.balance() when keeping it synchronous or properly await ctx.client.balance() inside an async context.
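For the "schedule an async task" alternative mentioned in the finding above, one common pattern is to create a task on the running loop from the sync callback and store the result in a done-callback. This is a hedged sketch: `FakeAsyncClient`, `Strategy`, and the `ctx.client` attribute are stand-ins, not the guide's real API.

```python
import asyncio

class FakeAsyncClient:
    """Hypothetical stand-in for the guide's async client."""
    async def balance(self) -> float:
        return 100.0

class Strategy:
    def __init__(self):
        self.initial_balance = None

    def on_start(self, ctx):
        # Sync callback: don't call the async API directly. Schedule a
        # task on the running loop and record the result when it's done.
        task = asyncio.get_running_loop().create_task(ctx.client.balance())
        task.add_done_callback(
            lambda t: setattr(self, "initial_balance", t.result())
        )
```

Note the result is not available immediately after `on_start` returns; it arrives once the event loop runs the scheduled task.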
620-620: ⚠️ Potential issue | 🟡 Minor

**Guard profit percentage against zero initial balance.**
Line 620 divides by `self.initial_balance` without a guard. A zero-balance edge case will crash the shutdown summary.

Suggested doc fix
```diff
-print(f"Net P/L: ${net_profit:.2f} ({net_profit/self.initial_balance*100:.2f}%)")
+pnl_pct = (net_profit / self.initial_balance * 100) if self.initial_balance else 0.0
+print(f"Net P/L: ${net_profit:.2f} ({pnl_pct:.2f}%)")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/guides/python-pystrategy-trading-bot.md` at line 620, The shutdown summary prints net profit percentage using net_profit/self.initial_balance which will raise on zero initial balance; update the print logic around that line to guard against division by zero by computing the percentage only when self.initial_balance is non-zero (e.g., set profit_pct = net_profit / self.initial_balance * 100 if self.initial_balance != 0 else None or "N/A") and then include that guarded value in the f-string so the summary prints a safe placeholder when initial balance is zero; locate the usage of self.initial_balance and net_profit in the shutdown/summary routine (the print statement shown) and replace the direct division with this conditional computation.

docs/examples/python/async/strategy_example.py (1)
43-43: ⚠️ Potential issue | 🟠 Major

**Incorrect `start_tracing` call signature.**

The function requires 4 arguments: `(path: str, level: str, terminal: bool, layers: List[StreamLogsLayer])`, but the call at line 43 passes only one: `start_tracing("info")`. This will raise a `TypeError` at runtime.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/examples/python/async/strategy_example.py` at line 43, The call to start_tracing currently passes only one argument and will raise a TypeError; update the call to provide all four required parameters for start_tracing(path: str, level: str, terminal: bool, layers: List[StreamLogsLayer]) — supply a string path for the trace file, the desired level (e.g., "info"), a boolean for terminal output, and a list of StreamLogsLayer instances (or an empty list if none) so the start_tracing invocation uses the full signature.
🟡 Minor comments (16)
docs/project/breaking-changes-0.2.6.md-13-15 (1)
13-15: ⚠️ Potential issue | 🟡 Minor

**Clarify `payout_percentage` units to avoid miscalculation.**

Line 13 is ambiguous: `payout_percentage` could be interpreted as `80` or `0.8`. Please specify whether the formula expects a decimal rate, or divide by 100 when using percent values.

Suggested doc wording
```diff
-- **Win**: `profit = stake * payout_percentage` (e.g., $1.00 stake at 80% returns $0.80 profit).
+- **Win**: `profit = stake * payout_rate` where `payout_rate` is decimal (e.g., `0.8` for 80%), so a $1.00 stake returns $0.80 profit.
```

🤖 Prompt for AI Agents
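The units distinction behind this finding can be checked with quick arithmetic (illustrative values only):

```python
stake = 1.00

# Decimal-rate form: 0.8 means an 80% payout.
payout_rate = 0.8
profit_decimal = stake * payout_rate

# Percent form: 80 must be divided by 100 before multiplying.
payout_percentage = 80
profit_percent = stake * (payout_percentage / 100)

# Both give $0.80 profit on a $1.00 stake; forgetting the /100
# with percent-form input would yield $80.00 instead.
```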
Verify each finding against the current code and only fix it if needed. In `@docs/project/breaking-changes-0.2.6.md` around lines 13 - 15, Update the docs to clarify the units of payout_percentage used in the formulas for Win/Loss/Draw: state explicitly that payout_percentage is a decimal fraction (e.g., 0.8 for 80%) and that the Win formula uses profit = stake * payout_percentage; alternatively note the equivalent if using percent values (profit = stake * (payout_percentage / 100) when payout_percentage is given as 80). Reference the `payout_percentage` symbol and the three formulas (`profit = stake * payout_percentage`, `profit = -stake`, `profit = 0.00`) so readers know which lines to update.

crates/core-pre/LICENSE-7-90 (1)
7-90: ⚠️ Potential issue | 🟡 Minor

**Section numbering is broken throughout the license.**
All major sections are numbered "1." instead of sequential numbers (1-9). This appears to be a copy-paste or markdown rendering issue that should be corrected for clarity and professionalism.
Affected lines:
- Line 7: "1. DEFINITIONS" ✓
- Line 20: "1. GRANT OF LICENSE" → should be "2."
- Line 38: "1. DISCLAIMER OF WARRANTY" → should be "3."
- Line 44: "1. LIMITATION OF LIABILITY" → should be "4."
- Line 59: "1. RISK WARNING" → should be "5."
- Line 69: "1. DISTRIBUTION" → should be "6."
- Line 77: "1. MODIFICATIONS" → should be "7."
- Line 85: "1. TERMINATION" → should be "8."
- Line 90: "1. CONTACT" → should be "9."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/core-pre/LICENSE` around lines 7 - 90, Fix the broken section numbering by renumbering each major heading sequentially: change "1. GRANT OF LICENSE" to "2. GRANT OF LICENSE", "1. DISCLAIMER OF WARRANTY" to "3. DISCLAIMER OF WARRANTY", "1. LIMITATION OF LIABILITY" to "4. LIMITATION OF LIABILITY", "1. RISK WARNING" to "5. RISK WARNING", "1. DISTRIBUTION" to "6. DISTRIBUTION", "1. MODIFICATIONS" to "7. MODIFICATIONS", "1. TERMINATION" to "8. TERMINATION" and "1. CONTACT" to "9. CONTACT" so the headings (DEFINITIONS, GRANT OF LICENSE, DISCLAIMER OF WARRANTY, LIMITATION OF LIABILITY, RISK WARNING, DISTRIBUTION, MODIFICATIONS, TERMINATION, CONTACT) follow proper sequential numbering and preserve the existing text and clauses.

CITATION.cff-26-27 (1)
26-27: ⚠️ Potential issue | 🟡 Minor

**Align release metadata across version/year fields.**

Line 26 and Line 44 move the project to `0.2.8`/2026, but Line 27 still says `date-released: "2024-01-01"`. Please update `date-released` to the actual `0.2.8` release date (or remove it until release) to keep CFF metadata consistent.

Also applies to: 44-44
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CITATION.cff` around lines 26 - 27, The CITATION.cff metadata is inconsistent: the version field "version: \"0.2.8\"" was updated but the "date-released" field still shows "2024-01-01"; either update the date-released value to the actual release date for version 0.2.8 or remove the "date-released" field until that release date is known—edit the "version" and "date-released" entries in CITATION.cff so they match (or omit date-released) to keep the CFF metadata consistent.

BinaryOptionsToolsV2/Readme.md-45-45 (1)
45-45: ⚠️ Potential issue | 🟡 Minor

**Fix bash comment syntax.**

The code blocks use `//` for comments, which is invalid in bash. Bash comments should start with `#`.

🐚 Proposed fix
```diff
-// Inside the root folder
+# Inside the root folder
 cd BinaryOptionsToolsV2
 maturin build -r
-// Once the command is executed it should print a path to a .whl file, copy it and then run
+# Once the command is executed it should print a path to a .whl file, copy it and then run
 pip install path/to/file.whl
```

To install the library in a local virtual environment:

```diff
-// Inside the root folder
+# Inside the root folder
 cd BinaryOptionsToolsV2
-// Activate the virtual environment if not done already
+# Activate the virtual environment if not done already
-// Execute the following command and it should automatically install the library in the VM
+# Execute the following command and it should automatically install the library in the VM
 maturin develop
```

Also applies to: 56-56

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@BinaryOptionsToolsV2/Readme.md` at line 45, Replace invalid bash comment syntax inside the README's bash code blocks by changing lines that start with "//" (e.g. "// Inside the root folder", "// Once the command is executed...", "// Activate the virtual environment...", "// Execute the following command...") to use "#" comments; ensure the code fences remain marked as bash and update all occurrences (including the second instance referenced) so that every "// ..." comment in those bash blocks becomes "# ...".

BinaryOptionsToolsV2/Readme.md-37-37 (1)

37-37: ⚠️ Potential issue | 🟡 Minor

**Add missing hyperlink for Rust installation.**

The text "Check here" lacks a hyperlink. Users need a link to instructions for installing Rust and Cargo.

🔗 Proposed fix

```diff
-- Make sure you have `rust` and `cargo` installed (Check here)
+- Make sure you have `rust` and `cargo` installed ([Check here](https://www.rust-lang.org/tools/install))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@BinaryOptionsToolsV2/Readme.md` at line 37, Update the README sentence that mentions Rust and Cargo so the "Check here" text becomes a clickable hyperlink to Rust's installation page; locate the line containing "Make sure you have `rust` and `cargo` installed (Check here)" and replace the parenthetical with a Markdown link to the official Rust install/docs (e.g., link "Check here" to https://www.rust-lang.org/tools/install) so users can click through for installation instructions.

.serena/project.yml-101-108 (1)
101-108: ⚠️ Potential issue | 🟡 Minor

**Align `base_modes` and `default_modes` with documented expectations.**

Lines 101 and 108 set `base_modes:` and `default_modes:` to YAML null. According to the file's own comments (lines 99–100, 105–107), you should either set them to `[]` (to explicitly disable modes) or to a list of mode names. Leaving them null is ambiguous: the keys are present, so they will override the global configuration with an undefined value rather than inheriting from the global config. Remove the keys to inherit defaults, or set them explicitly to `[]` or a list.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.serena/project.yml around lines 101 - 108, The YAML keys base_modes and default_modes are currently present but set to null, which unintentionally overrides the global defaults; either remove those keys entirely to inherit the global configuration, or explicitly set them to an empty list (default_modes: [] and/or base_modes: []) or to an explicit list of mode names to make the intention clear—update the .serena/project.yml entries for base_modes and default_modes accordingly.

tutorials/scripts/howto.txt-31-35 (1)
31-35: ⚠️ Potential issue | 🟡 Minor

**Clarify “events-po.com” rationale and scope.**

The note says it ignores `events-po.com` “to prevent incorrect SSID parsing”. In the script, it bypasses all send-hook logic for that host. Consider stating that it bypasses interception entirely for that domain (not only parsing).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tutorials/scripts/howto.txt` around lines 31 - 35, Update the note about events-po.com to clarify that the script completely bypasses interception for that domain (not just SSID parsing): explicitly state that any send-hook logic, request interception, or modification is skipped for hosts matching "events-po.com" so no data is intercepted or altered; reference the existing mention of "events-po.com" and the send-hook logic in the text and rephrase the bullet to read that the domain is excluded from all interception/processing, not only parsing, for safety and correctness.

tutorials/scripts/SSID_Fetcher_UserScript.user.js-68-68 (1)
68-68: ⚠️ Potential issue | 🟡 Minor

**Console log wording doesn’t match behavior.**

The log says “ignoring auth msgs from events-po.com”, but the code bypasses all send-hook logic for `events-po.com` (not just auth messages). Consider adjusting the message to reflect the broader bypass.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tutorials/scripts/SSID_Fetcher_UserScript.user.js` at line 68, The console message "Hooked. ignoring auth msgs from events-po.com." is misleading because the code bypasses all send-hook logic for events-po.com rather than only auth messages; update the console.log message (the one that currently prints "Hooked. ignoring auth msgs from events-po.com.") to accurately state that send-hook handling for events-po.com is entirely bypassed (e.g., "Hooked. bypassing send-hook for events-po.com.") so the log matches the actual behavior in the script.

tutorials/scripts/SSID_Fetcher_UserScript.user.js-46-49 (1)
46-49: ⚠️ Potential issue | 🟡 Minor

**Remove the trailing comma in the `confirm()` call for broader compatibility.**

The trailing comma in function calls is supported by Chrome 58+ and Firefox 52+, but Violentmonkey officially supports Chrome 57+, which does not support this syntax. While Tampermonkey (Firefox 78+) and modern Violentmonkey deployments are unaffected, removing the trailing comma requires no trade-offs and ensures the script won't fail on older or edge-case userscript runtime environments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tutorials/scripts/SSID_Fetcher_UserScript.user.js` around lines 46 - 49, Remove the trailing comma inside the confirm() call so the function invocation is compatible with older userscript runtimes: locate the confirm(...) used to set userWantsToProceed and change the argument list to end without a trailing comma after the template string; ensure no other commas remain after the final argument in that confirm call.

AUTHORS.md-68-68 (1)
68-68: ⚠️ Potential issue | 🟡 Minor

**Incomplete date placeholder.**

The "Last updated" timestamp contains `XX` as a placeholder instead of an actual day. Consider using the actual date (e.g., `2026-02-26` for today) or removing the specific date in favor of a more maintainable approach.

Suggested fix
```diff
-Last updated: 2026-01-XX
+Last updated: 2026-02-26
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AUTHORS.md` at line 68, The "Last updated: 2026-01-XX" placeholder in AUTHORS.md needs to be updated; locate the "Last updated: 2026-01-XX" line and either replace the "XX" with the actual day (for example "Last updated: 2026-02-26") or remove the specific date entirely (or replace it with a more maintainable format like a full YYYY-MM-DD timestamp or omit the date) so the file no longer contains a placeholder.

BinaryOptionsToolsUni/README.md-168-182 (1)
168-182: ⚠️ Potential issue | 🟡 Minor

**Section header mislabeled as "C" instead of "C#".**

The heading says `### C` but the code block contains C# (.NET) code. This could confuse users looking for C# examples.

📝 Proposed fix
```diff
-### C
+### C#
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@BinaryOptionsToolsUni/README.md` around lines 168 - 182, Update the README section header from "### C" to "### C#" and ensure the fenced code block language matches C# (e.g., use csharp) so the example for PocketOption.InitAsync, BalanceAsync and BuyAsync is correctly labeled; change only the header text and the code fence language tag around the sample that references PocketOption.InitAsync, BalanceAsync and BuyAsync.

AGENTS.md-136-136 (1)
136-136: ⚠️ Potential issue | 🟡 Minor

**Fix malformed inline code span.**

Line 136 has an extra leading space inside the code span: ` darling`.

📝 Proposed fix
```diff
-- Rust macros in `crates/macros` with ` darling` for attribute parsing
+- Rust macros in `crates/macros` with `darling` for attribute parsing
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` at line 136, The inline code span contains an extra leading space: change the text snippet "` darling`" to "`darling`" so the phrase reads "Rust macros in `crates/macros` with `darling` for attribute parsing"; locate and update the malformed backtick-delimited span in the AGENTS.md content where that sentence appears (search for the exact substring "` darling`") and remove the leading space inside the code span.

.github/workflows/CI.yml-140-144 (1)
140-144: ⚠️ Potential issue | 🟡 Minor

**Summary step contains inaccurate platform information.**

Line 142 references "Windows 2025", which does not exist. The `windows-latest` runner currently uses Windows Server 2022. This could confuse users reading the workflow summary.

Proposed fix
```diff
       - name: Summary
         run: |
           echo "### Automatic build success, more info below." >> $GITHUB_STEP_SUMMARY
-          echo "- **Platform:** macOS Sequoia / Ubuntu 24.04 / Windows 2025" >> $GITHUB_STEP_SUMMARY
+          echo "- **Platform:** macOS / Ubuntu 24.04 / Windows" >> $GITHUB_STEP_SUMMARY
           echo "- **Python:** 3.13 (Stable)" >> $GITHUB_STEP_SUMMARY
           echo "- **Version:** ${{ steps.get_version.outputs.VERSION }}" >> $GITHUB_STEP_SUMMARY
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/CI.yml around lines 140 - 144, The workflow summary echo that appends platform info to $GITHUB_STEP_SUMMARY contains an incorrect platform string ("Windows 2025"); locate the echo line that writes "- **Platform:** macOS Sequoia / Ubuntu 24.04 / Windows 2025" and replace the Windows entry with the correct runner name/version (e.g., "Windows Server 2022" or "windows-latest") so the summary accurately reflects the runners; ensure the rest of the summary (Python and Version echoes) remain unchanged.

BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py-1367-1377 (1)
1367-1377: ⚠️ Potential issue | 🟡 Minor

**Fix the `create_raw_iterator` example (`async with` is incorrect here).**

The shown usage treats the iterator like an async context manager. It should be consumed directly with `async for`.

📝 Proposed doc fix
```diff
-    async def bounded_stream():
-        async with PocketOptionAsync(ssid) as client:
-            validator = Validator.regex(r'42\["signal"')
-            async with await client.create_raw_iterator(
-                '42["startSignals"]', validator
-            ) as stream:
-                async for signal in stream:
-                    process_signal(json.loads(signal))
+    async def bounded_stream():
+        async with PocketOptionAsync(ssid) as client:
+            validator = Validator.regex(r'42\["signal"')
+            stream = await client.create_raw_iterator(
+                '42["startSignals"]', validator
+            )
+            async for signal in stream:
+                process_signal(json.loads(signal))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py` around lines 1367 - 1377, The example incorrectly treats the iterator returned by PocketOptionAsync.create_raw_iterator as an async context manager; update the bounded_stream example to await create_raw_iterator and consume the returned async iterator with async for instead of using "async with ... as stream". Specifically, in bounded_stream (and the example that references Validator.regex and PocketOptionAsync(ssid)), replace the "async with await client.create_raw_iterator(... ) as stream" usage with obtaining the iterator via await client.create_raw_iterator(...) (or directly awaiting inside the async for) and then iterate "async for signal in stream" to consume it.

BinaryOptionsToolsV2/rust/src/pocketoption.rs-499-518 (1)
499-518: ⚠️ Potential issue | 🟡 Minor

**Add explicit non-zero validation for candle periods.**

`custom_period` and `lookback_period` accept `0` currently. Guarding this at the boundary gives clearer and earlier errors.

🛠️ Proposed fix
```diff
     pub fn compile_candles<'py>(
         &self,
         py: Python<'py>,
         asset: String,
         custom_period: u32,
         lookback_period: u32,
     ) -> PyResult<Bound<'py, PyAny>> {
+        if custom_period == 0 || lookback_period == 0 {
+            return Err(BinaryErrorPy::NotAllowed(
+                "custom_period and lookback_period must be greater than 0".into(),
+            )
+            .into());
+        }
         let client = self.client.clone();
         future_into_py(py, async move {
             let res = client
                 .compile_candles(asset, custom_period, lookback_period)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@BinaryOptionsToolsV2/rust/src/pocketoption.rs` around lines 499 - 518, In compile_candles validate that custom_period and lookback_period are non-zero at the Rust/Python boundary: check the parameters at the start of the compile_candles method (before calling future_into_py and before cloning/using self.client) and return an early PyResult error if either is 0; reference the function compile_candles and the parameters custom_period and lookback_period and ensure the error is a PyResult error consistent with the crate's BinaryErrorPy/pyerror conventions so callers receive a clear, immediate validation error.

tests/python/pocketoption/test_async_mocked.py-324-343 (1)
324-343: ⚠️ Potential issue | 🟡 Minor

**Duplicate `MockLogger` class shadows the outer definition.**

The `MockLogger` class defined inside the fixture (lines 327-341) shadows the top-level `MockLogger` class (lines 274-291). Ruff flags this as F823 ("Local variable `MockLogger` referenced before assignment"). The conditional `if not hasattr(...)` check may not work as intended because the inner class is defined regardless.

🐛 Proposed fix: Remove the duplicate definition
```diff
     # Mock Logger if not present at top-level
     if not hasattr(BinaryOptionsToolsV2, "Logger"):
-
-        class MockLogger:
-            def __init__(self):
-                pass
-
-            def info(self, *args, **kwargs):
-                pass
-
-            def error(self, *args, **kwargs):
-                pass
-
-            def debug(self, *args, **kwargs):
-                pass
-
-            def warn(self, *args, **kwargs):
-                pass
-
         monkeypatch.setattr(BinaryOptionsToolsV2, "Logger", MockLogger, raising=False)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/python/pocketoption/test_async_mocked.py` around lines 324 - 343, Remove the duplicate inner MockLogger definition that shadows the top-level MockLogger: inside the block that checks if not hasattr(BinaryOptionsToolsV2, "Logger") do not redeclare MockLogger; instead reference the existing top-level MockLogger when calling monkeypatch.setattr(BinaryOptionsToolsV2, "Logger", MockLogger, raising=False). This removes the local assignment that triggers Ruff F823 while preserving the intended monkeypatching behavior.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (5)
- `BinaryOptionsToolsUni/Cargo.lock` is excluded by `!**/*.lock`
- `BinaryOptionsToolsV2/Cargo.lock` is excluded by `!**/*.lock`
- `Cargo.lock` is excluded by `!**/*.lock`
- `crates/binary_options_tools/Cargo.lock` is excluded by `!**/*.lock`
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (99)
- .github/ISSUE_TEMPLATE/bug_report.md
- .github/ISSUE_TEMPLATE/documentation.md
- .github/ISSUE_TEMPLATE/feature_request.md
- .github/ISSUE_TEMPLATE/question.md
- .github/PULL_REQUEST_TEMPLATE.md
- .github/workflows/CI.yml
- .github/workflows/docs.yml
- .gitignore
- .husky/pre-commit
- .serena/.gitignore
- .serena/project.yml
- ACKNOWLEDGMENTS.md
- AGENTS.md
- AUTHORS.md
- BinaryOptionsToolsUni/README.md
- BinaryOptionsToolsV2/Readme.md
- BinaryOptionsToolsV2/pyproject.toml
- BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/BinaryOptionsToolsV2.pyi
- BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py
- BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/synchronous.py
- BinaryOptionsToolsV2/python/test_simple.py
- BinaryOptionsToolsV2/rust/Cargo.toml
- BinaryOptionsToolsV2/rust/LICENSE
- BinaryOptionsToolsV2/rust/Readme.md
- BinaryOptionsToolsV2/rust/build.rs
- BinaryOptionsToolsV2/rust/src/config.rs
- BinaryOptionsToolsV2/rust/src/error.rs
- BinaryOptionsToolsV2/rust/src/framework.rs
- BinaryOptionsToolsV2/rust/src/lib.rs
- BinaryOptionsToolsV2/rust/src/logs.rs
- BinaryOptionsToolsV2/rust/src/pocketoption.rs
- BinaryOptionsToolsV2/rust/src/runtime.rs
- BinaryOptionsToolsV2/rust/src/stream.rs
- BinaryOptionsToolsV2/rust/src/validator.rs
- CHANGELOG.md
- CITATION.cff
- CONTRIBUTING.md
- ForLLMsAndAgents/guidelines.md
- README.md
- crates/binary_options_tools/Cargo.toml
- crates/binary_options_tools/data/pocket_options_regions.json
- crates/binary_options_tools/examples/pending_trades_example.rs
- crates/binary_options_tools/src/config.rs
- crates/binary_options_tools/src/error.rs
- crates/binary_options_tools/src/expertoptions/types.rs
- crates/binary_options_tools/src/framework/mod.rs
- crates/binary_options_tools/src/pocketoption/candle.rs
- crates/binary_options_tools/src/pocketoption/modules/mod.rs
- crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs
- crates/binary_options_tools/src/pocketoption/modules/pending_trades_tests.rs
- crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs
- crates/binary_options_tools/src/pocketoption/modules/trades.rs
- crates/binary_options_tools/src/pocketoption/modules/trades_tests/common.rs
- crates/binary_options_tools/src/pocketoption/modules/trades_tests/mod.rs
- crates/binary_options_tools/src/pocketoption/pocket_client.rs
- crates/binary_options_tools/src/pocketoption/state.rs
- crates/binary_options_tools/src/pocketoption/types.rs
- crates/binary_options_tools/src/utils/serialize.rs
- crates/binary_options_tools/src/validator.rs
- crates/core-pre/Cargo.toml
- crates/core-pre/LICENSE
- crates/core-pre/README.md
- crates/core-pre/docs/testing-framework.md
- crates/core-pre/src/builder.rs
- crates/core-pre/src/middleware.rs
- crates/macros/Cargo.toml
- crates/macros/LICENSE
- data/ssid.json
- data/test_close_order.json
- data/update_closed_deals.json
- data/update_opened_deals.json
- docs/INDEX.md
- docs/OVERVIEW.md
- docs/examples/python/async/active_assets.py
- docs/examples/python/async/comprehensive_demo.py
- docs/examples/python/async/history.py
- docs/examples/python/async/rich_dashboard_bot.py
- docs/examples/python/async/strategy_example.py
- docs/examples/python/sync/logs.py
- docs/examples/python/sync/subscribe_symbol.py
- docs/examples/python/sync/subscribe_symbol_chuncked.py
- docs/examples/python/sync/subscribe_symbol_timed.py
- docs/guides/python-pystrategy-trading-bot.md
- docs/project/breaking-changes-0.2.6.md
- mkdocs.yml
- package.json
- scripts/bot-cli.py
- tests/conftest.py
- tests/python/core/test_config.py
- tests/python/experimental/test.py
- tests/python/pocketoption/test_async_mocked.py
- tests/python/pocketoption/test_asynchronous.py
- tests/python/pocketoption/test_integration.py
- tests/python/pocketoption/test_raw_handler.py
- tests/python/pocketoption/test_sync_mocked.py
- tests/python/pocketoption/test_synchronous.py
- tests/python/tracing/test_tracing.py
- tutorials/scripts/SSID_Fetcher_UserScript.user.js
- tutorials/scripts/howto.txt
💤 Files with no reviewable changes (1)
- .husky/pre-commit
Actionable comments posted: 3
♻️ Duplicate comments (1)
crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs (1)
216: ⚠️ Potential issue | 🟠 Major — Remove remaining raw `eprintln!` debug output before merge.

This same logging concern was already raised in an earlier review and is still present (lines 216, 223, 230, 261, 263, 269, 288, 302). Please switch to structured tracing or remove.
Also applies to: 223-223, 230-230, 261-263, 269-270, 288-288, 302-302
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs` at line 216, Replace all raw eprintln! debug prints in pending_trades.rs (e.g., the eprintln! that logs cmd and self.last_req_id inside the handler) with structured tracing calls (tracing::debug! or tracing::trace!) and include the same context fields (cmd, self.last_req_id, request ids) as named fields in the macro; remove any remaining eprintln! occurrences mentioned in the review. Ensure the module imports tracing (use tracing::debug or trace) and that the log level chosen matches existing logs so behavior remains consistent.
🧹 Nitpick comments (1)
crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs (1)
161-165: Update concurrency docs to match current handle behavior.

Line 163 says concurrent `open_pending_order` calls on the handle lead to warnings/timeouts, but the mutex at line 97 serializes calls per cloned handle. Clarify that the risk now primarily applies to module-level correlation constraints / multiple independent handle instances.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs` around lines 161 - 165, Update the module docs for OpenPendingOrder/open_pending_order to state that concurrent calls on the same PendingTradesHandle are serialized by the handle's mutex (so cloned handle instances serialize callers), and that the original warning about timeouts applies mainly when multiple independent handle instances (or external callers bypassing the handle) exist or when module-level correlation constraints (last_req_id) are violated; reference PendingTradesHandle, open_pending_order, OpenPendingOrder, and last_req_id in the wording so readers can locate the relevant synchronization and correlation state.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Line 129: Fix the extra space inside the inline code span: replace the
backticked token "` darling`" with "`darling`" in the AGENTS.md sentence that
reads "Rust macros in `crates/macros` with ` darling` for attribute parsing" so
the code span has no leading space; update the text to "Rust macros in
`crates/macros` with `darling` for attribute parsing".
In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs`:
- Around line 55-59: PendingTradesHandle currently exposes its internals
(sender, receiver, call_lock) allowing external callers to bypass coordination
(e.g., call recv() directly) and break open_pending_order invariants; make the
three fields private (remove pub) and add minimal, controlled async methods on
PendingTradesHandle that encapsulate sending a Command and awaiting the
correlated CommandResponse (e.g., a send_and_wait or use_existing
open_pending_order signature), retaining the internal Arc<Mutex<()>> lock usage
inside those methods so external code cannot acquire the receiver or sender
directly; update call sites to use the new methods instead of accessing
sender/receiver/call_lock directly and keep the struct definition as
PendingTradesHandle { sender: AsyncSender<Command>, receiver:
AsyncReceiver<CommandResponse>, call_lock: Arc<tokio::sync::Mutex<()>> } with
only methods exposed.
---
Duplicate comments:
In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs`:
- Line 216: Replace all raw eprintln! debug prints in pending_trades.rs (e.g.,
the eprintln! that logs cmd and self.last_req_id inside the handler) with
structured tracing calls (tracing::debug! or tracing::trace!) and include the
same context fields (cmd, self.last_req_id, request ids) as named fields in the
macro; remove any remaining eprintln! occurrences mentioned in the review.
Ensure the module imports tracing (use tracing::debug or trace) and that the log
level chosen matches existing logs so behavior remains consistent.
---
Nitpick comments:
In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs`:
- Around line 161-165: Update the module docs for
OpenPendingOrder/open_pending_order to state that concurrent calls on the same
PendingTradesHandle are serialized by the handle's mutex (so cloned handle
instances serialize callers), and that the original warning about timeouts
applies mainly when multiple independent handle instances (or external callers
bypassing the handle) exist or when module-level correlation constraints
(last_req_id) are violated; reference PendingTradesHandle, open_pending_order,
OpenPendingOrder, and last_req_id in the wording so readers can locate the
relevant synchronization and correlation state.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- AGENTS.md
- BinaryOptionsToolsV2/rust/build.rs
- crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- BinaryOptionsToolsV2/rust/build.rs
## Conventions Observed

- Rust macros in `crates/macros` with ` darling` for attribute parsing
Fix extra space in code span.
There's an extraneous space before "darling" inside the backticks, which violates markdown formatting conventions.
📝 Proposed fix
-- Rust macros in `crates/macros` with ` darling` for attribute parsing
+- Rust macros in `crates/macros` with `darling` for attribute parsing

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-- Rust macros in `crates/macros` with ` darling` for attribute parsing
+- Rust macros in `crates/macros` with `darling` for attribute parsing
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 129-129: Spaces inside code span elements
(MD038, no-space-in-code)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` at line 129, Fix the extra space inside the inline code span:
replace the backticked token "` darling`" with "`darling`" in the AGENTS.md
sentence that reads "Rust macros in `crates/macros` with ` darling` for
attribute parsing" so the code span has no leading space; update the text to
"Rust macros in `crates/macros` with `darling` for attribute parsing".
pub struct PendingTradesHandle {
    pub sender: AsyncSender<Command>,
    pub receiver: AsyncReceiver<CommandResponse>,
    pub call_lock: Arc<tokio::sync::Mutex<()>>,
}
Keep PendingTradesHandle internals private to preserve invariants.
Lines 56-58 expose the raw channels and mutex. External code can bypass open_pending_order (e.g., direct recv()), which can steal responses and cause hard-to-debug timeouts/correlation failures.
♻️ Proposed fix
pub struct PendingTradesHandle {
- pub sender: AsyncSender<Command>,
- pub receiver: AsyncReceiver<CommandResponse>,
- pub call_lock: Arc<tokio::sync::Mutex<()>>,
+ sender: AsyncSender<Command>,
+ receiver: AsyncReceiver<CommandResponse>,
+ call_lock: Arc<tokio::sync::Mutex<()>>,
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
pub struct PendingTradesHandle {
-    pub sender: AsyncSender<Command>,
-    pub receiver: AsyncReceiver<CommandResponse>,
-    pub call_lock: Arc<tokio::sync::Mutex<()>>,
+    sender: AsyncSender<Command>,
+    receiver: AsyncReceiver<CommandResponse>,
+    call_lock: Arc<tokio::sync::Mutex<()>>,
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs`
around lines 55 - 59, PendingTradesHandle currently exposes its internals
(sender, receiver, call_lock) allowing external callers to bypass coordination
(e.g., call recv() directly) and break open_pending_order invariants; make the
three fields private (remove pub) and add minimal, controlled async methods on
PendingTradesHandle that encapsulate sending a Command and awaiting the
correlated CommandResponse (e.g., a send_and_wait or use_existing
open_pending_order signature), retaining the internal Arc<Mutex<()>> lock usage
inside those methods so external code cannot acquire the receiver or sender
directly; update call sites to use the new methods instead of accessing
sender/receiver/call_lock directly and keep the struct definition as
PendingTradesHandle { sender: AsyncSender<Command>, receiver:
AsyncReceiver<CommandResponse>, call_lock: Arc<tokio::sync::Mutex<()>> } with
only methods exposed.
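The encapsulation the review asks for boils down to a "send and wait under a private lock" pattern: callers can only go through one method, so no one can steal another caller's response off the shared receiver. A minimal asyncio sketch of that pattern (all names here — `Handle`, `send_and_wait`, `echo_worker` — are illustrative stand-ins, not the crate's actual API):

```python
import asyncio

class Handle:
    """Private queues + lock, mirroring the suggested PendingTradesHandle shape."""

    def __init__(self, requests: asyncio.Queue, responses: asyncio.Queue):
        self._requests = requests          # analogous to the private AsyncSender
        self._responses = responses        # analogous to the private AsyncReceiver
        self._call_lock = asyncio.Lock()   # analogous to Arc<Mutex<()>>

    async def send_and_wait(self, command: str) -> str:
        # Holding the lock across send + recv pairs each request with its own
        # response, even when many tasks share the same handle.
        async with self._call_lock:
            await self._requests.put(command)
            return await self._responses.get()

async def echo_worker(requests: asyncio.Queue, responses: asyncio.Queue):
    # Stand-in for the module task on the other side of the channels.
    while True:
        cmd = await requests.get()
        await responses.put(f"ok:{cmd}")

async def main() -> list[str]:
    requests, responses = asyncio.Queue(), asyncio.Queue()
    worker = asyncio.create_task(echo_worker(requests, responses))
    handle = Handle(requests, responses)
    # Two concurrent callers; the lock serializes them, so each receives the
    # response matching its own command.
    results = await asyncio.gather(
        handle.send_and_wait("a"), handle.send_and_wait("b")
    )
    worker.cancel()
    return results

print(asyncio.run(main()))  # → ['ok:a', 'ok:b']
```

Because the queues and lock are private, external code has no way to call the equivalent of `recv()` directly, which is exactly the invariant the suggested Rust fix preserves.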
if self.last_req_id.is_some() {
    warn!(target: "PendingTradesApiModule", "Overwriting a pending request. Concurrent open_pending_order calls are not supported.");
}
self.last_req_id = Some(req_id);
eprintln!("[DEBUG] Set last_req_id to: {:?}", req_id);
Overwriting last_req_id can misroute delayed server responses to the wrong caller.
At Line 222, a new command overwrites an existing pending req_id. If a previous request timed out client-side and its response arrives later, that stale response can be attributed to the newer request path. Also, Line 273 forwards fail responses unconditionally, which can poison subsequent requests.
🛡️ Safer local guard (prevents overwrite and drops stale fail when no pending request)
Command::OpenPendingOrder { open_type, amount, asset, open_time, open_price, timeframe, min_payout, command, req_id } => {
if self.last_req_id.is_some() {
warn!(target: "PendingTradesApiModule", "Overwriting a pending request. Concurrent open_pending_order calls are not supported.");
- }
- self.last_req_id = Some(req_id);
+ continue;
+ }
+ self.last_req_id = Some(req_id);
eprintln!("[DEBUG] Set last_req_id to: {:?}", req_id);
let order = OpenPendingOrder::new(open_type, amount, asset, open_time, open_price, timeframe, min_payout, command);
self.to_ws_sender.send(Message::text(order.to_string())).await?;
}
...
ServerResponse::Fail(fail) => {
- self.last_req_id = None;
- self.command_responder.send(CommandResponse::Error(fail)).await?;
+ if self.last_req_id.take().is_some() {
+ self.command_responder.send(CommandResponse::Error(fail)).await?;
+ } else {
+ warn!(target: "PendingTradesApiModule", "Received failopenPendingOrder but no req_id was pending. Dropping response to avoid ambiguity.");
+ }
}

Also applies to: 273-276
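The guard above enforces a simple correlation rule: deliver a response only when a request is actually pending and the IDs line up; otherwise drop it as stale. A language-neutral sketch of that rule (the names `Correlator`, `start_request`, `on_response` are illustrative, not from the crate):

```python
class Correlator:
    """Tracks at most one in-flight request id, like last_req_id in the module."""

    def __init__(self):
        self.pending_id = None
        self.delivered = []
        self.dropped = []

    def start_request(self, req_id):
        # Refuse overlap instead of overwriting the in-flight id.
        if self.pending_id is not None:
            raise RuntimeError("concurrent requests not supported")
        self.pending_id = req_id

    def on_response(self, req_id, payload):
        # A response with no pending request (or a mismatched id) is stale:
        # drop it rather than routing it to the wrong caller.
        if self.pending_id != req_id or self.pending_id is None:
            self.dropped.append((req_id, payload))
            return
        self.pending_id = None
        self.delivered.append((req_id, payload))

c = Correlator()
c.on_response(1, "late-failure")  # nothing pending → dropped, not delivered
c.start_request(2)
c.on_response(2, "ok")            # id matches → delivered, pending cleared
```

This is why forwarding fail responses unconditionally is dangerous: a timed-out request's late failure would otherwise be attributed to whichever request came next.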
Pull request overview
This pull request updates BinaryOptionsTools V2 to version 0.2.8, introducing several significant features and improvements. The PR lacks a proper description (title: "idk i did a lot so look at the changelog file"), but the CHANGELOG provides comprehensive details. Key changes include custom candle compilation, thread-safe synchronous operations, multi-subscription support per asset, Python stub generation during builds, and various bug fixes.
Changes:
- Added custom candle compilation feature allowing non-standard timeframes
- Implemented thread-safe synchronous client operations
- Enhanced subscription system to support multiple subscriptions per asset with unique IDs
- Added Python stub generation via build script with an optional `stubgen` feature
- Fixed import ordering across Python test files (PEP 8 compliance)
- Updated version numbers, documentation, and license files
- Improved SSID fetcher userscript with better security handling
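The first item in the list above — compiling candles for non-standard timeframes — amounts to bucketing raw ticks into OHLC bars. A sketch of that idea in plain Python; the bucketing rule (`floor(timestamp / period)`) and the output shape are assumptions for illustration, not the crate's exact implementation:

```python
def compile_candles_from_ticks(ticks, period_secs):
    """ticks: iterable of (timestamp_secs, price), assumed sorted by time."""
    candles = []
    for ts, price in ticks:
        # Every tick falls into a bucket starting at a multiple of the period.
        bucket = int(ts // period_secs) * period_secs
        if candles and candles[-1]["time"] == bucket:
            c = candles[-1]
            c["high"] = max(c["high"], price)
            c["low"] = min(c["low"], price)
            c["close"] = price  # last tick in the bucket wins
        else:
            candles.append(
                {"time": bucket, "open": price, "high": price,
                 "low": price, "close": price}
            )
    return candles

# Ticks spanning two 60-second buckets: [0, 60) and [60, 120)
ticks = [(0, 1.0), (10, 1.2), (59, 0.9), (60, 1.1), (119, 1.3)]
candles = compile_candles_from_ticks(ticks, 60)
```

Note this also shows why a `period` of 0 needs explicit validation upstream: with a zero divisor the bucketing is undefined, which is the concern raised about `compile_candles_from_tuples` later in this review.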
Reviewed changes
Copilot reviewed 78 out of 104 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| CHANGELOG.md | Documents version 0.2.8 features, fixes, and breaking changes |
| README.md | Updated installation links from 0.2.6 to 0.2.8 |
| CITATION.cff | Version bump to 0.2.8, year updated to 2026 |
| tutorials/scripts/* | SSID fetcher userscript improvements and documentation |
| tests/python/* | Import ordering fixes for PEP 8 compliance |
| crates/binary_options_tools/src/* | Tests added, field visibility changes, compile_candles feature |
| crates/binary_options_tools/src/pocketoption/state.rs | Subscription storage changed to Vec for multi-subscription support |
| crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs | Major refactor supporting multiple subscriptions per asset |
| crates/binary_options_tools/src/pocketoption/pocket_client.rs | Added compile_candles method and pending_trades_lock |
| BinaryOptionsToolsV2/rust/* | Added stub generation, updated dependencies, new modules |
| Configuration files | Build system updates, stub generation configuration |
Actionable comments posted: 15
♻️ Duplicate comments (3)
crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs (1)
703-745: ⚠️ Potential issue | 🟠 Major — Unsubscribe cleanup still aborts on send failure, and `.await` occurs under the write lock.

Line 716 and Line 738 still propagate send errors via `?`, so cleanup can exit early. Also, these awaited sends run while the `active_subscriptions` write lock is held, which can stall other subscription operations.

🔧 Suggested fix
async fn remove_subscription(
    &mut self,
    asset: &str,
    subscription_id: Option<Uuid>,
) -> CoreResult<bool> {
-    let mut subscriptions = self.state.active_subscriptions.write().await;
-    if let Some(vec) = subscriptions.get_mut(asset) {
-        if let Some(sub_id) = subscription_id {
-            // Remove specific subscription by ID
-            let index = vec.iter().position(|(_, _, id)| *id == sub_id);
-            if let Some(idx) = index {
-                let (stream_sender, _, _) = vec.remove(idx);
-                // Send termination signal
-                stream_sender
-                    .send(SubscriptionEvent::Terminated { reason: "Unsubscribed from main module".to_string() })
-                    .await
-                    .inspect_err(|e| warn!(target: "SubscriptionsApiModule", "Failed to send termination signal: {}", e))?;
-                // If vec is empty, remove the asset entry
-                if vec.is_empty() {
-                    subscriptions.remove(asset);
-                }
-                return Ok(true);
-            } else {
-                // Subscription ID not found
-                return Ok(false);
-            }
-        } else {
-            // Remove all subscriptions for this asset
-            let senders: Vec<_> = vec
-                .drain(..)
-                .map(|(stream_sender, _, _)| stream_sender)
-                .collect();
-            if !senders.is_empty() {
-                // Send termination to all
-                for stream_sender in senders {
-                    stream_sender
-                        .send(SubscriptionEvent::Terminated { reason: "Unsubscribed from main module".to_string() })
-                        .await
-                        .inspect_err(|e| warn!(target: "SubscriptionsApiModule", "Failed to send termination signal: {}", e))?;
-                }
-                // Remove the asset entry
-                subscriptions.remove(asset);
-                return Ok(true);
-            } else {
-                return Ok(false);
-            }
-        }
-    }
-    // No subscriptions for this asset
-    Ok(false)
+    let mut senders_to_notify: Vec<AsyncSender<SubscriptionEvent>> = Vec::new();
+    let removed = {
+        let mut subscriptions = self.state.active_subscriptions.write().await;
+        if let Some(vec) = subscriptions.get_mut(asset) {
+            if let Some(sub_id) = subscription_id {
+                if let Some(idx) = vec.iter().position(|(_, _, id)| *id == sub_id) {
+                    let (stream_sender, _, _) = vec.remove(idx);
+                    senders_to_notify.push(stream_sender);
+                    if vec.is_empty() {
+                        subscriptions.remove(asset);
+                    }
+                    true
+                } else {
+                    false
+                }
+            } else {
+                senders_to_notify.extend(vec.drain(..).map(|(stream_sender, _, _)| stream_sender));
+                if !senders_to_notify.is_empty() {
+                    subscriptions.remove(asset);
+                    true
+                } else {
+                    false
+                }
+            }
+        } else {
+            false
+        }
+    };
+
+    if !removed {
+        return Ok(false);
+    }
+
+    for stream_sender in senders_to_notify {
+        let _ = stream_sender
+            .send(SubscriptionEvent::Terminated { reason: "Unsubscribed from main module".to_string() })
+            .await
+            .inspect_err(|e| warn!(target: "SubscriptionsApiModule", "Failed to send termination signal: {}", e));
+    }
+
+    Ok(true)
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs` around lines 703 - 745, In remove_subscription: avoid awaiting send() calls while holding the active_subscriptions write lock and stop propagating send errors with `?`; instead, extract the stream_sender(s) (for the specific-case get the removed sender from vec, for the bulk-case collect senders), perform any necessary removals from subscriptions (e.g., vec.remove(idx) or subscriptions.remove(asset)), then drop the write lock and asynchronously send SubscriptionEvent::Terminated to each sender while logging failures (use inspect_err or log but do not return Err). Ensure you reference the remove_subscription function, the active_subscriptions write lock, the vec of (stream_sender, _, id) entries, and the SubscriptionEvent::Terminated sends when relocating the awaits outside the lock.

crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs (2)
55-59: ⚠️ Potential issue | 🟠 Major — Keep `PendingTradesHandle` internals private to preserve request/response invariants.

Lines 56-58 expose raw channels and lock; external callers can bypass `open_pending_order` coordination and steal responses.

🔒 Suggested fix

pub struct PendingTradesHandle {
-    pub sender: AsyncSender<Command>,
-    pub receiver: AsyncReceiver<CommandResponse>,
-    pub call_lock: Arc<tokio::sync::Mutex<()>>,
+    sender: AsyncSender<Command>,
+    receiver: AsyncReceiver<CommandResponse>,
+    call_lock: Arc<tokio::sync::Mutex<()>>,
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs` around lines 55 - 59, PendingTradesHandle currently exposes internal fields (sender, receiver, call_lock) allowing callers to bypass coordination and steal responses; make those fields private (remove pub) and instead add controlled APIs on PendingTradesHandle such as an async method (e.g., open_pending_order or send_pending_command) that acquires the internal call_lock, sends via the internal sender, awaits the corresponding response from the internal receiver, and returns the CommandResponse; ensure any existing code using PendingTradesHandle.sender/receiver/call_lock is updated to use the new method so the request/response invariant is preserved.
224-227: ⚠️ Potential issue | 🔴 Critical — Don’t overwrite an in-flight req_id, and don’t emit fail responses when no request is pending.

Lines 224-227 still overwrite `last_req_id` during overlap, and lines 278-280 still forward fail responses unconditionally. This can route stale failures to the wrong caller.

🛡️ Suggested local guard

Command::OpenPendingOrder { open_type, amount, asset, open_time, open_price, timeframe, min_payout, command, req_id } => {
    if self.last_req_id.is_some() {
        warn!(target: "PendingTradesApiModule", "Overwriting a pending request. Concurrent open_pending_order calls are not supported.");
-    }
-    self.last_req_id = Some(req_id);
+        continue;
+    }
+    self.last_req_id = Some(req_id);
    tracing::debug!(target: "PendingTradesApiModule", "Set last_req_id to: {:?}", req_id);
    let order = OpenPendingOrder::new(open_type, amount, asset, open_time, open_price, timeframe, min_payout, command);
    self.to_ws_sender.send(Message::text(order.to_string())).await?;
}
@@
ServerResponse::Fail(fail) => {
-    self.last_req_id = None;
-    self.command_responder.send(CommandResponse::Error(fail)).await?;
+    if self.last_req_id.take().is_some() {
+        self.command_responder.send(CommandResponse::Error(fail)).await?;
+    } else {
+        warn!(target: "PendingTradesApiModule", "Received failopenPendingOrder with no pending req_id; dropping stale error.");
+    }
}

Also applies to: 278-280
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs` around lines 224 - 227, Do not overwrite an active in-flight request id: in the open_pending_order path where you set self.last_req_id = Some(req_id), first check if self.last_req_id.is_some() and if so return/raise an appropriate error or reject the new request instead of replacing it (preserve the existing last_req_id and log the collision). Likewise, in the failure-handling path that currently unconditionally forwards fail responses (the block that reads/forwards fail responses for pending trades), only forward a failure to a caller if self.last_req_id.is_some() and the stored id matches the response (or at minimum ensure last_req_id is present), then clear last_req_id; otherwise drop/log the stale failure so it cannot be routed to the wrong caller. Ensure references to last_req_id and req_id (and the open_pending_order request path) are used to gate both accepting new requests and dispatching failure responses.
🧹 Nitpick comments (3)
tutorials/scripts/howto.txt (1)
23-30: Small docs hardening: explicitly mention `.gitignore` for `.env`.

You already say “never commit,” which is good. Consider adding a direct note like “ensure `.env` is listed in `.gitignore`” for clearer guardrails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tutorials/scripts/howto.txt` around lines 23 - 30, Add a brief, explicit sentence after the .env recommendation instructing readers to ensure the .env file is listed in their .gitignore (e.g., “ensure .env is added to .gitignore so it isn’t committed”); update the paragraph that mentions “.env file (local development only) — never commit to version control” to include this exact guidance and optionally suggest checking .gitignore before committing.

crates/binary_options_tools/src/pocketoption/pocket_client.rs (1)
803-825: Consider validating `custom_period` > 0 for clearer error feedback.

The underlying `compile_candles_from_tuples` silently returns an empty `Vec` when `period == 0`. Consider adding explicit validation to provide a clearer error message to callers.

💡 Optional: Add input validation

pub async fn compile_candles(
    &self,
    asset: impl ToString,
    custom_period: u32,
    lookback_period: u32,
) -> PocketResult<Vec<Candle>> {
    let asset_str = asset.to_string();
+    if custom_period == 0 {
+        return Err(PocketError::General("custom_period must be greater than 0".into()));
+    }
+
    // Validate asset exists (if assets are loaded)
    if let Some(assets) = self.assets().await {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/binary_options_tools/src/pocketoption/pocket_client.rs` around lines 803 - 825, In compile_candles, validate that custom_period is > 0 before calling compile_candles_from_tuples: if custom_period == 0 return a clear PocketError (e.g., add and use PocketError::InvalidPeriod(u32) or an appropriate existing variant) so callers get explicit feedback instead of receiving an empty Vec; update compile_candles to perform this check at the start, and add the new PocketError variant and tests if it doesn't already exist.

tests/python/pocketoption/test_async_mocked.py (1)
494-521: Add explicit tests for `compile_candles` in this mocked suite.

The new API is part of this PR’s surface, but this file doesn’t currently assert argument forwarding + JSON decoding behavior for it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/python/pocketoption/test_async_mocked.py` around lines 494 - 521, Add an explicit async test for compile_candles in the TestCandles suite: create a test method (e.g. test_compile_candles_success) that calls async_client.compile_candles with representative arguments (symbol, interval, offset and any flag/ts parameter you expect), asserts the returned value is the decoded JSON type (list/dict), checks expected keys exist in the decoded payload (e.g. "open", "close" or top-level keys your API returns), and asserts that numeric/boolean arguments are reflected in the response or forwarded behavior (so you validate argument forwarding and JSON decoding); reference the compile_candles function and async_client.compile_candles when adding the assertions.
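A sketch of the kind of mocked test this comment asks for: assert that the async wrapper forwards its arguments to the inner client and json-decodes the reply. The wrapper below is a minimal stand-in whose shape (an inner `client.compile_candles` coroutine returning a JSON string) is an assumption based on this PR's description of the Python bindings, not the library's real class:

```python
import asyncio
import json
from unittest.mock import AsyncMock

class PocketOptionAsync:
    """Minimal stand-in for the real async wrapper."""

    def __init__(self, client):
        self.client = client

    async def compile_candles(self, asset, custom_period, lookback_period):
        raw = await self.client.compile_candles(asset, custom_period, lookback_period)
        return json.loads(raw)

async def run_test():
    inner = AsyncMock()
    # The FFI layer is assumed to hand back a JSON string.
    inner.compile_candles.return_value = json.dumps(
        [{"open": 1.0, "high": 1.2, "low": 0.9, "close": 1.1}]
    )
    api = PocketOptionAsync(inner)
    candles = await api.compile_candles("EURUSD_otc", 45, 300)
    # Argument forwarding: exact positional args reach the inner client.
    inner.compile_candles.assert_awaited_once_with("EURUSD_otc", 45, 300)
    # JSON decoding: the wrapper returns parsed Python objects, not a string.
    assert isinstance(candles, list)
    return candles

candles = asyncio.run(run_test())
```

In the real suite this body would live in a `pytest.mark.asyncio` test method, but the two assertions are the substance of the coverage being requested.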
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/CI.yml:
- Around line 7-8: The top-level permissions block currently grants contents:
write to all jobs; narrow this by removing or changing the top-level permissions
and instead add a job-level permissions block under the release job (the job
named "release") with contents: write, leaving the build jobs (linux, windows,
macos) to inherit default read-only permissions; update the workflow so only the
release job has permissions.contents set to write and other jobs do not request
write access.
- Around line 58-69: Change the Windows build matrix values so the
PyO3/maturin-action receives a valid architecture name: replace the matrix key
value that currently lists "x64" with "x86_64" in the matrix target array used
by the workflow (the "target" array referenced where matrix.target is later
consumed by actions/setup-python and PyO3/maturin-action arguments); keep "x86"
for 32-bit builds so the array becomes ["x86_64", "x86"].
In
`@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py`:
- Around line 727-767: The compile_candles method lacks Python-side validation
for custom_period and lookback_period so negative values reach the Rust FFI and
raise OverflowError; add explicit checks at the start of
PocketOptionAsync.compile_candles to ensure both custom_period and
lookback_period are positive integers (e.g., isinstance(..., int) and > 0) and
raise ValueError with a clear message if not, before calling
self.client.compile_candles; keep the rest of the method unchanged so the FFI
call still returns json.loads(await self.client.compile_candles(...)).
In `@BinaryOptionsToolsV2/Readme.md`:
- Around line 45-50: The README contains bash code snippets that use C/JS-style
'//' comments (seen in the snippets around the build/install commands and the
later snippet), which is invalid in shell; update those code blocks by replacing
every '//' comment prefix with POSIX shell comments '#' so the commands can be
copy/pasted and executed correctly (edit the two bash code blocks that include
"cd BinaryOptionsToolsV2", "maturin build -r", "pip install path/to/file.whl"
and the later similar block).
In `@BinaryOptionsToolsV2/rust/Readme.md`:
- Around line 45-50: The README's bash snippets use C-style '//' comments which
are invalid in shell; update the shell code fences (the blocks around the
maturin and pip commands) to use POSIX '#' comments instead of '//' at the
occurrences shown (the comments before the maturin build and pip install
commands and the other commented lines in the second bash block), ensuring all
lines inside the ```bash fences use '#' for comments so the commands can be
pasted and run correctly.
- Around line 343-345: Update the README example to call compile_candles with
the correct three-argument arity (asset, custom_period, lookback_period) by
replacing the four-argument call (e.g., candles = await
client.compile_candles("EURUSD_otc", 60, 300, 100)) with a three-argument call
and correct the parameter comment to list only symbol/asset, custom_period, and
lookback_period; also fix the bash code blocks that incorrectly use '//'
comments (in the blocks currently using bash syntax) by replacing all '//'
comment markers with '#' so the snippets are valid bash.
In `@crates/binary_options_tools/examples/pending_trades_example.rs`:
- Around line 139-147: The mock ServerResponse::Success is being sent to msg_tx
before open_pending_order is called, so last_req_id may be None and the response
gets dropped; move the send of the simulated response (the
ServerResponse::Success created from create_test_pending_order and sent via
msg_tx) to after calling open_pending_order (so last_req_id is set) and apply
the same ordering change in the concurrent example task block that uses msg_tx
(ensure the simulated response is queued only after invoking open_pending_order
and acquiring its request id).
- Around line 494-520: The test uses a randomly generated correct_id that
doesn't match the internal request id produced by open_pending_order, so replace
the hard-coded correct_id with the actual req_id returned by the call that sends
the command and use that req_id when constructing the matching response (resp3);
specifically, capture the request id from the function that enqueues/returns the
pending order request (the send/open function used in this example—e.g., the
call that uses PendingTradesHandle or open_pending_order) and use that captured
req_id for resp3 (and ensure resp1/resp2 use different ids) so the awaited
request id matches the response.
- Around line 425-452: The example never injects the simulated inbound response
so open_pending_order on pending_trades_handle will always hit the timeout; fix
by either (A) simulating the module response on the message channel used by the
handler (send the expected success response into msg_tx/msg_rx matching the
format the handler expects immediately after starting the open_pending_order
future and before the timeout elapses) or (B) explicitly document that the
example is expected to time out; locate the call to
pending_trades_handle.open_pending_order and the message channel variables
msg_tx/msg_rx and add a short send of the appropriate success message into
msg_tx (or a comment explaining the timeout behavior) so the success branch
(Ok(Ok(order))) can be reached.
In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs`:
- Around line 219-235: The select! over self.command_receiver.recv() and
self.message_receiver.recv() lacks an else branch to handle both channels
closing; add an else => { ... } arm to the select! (in the function containing
the select! loop that references command_receiver, message_receiver,
last_req_id, and to_ws_sender.send) that logs/handles shutdown and returns or
breaks the loop cleanly when both receivers return Err, ensuring no panic occurs
when channels close.
In `@crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs`:
- Around line 683-690: The current check using is_max_subscriptions_reached() is
insufficient because active_subscriptions now stores Vecs per asset and you can
bypass MAX_SUBSCRIPTIONS by adding many entries to one asset; update the logic
so the global subscription count (SubscriptionCount / MAX_SUBSCRIPTIONS) is
computed by summing the lengths of all Vecs in state.active_subscriptions (or
maintain an atomic/global counter) and use that total when deciding to return
SubscriptionError::MaxSubscriptionsReached in is_max_subscriptions_reached() (or
before pushing in the block that uses subscriptions.entry(...).push(...));
ensure any helper named is_max_subscriptions_reached(), SubscriptionCount
semantics, and state.active_subscriptions usage are updated so the check
reflects total active subscriptions rather than per-asset entries.
- Around line 776-791: The read lock on self.state.active_subscriptions is being
held across awaited channel sends (stream_sender.send(...).await) which can
block writers; fix by, inside the scope where you call
self.state.active_subscriptions.read().await (the code that checks
subscriptions.get(asset)), clone or collect the AsyncSender instances (the
stream_sender values) into a local Vec (e.g., Vec<AsyncSender> or whatever
sender type is used), then drop/release the read lock and only after that
iterate over the collected senders and perform
stream_sender.send(SubscriptionEvent::Update { ... }).await while logging errors
as before.
In `@tests/python/pocketoption/test_async_mocked.py`:
- Around line 363-366: The teardown currently swallows all exceptions from await
client.shutdown() which hides regressions; change the fixture to let errors
surface by removing the bare try/except or, if you must handle them, catch
Exception as e and call pytest.fail(f"client.shutdown() failed: {e}") (import
pytest) so failures are reported, referencing the await client.shutdown() call
in tests/python/pocketoption/test_async_mocked.py.
- Around line 124-147: The subscription mocks (subscribe_symbol,
subscribe_symbol_chuncked, subscribe_symbol_timed,
subscribe_symbol_time_aligned) currently yield Python dicts but
AsyncSubscription.__anext__() calls json.loads(...) on each yielded item; update
each mock's inner subscription to yield JSON-serialized strings (e.g.,
json.dumps(...) of the same payload shape) so the wrapper's json.loads can parse
them successfully, preserving the same keys/values the tests expect.
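A minimal sketch of the fix, assuming the wrapper decodes each item with `json.loads` as described; `AsyncSubscriptionLike` and the payload keys are illustrative stand-ins, not the real `AsyncSubscription`:

```python
import asyncio
import json

async def mock_inner_subscription():
    # The mock must yield JSON strings (not dicts), because the wrapper
    # calls json.loads on every item it receives.
    for price in (1.1001, 1.1002):
        yield json.dumps({"asset": "EURUSD_otc", "price": price})

class AsyncSubscriptionLike:
    """Hypothetical stand-in for the wrapper that decodes each message."""
    def __init__(self, inner):
        self._inner = inner

    def __aiter__(self):
        return self

    async def __anext__(self):
        raw = await self._inner.__anext__()  # StopAsyncIteration ends the loop
        return json.loads(raw)               # would raise TypeError on a dict

async def collect():
    return [msg async for msg in AsyncSubscriptionLike(mock_inner_subscription())]

messages = asyncio.run(collect())
```

Yielding dicts from `mock_inner_subscription` would make `json.loads` raise, which is exactly the failure mode the review describes.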
- Around line 172-180: The mock for subscribe in create_raw_handler returns a
coroutine-like mock (mock_handler.subscribe =
AsyncMock(return_value=AsyncMock())) which doesn't mimic the production
async-iterator behavior; change mock_handler.subscribe so awaiting
handler.subscribe() returns an async-iterable/async-generator or an object
implementing __aiter__ that yields expected messages (or return an AsyncMock
whose return_value implements __aiter__ yielding the stream), ensuring the mock
matches the production subscribe async-iterator shape used by code that does
"async for msg in handler.subscribe()" or "async with" patterns.
---
Duplicate comments:
In `@crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs`:
- Around line 55-59: PendingTradesHandle currently exposes internal fields
(sender, receiver, call_lock) allowing callers to bypass coordination and steal
responses; make those fields private (remove pub) and instead add controlled
APIs on PendingTradesHandle such as an async method (e.g., open_pending_order or
send_pending_command) that acquires the internal call_lock, sends via the
internal sender, awaits the corresponding response from the internal receiver,
and returns the CommandResponse; ensure any existing code using
PendingTradesHandle.sender/receiver/call_lock is updated to use the new method
so the request/response invariant is preserved.
- Around line 224-227: Do not overwrite an active in-flight request id: in the
open_pending_order path where you set self.last_req_id = Some(req_id), first
check if self.last_req_id.is_some() and if so return/raise an appropriate error
or reject the new request instead of replacing it (preserve the existing
last_req_id and log the collision). Likewise, in the failure-handling path that
currently unconditionally forwards fail responses (the block that reads/forwards
fail responses for pending trades), only forward a failure to a caller if
self.last_req_id.is_some() and the stored id matches the response (or at minimum
ensure last_req_id is present), then clear last_req_id; otherwise drop/log the
stale failure so it cannot be routed to the wrong caller. Ensure references to
last_req_id and req_id (and the open_pending_order request path) are used to
gate both accepting new requests and dispatching failure responses.
In `@crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs`:
- Around line 703-745: In remove_subscription: avoid awaiting send() calls while
holding the active_subscriptions write lock and stop propagating send errors
with `?`; instead, extract the stream_sender(s) (for the specific-case get the
removed sender from vec, for the bulk-case collect senders), perform any
necessary removals from subscriptions (e.g., vec.remove(idx) or
subscriptions.remove(asset)), then drop the write lock and asynchronously send
SubscriptionEvent::Terminated to each sender while logging failures (use
inspect_err or log but do not return Err). Ensure you reference the
remove_subscription function, the active_subscriptions write lock, the vec of
(stream_sender, _, id) entries, and the SubscriptionEvent::Terminated sends when
relocating the awaits outside the lock.
---
Nitpick comments:
In `@crates/binary_options_tools/src/pocketoption/pocket_client.rs`:
- Around line 803-825: In compile_candles, validate that custom_period is > 0
before calling compile_candles_from_tuples: if custom_period == 0 return a clear
PocketError (e.g., add and use PocketError::InvalidPeriod(u32) or an appropriate
existing variant) so callers get explicit feedback instead of receiving an empty
Vec; update compile_candles to perform this check at the start, and add the new
PocketError variant and tests if it doesn't already exist.
In `@tests/python/pocketoption/test_async_mocked.py`:
- Around line 494-521: Add an explicit async test for compile_candles in the
TestCandles suite: create a test method (e.g. test_compile_candles_success) that
calls async_client.compile_candles with representative arguments (symbol,
interval, offset and any flag/ts parameter you expect), asserts the returned
value is the decoded JSON type (list/dict), checks expected keys exist in the
decoded payload (e.g. "open", "close" or top-level keys your API returns), and
asserts that numeric/boolean arguments are reflected in the response or
forwarded behavior (so you validate argument forwarding and JSON decoding);
reference the compile_candles function and async_client.compile_candles when
adding the assertions.
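A sketch of what such a test body could look like, assuming the binding returns a JSON string as the wrapper code elsewhere in this review suggests; the candle payload and the standalone `compile_candles` helper are illustrative, not the real fixture:

```python
import asyncio
import json
from unittest.mock import AsyncMock

# Mocked binding: returns a JSON string, as the Python wrapper expects.
client = AsyncMock()
client.compile_candles = AsyncMock(
    return_value=json.dumps(
        [{"time": 1700000000, "open": 1.10, "high": 1.12, "low": 1.09, "close": 1.11}]
    )
)

async def compile_candles(asset, custom_period, lookback_period):
    # Mirrors the wrapper: forward arguments, decode the JSON payload.
    return json.loads(await client.compile_candles(asset, custom_period, lookback_period))

candles = asyncio.run(compile_candles("EURUSD_otc", 20, 300))
forwarded = client.compile_candles.await_args.args  # verify argument forwarding
```

The assertions then check both JSON decoding (a list of dicts with OHLC keys) and that the arguments reached the binding unchanged.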
In `@tutorials/scripts/howto.txt`:
- Around line 23-30: Add a brief, explicit sentence after the .env
recommendation instructing readers to ensure the .env file is listed in their
.gitignore (e.g., “ensure .env is added to .gitignore so it isn’t committed”);
update the paragraph that mentions “.env file (local development only) — never
commit to version control” to include this exact guidance and optionally suggest
checking .gitignore before committing.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- .github/workflows/CI.yml
- .github/workflows/docs.yml
- BinaryOptionsToolsV2/Readme.md
- BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py
- BinaryOptionsToolsV2/rust/Readme.md
- BinaryOptionsToolsV2/rust/build.rs
- BinaryOptionsToolsV2/rust/src/error.rs
- BinaryOptionsToolsV2/rust/src/pocketoption.rs
- CITATION.cff
- crates/binary_options_tools/examples/pending_trades_example.rs
- crates/binary_options_tools/src/pocketoption/modules/pending_trades.rs
- crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs
- crates/binary_options_tools/src/pocketoption/pocket_client.rs
- docs/guides/python-pystrategy-trading-bot.md
- docs/project/breaking-changes-0.2.6.md
- tests/python/pocketoption/test_async_mocked.py
- tutorials/scripts/SSID_Fetcher_UserScript.user.js
- tutorials/scripts/howto.txt
🚧 Files skipped from review as they are similar to previous changes (3)
- BinaryOptionsToolsV2/rust/build.rs
- docs/project/breaking-changes-0.2.6.md
- CITATION.cff
```diff
 permissions:
-  contents: read
+  contents: write
```
🧩 Analysis chain

🏁 Script executed:

```shell
cat -n .github/workflows/CI.yml | head -50
```

Repository: ChipaDevTeam/BinaryOptionsTools-v2

🏁 Script executed:

```shell
rg -n --type=yaml 'permissions:|contents:|release|push|gh release|mkdocs|jobs:' .github/workflows/CI.yml
```

🏁 Script executed:

```shell
wc -l .github/workflows/CI.yml
```

🏁 Script executed:

```shell
cat -n .github/workflows/CI.yml
```
Scope contents: write to the release job only.
The workflow grants write permissions to all jobs at the top level (lines 7-8), but only the release job requires it. The build jobs (linux, windows, macos) perform only read operations—checkout, build, and upload artifacts to Actions—and should not have write access. Move contents: write to a job-level permissions block in the release job to reduce the blast radius.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/CI.yml around lines 7 - 8, The top-level permissions block
currently grants contents: write to all jobs; narrow this by removing or
changing the top-level permissions and instead add a job-level permissions block
under the release job (the job named "release") with contents: write, leaving
the build jobs (linux, windows, macos) to inherit default read-only permissions;
update the workflow so only the release job has permissions.contents set to
write and other jobs do not request write access.
```diff
     target: [x64, x86]
 steps:
   - uses: actions/checkout@v4
   - uses: actions/setup-python@v5
     with:
-      python-version: 3.x
-      architecture: ${{ matrix.platform.target }}
+      python-version: "3.13"
+      architecture: ${{ matrix.target }}
   - name: Build wheels
     uses: PyO3/maturin-action@v1
     with:
-      target: ${{ matrix.platform.target }}
+      target: ${{ matrix.target }}
       args: --release --out dist
       sccache: ${{ !startsWith(github.ref, 'refs/tags/') }}
       working-directory: BinaryOptionsToolsV2
   - uses: actions/setup-python@v5
     with:
       python-version: 3.13
       architecture: ${{ matrix.platform.target }}
   - name: Build wheels (3.13)
     uses: PyO3/maturin-action@v1
     with:
-      target: ${{ matrix.platform.target }}
+      target: ${{ matrix.target }}
       args: --release --out dist -i python3.13
```
🧩 Analysis chain
🌐 Web query:
maturin build --target valid values Windows x64 x86 rust target triple
💡 Result:
maturin build --target takes a Rust target triple (same values you’d pass to cargo build --target ...). In maturin this option is literally “Build for the target triple”. (docs.rs)
Common Windows triples:

- Windows x64 (64-bit Intel/AMD)
  - `x86_64-pc-windows-msvc` (MSVC toolchain) (doc.rust-lang.org)
  - `x86_64-pc-windows-gnu` (MinGW) (blog.rust-lang.org)
  - `x86_64-pc-windows-gnullvm` (GNU + LLVM/LLD) (blog.rust-lang.org)
- Windows x86 (32-bit Intel)
  - `i686-pc-windows-msvc` (MSVC toolchain) (doc.rust-lang.org)
  - `i686-pc-windows-gnu` (MinGW) (blog.rust-lang.org)
  - `i686-pc-windows-gnullvm` (GNU + LLVM/LLD) (blog.rust-lang.org)
Example:

```shell
maturin build --release --target x86_64-pc-windows-msvc
```

Citations:
- 1: https://docs.rs/maturin/latest/maturin/struct.CargoOptions.html?utm_source=openai
- 2: https://doc.rust-lang.org/beta/rustc/platform-support/windows-msvc.html?utm_source=openai
- 3: https://blog.rust-lang.org/2024/02/26/Windows-7/?utm_source=openai
- 4: https://blog.rust-lang.org/2024/02/26/Windows-7/?utm_source=openai
- 5: https://doc.rust-lang.org/beta/rustc/platform-support/windows-msvc.html?utm_source=openai
- 6: https://blog.rust-lang.org/2024/02/26/Windows-7/?utm_source=openai
- 7: https://blog.rust-lang.org/2024/02/26/Windows-7/?utm_source=openai
🌐 Web query:
PyO3 maturin-action target parameter architecture mapping
💡 Result:
In PyO3/maturin-action, the target input is (conceptually) Cargo’s --target. (github.com)
In practice, most CI examples pass a short architecture name (e.g. x86_64, aarch64) and the action uses that (plus OS / manylinux) to pick the right build environment (notably the manylinux Docker image). (github.com)
Common “short arch” values used with maturin-action
These are the values you typically put in with: target: (as generated by maturin generate-ci github): x86_64, x86, aarch64, armv7, s390x, ppc64le. (gist.github.com)
The action’s README also shows the corresponding manylinux container architectures it knows about (e.g. x86_64, i686, aarch64, armv7l, ppc64le, s390x, …). (github.com)
Architecture → Rust target triple (what it usually means)
If you want an explicit mapping, these are the typical Rust triples those short names correspond to:
- `x86_64`
  - Linux (glibc): `x86_64-unknown-linux-gnu`
  - Linux (musl): `x86_64-unknown-linux-musl`
  - macOS: `x86_64-apple-darwin`
  - Windows: `x86_64-pc-windows-msvc`
- `x86` (32-bit x86)
  - Linux (glibc): `i686-unknown-linux-gnu`
  - Linux (musl): `i686-unknown-linux-musl`
  - Windows: `i686-pc-windows-msvc`
- `aarch64`
  - Linux (glibc): `aarch64-unknown-linux-gnu`
  - Linux (musl): `aarch64-unknown-linux-musl`
  - macOS: `aarch64-apple-darwin`
  - Windows: `aarch64-pc-windows-msvc`
- `armv7` (matches manylinux "armv7l")
  - Linux (glibc, hard-float): typically `armv7-unknown-linux-gnueabihf`
  - Linux (musl, hard-float): typically `armv7-unknown-linux-musleabihf`
- `ppc64le`
  - Linux (glibc): `powerpc64le-unknown-linux-gnu`
  - Linux (musl): `powerpc64le-unknown-linux-musl`
- `s390x`
  - Linux (glibc): `s390x-unknown-linux-gnu`
Tip (to avoid ambiguity): if you already know the exact Rust triple you want, pass the full triple yourself via with: target: ... (since the action treats target as Cargo’s --target). (github.com)
Citations:
- 1: https://github.com/PyO3/maturin-action
- 2: https://github.com/PyO3/maturin-action
- 3: https://gist.github.com/jsstevenson/71c529f2227b4484cf1e1c19f92b43c9?utm_source=openai
- 4: https://github.com/PyO3/maturin-action
- 5: https://github.com/PyO3/maturin-action
Change x64 to x86_64 in the Windows build matrix.
The workflow uses PyO3/maturin-action, which accepts short architecture names (x86_64, x86, aarch64, etc.). The current matrix value x64 is not valid; it should be x86_64. The value x86 is correct for 32-bit builds.
Change line 58 from:

```yaml
target: [x64, x86]
```

to:

```yaml
target: [x86_64, x86]
```

The rest of the workflow (using `matrix.target` directly) will work correctly with these short names.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/CI.yml around lines 58 - 69, Change the Windows build
matrix values so the PyO3/maturin-action receives a valid architecture name:
replace the matrix key value that currently lists "x64" with "x86_64" in the
matrix target array used by the workflow (the "target" array referenced where
matrix.target is later consumed by actions/setup-python and PyO3/maturin-action
arguments); keep "x86" for 32-bit builds so the array becomes ["x86_64", "x86"].
````python
async def compile_candles(self, asset: str, custom_period: int, lookback_period: int) -> List[Dict]:
    """Compiles custom candlesticks from raw tick history.

    This method fetches raw tick data over the specified lookback period and
    aggregates it into custom-sized candles. This enables non-standard timeframes
    like 20 seconds, 40 seconds, 90 seconds, etc.

    Args:
        asset (str): Trading asset symbol (e.g., "EURUSD_otc")
        custom_period (int): Desired candle duration in seconds (e.g., 20, 40, 90)
        lookback_period (int): Number of seconds of tick history to fetch.
            This determines the time range from which ticks are collected.

    Returns:
        List[Dict]: A list of dictionaries, each representing a compiled candlestick:
            - time: Candle timestamp (Unix timestamp, aligned to period boundaries)
            - open: Opening price
            - high: Highest price during the period
            - low: Lowest price during the period
            - close: Closing price

    Raises:
        ConnectionError: If the client is not connected
        ValueError: If the asset is invalid or periods are zero/negative
        TimeoutError: If tick fetch or compilation times out

    Example:
        ```python
        async with PocketOptionAsync(ssid) as client:
            # Get 20-second candles from last 5 minutes
            candles = await client.compile_candles("EURUSD_otc", 20, 300)
            for candle in candles:
                print(f"{candle['time']}: O={candle['open']}, C={candle['close']}")
        ```

    Note:
        - This is a compute-intensive operation as it fetches and processes raw ticks.
        - For standard timeframes, use `candles()` or `get_candles()` for better efficiency.
    """
    return json.loads(await self.client.compile_candles(asset, custom_period, lookback_period))
````
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py | sed -n '720,775p'
```

🏁 Script executed:

```shell
# Let's also check the file structure to ensure it exists
fd asynchronous.py
```

🏁 Script executed:

```shell
# Search for any validation patterns in related methods
rg "custom_period|lookback_period" BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/ -A 2 -B 2
```

🏁 Script executed:

```shell
# Check the Rust client binding to understand what exceptions it raises
rg "compile_candles" BinaryOptionsToolsV2/ -A 5 -B 2
```

🏁 Script executed:

```shell
# Let's verify how PyO3 handles negative int to u32 conversion
# by checking if there are any existing validation patterns in the codebase
rg "if.*<=\s*0|if.*<\s*0" BinaryOptionsToolsV2/python/ -A 1
```

🏁 Script executed:

```shell
# Check if there are tests for compile_candles that might reveal expected behavior
cat tests/python/pocketoption/test_asynchronous.py | grep -A 20 "compile_candles"
```

🏁 Script executed:

```shell
# Check if test file exists and what it contains
fd test_asynchronous.py -x wc -l
```

🏁 Script executed:

```shell
# Look for any validation patterns in synchronous.py for comparison
rg "if.*<.*0|if.*<=.*0|ValueError.*period" BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/
```

🏁 Script executed:

```shell
# Check the Rust implementation to see if it validates periods
cat -n BinaryOptionsToolsV2/rust/src/pocketoption.rs | grep -A 30 "pub fn compile_candles"
```

🏁 Script executed:

```shell
# Check what BinaryErrorPy::InvalidParameter maps to in Python
rg "BinaryErrorPy|InvalidParameter" BinaryOptionsToolsV2/rust/src/ -A 2 -B 2
```

🏁 Script executed:

```shell
# Verify how the error is converted to Python
rg "impl.*From.*BinaryErrorPy\|impl.*Into.*PyErr" BinaryOptionsToolsV2/rust/src/ -A 5
```
Add input validation for period parameters to honor documented ValueError guarantee.
The docstring promises ValueError for non-positive periods, but negative values currently bypass Python-side validation. Since the Rust binding expects u32 (unsigned), negative integers will fail at the FFI boundary with OverflowError instead of the documented ValueError.
🛡️ Proposed fix
```diff
 async def compile_candles(self, asset: str, custom_period: int, lookback_period: int) -> List[Dict]:
@@
-        return json.loads(await self.client.compile_candles(asset, custom_period, lookback_period))
+        if custom_period <= 0:
+            raise ValueError("custom_period must be > 0")
+        if lookback_period <= 0:
+            raise ValueError("lookback_period must be > 0")
+        return json.loads(await self.client.compile_candles(asset, custom_period, lookback_period))
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/pocketoption/asynchronous.py`
around lines 727 - 767, The compile_candles method lacks Python-side validation
for custom_period and lookback_period so negative values reach the Rust FFI and
raise OverflowError; add explicit checks at the start of
PocketOptionAsync.compile_candles to ensure both custom_period and
lookback_period are positive integers (e.g., isinstance(..., int) and > 0) and
raise ValueError with a clear message if not, before calling
self.client.compile_candles; keep the rest of the method unchanged so the FFI
call still returns json.loads(await self.client.compile_candles(...)).
```bash
// Inside the root folder
cd BinaryOptionsToolsV2
maturin build -r

// Once the command is executed it should print a path to a .whl file, copy it and then run
pip install path/to/file.whl
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "Readme.md" | grep -i binaryoptions
```

🏁 Script executed:

```shell
# First, let's locate the file and check its content
fd -t f "Readme.md" | head -20
```

🏁 Script executed:

```shell
# Check if the file exists in the repository
git ls-files | grep -i "binary.*readme\|readme.*binary" | head -10
```

🏁 Script executed:

```shell
# Check if the file exists and read the specific lines
if [ -f "BinaryOptionsToolsV2/Readme.md" ]; then
  echo "=== Lines 45-50 ===";
  sed -n '45,50p' "BinaryOptionsToolsV2/Readme.md";
  echo "";
  echo "=== Lines 56-62 ===";
  sed -n '56,62p' "BinaryOptionsToolsV2/Readme.md";
else
  echo "File not found, searching...";
  find . -type f -name "*.md" | xargs grep -l "BinaryOptionsTools" 2>/dev/null | head -5;
fi
```
Replace // comments with # in bash code snippets.
The Markdown file uses // for comments in shell code blocks (lines 45-50 and 56-62), which is invalid bash syntax and will cause errors if someone copies and pastes the commands. Use # for bash comments instead.
🛠️ Proposed fix
```diff
-// Inside the root folder
+# Inside the root folder
 cd BinaryOptionsToolsV2
 maturin build -r

-// Once the command is executed it should print a path to a .whl file, copy it and then run
+# Once the command is executed it should print a path to a .whl file, copy it and then run
 pip install path/to/file.whl
```

```diff
-// Inside the root folder
+# Inside the root folder
 cd BinaryOptionsToolsV2

-// Activate the virtual environment if not done already
+# Activate the virtual environment if not done already

-// Execute the following command and it should automatically install the library in the VM
+# Execute the following command and it should automatically install the library in the VM
 maturin develop
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@BinaryOptionsToolsV2/Readme.md` around lines 45 - 50, The README contains
bash code snippets that use C/JS-style '//' comments (seen in the snippets
around the build/install commands and the later snippet), which is invalid in
shell; update those code blocks by replacing every '//' comment prefix with
POSIX shell comments '#' so the commands can be copy/pasted and executed
correctly (edit the two bash code blocks that include "cd BinaryOptionsToolsV2",
"maturin build -r", "pip install path/to/file.whl" and the later similar block).
```bash
// Inside the root folder
cd BinaryOptionsToolsV2
maturin build -r

// Once the command is executed it should print a path to a .whl file, copy it and then run
pip install path/to/file.whl
```
🧩 Analysis chain

🏁 Script executed:

```shell
cd BinaryOptionsToolsV2/rust && ls -la | grep -i readme || echo "Checking for Readme file..."
```

🏁 Script executed:

```shell
# Find the Readme file
find . -iname "*readme*" -type f 2>/dev/null | head -20
```

🏁 Script executed:

```shell
# Check if the file exists in the repository
git ls-files | grep -i "binary.*readme\|readme.*binary" | head -10
```

🏁 Script executed:

```shell
cat -n BinaryOptionsToolsV2/rust/Readme.md | sed -n '40,70p'
```
Use valid bash comments in shell snippets.
Inside ```bash code fences, // is not a shell comment and becomes an invalid command when pasted. Replace with `#` at lines 45, 49 (and also 56, 59, 61 in the second bash block):
🛠️ Proposed doc fix
```diff
-// Inside the root folder
+# Inside the root folder
 cd BinaryOptionsToolsV2
 maturin build -r

-// Once the command is executed it should print a path to a .whl file, copy it and then run
+# Once the command is executed it should print a path to a .whl file, copy it and then run
 pip install path/to/file.whl
```

```diff
-// Inside the root folder
+# Inside the root folder
 cd BinaryOptionsToolsV2

-// Activate the virtual environment if not done already
+# Activate the virtual environment if not done already

-// Execute the following command and it should automatically install the library in the VM
+# Execute the following command and it should automatically install the library in the VM
 maturin develop
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@BinaryOptionsToolsV2/rust/Readme.md` around lines 45 - 50, The README's bash
snippets use C-style '//' comments which are invalid in shell; update the shell
code fences (the blocks around the maturin and pip commands) to use POSIX '#'
comments instead of '//' at the occurrences shown (the comments before the
maturin build and pip install commands and the other commented lines in the
second bash block), ensuring all lines inside the ```bash fences use '#' for
comments so the commands can be pasted and run correctly.
```diff
 if self.is_max_subscriptions_reached().await {
     return Err(SubscriptionError::MaxSubscriptionsReached.into());
 }

-// Check if subscription already exists
-if self
-    .state
-    .active_subscriptions
-    .read()
-    .await
-    .contains_key(&asset)
-{
-    return Err(SubscriptionError::SubscriptionAlreadyExists.into());
-}
-
-// Add to active subscriptions
-self.state
-    .active_subscriptions
-    .write()
-    .await
-    .insert(asset, (stream_sender, sub_type));
+// Add to active subscriptions - push to vec with subscription_id
+let mut subscriptions = self.state.active_subscriptions.write().await;
+let entry = subscriptions.entry(asset).or_insert_with(Vec::new);
+entry.push((stream_sender, sub_type, subscription_id));
```
MAX_SUBSCRIPTIONS can be bypassed after moving to per-subscription storage.
Line 683 checks a per-asset count, but Line 690 now stores multiple subscriptions per asset. This allows unlimited subscriptions on one symbol and makes SubscriptionCount inconsistent with its API meaning.
🔧 Suggested fix
```diff
 async fn is_max_subscriptions_reached(&self) -> bool {
-    self.state.active_subscriptions.read().await.len() >= MAX_SUBSCRIPTIONS
+    self.state
+        .active_subscriptions
+        .read()
+        .await
+        .values()
+        .map(Vec::len)
+        .sum::<usize>()
+        >= MAX_SUBSCRIPTIONS
 }
```

```diff
 Command::SubscriptionCount { command_id } => {
-    let count = self.state.active_subscriptions.read().await.len() as u32;
+    let count = self
+        .state
+        .active_subscriptions
+        .read()
+        .await
+        .values()
+        .map(Vec::len)
+        .sum::<usize>() as u32;
     self.command_responder.send(CommandResponse::SubscriptionCount { command_id, count }).await?;
 },
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs` around
lines 683 - 690, The current check using is_max_subscriptions_reached() is
insufficient because active_subscriptions now stores Vecs per asset and you can
bypass MAX_SUBSCRIPTIONS by adding many entries to one asset; update the logic
so the global subscription count (SubscriptionCount / MAX_SUBSCRIPTIONS) is
computed by summing the lengths of all Vecs in state.active_subscriptions (or
maintain an atomic/global counter) and use that total when deciding to return
SubscriptionError::MaxSubscriptionsReached in is_max_subscriptions_reached() (or
before pushing in the block that uses subscriptions.entry(...).push(...));
ensure any helper named is_max_subscriptions_reached(), SubscriptionCount
semantics, and state.active_subscriptions usage are updated so the check
reflects total active subscriptions rather than per-asset entries.
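With subscriptions stored as a vector per asset, the global cap has to sum over every vector rather than count map keys. A minimal Python sketch of that counting logic (names like `MAX_SUBSCRIPTIONS` and `add_subscription` are illustrative stand-ins, not the crate's API):

```python
# Sketch of the corrected global-cap check over a per-asset registry.
MAX_SUBSCRIPTIONS = 4

def total_subscriptions(registry: dict) -> int:
    # Count every subscriber across every asset, not the number of assets.
    return sum(len(subs) for subs in registry.values())

def add_subscription(registry: dict, asset: str, sub_id: int) -> None:
    if total_subscriptions(registry) >= MAX_SUBSCRIPTIONS:
        raise RuntimeError("MaxSubscriptionsReached")
    registry.setdefault(asset, []).append(sub_id)

registry = {}
for i in range(4):
    add_subscription(registry, "EURUSD_otc", i)  # four subscribers, one asset
# A len(registry)-based check would see only 1 entry here and admit a fifth
# subscriber; the summed check correctly rejects it.
```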
```diff
 // Forward data to all subscribers for this asset
 let subscriptions = self.state.active_subscriptions.read().await;
 if let Some(vec) = subscriptions.get(asset) {
     // Iterate over all senders and send data
     for (stream_sender, _, _) in vec {
         // Best effort: send to each subscriber, don't fail if one fails
         let update = SubscriptionEvent::Update {
             asset: asset.to_string(),
             price,
             timestamp,
-        })
-        .await
-        .map_err(CoreError::from)?;
+        };
         if let Err(e) = stream_sender.send(update).await {
             // Log but continue to other subscribers
             warn!(target: "SubscriptionsApiModule", "Failed to forward data to subscriber: {}", e);
         }
     }
```
Hold the read lock only for data access; await channel sends outside the lock.
Line 777 acquires a read lock that's held during the awaited send() calls on line 787. If a subscriber's channel is full or slow to consume, these awaits block the lock, preventing concurrent subscribe/unsubscribe operations that require write access. This degrades module responsiveness.
Collect the senders into a local Vec inside the lock scope, then release the lock before iterating and sending. Since AsyncSender is cloneable, this approach is straightforward and prevents lock-hold duration from depending on subscriber performance.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/binary_options_tools/src/pocketoption/modules/subscriptions.rs` around
lines 776 - 791, The read lock on self.state.active_subscriptions is being held
across awaited channel sends (stream_sender.send(...).await) which can block
writers; fix by, inside the scope where you call
self.state.active_subscriptions.read().await (the code that checks
subscriptions.get(asset)), clone or collect the AsyncSender instances (the
stream_sender values) into a local Vec (e.g., Vec<AsyncSender> or whatever
sender type is used), then drop/release the read lock and only after that
iterate over the collected senders and perform
stream_sender.send(SubscriptionEvent::Update { ... }).await while logging errors
as before.
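The pattern the fix describes — snapshot the senders while holding the lock, then await the sends after releasing it — can be sketched with `asyncio` (the lock, queues, and names here are stand-ins for the Rust `RwLock` and channel senders, not the module's actual types):

```python
import asyncio

registry_lock = asyncio.Lock()
registry: dict = {}  # asset -> list of asyncio.Queue "senders"

async def forward(asset: str, update: dict) -> None:
    async with registry_lock:
        # Copy the sender list while holding the lock...
        senders = list(registry.get(asset, []))
    # ...then await the sends with the lock already released, so a slow
    # subscriber cannot block concurrent subscribe/unsubscribe operations.
    for q in senders:
        await q.put(update)

async def main() -> list:
    q = asyncio.Queue()
    registry["EURUSD_otc"] = [q]
    await forward("EURUSD_otc", {"price": 1.11})
    return [q.get_nowait()]

received = asyncio.run(main())
```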
```python
async def subscribe_symbol(self, asset):
    async def subscription():
        yield {"symbol": asset, "price": 1.11}

    return subscription()

async def subscribe_symbol_chuncked(self, asset, chunk_size):
    async def subscription():
        yield {"chunk": 1, "open": 1.1, "close": 1.2}

    return subscription()

async def subscribe_symbol_timed(self, asset, time):
    async def subscription():
        yield {"time": 1000, "price": 1.11}

    return subscription()

async def subscribe_symbol_time_aligned(self, asset, time):
    async def subscription():
        yield {"aligned_time": 1000, "price": 1.11}

    return subscription()
```
Subscription mocks don’t match the wrapper’s expected payload type.
These mocks yield dicts, but AsyncSubscription.__anext__() does json.loads(...) on each item. Iteration will fail once exercised.
🧪 Proposed mock fix

```diff
 async def subscribe_symbol(self, asset):
     async def subscription():
-        yield {"symbol": asset, "price": 1.11}
+        yield json.dumps({"symbol": asset, "price": 1.11})
     return subscription()

 async def subscribe_symbol_chuncked(self, asset, chunk_size):
     async def subscription():
-        yield {"chunk": 1, "open": 1.1, "close": 1.2}
+        yield json.dumps({"chunk": 1, "open": 1.1, "close": 1.2})
     return subscription()

 async def subscribe_symbol_timed(self, asset, time):
     async def subscription():
-        yield {"time": 1000, "price": 1.11}
+        yield json.dumps({"time": 1000, "price": 1.11})
     return subscription()

 async def subscribe_symbol_time_aligned(self, asset, time):
     async def subscription():
-        yield {"aligned_time": 1000, "price": 1.11}
+        yield json.dumps({"aligned_time": 1000, "price": 1.11})
     return subscription()
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 130-130: Unused method argument: asset
(ARG002)
[warning] 130-130: Unused method argument: chunk_size
(ARG002)
[warning] 136-136: Unused method argument: asset
(ARG002)
[warning] 136-136: Unused method argument: time
(ARG002)
[warning] 142-142: Unused method argument: asset
(ARG002)
[warning] 142-142: Unused method argument: time
(ARG002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/python/pocketoption/test_async_mocked.py` around lines 124 - 147, The
subscription mocks (subscribe_symbol, subscribe_symbol_chuncked,
subscribe_symbol_timed, subscribe_symbol_time_aligned) currently yield Python
dicts but AsyncSubscription.__anext__() calls json.loads(...) on each yielded
item; update each mock's inner subscription to yield JSON-serialized strings
(e.g., json.dumps(...) of the same payload shape) so the wrapper's json.loads
can parse them successfully, preserving the same keys/values the tests expect.
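End to end, the corrected mock shape works like this: the generator yields JSON strings, and a `json.loads`-based consumer (standing in here for `AsyncSubscription.__anext__`, which is assumed rather than reproduced) parses them cleanly:

```python
import asyncio
import json

# Mock in the suggested shape: yields serialized JSON, not a dict.
async def subscribe_symbol(asset):
    async def subscription():
        yield json.dumps({"symbol": asset, "price": 1.11})
    return subscription()

# Illustrative consumer mimicking a wrapper that json.loads each item.
async def consume(asset):
    stream = await subscribe_symbol(asset)
    return [json.loads(item) async for item in stream]

events = asyncio.run(consume("EURUSD_otc"))
```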
```python
async def create_raw_handler(self, validator, keep_alive=None):
    mock_handler = MagicMock()
    mock_handler.id.return_value = "handler_123"
    mock_handler.send_text = AsyncMock()
    mock_handler.send_binary = AsyncMock()
    mock_handler.send_and_wait = AsyncMock(return_value='42["response"]')
    mock_handler.wait_next = AsyncMock(return_value='42["message"]')
    mock_handler.subscribe = AsyncMock(return_value=AsyncMock())
    mock_handler.close = AsyncMock()
```
subscribe mock shape is inconsistent with production behavior.
mock_handler.subscribe = AsyncMock(return_value=AsyncMock()) can make await handler.subscribe() return a coroutine-like mock, not an async iterator stream.
🧪 Proposed mock fix

```diff
 async def create_raw_handler(self, validator, keep_alive=None):
     mock_handler = MagicMock()
@@
-    mock_handler.subscribe = AsyncMock(return_value=AsyncMock())
+    async def _stream():
+        yield '42["message"]'
+    mock_handler.subscribe = MagicMock(return_value=_stream())
     mock_handler.close = AsyncMock()
     return mock_handler
```
🧰 Tools
🪛 Ruff (0.15.2)
[warning] 172-172: Unused method argument: validator
(ARG002)
[warning] 172-172: Unused method argument: keep_alive
(ARG002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/python/pocketoption/test_async_mocked.py` around lines 172 - 180, The
mock for subscribe in create_raw_handler returns a coroutine-like mock
(mock_handler.subscribe = AsyncMock(return_value=AsyncMock())) which doesn't
mimic the production async-iterator behavior; change mock_handler.subscribe so
awaiting handler.subscribe() returns an async-iterable/async-generator or an
object implementing __aiter__ that yields expected messages (or return an
AsyncMock whose return_value implements __aiter__ yielding the stream), ensuring
the mock matches the production subscribe async-iterator shape used by code that
does "async for msg in handler.subscribe()" or "async with" patterns.
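The working shape from the suggested fix can be exercised in isolation: a plain `MagicMock` whose return value is a real async generator supports production-style `async for` iteration (the handler and message payload here are illustrative):

```python
import asyncio
from unittest.mock import MagicMock

# A real async generator standing in for the subscription stream.
async def _stream():
    yield '42["message"]'

handler = MagicMock()
# Plain MagicMock: calling subscribe() returns the generator directly,
# so `async for msg in handler.subscribe()` iterates real messages.
handler.subscribe = MagicMock(return_value=_stream())

async def consume():
    return [msg async for msg in handler.subscribe()]

messages = asyncio.run(consume())
```

Note that the generator is single-use: a test that iterates the stream twice would need `side_effect=_stream` (creating a fresh generator per call) instead of `return_value`.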
```python
try:
    await client.shutdown()
except Exception:
    pass
```
Avoid swallowing teardown errors in fixtures.
The broad except Exception: pass hides shutdown regressions and makes failures hard to diagnose.
🛠️ Proposed fix

```diff
 try:
     await client.shutdown()
-except Exception:
-    pass
+except Exception as exc:
+    pytest.fail(f"async_client teardown failed: {exc}")
```
🧰 Tools
🪛 Ruff (0.15.2)
[error] 365-366: try-except-pass detected, consider logging the exception
(S110)
[warning] 365-365: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/python/pocketoption/test_async_mocked.py` around lines 363 - 366, The
teardown currently swallows all exceptions from await client.shutdown() which
hides regressions; change the fixture to let errors surface by removing the bare
try/except or, if you must handle them, catch Exception as e and call
pytest.fail(f"client.shutdown() failed: {e}") (import pytest) so failures are
reported, referencing the await client.shutdown() call in
tests/python/pocketoption/test_async_mocked.py.
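The difference the fix makes can be sketched without pytest: the swallowing variant returns nothing when shutdown raises, while the surfacing variant reports the failure (in the real fixture the report would be `pytest.fail(...)`; `FlakyClient` is a hypothetical stand-in for the client):

```python
import asyncio

# Stand-in for a client whose shutdown has regressed.
class FlakyClient:
    async def shutdown(self):
        raise RuntimeError("socket already closed")

async def teardown_swallowing(client):
    try:
        await client.shutdown()
    except Exception:
        pass  # the regression vanishes silently
    return None

async def teardown_surfacing(client):
    try:
        await client.shutdown()
    except Exception as exc:
        # A pytest fixture would call pytest.fail(...) here instead.
        return f"async_client teardown failed: {exc}"

silent = asyncio.run(teardown_swallowing(FlakyClient()))
reported = asyncio.run(teardown_surfacing(FlakyClient()))
```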
Pull Request
Overview
Summarize the changes and the motivation behind them. Link any related issues using keywords (e.g., Fixes #123).
Changes
Type of Change
Validation
Describe how the changes were tested.
Environment
Checklist
Screenshots (Optional)
Add relevant visuals if applicable.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores