## Summary

Reproducible Rust-side panic on the tokio runtime when `subscribe_symbol_timed` streams are reconnected after a sustained network outage. The panic kills the host Python process via panic-abort (Windows exit `0xC0000409`). Because the panic occurs on a tokio worker thread rather than the Python asyncio thread, no Python `try`/`except` (including catching PyO3's `PanicException`) can defend against it — the worker dies before Python sees anything.
## Panic

```
thread 'tokio-rt-worker' panicked at crates\binary_options_tools\src\pocketoption\modules\get_candles.rs:243:13:
all branches are disabled and there is no else branch
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
Process exits with `0xC0000409` (`STATUS_STACK_BUFFER_OVERRUN` — Rust panic-abort on Windows).

I did not have `RUST_BACKTRACE=1` set on the two occurrences I observed. Happy to capture and attach a full backtrace under the minimal repro below if useful.
## Environment

- BinaryOptionsToolsV2 0.2.10 (verified via `pip show`)
- Python 3.12
- Windows 11
- Three concurrent `PocketOptionAsync` instances on the same SSID, each consuming its own `subscribe_symbol_timed` stream — two at 60s, one at 30s
## Trigger sequence (consistent — observed twice in 48 hours)

1. Multiple `PocketOptionAsync` clients are running, each consuming candles from `subscribe_symbol_timed(asset, timedelta(seconds=tf))`.
2. The host loses internet connectivity for an extended period (5–10+ minutes). Both observed cases were sustained outages — a quick flap may not trigger it.
3. Connectivity is restored. Each client is reconnected by constructing a fresh `PocketOptionAsync(ssid=...)`, awaiting `client.balance()` to confirm liveness, and calling `client.subscribe_symbol_timed(...)` to obtain a fresh async iterator. All three reconnects appear clean — `balance()` returns the correct number, and the iterator is returned without exception.
4. ~30–60 seconds after the successful reconnect, `tokio-rt-worker` panics at `get_candles.rs:243`. The panic does not happen during reconnect — it fires after the new streams have already been handed back to Python and are being polled.
## Hypothesis on cause

(Possibly wrong about the mechanism, but the trigger pattern is solid.)

`tokio::select!` panics with "all branches are disabled and there is no else branch" when every branch in the macro is disabled and no `else` arm is provided. My guess is that during the long outage, the future set inside the `select!` at line 243 reaches a fully disabled state. Because the tokio runtime appears to be process-global and outlives `PocketOptionAsync.__del__`, the wedged subscription tasks survive into the post-reconnect window. The first poll after the new subscriptions attach reaches the disabled `select!` and panics.
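To make the disabling mechanism concrete, here is a standalone sketch (not the library's code) that reproduces the exact panic message. In `tokio::select!`, a branch whose pattern fails to match is disabled; a `Some(v)` pattern stops matching once its channel closes and `recv()` yields `None`. When every branch is disabled and no `else` arm exists, the macro panics:

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx_a, mut rx_a) = mpsc::channel::<u32>(8);
    let (tx_b, mut rx_b) = mpsc::channel::<u32>(8);

    // Simulate a sustained outage: both senders go away, so both
    // receivers yield `None` from here on.
    drop(tx_a);
    drop(tx_b);

    loop {
        tokio::select! {
            // `None` does not match `Some(v)`, so each branch is disabled.
            Some(v) = rx_a.recv() => println!("a: {v}"),
            Some(v) = rx_b.recv() => println!("b: {v}"),
            // No `else` arm: the first iteration panics with
            // "all branches are disabled and there is no else branch".
        }
    }
}
```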
## Minimal reproduction

This script uses only the public library API — no application logic. To reproduce:

- Run the script with a valid SSID. Confirm all three streams are receiving candles.
- Disable the host's network adapter (or pull the Wi-Fi) for at least 5 minutes, then re-enable it.
- Wait 30–90 seconds after connectivity is restored. The `tokio-rt-worker` panic should fire.
```python
import asyncio
from datetime import timedelta

from BinaryOptionsToolsV2.pocketoption import PocketOptionAsync

SSID = "<paste your SSID here>"
ASSETS = [("EURUSD_otc", 60), ("GBPUSD_otc", 60), ("XAUUSD_otc", 30)]


async def run_one(asset, tf):
    while True:
        try:
            client = PocketOptionAsync(ssid=SSID)
            await asyncio.sleep(10)  # init grace period
            balance = await client.balance()
            print(f"[{asset}] connected, balance={balance}")
            stream = await client.subscribe_symbol_timed(
                asset, timedelta(seconds=tf)
            )
            async for candle in stream:
                _ = candle  # consume only
        except Exception as e:
            print(f"[{asset}] error: {type(e).__name__}: {e} — reconnecting in 5s")
            await asyncio.sleep(5)


async def main():
    await asyncio.gather(*(run_one(a, tf) for a, tf in ASSETS))


if __name__ == "__main__":
    asyncio.run(main())
```
Run it with `RUST_BACKTRACE=1` set if you want me to capture a full Rust backtrace.
## Why this needs an upstream fix

Because the panic is on a tokio worker thread and not on the Python asyncio thread, the panic-abort kills the host process before any Python frame can intercept it. There is no wrapper-side mitigation that can guard against it — `try`/`except BaseException` around `subscribe_symbol_timed`, `__anext__`, and the iterator body will never see the panic.
Possible upstream fixes, in increasing scope (a hedged sketch of the first two follows the list):

1. Minimum: at `get_candles.rs:243`, replace the panic with a `return Err(...)` (e.g., a `StreamClosed`/`Reconnect` variant) so wrappers can recover. This alone is sufficient to make the wrapper recoverable.
2. Add an `else => { ... }` arm to the `select!` macro that yields the same error variant when all branches are disabled.
3. On WS-channel-closed detection, rebuild the future set so a fully disabled state is unreachable.
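The sketch below combines (1) and (2); `Candle`, `WsMessage`, `StreamError`, and the receiver names are placeholders I invented, not the library's actual identifiers or branch set:

```rust
use tokio::sync::mpsc::Receiver;

// Placeholder types standing in for the library's real ones.
struct Candle;
struct WsMessage;
#[derive(Debug)]
enum StreamError {
    StreamClosed,
}

async fn pump_candles(
    mut candle_rx: Receiver<Candle>,
    mut ws_rx: Receiver<WsMessage>,
) -> Result<(), StreamError> {
    loop {
        tokio::select! {
            Some(_candle) = candle_rx.recv() => {
                // ... forward the candle to the Python-side stream ...
            }
            Some(_msg) = ws_rx.recv() => {
                // ... handle the websocket message ...
            }
            // Reached only when every branch above is disabled (all
            // channels closed), e.g. after a sustained outage. Returning
            // an error instead of panicking lets the PyO3 wrapper raise
            // a catchable Python exception on the next poll.
            else => return Err(StreamError::StreamClosed),
        }
    }
}
```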
Happy to test any candidate fix on a pre-release build.