Speed up miner startup with parallel streamer init and smarter drop syncing #753
base: master
Conversation
…ity timeout, mine()/run cleanup, parallel streamer initialization, and lazy drop syncing
…reamer init with a timeout
Adds additional checks to get_stream_info to avoid NoneType errors.
@rdavydov can we possibly merge this before we get more issues cleared?
So many non-relevant changes in formatting. Too much time would be needed to review all this.
Pull Request Overview
This PR improves the startup robustness and performance of the Twitch Channel Points Miner by implementing parallel streamer initialization, adding connectivity checks with timeouts, and optimizing drop syncing. The changes include explicit type hints for better type safety and comprehensive Black formatting across the codebase.
Key Changes:
- Parallel streamer initialization via the `initialize_streamers_context` method with ThreadPoolExecutor and timeout guards (sketched below)
- Early exit on connectivity failures or invalid streamers to prevent running in broken states
- Optimized drop syncing to skip unnecessary work when no streamer is farming drops
- Type hints added for `priority`, `streamers`, and `blacklist` parameters using modern Python union syntax
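For readers unfamiliar with the pattern, here is a minimal sketch of what parallel initialization with a scaling timeout can look like. This is not the PR's actual `initialize_streamers_context` code; `initialize_one`, the worker count, and the timeout formula are illustrative assumptions.

```python
# Sketch only: parallel per-streamer setup guarded by an overall timeout.
from concurrent.futures import ThreadPoolExecutor, as_completed
from concurrent.futures import TimeoutError as FuturesTimeout


def initialize_one(username):
    # Stand-in for the real per-streamer setup (channel id lookup, settings, ...).
    return username.lower()


def initialize_all(usernames, per_streamer_timeout=5):
    # Scale the guard with the number of streamers so one hanging request
    # cannot block startup indefinitely.
    overall_timeout = max(60, per_streamer_timeout * len(usernames))
    ready = []
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {pool.submit(initialize_one, name): name for name in usernames}
        try:
            for future in as_completed(futures, timeout=overall_timeout):
                try:
                    ready.append(future.result())
                except Exception as exc:
                    # Skip streamers that fail to initialize (invalid name, API error, ...).
                    print(f"Skipping {futures[future]}: {exc}")
        except FuturesTimeout:
            print(f"Timed out after {overall_timeout}s; continuing with {len(ready)} streamers")
    return ready
```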
Reviewed Changes
Copilot reviewed 21 out of 24 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| TwitchChannelPointsMiner/TwitchChannelPointsMiner.py | Added connectivity check with 60s timeout, type hints for parameters, normalized priority handling, and parallel streamer initialization |
| TwitchChannelPointsMiner/classes/Twitch.py | Implemented initialize_streamers_context for parallel initialization and __streamers_require_campaign_sync helper to skip unnecessary drop sync work |
| README.md | Documented new startup connectivity check and streamer initialization behavior |
| example.py | Applied Black formatting to improve readability (no functional changes) |
| setup.py | Fixed trailing comma formatting |
| pickle_view.py | Applied Black formatting (quote style consistency) |
| TwitchChannelPointsMiner/utils.py | Applied Black formatting and consistent code style |
| TwitchChannelPointsMiner/logger.py | Applied Black formatting to improve code readability |
| TwitchChannelPointsMiner/constants.py | Applied Black formatting for consistency |
| TwitchChannelPointsMiner/classes/*.py | Applied Black formatting across all class files |
```python
if streamer.settings.community_goals is True:
    self.contribute_to_community_goals(streamer)
```
Copilot AI (Nov 19, 2025):
Duplicate code: the check `if streamer.settings.community_goals is True:` and the call to `self.contribute_to_community_goals(streamer)` appear three times (lines 698-699, 701-702). This appears to be unintentional duplication that should be removed, keeping only one instance.
Suggested change:

```diff
-if streamer.settings.community_goals is True:
-    self.contribute_to_community_goals(streamer)
```
```python
streamers_list = list(streamers) if streamers is not None else None
blacklist_list = list(blacklist) if blacklist is not None else None
```
Copilot AI (Nov 19, 2025):
Bug: When `streamers` is None, calling `list(streamers)` on line 212 will raise a `TypeError: 'NoneType' object is not iterable`. The same issue exists on line 213 for `blacklist`.
Consider changing to:

```python
streamers_list = list(streamers) if streamers is not None else []
blacklist_list = list(blacklist) if blacklist is not None else []
```

This way, when None is passed, an empty list is used instead of passing None to the run method.
Suggested change:

```diff
-streamers_list = list(streamers) if streamers is not None else None
-blacklist_list = list(blacklist) if blacklist is not None else None
+streamers_list = list(streamers) if streamers is not None else []
+blacklist_list = list(blacklist) if blacklist is not None else []
```
README.md
Outdated
> On startup, the miner checks whether Twitch is reachable. If Twitch is not available for 60 seconds, the miner logs an error and exits instead of continuing in a broken state.
> If none of the configured streamers can be initialised (for example due to invalid names or errors), the miner logs an error and exits instead of running with an empty streamer list.
Copilot AI (Nov 19, 2025):
Spelling: "initialised" should use the American English spelling "initialized" for consistency with the codebase (the method name is `initialize_streamers_context`, not `initialise_streamers_context`).
Suggested change:

```diff
-If none of the configured streamers can be initialised (for example due to invalid names or errors), the miner logs an error and exits instead of running with an empty streamer list.
+If none of the configured streamers can be initialized (for example due to invalid names or errors), the miner logs an error and exits instead of running with an empty streamer list.
```
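As an aside, the README text quoted above describes the new startup connectivity check. A minimal sketch of that behaviour, assuming a simple `requests`-based probe (the URL, retry interval, and function name are illustrative, not the PR's actual implementation):

```python
# Sketch only: retry a lightweight request for up to 60 seconds, then give up
# instead of running in a broken state.
import sys
import time

import requests


def wait_for_twitch(timeout=60, interval=5):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            requests.head("https://www.twitch.tv", timeout=10).raise_for_status()
            return True
        except requests.RequestException:
            time.sleep(interval)
    return False


if not wait_for_twitch():
    print("Twitch was not reachable for 60 seconds, exiting.")
    sys.exit(1)
```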
```python
    )
except TimeoutError:
    logger.error(
        "Timed out while initialising streamers after %s seconds.",
```
Copilot AI (Nov 19, 2025):
Spelling: "initialising" should use the American English spelling "initializing" for consistency with the codebase (the method name is `initialize_streamers_context`).
| "Timed out while initialising streamers after %s seconds.", | |
| "Timed out while initializing streamers after %s seconds.", |
pickle_view.py
Outdated
| print("Specify a pickle file as a parameter, e.g. cookies/user.pkl") | ||
| else: | ||
| print(pickle.load(open(argv[1], 'rb'))) | ||
| print(pickle.load(open(argv[1], "rb"))) |
Copilot AI (Nov 19, 2025):
The file is opened but never closed.
Suggested change:

```diff
-print(pickle.load(open(argv[1], "rb")))
+with open(argv[1], "rb") as f:
+    print(pickle.load(f))
```
@Armi1014 maybe you could remove your big formatting commit, that'd make it easier to review?
please do that @Armi1014 👍
I have a short question: would it be possible for the streak priority at startup to load the streamers we already got the watch streak for from a logfile or something? That way, if we restart the miner, it wouldn't need to take 5 minutes per stream again, going through all streamers and checking whether a watch streak is available. If I'm right, at startup it only needs to check 1) whether we already got the watch streak according to the file, and 2) whether the streamer has been offline for more than 30 minutes since the last watch streak. That would save a ton of time if someone follows many streamers :)
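For illustration, the idea described in this comment could look roughly like the sketch below. The file name, TTL, and helper names are placeholders, not necessarily what the miner would actually use.

```python
# Sketch only: remember claimed watch streaks across restarts in a small JSON file.
import json
import time
from pathlib import Path

CACHE_FILE = Path("watch_streak_cache.json")
CACHE_TTL = 30 * 60  # treat entries younger than 30 minutes as still fresh


def load_cache():
    return json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}


def mark_claimed(cache, streamer):
    # Record when the streak was claimed and persist immediately.
    cache[streamer] = time.time()
    CACHE_FILE.write_text(json.dumps(cache))


def needs_streak_check(cache, streamer):
    # Only re-check streamers with no recent claim on record.
    claimed_at = cache.get(streamer)
    return claimed_at is None or (time.time() - claimed_at) > CACHE_TTL
```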
I'll look into it!
Sounds amazing! Also, could you merge @mpforce1's last PR, if that's fine with him of course? Because I still get errors on your version.
This is a very interesting thread. I didn't understand everything written here, but I'm hopeful it might help with my issues. mpforce1, as someone who's seen my threads and errors, you might remember them. Could you tell me if there's a chance some of my problems could be solved in the same way?
Some of the GQL and error-handling changes here should help with issues that happen across multiple instances as well, especially around partial responses and timeouts. It’s mainly focused on startup, priority behaviour, and safer handling of Twitch errors, so it might not fix everything you’re seeing, but it should reduce some of the error spam and random crashes. Once (or if) this gets merged, your setup would be a good stress test for those changes.
The new change adds some error checking to the GQL requests; in particular, you're now seeing essentially the same error as before but with more information. The extra information this time is something I've talked about elsewhere, but basically it means Twitch's servers are timing out internally. This is not something we can fix, however we could implement a retry system to mitigate the issue. As I've also mentioned elsewhere, I'm working on a proper integration layer for GQL requests that should automatically handle this situation, but for now this is a great improvement.
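A retry system like the one mentioned could look roughly like the sketch below. `send_gql`, the attempt count, and the backoff values are assumptions for illustration, not part of the PR.

```python
# Sketch only: retry flaky GQL calls with a simple linear backoff.
import time


def send_gql(operation):
    raise NotImplementedError  # stand-in for the real GQL POST


def gql_with_retries(operation, attempts=3, base_delay=2.0):
    for attempt in range(attempts):
        try:
            response = send_gql(operation)
            # Twitch sometimes returns HTTP 200 with an "errors" key (e.g. an internal
            # service timeout); treat that as retryable rather than fatal.
            if "errors" not in response:
                return response
        except Exception:
            pass
        time.sleep(base_delay * (attempt + 1))
    return {}  # caller can fall back to cached data or skip this cycle
```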
Yeah, I already figured that out; the big error message hasn't come up at all since yesterday 👍
Yeah, I have some theories about why they drop WS connections but nothing confirmed. I've actually reduced the logging in the Hermes websocket integration for this reason; people don't really need to know everything that's going on under the hood. Eventually we'll have to swap to Hermes, so it's probably not worth it to change the PubSub integration now.
Ok, I'm out when it comes to the Hermes/PubSub integration; this is too advanced for me haha
Quick update on the current state of this branch: from my side, this PR should now be feature-complete and stable.
Guys, can we also remember that the drop progress bar never prints? (It's the only fix I'm waiting for, because I couldn't fix it myself) 😅
I just have a silly doubt; if anyone could help, it would be great.
You can set priority to whatever you want. This PR fixes a bug (my fault, from fixing a similar bug 😛) where sometimes only 1 streamer would be mined, so it's not really related to the priority configuration. There are also some other improvements to startup time and such; you can read about them in the thread.
The order it loads streamers in now is not necessarily the order defined in the run file. Loading the streamers is part of the miner initialisation; it's not indicative of the watch order. Also, from what you said, your …
No problem. However, that priority list is backwards. You want it the other way around, i.e.

```python
priority=[
    Priority.STREAK,
    Priority.DROPS,
    Priority.ORDER
]
```

If you put … If you want it to only look at the order, then you can do `priority=Priority.ORDER`. Pretty sure that works without the brackets.
Oh okay, I didn't know that lol 😅
I’m pretty sure the priority system is explained in the README, but here’s how I personally use it (I mainly focus on collecting points): mostly it’s running with Points_Ascending, but if a Subscribed account comes online it switches one watched stream to the Subscribed one, and the same happens when an account with Drops available comes online. As @mpforce1 mentioned, you can combine the priorities in any order you like. However, it doesn’t make sense to put Points_Ascending, Points_Descending, or ORDER at the top, because they will override all other priorities.
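For anyone reading along, a hedged example of that kind of setup (illustrative only, assuming this fork exposes a `Priority.SUBSCRIBED` member as the comment implies; not necessarily the commenter's exact configuration):

```python
from TwitchChannelPointsMiner.classes.Settings import Priority

priority = [
    Priority.SUBSCRIBED,        # prefer subscribed channels when they are live
    Priority.DROPS,             # then channels with drops available
    Priority.POINTS_ASCENDING,  # otherwise farm the lowest-points channels first
]
```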
Thanks a lot for testing and for the screenshot!
Yeah, it could be possible that it was an error on my end, but now, I don't know how, both of my slots are being occupied by the same streamer even though another one is streaming and their chat is joined 😭. The streamer who is filling both spots has a tier 1 sub whereas the other one doesn't, and I am on order priority (tried restarting 3 times).
@Gamerns10s I think this is getting off topic, might be best for a discussion instead since you're running a custom branch?
It doesn't look like you're using this version at all, since with this version we tested every priority and it always filled both slots, whether there was a subscribed account, a drops account, ... Just download the whole script again from here: https://github.com/Armi1014/Twitch-Channel-Points-Miner-v2/tree/master and start the run file normally. I'm pretty sure it will work 👍
Managed to fix that issue 😅










Description
- Initialize streamers in parallel via `initialize_streamers_context` and guard it with a timeout that scales with streamer count so a hanging request can’t block startup.
- Add type hints for `priority`, `streamers`, and `blacklist`, and normalise `priority` so it is always stored as a `list[Priority]`.
- Add `watch_streak_cache.json` so recent `WATCH_STREAK` claims are remembered across restarts and expensive streak checks can be skipped within a TTL.
- Harden the `VideoPlayerStreamInfoOverlayChannel` and `DropsHighlightService_AvailableDrops` requests: treat `errors` / partial responses as non-fatal, avoid `NoneType` crashes, and fall back to cached stream info where possible.
- Add `STREAM_INFO_CACHE_TTL` and implement a small stream info cache so valid stream metadata is reused, while clearly invalid/partial payloads are not cached.
- Add an `interruptible_sleep` helper (now used in the main loop) to shorten shutdown waits and make Ctrl+C exits more responsive (sketched below).
- Keep the streamer order defined in `run.py` unless the `ORDER` priority is enabled; internal sorts operate on copies only.

Related: #739, #748, #750, #764
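To illustrate the interruptible-sleep idea from the list above, here is a minimal sketch; the helper's exact signature and the surrounding loop in the real code may differ.

```python
# Sketch only: sleep on an Event so a shutdown request wakes the main loop
# immediately instead of after the full sleep interval.
import threading

shutdown_event = threading.Event()


def interruptible_sleep(seconds):
    """Sleep up to `seconds`; return True early if shutdown was requested."""
    return shutdown_event.wait(timeout=seconds)


# Usage in a main loop: instead of time.sleep(60), which ignores Ctrl+C handling
# for up to a minute, the wait ends as soon as shutdown_event.set() is called
# (e.g. from a signal handler).
if interruptible_sleep(1):
    print("Shutdown requested during sleep")
```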
Type of change
Please delete options that are not relevant.
How Has This Been Tested?
- Ran `python example.py` with my Twitch credentials against a pool of ~25–50 streamers to confirm login, parallel streamer initialization, websocket threads, lazy drop syncing, priority behaviour, and faster shutdown. Under this load the loop has been stable, with no missed drops or duplicate watch slots observed.
- Verified the hardened GQL handling against intermittent Twitch `service timeout` errors.

Checklist: