
Conversation

@Armi1014

@Armi1014 Armi1014 commented Nov 16, 2025

Description

  • Improve startup robustness: use a connectivity check before starting the miner and exit early if Twitch is unreachable for 60 seconds or if no valid streamers remain after initialization.
  • Batch streamer initialization via initialize_streamers_context and guard it with a timeout that scales with streamer count so a hanging request can’t block startup.
  • Make miner configuration more explicit and safer: add concrete type hints for priority, streamers, and blacklist, and normalise priority so it is always stored as a list[Priority].
  • Align watch selection with upstream fixes: always fill two distinct watch slots, respect user priority order, and handle the “subscribed + lowest points” case correctly (related to #748 “Priority Problem again”, #750 “Fixes issue where the same streamer could fill both watch slots”, #764 “Fixes a bug in the watch priority sorting algorithm”).
  • Introduce a thread-safe watch streak cache backed by watch_streak_cache.json so recent WATCH_STREAK claims are remembered across restarts and expensive streak checks can be skipped within a TTL.
  • Harden GQL handling for operations like VideoPlayerStreamInfoOverlayChannel and DropsHighlightService_AvailableDrops: treat errors / partial responses as non-fatal, avoid NoneType crashes, and fall back to cached stream info where possible.
  • Define STREAM_INFO_CACHE_TTL and implement a small stream info cache so valid stream metadata is reused, while clearly invalid/partial payloads are not cached.
  • Skip drop-sync work when no streamer needs it, keeping the existing lazy drop syncing behaviour unchanged.
  • Add a reusable interruptible_sleep helper (now used in the main loop) to shorten shutdown waits and make Ctrl+C exits more responsive.
  • Preserve the streamer order from run.py unless the ORDER priority is enabled; internal sorts operate on copies only.
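
The interruptible_sleep helper mentioned above can be sketched roughly as follows (a minimal illustration under assumed naming; the actual signature in the branch may differ):

```python
import threading


def interruptible_sleep(seconds: float, stop_event: threading.Event) -> bool:
    """Sleep for up to `seconds`, waking immediately if `stop_event` is set.

    Returns True if the sleep was interrupted, False if the full duration
    elapsed. Event.wait() already blocks with a timeout and returns as soon
    as the event is set, which is what makes Ctrl+C shutdowns responsive:
    the signal handler just sets the event instead of waiting out a sleep.
    """
    return stop_event.wait(timeout=seconds)
```

The main loop can then replace a plain time.sleep(60) with interruptible_sleep(60, stop_event) and break out of the loop when it returns True.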

Related: #739, #748, #750, #764

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality not to work as expected)

How Has This Been Tested?

  • Manually ran python example.py with my Twitch credentials against a pool of ~25–50 streamers to confirm login, parallel streamer initialization, websocket threads, lazy drop syncing, priority behaviour, and faster shutdown. Under this load the loop has been stable, with no missed drops or duplicate watch slots observed.
  • Tested with a large account (~650 streamers, thanks to community testing) to verify:
    • startup time is significantly reduced compared to 2.0.4,
    • watch streak caching skips recent streaks on restart instead of re-checking every streamer,
    • online channels are detected correctly even when GQL occasionally returns partial data or service timeout errors.

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my code
  • I have commented on my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation (README.md)
  • My changes generate no new warnings
  • Any dependent changes have been updated in requirements.txt

@saabia

saabia commented Nov 19, 2025

@rdavydov can we possibly merge this before we get more issues cleared?
With many streamers it's MUCH faster than before.

@rdavydov
Owner

So many non-relevant changes in formatting. Too much time would be needed to review all this.

Copilot AI left a comment

Pull Request Overview

This PR improves the startup robustness and performance of the Twitch Channel Points Miner by implementing parallel streamer initialization, adding connectivity checks with timeouts, and optimizing drop syncing. The changes include explicit type hints for better type safety and comprehensive Black formatting across the codebase.

Key Changes:

  • Parallel streamer initialization via initialize_streamers_context method with ThreadPoolExecutor and timeout guards
  • Early exit on connectivity failures or invalid streamers to prevent running in broken states
  • Optimized drop syncing to skip unnecessary work when no streamer is farming drops
  • Type hints added for priority, streamers, and blacklist parameters using modern Python union syntax
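
The batched, timeout-guarded initialization these bullets describe can be sketched like this (hypothetical names; the real method is initialize_streamers_context, and the per-streamer worker `fetch_streamer_context` stands in for whatever GQL call it actually makes):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def initialize_streamers(streamers, fetch_streamer_context,
                         base_timeout=30, per_streamer=2):
    # Scale the deadline with the number of streamers so a single hanging
    # request cannot block startup indefinitely.
    timeout = base_timeout + per_streamer * len(streamers)
    initialized = []
    with ThreadPoolExecutor(max_workers=10) as pool:
        futures = {pool.submit(fetch_streamer_context, s): s for s in streamers}
        for future in as_completed(futures, timeout=timeout):
            try:
                initialized.append(future.result())
            except Exception:
                # One bad streamer (invalid name, GQL error) is skipped
                # rather than aborting the whole batch.
                pass
    return initialized
```

A TimeoutError raised by as_completed would be caught by the caller to log an error and exit early, matching the fail-fast startup behaviour this PR adds.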

Reviewed Changes

Copilot reviewed 21 out of 24 changed files in this pull request and generated 5 comments.

Summary per file:

  • TwitchChannelPointsMiner/TwitchChannelPointsMiner.py: Added connectivity check with 60s timeout, type hints for parameters, normalized priority handling, and parallel streamer initialization
  • TwitchChannelPointsMiner/classes/Twitch.py: Implemented initialize_streamers_context for parallel initialization and __streamers_require_campaign_sync helper to skip unnecessary drop-sync work
  • README.md: Documented the new startup connectivity check and streamer initialization behavior
  • example.py: Applied Black formatting to improve readability (no functional changes)
  • setup.py: Fixed trailing comma formatting
  • pickle_view.py: Applied Black formatting (quote style consistency)
  • TwitchChannelPointsMiner/utils.py: Applied Black formatting and consistent code style
  • TwitchChannelPointsMiner/logger.py: Applied Black formatting to improve code readability
  • TwitchChannelPointsMiner/constants.py: Applied Black formatting for consistency
  • TwitchChannelPointsMiner/classes/*.py: Applied Black formatting across all class files


Comment on lines 701 to 703
if streamer.settings.community_goals is True:
self.contribute_to_community_goals(streamer)

Copilot AI Nov 19, 2025

Duplicate code: The check if streamer.settings.community_goals is True and call to self.contribute_to_community_goals(streamer) appears three times (lines 698-699, 701-702). This appears to be unintentional duplication that should be removed, keeping only one instance.

Suggested change
if streamer.settings.community_goals is True:
self.contribute_to_community_goals(streamer)

Comment on lines +212 to +213
streamers_list = list(streamers) if streamers is not None else None
blacklist_list = list(blacklist) if blacklist is not None else None
Copilot AI Nov 19, 2025

Bug: When streamers is None, calling list(streamers) on line 212 will raise a TypeError: 'NoneType' object is not iterable. The same issue exists on line 213 for blacklist.

Consider changing to:

streamers_list = list(streamers) if streamers is not None else []
blacklist_list = list(blacklist) if blacklist is not None else []

This way, when None is passed, an empty list is used instead of passing None to the run method.

Suggested change
streamers_list = list(streamers) if streamers is not None else None
blacklist_list = list(blacklist) if blacklist is not None else None
streamers_list = list(streamers) if streamers is not None else []
blacklist_list = list(blacklist) if blacklist is not None else []

README.md Outdated

On startup, the miner checks whether Twitch is reachable. If Twitch is not available for 60 seconds, the miner logs an error and exits instead of continuing in a broken state.

If none of the configured streamers can be initialised (for example due to invalid names or errors), the miner logs an error and exits instead of running with an empty streamer list.
Copilot AI Nov 19, 2025

Spelling: "initialised" should use American English spelling "initialized" for consistency with the codebase (the method name is initialize_streamers_context, not initialise_streamers_context).

Suggested change
If none of the configured streamers can be initialised (for example due to invalid names or errors), the miner logs an error and exits instead of running with an empty streamer list.
If none of the configured streamers can be initialized (for example due to invalid names or errors), the miner logs an error and exits instead of running with an empty streamer list.

)
except TimeoutError:
logger.error(
"Timed out while initialising streamers after %s seconds.",
Copilot AI Nov 19, 2025

Spelling: "initialising" should use American English spelling "initializing" for consistency with the codebase (the method name is initialize_streamers_context).

Suggested change
"Timed out while initialising streamers after %s seconds.",
"Timed out while initializing streamers after %s seconds.",

pickle_view.py Outdated
print("Specify a pickle file as a parameter, e.g. cookies/user.pkl")
else:
print(pickle.load(open(argv[1], 'rb')))
print(pickle.load(open(argv[1], "rb")))
Copilot AI Nov 19, 2025

File is opened but is not closed.

Suggested change
print(pickle.load(open(argv[1], "rb")))
with open(argv[1], "rb") as f:
print(pickle.load(f))

@mpforce1

So many non-relevant changes in formatting. Too much time would be needed to review all this.

@Armi1014 maybe you could remove your big formatting commit, that'd make it easier to review?

@saabia

saabia commented Nov 19, 2025

please do that @Armi1014 👍

@Armi1014
Author

So many non-relevant changes in formatting. Too much time would be needed to review all this.

@Armi1014 maybe you could remove your big formatting commit, that'd make it easier to review?

please do that @Armi1014 👍

Alright I will try, I'm sorry guys.

@saabia

saabia commented Nov 19, 2025

I have a short question: would it be possible for the streak priority, at startup, to load from a logfile (or something similar) the streamers we already got the watch streak for? Then if we restart the miner, it doesn't need to take 5 minutes per stream again, going through all streamers to check whether a watch streak is available. If I'm right, at startup it only needs to check 1) whether we already have the watch streak recorded in the file, and 2) whether the streamer was offline for more than 30 minutes since the last watch streak. Would save a ton of time if someone follows many streamers :)

@Armi1014
Author

would it be possible for the streak priority at startup to load, from a logfile, the streamers we already got the watch streak for, so a restart doesn't need to re-check every streamer? […]

I'll look into it!

@saabia

saabia commented Nov 19, 2025

would it be possible for the streak priority at startup to load, from a logfile, the streamers we already got the watch streak for […]?

I'll look into it!

Sounds amazing! Also, could you merge @mpforce1's last PR, if that's fine with him of course? Because I still get errors on your version:

(screenshot of the errors)

@Ewcounless

This is a very interesting thread.
You're discussing issues with running a single script, but I noticed that the list of errors also includes those I encounter when running multiple scripts.

I didn't understand everything written in this thread, but I'm hopeful it might help with my issues.

mpforce1, as someone who's seen my threads and errors, you might remember them. Could you tell me if there's a chance some of my problems could be solved in the same way?

@Armi1014
Author

This is a very interesting thread. You're discussing issues with running a single script, but I noticed that the list of errors also includes those I encounter when running multiple scripts.

I didn't understand everything written in this thread, but I'm hopeful it might help with my issues.

mpforce1, as someone who's seen my threads and errors, you might remember them. Could you tell me if there's a chance some of my problems could be solved in the same way?

Some of the GQL and error-handling changes here should help with issues that happen across multiple instances as well, especially around partial responses and timeouts. It’s mainly focused on startup, priority behaviour, and safer handling of Twitch errors, so it might not fix everything you’re seeing, but it should reduce some of the error spam and random crashes. Once (or if) this gets merged, your setup would be a good stress test for those changes.

@saabia

saabia commented Nov 22, 2025

Cache for watch streaks doesn't seem to work; I sent everything (logs, screenshots, my watch_streak_cache.json file) in another thread, so only you get the link to the files, via mail.

Everything else is working perfectly. Streamer loading is faster than it already was (this is amazing xD), nearly no errors at all, just the already-known websocket reconnects (often on a user with many streamers, not so often on a user with only 25 streamers). And a new error comes up since the last update:
(screenshot)

But for these 2 errors I have no log file, since the log file for the 25-streamer account already takes 3 MB for only 20 minutes. As I already said, I'm not so good with log files; that's why I made my own thread on your site so you're the only one getting the files.

@mpforce1

But for these 2 errors I have no log file, since the log file for the 25-streamer account already takes 3 MB for only 20 minutes. As I already said, I'm not so good with log files; that's why I made my own thread on your site so you're the only one getting the files.

The new change adds some error checking to the GQL requests, in particular you're now seeing essentially the same error as before but with more information. The information this time is service timeout, I'm assuming on user (image worked for a second but has disappeared now 😆, working off of memory).

I've talked about this elsewhere but basically this means Twitch's servers are timing out internally. This is not something we can fix, however we could implement a retry system to mitigate the issue. As I've also mentioned elsewhere, I'm working on a proper integration layer for GQL requests that should automatically handle this situation, but for now this is a great improvement.
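
A retry system like the one described here could be sketched as a small wrapper with jittered exponential backoff (purely illustrative; none of these names exist in the codebase, and what counts as a transient error is an assumption):

```python
import random
import time


def with_retries(request_fn, is_transient, attempts=3, base_delay=1.0):
    """Call `request_fn` up to `attempts` times, retrying while
    `is_transient(response)` says the failure is server-side and temporary
    (e.g. a Twitch "service timeout"). Backs off exponentially with jitter
    between attempts; the last response is returned even if still transient.
    """
    for attempt in range(attempts):
        response = request_fn()
        if not is_transient(response):
            return response
        if attempt < attempts - 1:
            # 2**attempt doubling, scaled by random jitter in [0.5, 1.0)
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))
    return response
```

Returning the last response (instead of raising) lets the existing non-fatal error handling in this PR decide what to do when Twitch stays unavailable.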

@saabia

saabia commented Nov 22, 2025

Yeah, I already figured out the big error message hasn't come up since yesterday at all 👍
Also, the websocket reconnecting has to be a Twitch issue, because it's mostly in the evening/night and in the morning at ~8-9; after that it doesn't come up for the rest of the day, so at most once during the daytime. The only annoying thing is, if you've got ~40 sockets and every one is reconnecting, it spams your console. Maybe we could remove this message from the log.info level entirely? Because the miner is still working well and nobody cares about the reconnect message, I guess? haha

@mpforce1

Yeah, I already figured out the big error message hasn't come up since yesterday at all 👍 […] maybe we could remove this message from the log.info level entirely?

Yeah, I have some theories about why they drop WS connections but nothing confirmed. I've actually reduced the logging in the Hermes websocket integration for this reason, people don't really need to know everything that's going on under the hood. Eventually we'll have to swap to Hermes so it's probably not worth it to change the PubSub integration now.

@saabia

saabia commented Nov 22, 2025

OK, I'm out when it comes to the Hermes/PubSub integration. In this case it's over my head haha

@Armi1014
Author

Quick update on the current state of this branch:

  • Watch streak cache is now confirmed working on both a small (~25) and a large (~650) streamer setup (thanks again to @saabia for the logs).
  • On restart, recent WATCH_STREAK claims are now read from watch_streak_cache.json and STREAK checks are skipped within the TTL, instead of re-running the 5-minute check for every streamer.
  • Startup is still significantly faster than 2.0.4, and online/offline detection behaves like the original miner again, even when Twitch occasionally returns service timeout or partial GQL responses.
  • Websocket reconnect logs are mostly due to Twitch load; the miner recovers and keeps farming as expected. I’ve kept the logging as-is for now, since @mpforce1 mentioned Hermes integration will likely change this later anyway.

From my side this PR should now be feature-complete and stable.
If you spot anything odd with priorities, streak caching, or online detection, I’m happy to tweak the branch again.
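
For reference, the watch streak cache described above can be sketched as a small, lock-guarded JSON store (class and method names here are hypothetical; only the file name watch_streak_cache.json comes from the PR):

```python
import json
import threading
import time
from pathlib import Path


class WatchStreakCache:
    """Thread-safe, file-backed record of recent WATCH_STREAK claims."""

    def __init__(self, path="watch_streak_cache.json", ttl=30 * 60):
        self._path = Path(path)
        self._ttl = ttl
        self._lock = threading.Lock()
        try:
            # Survive restarts by loading previously recorded claims.
            self._claims = json.loads(self._path.read_text())
        except (OSError, ValueError):
            self._claims = {}

    def mark_claimed(self, streamer: str) -> None:
        with self._lock:
            self._claims[streamer] = time.time()
            self._path.write_text(json.dumps(self._claims))

    def recently_claimed(self, streamer: str) -> bool:
        with self._lock:
            ts = self._claims.get(streamer)
            return ts is not None and time.time() - ts < self._ttl
```

On startup, the miner would call recently_claimed before running the expensive per-streamer streak check, skipping it while the claim is still within the TTL.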

@Gamerns10s

Guys can we also remember that the drop progress bar also never prints (it's the only fix I am waiting for cuz I couldn't fix it myself) 😅
If anyone of you guys have some knowledge about it and would be able to make a fix for the same then it would be really great

@Gamerns10s

I just have a silly doubt; if anyone could help, it would be great.
So I wasn't around coding for a month, but now I updated my miner according to this PR. What's the priority order? I defined the streamer order as Order > Drop > Streak, but I couldn't understand how this priority order works and how I can set it to something specific 😅

@mpforce1

I just have a silly doubt; if anyone could help, it would be great. […] I defined the streamer order as Order > Drop > Streak, but I couldn't understand how this priority order works and how I can set it to something specific 😅

@Gamerns10s

The priority can be either a single Priority like Priority.ORDER or it can be a list of Priority like [Priority.STREAK, Priority.DROPS, Priority.POINTS_ASCENDING]. In the first case, the miner will prioritise streamers in the order they appear in your run file. In the second case, the miner will first attempt to claim any watch streaks, then it'll attempt to watch streams with an active drops campaign (and where you haven't already claimed the drops), and finally it'll prioritise watching streamers with the lowest channel points first.

You can set priority to whatever you want, this PR fixes a bug (my fault fixing a similar bug 😛) where sometimes only 1 streamer would be mined, so it's not really related to the priority configuration. There are also some other improvements to startup time and such, you can read about them in the thread.

@Gamerns10s

I somewhat got it, but then: I have defined the priority to be Order > Drop > Streak, yet the miner, when starting, lists the streamers in a different order than the runner file.
(screenshots)

@mpforce1

I somewhat got it, but then: I have defined the priority to be Order > Drop > Streak, yet the miner, when starting, lists the streamers in a different order than the runner file.

The order it loads streamers now is not necessarily the order defined in the run file. Loading the streamers is part of the miner initialisation, it's not indicative of the watch order.

Also, from what you said your priority is [ORDER, DROPS, STREAK], which can't be right because it evaluates priorities from left to right and ORDER will check every streamer meaning the other 2 priorities don't get checked. Maybe I'm misunderstanding and you mean the other way around.

@mpforce1 mpforce1 mentioned this pull request Nov 23, 2025
@Gamerns10s

Ohh alright, I got it now, thanks 😄
Btw these are my priorities:
(screenshot)

@mpforce1

Ohh alr I got it now Thanks 😄 Btw These are my priorities

No problem. However, that priority list is backwards. You want it the other way around i.e.

priority=[
    Priority.STREAK,
    Priority.DROPS,
    Priority.ORDER
]

If you put ORDER (or POINTS_ASCENDING/POINTS_DESCENDING) first then they supersede the other priorities just because of how they work.

If you want it to only look at the order then you can do

priority=Priority.ORDER

Pretty sure that works without the brackets.

@Gamerns10s

Oh okayyy, I didn't know that lol 😅

@saabia

saabia commented Nov 23, 2025

I’m pretty sure the priority system is explained in the README, but here’s how I personally use it (I mainly focus on collecting points):
Streak:
At startup, the tool goes through all streamers and checks — now using a logfile — whether we already have an unclaimed watch streak since the last time the streamer was online. If yes, it immediately claims it (each time +450 points, once a 5-watch streak has been reached for that streamer).
Subscribed:
I like to enable this to get the most out of subscriptions: 12/60 instead of 10/50 without a sub.
Drops:
(Currently testing an unreleased version.)
I mainly use Drops to make the behavior look less obvious, since it tends to “mine” the accounts with the lowest points.
Points_Ascending:
This prioritizes the streamer with the lowest point total.

So mostly it's running with Points_Ascending, but if a subscribed account comes online it switches one watching slot to the subscribed streamer; same when an account with drops available comes online.

As @mpforce1 mentioned, you can combine the priorities in any order you like. However, it doesn't make sense to put Points_Ascending, Points_Descending, or ORDER at the top, because they will override all other priorities.

@Gamerns10s

Ahh, I got this in around 30-45 mins of running the miner.
FYI: can't provide debug logs as my VPS doesn't have root 🥲 + my script is based on @mpforce1's hermes-api version.
(screenshot)

@Armi1014
Author

Ahh, I got this in around 30-45 mins of running the miner. FYI: can't provide debug logs as my VPS doesn't have root 🥲 + my script is based on @mpforce1's hermes-api version.

Thanks a lot for testing and for the screenshot!
Just to clarify, are you running this PR branch directly, or a fork based on @mpforce1’s hermes-api version with additional changes? That makes a big difference for reproducing it.
From the screenshot it looks like the miner is repeatedly seeing the stream go offline/online, but without debug logs it’s hard to see where in the code this is coming from.
If you can share the exact branch/commit you are on and your priority setup (roughly how many streamers and which priorities you use), I can try to reproduce the issue on my side.

@Gamerns10s

I have a repo which has some personal changes + @mpforce1's hermes-api addition + the changes in this PR, so if you want I could give you access to my repo.
But when I tried to reproduce it myself, I came across a few other ones:

(screenshots)

Also, I have defined 17 streamers based on the Order priority.

@mpforce1

I have a repo which has some personal changes + @mpforce1's hermes-api addition + the changes in this PR, so if you want I could give you access to my repo. But when I tried to reproduce it myself, I came across a few other ones.

@Gamerns10s

context deadline exceeded is a new one for me. I can't really guess what that means.

Bad file descriptor might be related to an old issue #250, which is one of the first issues I looked at years ago. I don't think my proposed fix ever got in but I also can't be sure that a) it would actually fix the issue or b) it would fix your issue. At any rate, that error is probably coming from the IRC integration code. (oh, you and I talked about this a little in the Hermes thread too #728 (comment), you implied that you thought it might be related to changes you'd made)

@Gamerns10s

Yeah, it could be possible that it was an error on my end, but now, I don't know how, both of my slots are being occupied by the same streamer, even though another one is streaming and his chat is joined 😭. The streamer who is filling both slots has a Tier 1 sub whereas the other one doesn't, and I am on Order priority (tried restarting 3 times).
Again, no debug log, because I am on a VPS without root access 🥲

@mpforce1

@Gamerns10s I think this is getting off topic, might be best for a discussion instead since you're running a custom branch?

@saabia

saabia commented Nov 25, 2025

Yeah, it could be possible that it was an error on my end, but now […] both of my slots are being occupied by the same streamer, even though another one is streaming […] and I am on Order priority (tried restarting 3 times).

It doesn't look like you're using this version at all, since with this version we tested every priority and it always filled both slots, whether there was a subscribed account, a drops account, ...

Just download the whole script again here: https://github.com/Armi1014/Twitch-Channel-Points-Miner-v2/tree/master
Remove everything except the cookies folder, the logs folder, and run.py.

And start the run file normally.

I'm pretty sure it will work 👍

@Gamerns10s

Managed to fix that issue 😅
But can anyone here confirm whether the TOTAL_USERS filter condition for predictions is working properly? For me it's not picking up the correct number of viewers.

@Gamerns10s Gamerns10s mentioned this pull request Nov 29, 2025
@brunoshure

brunoshure commented Dec 7, 2025

Tested this branch and all seems to be working mostly fine. Really sped up the starting process. I haven't encountered any fatal crashes yet, just minor websocket errors (which seems safe to ignore):

07/12 12:20:02 - #40 - WebSocket error: fin=1 opcode=8 data=b'\x10\x04ping pong failed'
07/12 12:20:02 - fin=1 opcode=8 data=b'\x10\x04ping pong failed' - goodbye

From the screenshot it looks like the miner is repeatedly seeing the stream go offline/online, but without debug logs it’s hard to see where in the code this is coming from.

This generally happens when a streamer is having connection issues, so they go ONLINE/OFFLINE multiple times in a few minutes. This miner tracks OFFLINE faster than ONLINE so it causes this "issue" of multiple offlines showing up in the terminal. I would encourage anyone having this issue to go to the streamers channel directly and check it out. Go to the Past Broadcast section and see the stream cut into multiple smaller streams.


EDIT: Not even kidding but right after I posted this comment one of my watched streams started having this problem:

07/12 13:15:22 - 😴  lukedalelive (42.99k points) is Offline!
07/12 13:15:52 - 😴  lukedalelive (42.99k points) is Offline!
07/12 13:16:25 - 😴  lukedalelive (42.99k points) is Offline!
07/12 13:16:52 - 🥳  lukedalelive (42.99k points) is Online!

And would you look at that:

(screenshot)
