test: better checks for trigger removal in feature_governance_cl.py #7051
base: develop
feature_governance_cl.py

```diff
@@ -142,16 +142,32 @@ def run_test(self):
         self.bump_mocktime(156)
         self.generate(self.nodes[0], 1, sync_fun=lambda: self.sync_blocks(self.nodes[0:5]))
 
+        self.log.info("Wait for governance module to catch up with block updates")
+        tip_height = self.nodes[0].getblockcount()
+        expected_msg = f'CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight: {tip_height}'
+
+        def governance_tip_updated(node):
+            with open(node.debug_log_path, encoding='utf-8') as dl:
+                seek_pos = node.debug_log_bytes() - 100 * 1024 # read the last 100 KiB only
+                dl.seek(seek_pos if seek_pos > 0 else 0)
+                debug_log_part = dl.read()
+            return expected_msg in debug_log_part
+
+        for node in self.nodes[0:5]:
+            self.wait_until(lambda node=node: governance_tip_updated(node))
+
         self.log.info("Bump mocktime to trigger governance cleanup")
         for delta, expected in (
-            (5 * 60, ['UpdateCachesAndClean -- Governance Objects:']), # mark old triggers for deletion
+            (5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']), # mark old triggers for deletion
             (10 * 60, ['UpdateCachesAndClean -- Governance Objects: 0']), # deletion after delay
         ):
             self.mocktime += delta
-            for node in self.nodes:
+            for node in self.nodes[0:5]:
                 with node.assert_debug_log(expected_msgs=expected):
                     node.setmocktime(self.mocktime)
                     node.mockscheduler(delta)
+            self.nodes[5].setmocktime(self.mocktime)
+            self.nodes[5].mockscheduler(delta)
```
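The added wait is a tail-and-poll over each node's debug log: seek to near the end of the file, read the remainder, and retry until the expected line shows up. Below is a minimal standalone sketch of the same pattern, assuming a plain log file path and a local polling loop in place of the test framework's node object and `wait_until` helper; the names `tail_contains` and `wait_for_log_line` are illustrative, not framework APIs.

```python
import os
import time


def tail_contains(log_path, expected_msg, tail_bytes=100 * 1024):
    """Return True if expected_msg appears in the last tail_bytes of log_path."""
    with open(log_path, encoding='utf-8', errors='replace') as dl:
        seek_pos = os.path.getsize(log_path) - tail_bytes  # read the tail only
        dl.seek(max(seek_pos, 0))
        return expected_msg in dl.read()


def wait_for_log_line(log_path, expected_msg, timeout=60, poll_interval=0.5):
    """Poll the log tail until the message appears or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if tail_contains(log_path, expected_msg):
            return
        time.sleep(poll_interval)
    raise AssertionError(f"{expected_msg!r} not found in {log_path} within {timeout}s")
```

Reading only the last 100 KiB keeps the check cheap even after the debug log has grown large over the test run, and `errors='replace'` guards against the seek landing in the middle of a multi-byte UTF-8 character.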
Comment on lines 159 to +170
The 5-minute expectation on the `CleanAndRemoveTriggers` log may be brittle. The new assertion:

```python
(5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']), # mark old triggers for deletion
```

assumes that the first cleanup pass after the additional block + 5-minute mocktime jump will always hit the `Removing trigger object` branch and never logs only the generic `UpdateCachesAndClean` output.

The node-5 handling here looks good: you still run `setmocktime` and `mockscheduler` on `self.nodes[5]`, just outside the `assert_debug_log` check.

To make the test robust while keeping your new synchronization and node-range changes, I'd recommend relaxing the 5-minute expectation back to a more generic cleanup log (as before), and relying on the 10-minute `UpdateCachesAndClean -- Governance Objects: 0` assertion for the actual removal:

```diff
-        for delta, expected in (
-            (5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']), # mark old triggers for deletion
-            (10 * 60, ['UpdateCachesAndClean -- Governance Objects: 0']), # deletion after delay
-        ):
+        for delta, expected in (
+            # First run: ensure governance cleanup runs; we only care that UpdateCachesAndClean fired.
+            (5 * 60, ['UpdateCachesAndClean -- Governance Objects:']), # mark old triggers for deletion
+            (10 * 60, ['UpdateCachesAndClean -- Governance Objects: 0']), # deletion after delay
+        ):
```

This keeps the log-based signal that cleanup ran during the first pass, avoids depending on the precise "Removing trigger object" branch, and should eliminate the intermittent failures you're seeing.

Author

Depending on …

@UdjinM6, thank you for the clarification! I now understand the intent. The synchronization step you added (waiting for the `CGovernanceManager::UpdatedBlockTip` message on each node) ensures governance has caught up with the new block before the cleanup assertions run. The approach makes sense: fix the race condition with proper synchronization, then verify the fix with a strict assertion. My concern was misplaced because I didn't account for the fact that the new synchronization makes the first cleanup pass deterministic.
```diff
 
         # Confirm in RPC
         for node in self.nodes:
```
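The body of the RPC confirmation is truncated in this view. Purely as an illustration of what a trigger-count check of that shape could look like, assuming Dash Core's `gobject list <signal> <type>` RPC (which returns a dict of governance objects keyed by hash), and not taken from the PR itself:

```python
from test_framework.util import assert_equal


def assert_no_triggers(nodes):
    # Illustration only, not the PR's code: ask each node for currently
    # valid trigger objects and expect the cleanup to have removed them all.
    for node in nodes:
        assert_equal(node.gobject("list", "valid", "triggers"), {})
```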