test: better checks for trigger removal in feature_governance_cl.py #7051
base: develop
Conversation
Check for the actual "CleanAndRemoveTriggers -- Removing trigger object" log message instead of the generic "UpdateCachesAndClean" message. Exclude isolated node 5 from debug log assertions since it doesn't receive governance updates during this test phase.
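A minimal sketch of the resulting assertion loop, reconstructed from the diff discussed further down this page (indentation and surrounding context are approximate, not the verbatim final code):

        self.log.info("Bump mocktime to trigger governance cleanup")
        for delta, expected in (
            # tightened check: the trigger must actually be removed, not just cleaned generically
            (5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']),  # mark old triggers for deletion
            (10 * 60, ['UpdateCachesAndClean -- Governance Objects: 0']),  # deletion after delay
        ):
            self.mocktime += delta
            # only non-isolated nodes are expected to log the messages above
            for node in self.nodes[0:5]:
                with node.assert_debug_log(expected_msgs=expected):
                    node.setmocktime(self.mocktime)
                    node.mockscheduler(delta)
            # isolated node 5: still advance time and run the scheduler, but don't assert on its log
            self.nodes[5].setmocktime(self.mocktime)
            self.nodes[5].mockscheduler(delta)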
Walkthrough
The PR updates test/functional/feature_governance_cl.py: it adds a more specific trigger-removal log expectation ("CleanAndRemoveTriggers -- Removing trigger object") for governance cleanup and excludes the isolated node 5 from the debug log assertions.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 0
🧹 Nitpick comments (1)
test/functional/feature_governance_cl.py (1)
145-156: Log expectation change and isolated-node handling look correct; consider clarifying the comment.

Using ['CleanAndRemoveTriggers -- Removing trigger object'] for the 5-minute delta aligns better with the intent of “mark old triggers for deletion”, and restricting assert_debug_log to self.nodes[0:5] while still calling setmocktime/mockscheduler on self.nodes[5] keeps the scheduler-based cleanup running on the isolated node without assuming it has the same governance state or logs. This matches the design that mockscheduler runs regardless of network isolation, while avoiding flaky expectations on node 5’s debug log.

To make the special handling of node 5 obvious to future readers, you could add a brief comment:

-            self.nodes[5].setmocktime(self.mocktime)
-            self.nodes[5].mockscheduler(delta)
+            # Isolated node: still advance scheduler so governance cleanup runs,
+            # but its state/logs differ, so we don't assert on its debug log here.
+            self.nodes[5].setmocktime(self.mocktime)
+            self.nodes[5].mockscheduler(delta)

Based on learnings, this preserves scheduler behavior on isolated nodes while tightening the log checks.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/functional/feature_governance_cl.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
test/functional/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Functional tests in test/functional/ must be written in Python (minimum version specified in .python-version) and depend on dashd and dash-node
Files:
test/functional/feature_governance_cl.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
Learnt from: kwvg
Repo: dashpay/dash PR: 6761
File: src/chainlock/signing.cpp:247-250
Timestamp: 2025-07-29T14:32:48.369Z
Learning: In PR #6761, kwvg acknowledged a null pointer check issue in ChainLockSigner::Cleanup() method but deferred it to follow-up, consistent with the pattern of avoiding scope creep in refactoring PRs.
📚 Learning: 2025-10-28T08:54:00.392Z
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6926
File: test/functional/feature_governance_cl.py:0-0
Timestamp: 2025-10-28T08:54:00.392Z
Learning: In Dash tests, the scheduler (mockscheduler) operates independently of network state. Isolated nodes should still run scheduler-based cleanup processes like governance cleanup, even if they have different state due to network isolation.
Applied to files:
test/functional/feature_governance_cl.py
🧬 Code graph analysis (1)
test/functional/feature_governance_cl.py (1)
test/functional/test_framework/test_node.py (1)
assert_debug_log(444-472)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: linux64_tsan-build / Build source
- GitHub Check: mac-build / Build source
- GitHub Check: linux64_nowallet-build / Build source
- GitHub Check: arm-linux-build / Build source
- GitHub Check: linux64-build / Build source
- GitHub Check: linux64_sqlite-build / Build source
- GitHub Check: linux64_ubsan-build / Build source
- GitHub Check: linux64_fuzz-build / Build source
- GitHub Check: win64-build / Build source
- GitHub Check: Lint / Run linters
          self.log.info("Bump mocktime to trigger governance cleanup")
          for delta, expected in (
-             (5 * 60, ['UpdateCachesAndClean -- Governance Objects:']), # mark old triggers for deletion
+             (5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']), # mark old triggers for deletion
still fails for me... Same (or similar) error and failure rate.
16/18 - feature_governance_cl.py --legacy-wallet failed, Duration: 30 s
stdout:
2025-12-08T20:58:39.765000Z TestFramework (INFO): PRNG seed is: 5872045118807736912
2025-12-08T20:58:39.766000Z TestFramework (INFO): Initializing test directory /tmp/test_runner_∋_🏃_20251209_035833/feature_governance_cl_3
2025-12-08T20:58:39.766000Z TestFramework (INFO): Creating and starting controller node
2025-12-08T20:58:39.767000Z TestFramework (INFO): Creating and starting 1 simple nodes
2025-12-08T20:58:40.046000Z TestFramework (INFO): Generating 5100 coins
2025-12-08T20:58:40.299000Z TestFramework (INFO): Preparing 5 masternodes
2025-12-08T20:58:40.344000Z TestFramework (INFO): Prepared MN 0: collateral_txid=82bc93bf89349708aac16768e7ebe5330f2cf031e5367e535d15b06fc9a31f31, collateral_vout=0, protxHash=aaa9c67e2cc8f8b4b6510ced5a3cdf0913462f752d886d9fccffb4c673ea21f7
2025-12-08T20:58:40.416000Z TestFramework (INFO): Prepared MN 1: collateral_txid=535c3caafe10740e9ae68967f12c6f68392978591f5ebc9b5918b192125f43bd, collateral_vout=1, protxHash=5d7401f5da9847c65e8cba3d7ee5fce016c074dfe8eb33ce073c3a7beee9a44e
2025-12-08T20:58:40.474000Z TestFramework (INFO): Prepared MN 2: collateral_txid=75099402abd1b17588cff28157a04371ad4e30d82ab2c1dd2bab2a319e723833, collateral_vout=0, protxHash=ff86499e81b11086e1887f6c5d89213a3d78e75dd49f1e805528694abe51e8e9
2025-12-08T20:58:40.554000Z TestFramework (INFO): Prepared MN 3: collateral_txid=538a0e51cf42e406f9a9a0fe578a7f84f0542e984f4144bf6723966c789a6c6f, collateral_vout=1, protxHash=d6d623cac961c02049221457b437e33b9dba333330b4b2f2dc5ddf2a49ead836
2025-12-08T20:58:40.629000Z TestFramework (INFO): Prepared MN 4: collateral_txid=8280397065b57cd2eb7ed852333c763b2c0634eb2bdd08eb9d2fd27b45ad328c, collateral_vout=0, protxHash=ae3c413bd9be0db651137bd70a608e11c82b13612c5e11ef54dc91fe194cc25b
2025-12-08T20:58:41.559000Z TestFramework (INFO): Starting 5 masternodes
2025-12-08T20:58:45.270000Z TestFramework (INFO): Make sure ChainLocks are active
2025-12-08T20:58:46.296000Z TestFramework (INFO): Mining quorum: expected_members=4, expected_connections=2, expected_contributions=4, expected_commitments=4, no complains and justfications expected
2025-12-08T20:58:46.709000Z TestFramework (INFO): Moved from block 134 to 216
2025-12-08T20:58:46.710000Z TestFramework (INFO): Expected quorum_0 at:216
2025-12-08T20:58:46.710000Z TestFramework (INFO): Expected quorum_0 hash:0e958d2d231089f89a6259827615a1b3694c4d3711d6f5466ad9cbc8b4186e3d
2025-12-08T20:58:46.710000Z TestFramework (INFO): quorumIndex 0: Waiting for phase 1 (init)
2025-12-08T20:58:47.218000Z TestFramework (INFO): quorumIndex 0: Waiting for quorum connections (init)
2025-12-08T20:58:48.233000Z TestFramework (INFO): Expected quorum_1 at:217
2025-12-08T20:58:48.234000Z TestFramework (INFO): Expected quorum_1 hash:515e1a1ef5e6251ef6a0bee35ff588a0dd7c026ee2c7d013fa62dbbbb53fff34
2025-12-08T20:58:48.234000Z TestFramework (INFO): quorumIndex 1: Waiting for phase 1 (init)
2025-12-08T20:58:48.739000Z TestFramework (INFO): quorumIndex 1: Waiting for quorum connections (init)
2025-12-08T20:58:48.747000Z TestFramework (INFO): quorumIndex 0: Waiting for phase 2 (contribute)
2025-12-08T20:58:49.259000Z TestFramework (INFO): quorumIndex 1: Waiting for phase 2 (contribute)
2025-12-08T20:58:50.272000Z TestFramework (INFO): quorumIndex 0: Waiting for phase 3 (complain)
2025-12-08T20:58:50.783000Z TestFramework (INFO): quorumIndex 1: Waiting for phase 3 (complain)
2025-12-08T20:58:51.294000Z TestFramework (INFO): quorumIndex 0: Waiting for phase 4 (justify)
2025-12-08T20:58:51.805000Z TestFramework (INFO): quorumIndex 1: Waiting for phase 4 (justify)
2025-12-08T20:58:52.315000Z TestFramework (INFO): quorumIndex 0: Waiting for phase 5 (commit)
2025-12-08T20:58:52.827000Z TestFramework (INFO): quorumIndex 1: Waiting for phase 5 (commit)
2025-12-08T20:58:54.846000Z TestFramework (INFO): quorumIndex 0: Waiting for phase 6 (finalization)
2025-12-08T20:58:54.940000Z TestFramework (INFO): quorumIndex 1: Waiting for phase 6 (finalization)
2025-12-08T20:58:55.444000Z TestFramework (INFO): Mining final commitments
2025-12-08T20:58:55.547000Z TestFramework (INFO): Waiting for quorum(s) to appear in the list
2025-12-08T20:58:55.548000Z TestFramework (INFO): h(228) quorums: {'llmq_test': ['0e958d2d231089f89a6259827615a1b3694c4d3711d6f5466ad9cbc8b4186e3d'], 'llmq_test_dip0024': ['0e958d2d231089f89a6259827615a1b3694c4d3711d6f5466ad9cbc8b4186e3d', '515e1a1ef5e6251ef6a0bee35ff588a0dd7c026ee2c7d013fa62dbbbb53fff34'], 'llmq_test_platform': []}
2025-12-08T20:58:56.601000Z TestFramework (INFO): New quorum: height=216, quorumHash=0e958d2d231089f89a6259827615a1b3694c4d3711d6f5466ad9cbc8b4186e3d, quorumIndex=0, minedBlock=4ea6acb820635e95c34ecda699cfddfd001d4059dea79e48f2c3ff14997acdf5
2025-12-08T20:58:56.601000Z TestFramework (INFO): New quorum: height=217, quorumHash=515e1a1ef5e6251ef6a0bee35ff588a0dd7c026ee2c7d013fa62dbbbb53fff34, quorumIndex=1, minedBlock=4ea6acb820635e95c34ecda699cfddfd001d4059dea79e48f2c3ff14997acdf5
2025-12-08T20:58:56.603000Z TestFramework (INFO): Expecting ChainLock for 5f3c7069b25adc6803b35aee26574668eb2b1eb77a14ff487a14dd289a15fec7
2025-12-08T20:58:56.604000Z TestFramework (INFO): Expecting ChainLock for 5f3c7069b25adc6803b35aee26574668eb2b1eb77a14ff487a14dd289a15fec7
2025-12-08T20:58:56.604000Z TestFramework (INFO): Expecting ChainLock for 5f3c7069b25adc6803b35aee26574668eb2b1eb77a14ff487a14dd289a15fec7
2025-12-08T20:58:56.605000Z TestFramework (INFO): Expecting ChainLock for 5f3c7069b25adc6803b35aee26574668eb2b1eb77a14ff487a14dd289a15fec7
2025-12-08T20:58:56.605000Z TestFramework (INFO): Expecting ChainLock for 5f3c7069b25adc6803b35aee26574668eb2b1eb77a14ff487a14dd289a15fec7
2025-12-08T20:58:56.606000Z TestFramework (INFO): Expecting ChainLock for 5f3c7069b25adc6803b35aee26574668eb2b1eb77a14ff487a14dd289a15fec7
2025-12-08T20:58:57.643000Z TestFramework (INFO): Prepare proposals
2025-12-08T20:58:58.743000Z TestFramework (INFO): Submit proposals
2025-12-08T20:58:58.744000Z TestFramework (INFO): Isolate a node so that it would miss votes and triggers
2025-12-08T20:58:59.250000Z TestFramework (INFO): Vote proposals
2025-12-08T20:58:59.256000Z TestFramework (INFO): Move into sb maturity window
2025-12-08T20:58:59.294000Z TestFramework (INFO): Wait for new trigger and votes on non-isolated nodes
2025-12-08T20:59:00.454000Z TestFramework (INFO): Move remaining n blocks until the next Superblock
2025-12-08T20:59:01.548000Z TestFramework (INFO): Mine superblock
2025-12-08T20:59:02.568000Z TestFramework (INFO): Expecting ChainLock for 04de2992f72e31f3f25ec29fa2a4e59a83abc3299cf53c031f7e3aa0a8f984a0
2025-12-08T20:59:02.569000Z TestFramework (INFO): Mine (superblock cycle + 1) blocks on non-isolated nodes to forget about this trigger
2025-12-08T20:59:04.883000Z TestFramework (INFO): Bump mocktime to trigger governance cleanup
2025-12-08T20:59:06.895000Z TestFramework (ERROR): Assertion failed
Traceback (most recent call last):
File "/home/knst/projects/dash-v21/test/functional/test_framework/test_framework.py", line 163, in main
self.run_test()
~~~~~~~~~~~~~^^
File "/home/knst/projects/dash-v21/test/functional/feature_governance_cl.py", line 152, in run_test
with node.assert_debug_log(expected_msgs=expected):
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/contextlib.py", line 148, in __exit__
next(self.gen)
~~~~^^^^^^^^^^
File "/home/knst/projects/dash-v21/test/functional/test_framework/test_node.py", line 472, in assert_debug_log
self._raise_assertion_error('Expected messages "{}" does not partially match log:\n\n{}\n\n'.format(str(expected_msgs), print_log))
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/knst/projects/dash-v21/test/functional/test_framework/test_node.py", line 202, in _raise_assertion_error
raise AssertionError(self._node_msg(msg))
AssertionError: [node 0] Expected messages "['CleanAndRemoveTriggers -- Removing trigger object']" does not partially match log:
- 2025-12-08T20:59:04.883375Z (mocktime: 1417720063) [ http] [httpserver.cpp:250] [http_request_cb] [http] Received a POST request for / from 127.0.0.1:44258
- 2025-12-08T20:59:04.883409Z (mocktime: 1417720063) [httpworker.3] [rpc/request.cpp:180] [parse] [rpc] ThreadRPCServer method=setmocktime user=__cookie__
- 2025-12-08T20:59:04.883434Z (mocktime: 1417720363) [httpworker.3] [httprpc.cpp:92] [~RpcHttpRequest] [bench] HTTP RPC request handled: user=__cookie__ command=setmocktime external=false status=200 elapsed_time_ms=0
- 2025-12-08T20:59:04.883622Z (mocktime: 1417720363) [ http] [httpserver.cpp:250] [http_request_cb] [http] Received a POST request for / from 127.0.0.1:44258
- 2025-12-08T20:59:04.883651Z (mocktime: 1417720363) [httpworker.0] [rpc/request.cpp:180] [parse] [rpc] ThreadRPCServer method=mockscheduler user=__cookie__
- 2025-12-08T20:59:04.883682Z (mocktime: 1417720363) [httpworker.0] [httprpc.cpp:92] [~RpcHttpRequest] [bench] HTTP RPC request handled: user=__cookie__ command=mockscheduler external=false status=200 elapsed_time_ms=0
- 2025-12-08T20:59:04.884260Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1393] [RequestOrphanObjects] [gobject] CGovernanceObject::RequestOrphanObjects -- number objects = 0
- 2025-12-08T20:59:04.884265Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:470] [CheckAndRemove] [gobject] CGovernanceManager::UpdateCachesAndClean
- 2025-12-08T20:59:04.884276Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1519] [CleanAndRemoveTriggers] [gobject] CGovernanceManager::CleanAndRemoveTriggers -- mapTrigger.size() = 1
- 2025-12-08T20:59:04.884280Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1535] [CleanAndRemoveTriggers] [gobject] CGovernanceManager::CleanAndRemoveTriggers -- superblock status = 2
- 2025-12-08T20:59:04.884283Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1545] [CleanAndRemoveTriggers] [gobject] CGovernanceManager::CleanAndRemoveTriggers -- Valid trigger found
- 2025-12-08T20:59:04.884287Z (mocktime: 1417720363) [ scheduler] [governance/classes.cpp:354] [IsExpired] [gobject] CSuperblock::IsExpired -- nBlockHeight = 260, nExpirationBlock = 280
- 2025-12-08T20:59:04.884290Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1557] [CleanAndRemoveTriggers] [gobject] CGovernanceManager::CleanAndRemoveTriggers -- NOT marked for removal
- 2025-12-08T20:59:04.884295Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:510] [CheckAndRemove] [gobject] CGovernanceManager::UpdateCachesAndClean -- Checking object for deletion: 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545, deletion time = 0, time since deletion = 1417720363, delete flag = 0, expired flag = 0
- 2025-12-08T20:59:04.884301Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:575] [CheckAndRemove] [gobject] CGovernanceManager::UpdateCachesAndClean -- Governance Objects: 1 (Proposals: 0, Triggers: 1, Other: 0; Erased: 2), Votes: 4, m_requested_hash_time size=0
- 2025-12-08T20:59:04.884330Z (mocktime: 1417720363) [ scheduler] [validationinterface.cpp:232] [operator()] [validation] BlockConnected: block hash=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081 block height=281
- 2025-12-08T20:59:04.884456Z (mocktime: 1417720363) [ scheduler] [wallet/wallet.h:985] [WalletLogPrintf] [default wallet] AddToWallet 32290848f76583cb739292be4e5c409401780c9fd5edfc0c132567d1bfde1bac new Confirmed (block=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081, height=281, index=0)
- 2025-12-08T20:59:04.884545Z (mocktime: 1417720363) [ scheduler] [validationinterface.cpp:201] [operator()] [validation] UpdatedBlockTip: new block hash=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081 fork block hash=4adfd3d04f33cc338fc074df030a7032b2b449f6d0023e08f0cf1da109df3d00 (in IBD=false)
- 2025-12-08T20:59:04.884551Z (mocktime: 1417720363) [ scheduler] [wallet/wallet.h:985] [WalletLogPrintf] [default wallet] CCoinJoinClientManager::UpdatedBlockTip -- nCachedBlockHeight: 281
- 2025-12-08T20:59:04.884561Z (mocktime: 1417720363) [ scheduler] [masternode/sync.cpp:332] [UpdatedBlockTip] [mnsync] CMasternodeSync::UpdatedBlockTip -- pindexNew->nHeight: 281 fInitialDownload=0
- 2025-12-08T20:59:04.884565Z (mocktime: 1417720363) [ scheduler] [coinjoin/coinjoin.cpp:432] [CheckDSTXes] [coinjoin] CoinJoin::CheckDSTXes -- mapDSTX.size()=0
- 2025-12-08T20:59:04.884575Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1364] [UpdatedBlockTip] [gobject] CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight: 281
- 2025-12-08T20:59:04.884598Z (mocktime: 1417720363) [ msghand] [net.cpp:4760] [PushMessage] [net] sending ping (8 bytes) peer=2
- 2025-12-08T20:59:04.884640Z (mocktime: 1417720363) [ cl-schdlr] [chainlock/chainlock.cpp:348] [EnforceBestChainLock] [chainlocks] CChainLocksHandler::EnforceBestChainLock -- enforcing block 795b5e45eba62afe5dd398ffa3dde461cedd987d89683225c963c567d33c219d via CLSIG (ChainLockSig(nHeight=277, blockHash=795b5e45eba62afe5dd398ffa3dde461cedd987d89683225c963c567d33c219d))
- 2025-12-08T20:59:04.884667Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1149] [RequestGovernanceObjectVotes] [gobject] CGovernanceManager::RequestGovernanceObjectVotes -- start: vTriggerObjHashes 1 vOtherObjHashes 0 mapAskedRecently 3
- 2025-12-08T20:59:04.884675Z (mocktime: 1417720363) [ msghand] [net.cpp:4760] [PushMessage] [net] sending ping (8 bytes) peer=3
- 2025-12-08T20:59:04.884688Z (mocktime: 1417720363) [ scheduler] [logging/timer.h:57] [Log] [lock] Enter: lock contention ::cs_main, governance/governance.cpp:1173 started
- 2025-12-08T20:59:04.884700Z (mocktime: 1417720363) [ msghand] [logging/timer.h:57] [Log] [lock] Enter: lock contention ::cs_main, validation.cpp:6010 started
- 2025-12-08T20:59:04.884706Z (mocktime: 1417720363) [ msghand] [logging/timer.h:57] [Log] [lock] Enter: lock contention ::cs_main, validation.cpp:6010 completed (0μs)
- 2025-12-08T20:59:04.884738Z (mocktime: 1417720363) [ scheduler] [logging/timer.h:57] [Log] [lock] Enter: lock contention ::cs_main, governance/governance.cpp:1173 completed (0μs)
- 2025-12-08T20:59:04.884744Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1057] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 peer=0
- 2025-12-08T20:59:04.884748Z (mocktime: 1417720363) [ msghand] [net.cpp:4760] [PushMessage] [net] sending ping (8 bytes) peer=1
- 2025-12-08T20:59:04.884757Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1078] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 nVoteCount 4 peer=0
- 2025-12-08T20:59:04.884762Z (mocktime: 1417720363) [ scheduler] [net.cpp:4760] [PushMessage] [net] sending govsync (221 bytes) peer=0
- 2025-12-08T20:59:04.884772Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1057] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 peer=1
- 2025-12-08T20:59:04.884824Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1078] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 nVoteCount 4 peer=1
- 2025-12-08T20:59:04.884831Z (mocktime: 1417720363) [ scheduler] [net.cpp:4760] [PushMessage] [net] sending govsync (221 bytes) peer=1
- 2025-12-08T20:59:04.884844Z (mocktime: 1417720363) [ msghand] [net.cpp:4760] [PushMessage] [net] sending ping (8 bytes) peer=0
- 2025-12-08T20:59:04.884854Z (mocktime: 1417720363) [ net] [logging/timer.h:57] [Log] [lock] Enter: lock contention pnode->cs_vSend, net.cpp:2457 started
- 2025-12-08T20:59:04.884865Z (mocktime: 1417720363) [ net] [logging/timer.h:57] [Log] [lock] Enter: lock contention pnode->cs_vSend, net.cpp:2457 completed (0μs)
- 2025-12-08T20:59:04.884877Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1057] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 peer=2
- 2025-12-08T20:59:04.884886Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1078] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 nVoteCount 4 peer=2
- 2025-12-08T20:59:04.884890Z (mocktime: 1417720363) [ scheduler] [net.cpp:4760] [PushMessage] [net] sending govsync (221 bytes) peer=2
- 2025-12-08T20:59:04.884896Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1057] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 peer=3
- 2025-12-08T20:59:04.884904Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1078] [RequestGovernanceObject] [gobject] CGovernanceManager::RequestGovernanceObject -- nHash 2abeabcfe6763edd31dd79a37810b8d3e232784280eafa0245cfa74c24866545 nVoteCount 4 peer=3
- 2025-12-08T20:59:04.884910Z (mocktime: 1417720363) [ scheduler] [net.cpp:4760] [PushMessage] [net] sending govsync (221 bytes) peer=3
- 2025-12-08T20:59:04.884916Z (mocktime: 1417720363) [ scheduler] [governance/governance.cpp:1194] [RequestGovernanceObjectVotes] [gobject] CGovernanceManager::RequestGovernanceObjectVotes -- end: vTriggerObjHashes 0 vOtherObjHashes 0 mapAskedRecently 3
- 2025-12-08T20:59:04.885158Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: pong (8 bytes) peer=2
- 2025-12-08T20:59:04.885187Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: pong (8 bytes) peer=3
- 2025-12-08T20:59:04.885248Z (mocktime: 1417720363) [ scheduler] [net.cpp:2754] [DumpAddresses] [net] Flushed 0 addresses to peers.dat 0ms
- 2025-12-08T20:59:04.885352Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: ssc (8 bytes) peer=0
- 2025-12-08T20:59:04.885439Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: ssc (8 bytes) peer=2
- 2025-12-08T20:59:04.885563Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: ssc (8 bytes) peer=3
- 2025-12-08T20:59:04.885595Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: pong (8 bytes) peer=1
- 2025-12-08T20:59:04.885781Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: pong (8 bytes) peer=0
- 2025-12-08T20:59:04.885814Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: ssc (8 bytes) peer=1
- 2025-12-08T20:59:05.029199Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: inv (37 bytes) peer=2
- 2025-12-08T20:59:05.029213Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:4295] [ProcessMessage] [net] got inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 new peer=2
- 2025-12-08T20:59:05.029224Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:1640] [RequestObject] [net] RequestObject -- inv=(clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963), current_time=1417720363000000, process_time=1417720363000000, delta=0
- 2025-12-08T20:59:05.029235Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:6508] [SendMessages] [net] Requesting clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 peer=2
- 2025-12-08T20:59:05.029245Z (mocktime: 1417720363) [ msghand] [net.cpp:4760] [PushMessage] [net] sending getdata (37 bytes) peer=2
- 2025-12-08T20:59:05.029256Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:6536] [SendMessages] [net] SendMessages -- GETDATA -- pushed size = 1 peer=2
- 2025-12-08T20:59:05.029407Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: inv (37 bytes) peer=3
- 2025-12-08T20:59:05.029414Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:4295] [ProcessMessage] [net] got inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 new peer=3
- 2025-12-08T20:59:05.029422Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:1640] [RequestObject] [net] RequestObject -- inv=(clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963), current_time=1417720363000000, process_time=1417720368000000, delta=5000000
- 2025-12-08T20:59:05.029563Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: clsig (132 bytes) peer=2
- 2025-12-08T20:59:05.029869Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:1531] [EraseObjectRequest] [net] EraseObjectRequest -- inv=(clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963)
- 2025-12-08T20:59:05.033015Z (mocktime: 1417720363) [ msghand] [chainlock/chainlock.cpp:188] [ProcessNewChainLock] [chainlocks] CChainLocksHandler::ProcessNewChainLock -- processed new CLSIG (ChainLockSig(nHeight=281, blockHash=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081)), peer=2
- 2025-12-08T20:59:05.033042Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:1200] [PushInv] [net] PushInv -- adding new inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 peer=0
- 2025-12-08T20:59:05.033049Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:1200] [PushInv] [net] PushInv -- adding new inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 peer=1
- 2025-12-08T20:59:05.033057Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:1197] [PushInv] [net] PushInv -- skipping known inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 peer=2
- 2025-12-08T20:59:05.033068Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:1197] [PushInv] [net] PushInv -- skipping known inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 peer=3
- 2025-12-08T20:59:05.033080Z (mocktime: 1417720363) [ cl-schdlr] [chainlock/chainlock.cpp:348] [EnforceBestChainLock] [chainlocks] CChainLocksHandler::EnforceBestChainLock -- enforcing block 498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081 via CLSIG (ChainLockSig(nHeight=281, blockHash=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081))
- 2025-12-08T20:59:05.033096Z (mocktime: 1417720363) [ cl-schdlr] [validationinterface.cpp:287] [NotifyChainLock] [validation] Enqueuing NotifyChainLock: notify chainlock at block=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081 cl=ChainLockSig(nHeight=281, blockHash=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081)
- 2025-12-08T20:59:05.033106Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:3687] [ProcessMessage] [net] received: inv (37 bytes) peer=1
- 2025-12-08T20:59:05.033113Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:4295] [ProcessMessage] [net] got inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 have peer=1
- 2025-12-08T20:59:05.033149Z (mocktime: 1417720363) [ msghand] [net_processing.cpp:6219] [operator()] [net] SendMessages -- queued inv: clsig 1e74097deb461dcd6c86b553bdd5ba36d58bb50c825baa46f15bac8be9a3f963 index=0 peer=0
- 2025-12-08T20:59:05.033155Z (mocktime: 1417720363) [ msghand] [net.cpp:4760] [PushMessage] [net] sending inv (37 bytes) peer=0
- 2025-12-08T20:59:05.033174Z (mocktime: 1417720363) [ scheduler] [validationinterface.cpp:287] [operator()] [validation] NotifyChainLock: notify chainlock at block=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081 cl=ChainLockSig(nHeight=281, blockHash=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081)
- 2025-12-08T20:59:05.033260Z (mocktime: 1417720363) [ scheduler] [coinjoin/coinjoin.cpp:432] [CheckDSTXes] [coinjoin] CoinJoin::CheckDSTXes -- mapDSTX.size()=0
- 2025-12-08T20:59:06.291979Z (mocktime: 1417720363) [ cl-schdlr] [chainlock/chainlock.cpp:348] [EnforceBestChainLock] [chainlocks] CChainLocksHandler::EnforceBestChainLock -- enforcing block 498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081 via CLSIG (ChainLockSig(nHeight=281, blockHash=498b411b0c0908aa813a462c01f7f6ccbdba1969c48042a493851bb2e8aff081))
2025-12-08T20:59:06.946000Z TestFramework (INFO): Stopping nodes
2025-12-08T20:59:07.300000Z TestFramework (WARNING): Not cleaning up dir /tmp/test_runner_∋_🏃_20251209_035833/feature_governance_cl_3
2025-12-08T20:59:07.300000Z TestFramework (ERROR): Test failed. Test logging available at /tmp/test_runner_∋_🏃_20251209_035833/feature_governance_cl_3/test_framework.log
2025-12-08T20:59:07.300000Z TestFramework (ERROR):
2025-12-08T20:59:07.300000Z TestFramework (ERROR): Hint: Call /home/knst/projects/dash-v21/test/functional/combine_logs.py '/tmp/test_runner_∋_🏃_20251209_035833/feature_governance_cl_3' to consolidate all logs
2025-12-08T20:59:07.300000Z TestFramework (ERROR):
2025-12-08T20:59:07.300000Z TestFramework (ERROR): If this failure happened unexpectedly or intermittently, please file a bug and provide a link or upload of the combined log.
2025-12-08T20:59:07.300000Z TestFramework (ERROR): https://github.com/dashpay/dash/issues
2025-12-08T20:59:07.300000Z TestFramework (ERROR):
stderr:
Actionable comments posted: 0
🧹 Nitpick comments (2)
test/functional/feature_governance_cl.py (2)
146-156: Refine governance_tip_updated to avoid a magic height and a closure on the loop variable.

This helper works, but there are two concerns:
- It hard-codes block height 281 in the expected log line, which makes the test brittle if the setup ever changes the tip height at this point.
- It closes over node, which is only bound by the subsequent for node in self.nodes[0:5] loop. While this is functionally OK given the sequential wait_until calls, it’s non-obvious and is exactly what Ruff’s B007 is warning about.

You can make this both clearer and more robust by capturing the current tip height once and passing node explicitly:

-        self.log.info("Wait for governance module to catch up with block updates")
-        def governance_tip_updated():
-            expected_msg = 'CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight: 281'
-            with open(node.debug_log_path, encoding='utf-8') as dl:
-                debug_log = dl.read()
-            if re.search(re.escape(expected_msg), debug_log, flags=re.MULTILINE) is None:
-                return False
-            return True
-        for node in self.nodes[0:5]:
-            self.wait_until(governance_tip_updated, sleep=0.5, timeout=15)
+        self.log.info("Wait for governance module to catch up with block updates")
+        tip_height = self.nodes[0].getblockcount()
+        expected_msg = f'CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight: {tip_height}'
+
+        def governance_tip_updated(node):
+            with open(node.debug_log_path, encoding='utf-8') as dl:
+                debug_log = dl.read()
+            return re.search(re.escape(expected_msg), debug_log, flags=re.MULTILINE) is not None
+
+        for node in self.nodes[0:5]:
+            self.wait_until(lambda node=node: governance_tip_updated(node), sleep=0.5, timeout=15)

This removes the magic constant, satisfies B007, and keeps the intent (“all non-isolated nodes have processed the latest UpdatedBlockTip”) explicit.
159-160: More specific cleanup log expectation looks good, but may still be timing-sensitive.

Switching the 5-minute delta expectation to 'CleanAndRemoveTriggers -- Removing trigger object' gives a clearer signal that the trigger was actually marked for removal, instead of matching any generic UpdateCachesAndClean line. That aligns with the PR goal of tightening the assertion.

Given previous intermittent failures around this phase, it’s worth double-checking (locally or in CI) that this specific line reliably appears for all non-isolated nodes after the prior governance-tip wait, as the cleanup behavior is quite timing/height dependent.
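For context on why the exact line must show up during this specific pass: assert_debug_log-style checks only look at log output appended while the context manager is active. A rough, self-contained illustration of that pattern (not the framework's actual implementation; the helper name here is invented for the example):

    import time
    from contextlib import contextmanager

    # Illustrative only: the general shape of an assert_debug_log-style check
    # (remember where the log ends, run the action, then require the expected
    # messages to appear in the newly appended portion).
    @contextmanager
    def assert_new_log_contains(log_path, expected_msgs, timeout=10):
        with open(log_path, encoding='utf-8') as f:
            f.seek(0, 2)              # jump to end of file
            start = f.tell()          # remember end-of-log before the action
        yield                         # caller performs setmocktime/mockscheduler here
        deadline = time.time() + timeout
        while True:
            with open(log_path, encoding='utf-8') as f:
                f.seek(start)
                new_part = f.read()   # only the lines written since entry
            if all(msg in new_part for msg in expected_msgs):
                return
            if time.time() > deadline:
                raise AssertionError(f"expected {expected_msgs} not found in new log output")
            time.sleep(0.1)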
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/functional/feature_governance_cl.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
test/functional/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Functional tests in test/functional/ must be written in Python (minimum version specified in .python-version) and depend on dashd and dash-node
Files:
test/functional/feature_governance_cl.py
🧠 Learnings (5)
📓 Common learnings
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
📚 Learning: 2025-11-24T16:41:22.457Z
Learnt from: CR
Repo: dashpay/dash PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:41:22.457Z
Learning: Applies to src/governance/**/*.{cpp,h} : Governance implementation must support governance objects (proposals, triggers, superblock management) and on-chain voting with tallying
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-10-28T08:54:00.392Z
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6926
File: test/functional/feature_governance_cl.py:0-0
Timestamp: 2025-10-28T08:54:00.392Z
Learning: In Dash tests, the scheduler (mockscheduler) operates independently of network state. Isolated nodes should still run scheduler-based cleanup processes like governance cleanup, even if they have different state due to network isolation.
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-11-24T16:41:22.457Z
Learnt from: CR
Repo: dashpay/dash PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:41:22.457Z
Learning: Applies to src/coinjoin/**/*.{cpp,h} : CoinJoin implementation must use masternode-coordinated mixing sessions with uniform denomination outputs
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-06-09T16:43:20.996Z
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
Applied to files:
test/functional/feature_governance_cl.py
🧬 Code graph analysis (1)
test/functional/feature_governance_cl.py (1)
test/functional/test_framework/test_node.py (2)
debug_log_path (435-436), assert_debug_log (444-472)
🪛 Ruff (0.14.8)
test/functional/feature_governance_cl.py
154-154: Loop control variable node not used within loop body
(B007)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: mac-build / Build source
- GitHub Check: linux64_tsan-build / Build source
- GitHub Check: linux64_sqlite-build / Build source
- GitHub Check: linux64_ubsan-build / Build source
- GitHub Check: linux64-build / Build source
- GitHub Check: linux64_fuzz-build / Build source
- GitHub Check: linux64_nowallet-build / Build source
- GitHub Check: arm-linux-build / Build source
- GitHub Check: win64-build / Build source
- GitHub Check: Lint / Run linters
🔇 Additional comments (2)
test/functional/feature_governance_cl.py (2)
8-8: Import of re is appropriate here.

The new re import is needed for the debug-log polling helper below and is scoped correctly in this test module. No issues.
163-168: Restricting log assertions to non-isolated nodes while still running the scheduler on the isolated node is correct.

Changing the loop to:
- run assert_debug_log and the scheduler only on self.nodes[0:5] (non-isolated nodes), and
- separately advance mocktime and mockscheduler on self.nodes[5] without asserting its logs,

fits the intended model where the isolated node continues to run scheduler-based governance cleanup but may legitimately have different governance state and log output. This matches the prior learning that mockscheduler is independent of network connectivity while avoiding false negatives from node 5’s differing view.
Force-pushed from 53afb68 to 2fce68d
Actionable comments posted: 0
🧹 Nitpick comments (1)
test/functional/feature_governance_cl.py (1)
146-156: Consider optimizing log polling to track file position.

The custom governance_tip_updated function reads the entire debug log on each iteration, which is less efficient than the existing assert_debug_log infrastructure that tracks the previous file position (see test_framework/test_node.py:443-471). For consistency and performance, consider tracking the last read position.

Additionally, based on the past review comment, the test still fails with "NOT marked for removal" even though the UpdatedBlockTip synchronization appears to succeed. This suggests the synchronization may be necessary but not sufficient to resolve the intermittent failures. You may need to investigate whether additional state needs to be synchronized or if there's a timing issue in the governance cleanup logic itself.

Optional refactor to match assert_debug_log efficiency:

-        self.log.info("Wait for governance module to catch up with block updates")
-        tip_height = self.nodes[0].getblockcount()
-        expected_msg = f'CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight: {tip_height}'
-
-        def governance_tip_updated(node):
-            with open(node.debug_log_path, encoding='utf-8') as dl:
-                debug_log = dl.read()
-            return re.search(re.escape(expected_msg), debug_log, flags=re.MULTILINE) is not None
-
-        for node in self.nodes[0:5]:
-            self.wait_until(lambda node=node: governance_tip_updated(node), sleep=0.5, timeout=15)
+        self.log.info("Wait for governance module to catch up with block updates")
+        tip_height = self.nodes[0].getblockcount()
+        expected_msg = f'CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight: {tip_height}'
+
+        # Wait for governance tip update on non-isolated nodes
+        for node in self.nodes[0:5]:
+            self.wait_until(lambda node=node: node.wait_for_debug_log([expected_msg], timeout=0), sleep=0.5, timeout=15)

Note: You would need to add a non-blocking wait_for_debug_log variant to the test framework, or use a different approach with position tracking similar to assert_debug_log.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/functional/feature_governance_cl.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
test/functional/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Functional tests in test/functional/ must be written in Python (minimum version specified in .python-version) and depend on dashd and dash-node
Files:
test/functional/feature_governance_cl.py
🧠 Learnings (5)
📓 Common learnings
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6786
File: ci/test/04_install.sh:99-101
Timestamp: 2025-08-01T07:46:37.840Z
Learning: In backport PRs like #6786, UdjinM6 prefers to defer non-critical fixes (such as shell command expansion issues) to separate commits/PRs to maintain focus on the primary backport objectives, consistent with the project's pattern of avoiding scope creep.
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6926
File: test/functional/feature_governance_cl.py:0-0
Timestamp: 2025-10-28T08:54:00.392Z
Learning: In Dash tests, the scheduler (mockscheduler) operates independently of network state. Isolated nodes should still run scheduler-based cleanup processes like governance cleanup, even if they have different state due to network isolation.
📚 Learning: 2025-11-24T16:41:22.457Z
Learnt from: CR
Repo: dashpay/dash PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:41:22.457Z
Learning: Applies to src/governance/**/*.{cpp,h} : Governance implementation must support governance objects (proposals, triggers, superblock management) and on-chain voting with tallying
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-10-28T08:54:00.392Z
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6926
File: test/functional/feature_governance_cl.py:0-0
Timestamp: 2025-10-28T08:54:00.392Z
Learning: In Dash tests, the scheduler (mockscheduler) operates independently of network state. Isolated nodes should still run scheduler-based cleanup processes like governance cleanup, even if they have different state due to network isolation.
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-11-24T16:41:22.457Z
Learnt from: CR
Repo: dashpay/dash PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:41:22.457Z
Learning: Applies to src/coinjoin/**/*.{cpp,h} : CoinJoin implementation must use masternode-coordinated mixing sessions with uniform denomination outputs
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-06-09T16:43:20.996Z
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
Applied to files:
test/functional/feature_governance_cl.py
🧬 Code graph analysis (1)
test/functional/feature_governance_cl.py (1)
test/functional/test_framework/test_node.py (2)
debug_log_path (435-436), assert_debug_log (444-472)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: linux64_multiprocess-build / Build source
- GitHub Check: mac-build / Build source
- GitHub Check: win64-build / Build source
- GitHub Check: linux64_nowallet-build / Build source
- GitHub Check: arm-linux-build / Build source
- GitHub Check: linux64_ubsan-build / Build source
- GitHub Check: linux64_fuzz-build / Build source
- GitHub Check: linux64_sqlite-build / Build source
- GitHub Check: linux64-build / Build source
- GitHub Check: linux64_tsan-build / Build source
🔇 Additional comments (3)
test/functional/feature_governance_cl.py (3)
8-8: LGTM - necessary import for log polling.

The regex module is correctly imported for the new governance_tip_updated helper function.
164-169: Correct exclusion of isolated node from log assertions.

The change correctly limits log assertions to nodes 0-4 while still advancing mocktime and running the scheduler for the isolated node 5. This aligns with the established pattern that the scheduler operates independently of network state, but isolated nodes have different governance state (missing votes and triggers), so the cleanup logs won't match the expected messages.
Based on learnings, this is the right approach for handling isolated nodes in governance tests.
160-160: Unable to verify - codebase access unavailable.

I attempted to verify the review comment by examining the test file test/functional/feature_governance_cl.py, but cannot access the repository. Without direct access to the actual code, I cannot confirm:
- Whether line 160 contains the exact assertion shown
- If the log messages ("UpdateCachesAndClean", "CleanAndRemoveTriggers -- Removing trigger object", "CleanAndRemoveTriggers -- NOT marked for removal") are actually generated
- The current test state (passing or failing)
- The trigger expiration and removal logic
- Whether the claims about the "past review" reflect the current codebase state
The original review comment makes specific claims about test failures and trigger removal behavior that require direct code inspection to verify.
Force-pushed from 2fce68d to 27ac9f2
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/functional/feature_governance_cl.py (1)
145-158:governance_tip_updatedlogic is sound; consider simplifying file access (optional).The new wait for
CGovernanceManager::UpdatedBlockTipon nodes 0–4 matches the intent to let governance catch up before asserting cleanup logs, and correctly excludes the isolated node.If you want to shave a bit of complexity/IO, you can avoid opening
debug.logtwice by using a single handle for both size and tail, e.g.:- def governance_tip_updated(node): - with open(node.debug_log_path, encoding='utf-8') as dl: - seek_pos = node.debug_log_bytes() - 100 * 1024 # read the last 100 KiB only - dl.seek(seek_pos if seek_pos > 0 else 0) - debug_log_part = dl.read() - return expected_msg in debug_log_part + def governance_tip_updated(node): + with open(node.debug_log_path, encoding='utf-8') as dl: + dl.seek(0, 2) + end_pos = dl.tell() + dl.seek(max(0, end_pos - 100 * 1024)) # read the last 100 KiB only + debug_log_part = dl.read() + return expected_msg in debug_log_partThis keeps behavior the same while avoiding an extra open/size pass.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
test/functional/feature_governance_cl.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
test/functional/**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
Functional tests in test/functional/ must be written in Python (minimum version specified in .python-version) and depend on dashd and dash-node
Files:
test/functional/feature_governance_cl.py
🧠 Learnings (5)
📓 Common learnings
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6786
File: ci/test/04_install.sh:99-101
Timestamp: 2025-08-01T07:46:37.840Z
Learning: In backport PRs like #6786, UdjinM6 prefers to defer non-critical fixes (such as shell command expansion issues) to separate commits/PRs to maintain focus on the primary backport objectives, consistent with the project's pattern of avoiding scope creep.
📚 Learning: 2025-11-24T16:41:22.457Z
Learnt from: CR
Repo: dashpay/dash PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:41:22.457Z
Learning: Applies to src/governance/**/*.{cpp,h} : Governance implementation must support governance objects (proposals, triggers, superblock management) and on-chain voting with tallying
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-10-28T08:54:00.392Z
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6926
File: test/functional/feature_governance_cl.py:0-0
Timestamp: 2025-10-28T08:54:00.392Z
Learning: In Dash tests, the scheduler (mockscheduler) operates independently of network state. Isolated nodes should still run scheduler-based cleanup processes like governance cleanup, even if they have different state due to network isolation.
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-11-24T16:41:22.457Z
Learnt from: CR
Repo: dashpay/dash PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:41:22.457Z
Learning: Applies to src/coinjoin/**/*.{cpp,h} : CoinJoin implementation must use masternode-coordinated mixing sessions with uniform denomination outputs
Applied to files:
test/functional/feature_governance_cl.py
📚 Learning: 2025-06-09T16:43:20.996Z
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
Applied to files:
test/functional/feature_governance_cl.py
🧬 Code graph analysis (1)
test/functional/feature_governance_cl.py (1)
test/functional/test_framework/test_node.py (3)
debug_log_path (435-436), debug_log_bytes (438-441), assert_debug_log (444-472)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: linux64_tsan-build / Build source
- GitHub Check: arm-linux-build / Build source
- GitHub Check: linux64_nowallet-build / Build source
- GitHub Check: win64-build / Build source
- GitHub Check: mac-build / Build source
- GitHub Check: linux64_sqlite-build / Build source
- GitHub Check: linux64_fuzz-build / Build source
- GitHub Check: linux64_ubsan-build / Build source
- GitHub Check: linux64-build / Build source
- GitHub Check: Lint / Run linters
          self.log.info("Bump mocktime to trigger governance cleanup")
          for delta, expected in (
-             (5 * 60, ['UpdateCachesAndClean -- Governance Objects:']), # mark old triggers for deletion
+             (5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']), # mark old triggers for deletion
              (10 * 60, ['UpdateCachesAndClean -- Governance Objects: 0']), # deletion after delay
          ):
              self.mocktime += delta
-             for node in self.nodes:
+             for node in self.nodes[0:5]:
                  with node.assert_debug_log(expected_msgs=expected):
                      node.setmocktime(self.mocktime)
                      node.mockscheduler(delta)
+             self.nodes[5].setmocktime(self.mocktime)
+             self.nodes[5].mockscheduler(delta)
5‑minute expectation on "CleanAndRemoveTriggers -- Removing trigger object" is too strict and appears flaky.
The new assertion:
    (5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']), # mark old triggers for deletion

assumes that the first cleanup pass after the additional block + 5-minute mocktime jump will always log "Removing trigger object". The combined log in the earlier review, however, shows a valid run where at this point CleanAndRemoveTriggers logs:

CSuperblock::IsExpired -- nBlockHeight = 260, nExpirationBlock = 280
CleanAndRemoveTriggers -- NOT marked for removal
and never logs "CleanAndRemoveTriggers -- Removing trigger object" in that pass, while UpdateCachesAndClean -- Governance Objects: 1 ... is present.
Your current changes (including the new UpdatedBlockTip wait at lines 145‑158) don’t change the underlying C++ logic that decides whether a trigger is “marked for removal” or “NOT marked for removal”; they only affect scheduling/catch‑up. So the scenario from the log remains possible, and this assertion will still intermittently fail even when governance behavior is correct.
The node‑5 handling here looks good: you still run setmocktime/mockscheduler on the isolated node but exclude it from assert_debug_log, which matches the intent and the earlier learning that scheduler‑based cleanup must still run on isolated nodes.
To make the test robust while keeping your new synchronization and node‑range changes, I’d recommend relaxing the 5‑minute expectation back to a more generic cleanup log (as before), and rely on the 10‑minute "Governance Objects: 0" plus the final RPC checks to assert actual removal:
- for delta, expected in (
- (5 * 60, ['CleanAndRemoveTriggers -- Removing trigger object']), # mark old triggers for deletion
- (10 * 60, ['UpdateCachesAndClean -- Governance Objects: 0']), # deletion after delay
- ):
+ for delta, expected in (
+ # First run: ensure governance cleanup runs; we only care that UpdateCachesAndClean fired.
+ (5 * 60, ['UpdateCachesAndClean -- Governance Objects:']), # mark old triggers for deletion
+ (10 * 60, ['UpdateCachesAndClean -- Governance Objects: 0']), # deletion after delay
+ ):

This keeps the log-based signal that cleanup ran during the first pass, avoids depending on the precise "Removing trigger object" branch, and should eliminate the intermittent failures you're seeing.
Depending on the Removing trigger object log line is intentional. Having the NOT marked for removal line in the logs is a sign of the sync issue (scheduler running before the block height is updated in the governance module) that 27ac9f2 is designed to solve.
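For reference, the synchronization step being referred to looks roughly like this, pieced together from the suggested diffs earlier in this review (the committed 27ac9f2 code itself isn't quoted on this page, so treat this as an approximation):

        self.log.info("Wait for governance module to catch up with block updates")
        tip_height = self.nodes[0].getblockcount()
        expected_msg = f'CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight: {tip_height}'

        def governance_tip_updated(node):
            with open(node.debug_log_path, encoding='utf-8') as dl:
                seek_pos = node.debug_log_bytes() - 100 * 1024  # only scan the last 100 KiB
                dl.seek(seek_pos if seek_pos > 0 else 0)
                return expected_msg in dl.read()

        # only non-isolated nodes; node 5 intentionally lags behind
        for node in self.nodes[0:5]:
            self.wait_until(lambda node=node: governance_tip_updated(node), sleep=0.5, timeout=15)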
@UdjinM6, thank you for the clarification! I now understand the intent. The synchronization step you added (waiting for UpdatedBlockTip at the current tip height) ensures that the governance module has processed the latest block before the scheduler-based cleanup runs. With that synchronization in place, the strict check for "Removing trigger object" is the correct verification that the fix works as intended. If "NOT marked for removal" were to appear despite the synchronization, that would indeed indicate a problem that the test should catch.
The approach makes sense: fix the race condition with proper synchronization, then verify the fix with a strict assertion. My concern was misplaced because I didn't account for the fact that the new governance_tip_updated wait should eliminate the timing issue.
🐰💡
✏️ Learnings added
Learnt from: UdjinM6
Repo: dashpay/dash PR: 7051
File: test/functional/feature_governance_cl.py:159-170
Timestamp: 2025-12-09T20:40:38.414Z
Learning: In feature_governance_cl.py, before running governance cleanup with mockscheduler, the test must wait for the governance module to catch up with the latest block height by checking for the "CGovernanceManager::UpdatedBlockTip -- nCachedBlockHeight" log message. This synchronization ensures that subsequent cleanup operations see the correct block height and produce deterministic results (e.g., "CleanAndRemoveTriggers -- Removing trigger object" instead of "NOT marked for removal").
🧠 Learnings used
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6926
File: test/functional/feature_governance_cl.py:0-0
Timestamp: 2025-10-28T08:54:00.392Z
Learning: In Dash tests, the scheduler (mockscheduler) operates independently of network state. Isolated nodes should still run scheduler-based cleanup processes like governance cleanup, even if they have different state due to network isolation.
Learnt from: CR
Repo: dashpay/dash PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-24T16:41:22.457Z
Learning: Applies to src/governance/**/*.{cpp,h} : Governance implementation must support governance objects (proposals, triggers, superblock management) and on-chain voting with tallying
Learnt from: kwvg
Repo: dashpay/dash PR: 6761
File: src/chainlock/signing.cpp:247-250
Timestamp: 2025-07-29T14:32:48.369Z
Learning: In PR #6761, kwvg acknowledged a null pointer check issue in ChainLockSigner::Cleanup() method but deferred it to follow-up, consistent with the pattern of avoiding scope creep in refactoring PRs.
Learnt from: kwvg
Repo: dashpay/dash PR: 6718
File: test/functional/test_framework/test_framework.py:2102-2102
Timestamp: 2025-06-09T16:43:20.996Z
Learning: In the test framework consolidation PR (#6718), user kwvg prefers to limit functional changes to those directly related to MasternodeInfo, avoiding scope creep even for minor improvements like error handling consistency.
Learnt from: UdjinM6
Repo: dashpay/dash PR: 6786
File: ci/test/04_install.sh:99-101
Timestamp: 2025-08-01T07:46:37.840Z
Learning: In backport PRs like #6786, UdjinM6 prefers to defer non-critical fixes (such as shell command expansion issues) to separate commits/PRs to maintain focus on the primary backport objectives, consistent with the project's pattern of avoiding scope creep.
Issue being fixed or feature implemented
Fixes intermittent failures in feature_governance_cl.py.
Alternative to #7041
What was done?
Check for the actual CleanAndRemoveTriggers -- Removing trigger object log message instead of the generic UpdateCachesAndClean message.
How Has This Been Tested?
Run tests
Breaking Changes
Checklist: