The most common scenario for us is (sketched below):
- Set the initial `from_head(K)` to start syncing even if there is no archival data
- Every time there is a new index of archival chunks, call `retain` with `from_block` set to the current archive height to clean up the old blocks
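
For concreteness, a minimal sketch of that call sequence. `HotblocksClient`, `from_head` and `retain(from_block)` are hypothetical stand-ins that only mirror the options described above, not the real hotblocks API.

```rust
// Hypothetical stand-ins mirroring the options described above,
// not the actual hotblocks API.
struct HotblocksClient {
    first_retained_block: Option<u64>,
}

impl HotblocksClient {
    /// Start syncing K blocks below the chain head, even if there is no archival data yet.
    fn from_head(_k: u64) -> Self {
        Self { first_retained_block: None }
    }

    /// Keep only blocks >= `from_block`, i.e. drop everything the archive already covers.
    fn retain(&mut self, from_block: u64) {
        self.first_retained_block = Some(from_block);
    }
}

fn main() {
    let mut client = HotblocksClient::from_head(1_000);
    // Every time a new index of archival chunks appears, retain from the
    // current archive height to clean up the old blocks.
    for archive_height in [99_000u64, 99_500, 100_000] {
        client.retain(archive_height);
    }
    println!("retaining from {:?}", client.first_retained_block);
}
```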
However, it fails with heavy consequences in the following scenario (modelled in the sketch after this list):
- Archives stop syncing, so we periodically call `retain` with `from_block` set to the same block `N`
- When `Head - K` becomes greater than `N`, the storage starts to clean blocks up to `Head - K`
- The next time we call `retain(N)`, it says there is a gap and drops the entire database
- It then starts ingestion from some block above the finalized head (2 blocks from head)
- This repeats on every subsequent call to `retain(N)`
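
Here is a self-contained model of the failure mode, under the assumption that the storage keeps a single contiguous block range and treats a `retain` target below its first stored block as a gap. The `Storage` type and its methods are illustrative, not the real implementation:

```rust
// Minimal model of the reported behaviour; types and method names are hypothetical.
#[derive(Debug)]
struct Storage {
    range: Option<(u64, u64)>, // contiguous [first_block, last_block]
    k: u64,                    // keep at most K blocks behind head
}

impl Storage {
    fn new(k: u64) -> Self {
        Storage { range: None, k }
    }

    /// Ingest blocks up to `head`, trimming to at most K blocks behind it.
    fn advance_head(&mut self, head: u64) {
        let lo = head.saturating_sub(self.k);
        self.range = Some(match self.range {
            Some((first, _)) => (first.max(lo), head),
            None => (lo, head),
        });
    }

    /// Drop blocks below `from_block`. If `from_block` is below the first
    /// stored block, the current behaviour is to treat it as a gap and wipe the DB.
    fn retain(&mut self, from_block: u64) {
        match self.range {
            Some((first, last)) if from_block >= first && from_block <= last => {
                self.range = Some((from_block, last));
            }
            _ => {
                // Gap detected: the whole database is dropped and ingestion
                // restarts from some block above the finalized head.
                println!("gap at retain({from_block}): dropping database");
                self.range = None;
            }
        }
    }
}

fn main() {
    let mut db = Storage::new(1_000);
    let archive_height = 100_000; // archives stopped syncing at N

    db.advance_head(100_500);
    db.retain(archive_height); // fine: N is still inside the stored range

    db.advance_head(101_500); // Head - K = 100_500 > N: blocks below N are cleaned up
    db.retain(archive_height); // gap -> the entire database is dropped
    println!("{db:?}");
}
```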
I'd say the database should never be fully dropped without manual intervention. Once we run hotblocks in a separate service, we can add an internal endpoint for that.