
Conversation

@alpe
Contributor

@alpe alpe commented Jan 16, 2026

Various backports from the alex/adr023_ha-failover_raft11 branch to improve graceful shutdown and restart behavior, plus minor fixes.

@alpe alpe force-pushed the alex/graceful_shtdwn branch from 24880f5 to fd53828 on January 16, 2026 12:56
@gemini-code-assist
Contributor

Summary of Changes

Hello @alpe, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the system's operational stability and resilience, particularly focusing on how various components manage their lifecycle during restarts and shutdowns. It introduces more robust shutdown sequences that prevent data loss and ensure timely termination, refines the handling of transient 'in-flight' data to avoid processing incomplete states, and improves consistent context propagation for cancellation. These changes collectively contribute to a more predictable and reliable system behavior, especially crucial for high-availability scenarios.

Highlights

  • Graceful Shutdown Improvements: Enhanced shutdown mechanisms for the Submitter and Syncer components, including timeouts for waiting goroutines and draining pending events to prevent data loss and ensure timely termination.
  • Context Propagation and Cancellation: Improved context awareness and propagation across Executor and Syncer functions and loops, allowing for more responsive and controlled cancellation during operations like block production and event processing.
  • In-Flight Data/Header Handling: Introduced specific error handling and logic to skip 'work-in-progress' (WIP) block data and signed headers, preventing the processing of incomplete or temporary state during fetching.
  • Code Refactoring and Reusability: Centralized data validation logic for proposer addresses and signed data into a new assert.go utility, reducing code duplication and improving maintainability in the da_retriever.
  • State Initialization Persistence: Modified the initialization routines in both the Executor and Syncer to ensure that the initial state is consistently persisted to the store using batch operations.



@alpe alpe changed the title from "chore: stable" to "chore: improve graceful shutdown restarts" Jan 16, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces several backports to improve graceful shutdown and stability, along with some minor fixes. The changes to enhance shutdown logic in submitting/submitter.go and syncing/syncer.go are valuable. However, I've identified a critical race condition in the new shutdown implementation for the Syncer component, which could result in data loss or panics. I have provided a detailed comment and a code suggestion to resolve this issue. Other modifications, such as refactoring assertion logic and handling in-flight data within the cache, are well-implemented and improve the codebase.

Comment on lines 231 to 269
func (s *Syncer) Stop() error {
	if s.cancel == nil {
		return nil
	}

	// Drain pending events from the buffer before shutdown to prevent state loss.
	// Process remaining events with a timeout to prevent hanging indefinitely.
	drainCtx, drainCancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer drainCancel()

	drained := 0
drainLoop:
	for {
		select {
		case event, ok := <-s.heightInCh:
			if !ok {
				break drainLoop
			}
			s.processHeightEvent(drainCtx, &event)
			drained++
		case <-drainCtx.Done():
			s.logger.Warn().Int("remaining", len(s.heightInCh)).Msg("timeout draining height events during shutdown")
			break drainLoop
		default:
			break drainLoop
		}
	}
	if drained > 0 {
		s.logger.Info().Int("count", drained).Msg("drained pending height events during shutdown")
	}

	s.cancel()
	s.cancelP2PWait(0)
	s.wg.Wait()
	s.logger.Info().Msg("syncer stopped")
	close(s.heightInCh)
	s.cancel = nil
	return nil
}
Contributor


critical

There is a race condition in the Stop function. The processLoop goroutine is still running and consuming from s.heightInCh while the Stop function also tries to drain the same channel. This can lead to lost events or other unpredictable behavior during shutdown.

To fix this, you should first signal all goroutines to stop by calling s.cancel(), then wait for them to finish using s.wg.Wait(). Only after all other goroutines have stopped should you proceed to drain the s.heightInCh channel. This ensures that the Stop function is the sole consumer of the channel during the drain process.

func (s *Syncer) Stop() error {
	if s.cancel == nil {
		return nil
	}

	// Signal all goroutines to stop and wait for them to finish.
	s.cancel()
	s.cancelP2PWait(0)
	s.wg.Wait()

	// Drain pending events from the buffer before shutdown to prevent state loss.
	// Process remaining events with a timeout to prevent hanging indefinitely.
	drainCtx, drainCancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer drainCancel()

	drained := 0
drainLoop:
	for {
		select {
		case event, ok := <-s.heightInCh:
			if !ok {
				break drainLoop
			}
			s.processHeightEvent(drainCtx, &event)
			drained++
		case <-drainCtx.Done():
			s.logger.Warn().Int("remaining", len(s.heightInCh)).Msg("timeout draining height events during shutdown")
			break drainLoop
		default:
			break drainLoop
		}
	}
	if drained > 0 {
		s.logger.Info().Int("count", drained).Msg("drained pending height events during shutdown")
	}

	s.logger.Info().Msg("syncer stopped")
	close(s.heightInCh)
	s.cancel = nil
	return nil
}

@alpe alpe force-pushed the alex/graceful_shtdwn branch from fd53828 to e281cfb on January 16, 2026 13:03
@github-actions
Contributor

github-actions bot commented Jan 16, 2026

The latest Buf updates on your PR. Results from workflow CI / buf-check (pull_request).

Build: ✅ passed · Format: ⏩ skipped · Lint: ✅ passed · Breaking: ✅ passed · Updated (UTC): Jan 16, 2026, 1:52 PM

@claude
Contributor

claude bot commented Jan 16, 2026

Claude finished @alpe's task in 2m 36s


Code Review: Graceful Shutdown Improvements

I've completed a comprehensive review of PR #2985. This PR improves graceful shutdown and restart behavior across the block package. The changes are well-intentioned, but I've identified several high-priority issues that need to be addressed before merging.


Critical Issues 🔴

None Found - After careful analysis, the WaitGroup usage is correct and there are no critical race conditions.


High Priority Issues 🟡

1. Missing Context Check in Retry Loops - block/internal/syncing/syncer.go:754-780 and block/internal/executing/executor.go:651-677

Severity: High - Can cause shutdown delays

Issue: Retry loops check ctx.Done() for cancellation, but don't check s.ctx.Done() (Syncer's context) or e.ctx.Done() (Executor's context). If the component is shutting down while retrying, it should exit immediately.

Syncer Example (lines 768-772):

select {
case <-time.After(common.MaxRetriesTimeout):
    continue
case <-ctx.Done():  // ← Only checks passed context
    return nil, fmt.Errorf("context cancelled during retry: %w", ctx.Err())
}

Recommendation: Add shutdown context check:

select {
case <-time.After(common.MaxRetriesTimeout):
    continue
case <-ctx.Done():
    return nil, fmt.Errorf("context cancelled during retry: %w", ctx.Err())
case <-s.ctx.Done():  // ← Add this for graceful shutdown
    return nil, fmt.Errorf("syncer shutting down: %w", s.ctx.Err())
}

Same issue in Executor at lines 665-670.



2. In-Flight Data Handling Lacks Retry Mechanism - block/internal/cache/pending_data.go:35-45 and pending_headers.go:37-41

Severity: Medium-High - Potential data loss on restart

Issue: When Height==0 is detected (WIP data/header), errInFlightData is returned and the fetch is skipped. However, there's no mechanism to retry fetching this data later or to ensure it gets properly persisted.

Lines 40-42:

// in the executor, WIP data is temporary stored. skip them until the process is completed
if data.Height() == 0 {
    return nil, errInFlightData
}

Questions:

  1. What happens if the node restarts while data is in-flight?
  2. Will the data be re-fetched or permanently skipped?
  3. Is Height==0 truly transient or can it persist across restarts?

Recommendations:

  • Option A: Document that Height==0 is only transient during block production and will be overwritten before any restart
  • Option B: Add retry tracking for in-flight entries
  • Option C (best): Ensure Height is always set atomically with persistence to avoid Height==0 ever hitting disk
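
To make Option C concrete, here is a minimal sketch under assumed names; payload, persistIfComplete, and errInFlight are illustrative stand-ins, not types from this PR:

package persistguard

import "errors"

// errInFlight mirrors the PR's errInFlightData idea: the payload is still a
// work-in-progress marker and must not be persisted or processed yet.
var errInFlight = errors.New("payload is still in flight (height not set)")

// payload is a stand-in for the cached header/data types.
type payload interface {
	Height() uint64
}

// persistIfComplete refuses to write any payload whose height is still zero,
// so the transient WIP state can never reach the store and survive a restart.
func persistIfComplete(p payload, write func(payload) error) error {
	if p.Height() == 0 {
		return errInFlight // caller retries once the producer has finished the block
	}
	return write(p)
}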

3. Syncer.Stop() Ordering Could Use Clarification - block/internal/syncing/syncer.go:230-268

Severity: Low-High (code correctness vs maintainability)

Gemini's Review Concern: The gemini review identified a potential race condition. After analysis, the code is actually correct, but could benefit from documentation.

Current flow:

  1. Line 236: s.cancel() - signals shutdown
  2. Line 237: s.cancelP2PWait(0) - cancels P2P wait
  3. Line 238: s.wg.Wait() - blocks until all goroutines exit, including processLoop()
  4. Lines 244-259: Drain heightInCh after all goroutines have stopped

Analysis: Once s.wg.Wait() returns, processLoop() has already exited, so there's no concurrent access to heightInCh. The code is safe.

Recommendation: Add clarifying comment:

// Signal shutdown and wait for all goroutines to exit before draining.
// This ensures no concurrent access to heightInCh during the drain phase.
s.cancel()
s.cancelP2PWait(0)
s.wg.Wait()  // processLoop() and workers exit here

Medium Priority Issues 🟠

4. Hardcoded Shutdown Timeouts - Multiple Locations

Severity: Medium

Locations:

  • block/internal/submitting/submitter.go:163 - 5 second timeout
  • block/internal/syncing/syncer.go:240 - 5 second drain timeout

Issue: Hardcoded 5-second timeouts may be insufficient for:

  • Large batch submissions in progress
  • Slow networks
  • DA layer submission delays

Recommendations:

  1. Extract timeouts to constants or configuration
  2. Consider increasing to 30 seconds for production reliability
  3. Add logging when timeouts are hit to help debugging
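
As one possible shape for recommendations 1 and 2, the literals could move behind named constants; the names and the 30-second values below are assumptions, not part of the PR:

package shutdown

import "time"

const (
	// submitterShutdownTimeout bounds how long Stop waits for submission goroutines.
	submitterShutdownTimeout = 30 * time.Second
	// syncerDrainTimeout bounds how long Stop drains pending height events.
	syncerDrainTimeout = 30 * time.Second
)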

5. Batch Cleanup Not Guaranteed - block/internal/executing/executor.go:222-235 and block/internal/syncing/syncer.go:316-328

Severity: Low-Medium

Issue: If SetHeight() or UpdateState() fails, the batch is not explicitly cleaned up. While the batch interface might handle this, it's not guaranteed.

Recommendation: Add defer-based cleanup if the Batch interface supports it, or verify that the interface handles cleanup automatically.
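
A hedged sketch of the defer-based variant. It assumes the Batch interface exposes a Discard method for releasing an uncommitted batch; if the real interface has no such method, verifying its cleanup behavior is the better path. State and Batch below are illustrative stand-ins:

package initstate

import "fmt"

// State and Batch stand in for the repo's own types; the Discard method in
// particular is an assumption about the Batch interface.
type State struct{ LastBlockHeight uint64 }

type Batch interface {
	SetHeight(height uint64) error
	UpdateState(state State) error
	Commit() error
	Discard() // assumed: releases the batch if Commit was never reached
}

func persistInitialState(b Batch, state State) error {
	committed := false
	defer func() {
		if !committed {
			b.Discard() // cleanup on any early return before Commit
		}
	}()

	if err := b.SetHeight(state.LastBlockHeight); err != nil {
		return fmt.Errorf("failed to set store height: %w", err)
	}
	if err := b.UpdateState(state); err != nil {
		return fmt.Errorf("failed to update state: %w", err)
	}
	if err := b.Commit(); err != nil {
		return fmt.Errorf("failed to commit batch: %w", err)
	}
	committed = true
	return nil
}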


Positive Observations ✅

Well-Implemented Features

  1. New assert.go Utility (block/internal/syncing/assert.go)

    • Clean refactoring that reduces code duplication
    • Centralizes validation logic for proposer addresses and signed data
    • Improves maintainability
  2. Batch Operations for State Persistence

    • Both Executor and Syncer now use atomic batch operations
    • Prevents partial state updates
    • Good practice for data consistency
  3. Context-Aware Shutdown

    • Components properly respect context cancellation
    • Clean separation of lifecycle management
    • WaitGroup usage is correct throughout
  4. Drain Logic in Syncer.Stop()

    • Prevents data loss by processing pending events during shutdown
    • Includes timeout to prevent hanging
    • Logging provides visibility
  5. In-Flight Detection Pattern

    • Checking Height==0 to detect WIP data is a good pattern
    • Prevents processing incomplete state
    • Could be enhanced with retry logic

Security Considerations

✅ No Critical Security Issues Found

The code properly:

  • Validates proposer addresses
  • Verifies signatures on DA data
  • Handles context cancellation
  • Uses atomic operations for shared state

Performance Considerations

✅ Generally Good

  • Concurrent processing with proper synchronization
  • Atomic operations for lock-free reads
  • Buffered channels (size 100) reduce blocking
  • Batch operations minimize store round-trips

⚠️ Minor Concern

The retry loops could delay shutdown by up to MaxRetriesTimeout * MaxRetriesBeforeHalt in the worst case. Consider adding the s.ctx.Done() check mentioned in issue #1.


Test Coverage

⚠️ Needs Improvement

Codecov Report: 46.72% patch coverage with 65 lines missing coverage

Critical gaps:

  • syncer.go: 38 missing lines, 5 partials (29.5% coverage)
  • assert.go: 4 missing, 4 partials (57.89% coverage)
  • executor.go: 3 missing, 3 partials (25% coverage)

Recommendations:

  1. Add tests for shutdown scenarios (Stop() method)
  2. Test retry logic with context cancellation
  3. Test in-flight data handling
  4. Add integration tests for graceful restart scenarios
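
For recommendation 1, a shutdown test could take roughly this shape (in a _test.go file); the stopper interface and the deadline are assumptions, since the Syncer/Submitter constructors are not shown here:

package shutdown_test

import (
	"context"
	"testing"
	"time"
)

// stopper is a stand-in for Syncer/Submitter-style components.
type stopper interface {
	Start(ctx context.Context) error
	Stop() error
}

// assertStopsWithin fails the test if Stop does not return before the deadline.
func assertStopsWithin(t *testing.T, s stopper, deadline time.Duration) {
	t.Helper()
	done := make(chan error, 1)
	go func() { done <- s.Stop() }()
	select {
	case err := <-done:
		if err != nil {
			t.Fatalf("Stop returned error: %v", err)
		}
	case <-time.After(deadline):
		t.Fatalf("Stop did not return within %s", deadline)
	}
}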

Summary & Recommendations

Severity Breakdown

  • 🔴 Critical: 0 issues
  • 🟡 High: 3 issues (shutdown context in retries, in-flight handling, documentation)
  • 🟠 Medium: 2 issues (timeouts, batch cleanup)

Priority Actions

Before Merging (recommended):

  1. Add s.ctx.Done() check in retry loops (Syncer lines 768-772, Executor lines 665-670)
  2. Document or improve in-flight data retry behavior (clarify if Height==0 can persist across restarts)
  3. ⚠️ Consider increasing shutdown timeouts or making them configurable

Can Be Addressed Later:
4. Add comprehensive shutdown tests
5. Improve test coverage for new code paths
6. Add detailed comments explaining shutdown ordering in Syncer.Stop()


Overall Assessment

This PR makes valuable improvements to graceful shutdown behavior. The code quality is generally good, and the refactoring to assert.go is excellent. The main concerns are:

  1. Missing shutdown context checks in retry loops - This is an easy fix that will improve shutdown responsiveness
  2. Unclear behavior for in-flight data across restarts - Needs documentation or enhancement
  3. Low test coverage - Not a blocker but should be improved

With the recommended fixes for retry loop context handling and in-flight data documentation, this PR will significantly improve system stability during restarts.


@codecov

codecov bot commented Jan 16, 2026

Codecov Report

❌ Patch coverage is 46.72131% with 65 lines in your changes missing coverage. Please review.
✅ Project coverage is 59.05%. Comparing base (7d30f97) to head (a6f2162).
⚠️ Report is 1 commit behind head on main.

Files with missing lines | Patch % | Lines
block/internal/syncing/syncer.go | 29.50% | 38 Missing and 5 partials ⚠️
block/internal/syncing/assert.go | 57.89% | 4 Missing and 4 partials ⚠️
block/internal/executing/executor.go | 25.00% | 3 Missing and 3 partials ⚠️
block/internal/cache/pending_headers.go | 0.00% | 2 Missing and 2 partials ⚠️
block/internal/cache/pending_data.go | 50.00% | 1 Missing and 1 partial ⚠️
block/internal/submitting/submitter.go | 91.66% | 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2985      +/-   ##
==========================================
- Coverage   59.13%   59.05%   -0.08%     
==========================================
  Files         102      103       +1     
  Lines        9822     9902      +80     
==========================================
+ Hits         5808     5848      +40     
- Misses       3397     3427      +30     
- Partials      617      627      +10     
Flag: combined · Coverage: 59.05% <46.72%> (-0.08%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@alpe alpe marked this pull request as draft January 16, 2026 13:12
@alpe
Contributor Author

alpe commented Jan 16, 2026

@claude review again

@claude
Contributor

claude bot commented Jan 16, 2026

Claude finished @alpe's task in 1m 42s


Code Review Complete

I've completed my review of PR #2985. This PR improves graceful shutdown and restart behavior across several components. Overall, the changes are well-structured, but I've identified several issues that need attention.


Critical Issues

1. Race Condition in Syncer.Stop() - block/internal/syncing/syncer.go:230-268

Issue: The gemini-code-assist review flagged a potential race condition: the Stop() function drains heightInCh while processLoop() may still be consuming from it.

Current implementation (lines 230-268):

func (s *Syncer) Stop() error {
    s.cancel()
    s.cancelP2PWait(0)
    s.wg.Wait()  // ← processLoop stops here
    
    // But then we drain heightInCh...
    for {
        select {
        case event := <-s.heightInCh:
            s.processHeightEvent(drainCtx, &event)

Analysis: Between s.cancel() and s.wg.Wait(), the processLoop() goroutine (lines 350-365) is still running and may consume from heightInCh. However, once processLoop() exits after s.wg.Wait(), the channel is no longer being consumed by any other goroutine, so the drain logic in Stop() works correctly.

On closer inspection, the code is correct. The sequence is:

  1. s.cancel() signals shutdown
  2. s.wg.Wait() waits for processLoop() to exit
  3. Only after processLoop() exits do we drain remaining events

However, there is still a subtle gap: events that arrive after the drain loop exits (via the default case or the timeout) are never processed. This is unlikely but possible.

Recommendation: The current implementation is actually safe, but could be made more explicit. Consider adding a comment explaining the ordering guarantees.


High Priority Issues

2. Context Propagation Issue in Syncer.executeTxsWithRetry() - block/internal/syncing/syncer.go:754-780

Issue: The retry loop only checks the passed-in ctx.Done(), while shutdown cancellation is signalled on s.ctx. This creates ambiguity about which context governs the retry.

Lines 768-772:

select {
case <-time.After(common.MaxRetriesTimeout):
    continue
case <-ctx.Done():  // ← uses passed-in ctx
    return nil, fmt.Errorf("context cancelled during retry: %w", ctx.Err())

But line 756 uses the passed ctx for ExecuteTxs:

newAppHash, _, err := s.exec.ExecuteTxs(ctx, rawTxs, header.Height(), header.Time(), currentState.AppHash)

Problem: If s.ctx is cancelled during shutdown, but the passed ctx is still active, the retry loop will continue forever since it only checks the passed ctx.

Recommendation: Use a combined context or ensure shutdown cancellation propagates properly:

select {
case <-time.After(common.MaxRetriesTimeout):
    continue
case <-ctx.Done():
    return nil, fmt.Errorf("context cancelled during retry: %w", ctx.Err())
case <-s.ctx.Done():  // ← add this
    return nil, fmt.Errorf("syncer shutting down: %w", s.ctx.Err())
}

Same issue exists in block/internal/executing/executor.go:665-670.
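
For the "combined context" alternative, a small helper along these lines would work on Go 1.21+ (context.AfterFunc); mergeCancel is hypothetical and not something this PR adds:

package ctxutil

import "context"

// mergeCancel derives a context from primary that is additionally canceled as
// soon as secondary is done (e.g. the component's shutdown context).
func mergeCancel(primary, secondary context.Context) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(primary)
	stop := context.AfterFunc(secondary, cancel) // cancel ctx once secondary is done
	return ctx, func() {
		stop()
		cancel()
	}
}

The retry loop would then select on the merged context alone instead of listing both ctx.Done() and s.ctx.Done() cases.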


3. In-Flight Data Handling Could Lose Events - block/internal/cache/pending_data.go:35-45

Issue: The errInFlightData error is used to skip WIP data, but there's no mechanism to retry fetching this data later.

Lines 40-42:

// in the executor, WIP data is temporary stored. skip them until the process is completed
if data.Height() == 0 {
    return nil, errInFlightData

Problem: If the node restarts while data is "in-flight" (Height == 0), the data will be skipped permanently on subsequent fetches. The cache doesn't track these skipped entries for retry.

Recommendation: Either:

  • Document that Height==0 is only a transient state during block production and will be overwritten
  • Add retry logic or tracking for skipped in-flight entries
  • Ensure Height is always set before persisting to store (better approach)

Same issue in pending_headers.go:37-41.


4. Executor State Persistence Ordering - block/internal/executing/executor.go:222-235

Issue: State is persisted to store using batch operations in initializeState(), but the batch operations could fail partially.

Lines 222-235:

batch, err := e.store.NewBatch(e.ctx)
if err := batch.SetHeight(state.LastBlockHeight); err != nil {
    return fmt.Errorf("failed to set store height: %w", err)
}
if err := batch.UpdateState(state); err != nil {
    return fmt.Errorf("failed to update state: %w", err)
}
if err := batch.Commit(); err != nil {
    return fmt.Errorf("failed to commit batch: %w", err)
}

Observation: The error handling here will return errors before committing, which is good. However, there's no cleanup of the batch if SetHeight or UpdateState fails.

Recommendation: Consider using defer with batch cleanup or ensure the batch interface handles cleanup on error.


Medium Priority Issues

5. Submitter Shutdown Timeout May Be Too Short - block/internal/submitting/submitter.go:150-168

Lines 154-165:

done := make(chan struct{})
go func() {
    s.wg.Wait()
    close(done)
}()
select {
case <-done:
    // All goroutines finished cleanly
case <-time.After(5 * time.Second):
    s.logger.Warn().Msg("submitter shutdown timed out waiting for goroutines, proceeding anyway")
}

Issue: The 5-second timeout might be too short if:

  • DA submission is in progress (could take longer)
  • Network is slow
  • Large batch is being processed

Recommendation: Make the timeout configurable or increase it to 30 seconds. Also consider forcefully canceling in-progress submissions rather than just proceeding.
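
One hedged sketch of the "forcefully cancel" idea: wait for the grace period, then cancel the submission context and wait again. The wiring (waitOrCancel, the cancel func covering DA submissions) is illustrative only:

package shutdown

import (
	"context"
	"sync"
	"time"
)

// waitOrCancel waits up to grace for wg, then cancels in-flight work and waits
// for the goroutines to observe the cancellation instead of proceeding past them.
func waitOrCancel(wg *sync.WaitGroup, grace time.Duration, cancel context.CancelFunc) {
	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()
	select {
	case <-done:
		// all goroutines finished cleanly within the grace period
	case <-time.After(grace):
		cancel() // force in-flight submissions to stop
		<-done   // goroutines should now exit promptly
	}
}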


6. Syncer Drain Timeout Hardcoded - block/internal/syncing/syncer.go:240

Line 240:

drainCtx, drainCancel := context.WithTimeout(context.Background(), 5*time.Second)

Issue: Same as above - hardcoded 5-second timeout may not be sufficient for processing remaining events.

Recommendation: Extract to a constant or make configurable.


7. Missing Error Check in daSubmissionLoop - block/internal/submitting/submitter.go:256-262

Lines 256-262:

s.wg.Add(1)
if s.dataSubmissionMtx.TryLock() {
    go func() {
        defer func() {
            s.dataSubmissionMtx.Unlock()
            s.logger.Debug()...
            s.wg.Done()

Issue: s.wg.Add(1) is called BEFORE TryLock(), but if TryLock fails, the goroutine is never started, leaving the WaitGroup incremented.

Recommendation: Move s.wg.Add(1) inside the if s.dataSubmissionMtx.TryLock() block:

if s.dataSubmissionMtx.TryLock() {
    s.wg.Add(1)
    go func() {

Same issue exists for header submission at lines 196-202.
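
A fuller sketch of the corrected ordering; submitter here is a minimal stand-in modeling only the fields relevant to the WaitGroup/TryLock interaction, not the real Submitter:

package submitting

import (
	"context"
	"sync"
)

// submitter models just enough of the real type to show the ordering fix.
type submitter struct {
	wg                sync.WaitGroup
	dataSubmissionMtx sync.Mutex
	submitDataToDA    func(context.Context)
}

func (s *submitter) maybeSubmitData(ctx context.Context) {
	// Account for the goroutine only once we know it will actually start, so
	// wg.Wait can never block on a goroutine that was never launched.
	if s.dataSubmissionMtx.TryLock() {
		s.wg.Add(1)
		go func() {
			defer func() {
				s.dataSubmissionMtx.Unlock()
				s.wg.Done()
			}()
			s.submitDataToDA(ctx)
		}()
	}
}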


Positive Observations

✅ Well-Implemented Features

  1. New assert.go utility (block/internal/syncing/assert.go) - Clean refactoring that centralizes validation logic and reduces duplication.

  2. Batch operations for state persistence - Both Executor and Syncer now use batch operations for atomic state updates (lines 316-328 in syncer.go, lines 222-235 in executor.go).

  3. Context-aware retry logic - The retry mechanisms in both Executor and Syncer properly respect context cancellation (though with the issue noted above).

  4. In-flight data detection - The check for Height==0 to detect WIP data is a good pattern for avoiding corruption (though retry handling could be better).

  5. Drain logic for pending events - The concept of draining pending events during shutdown prevents data loss (though the race condition concern needs verification).


Summary

Severity Breakdown:

  • 🔴 Critical: 1 issue (race condition - actually appears safe but needs verification)
  • 🟡 High: 3 issues (context propagation, in-flight data retry, state persistence)
  • 🟠 Medium: 3 issues (shutdown timeouts, WaitGroup management)

Key Recommendations:

  1. Verify the Syncer.Stop() race condition is truly resolved (add tests or comments)
  2. Fix context propagation in retry loops (add s.ctx.Done() check)
  3. Address the WaitGroup imbalance in dataSubmissionLoop
  4. Consider making shutdown timeouts configurable
  5. Document or improve in-flight data retry behavior

The PR achieves its goals of improving graceful shutdown, but these issues should be addressed before merging to ensure robust operation during restarts.


@alpe alpe marked this pull request as ready for review January 16, 2026 13:54
Member

@julienrbrt julienrbrt left a comment


nice cleanups!

@alpe alpe added this pull request to the merge queue Jan 16, 2026
Merged via the queue into main with commit 43c729b Jan 16, 2026
31 checks passed
@alpe alpe deleted the alex/graceful_shtdwn branch January 16, 2026 14:26
alpe added a commit that referenced this pull request Jan 16, 2026
* main:
  fix: inconsistent state detection and rollback (#2983)
  chore: improve graceful shutdown restarts (#2985)
  feat(submitting): add posting strategies (#2973)