
Conversation

@codchen (Collaborator) commented Dec 18, 2025

Describe your changes and provide context

Create skeleton code and concurrency control for the following model (a rough sketch of the wiring appears after this list):

- 1st train: decode (unnecessary if the block fed by consensus is already decoded) and stateless checks. This is not exactly a "train", because processing of the next block can start even before the current block finishes; however, it still needs to send finished blocks to the next train in order.
- 2nd train: I'll just call this ExecuteBlock. It is entirely stateful (anything stateless should happen in the 1st train), so blocks must be executed one after another (unless we can establish a dependency graph among blocks).
  1. Wrap each transaction in the block into a task. A task includes:
     a. PreprocessTx: fee charging, nonce bumping, value transfer, etc.
     b. VMCall/VMCreate: [executes](https://github.com/sei-protocol/sei-chain/pull/2610/files#diff-a3fcc02b6a90cd9324dc5c0aec2f6994195165d9674c7a09e1955499e830837fR5-R6) the contract. Note that this is just the entrypoint into the VM; when the VM makes its internal calls, it uses whatever functions it has internally and never calls back into these interfaces.
     c. PostprocessTx: gas refund, plus sending receipt info to the next train.
  2. Feed the tasks into a [scheduler](https://github.com/sei-protocol/sei-chain/pull/2610/files#diff-5ffdd3eea7ab02b10dc36af3d59ae6d8ee3cf03aeae71d8bad25baf14bf3123bR202-R206).
  3. Wait for the tasks to finish and commit the changeset.
- 3rd train: writes receipts to disk. Also not exactly a "train", and can be parallelized as much as resources allow.
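For illustration only, here is a minimal Go sketch of how the three trains could be wired together with channels. Everything except the PreprocessTx/VMCall/PostprocessTx step names is hypothetical and not taken from this PR; the real interfaces live under giga/executor/.

```go
package main

import "fmt"

// Block and Receipt are hypothetical stand-ins for the real types.
type Block struct {
	ID  int
	Txs []string
}

type Receipt struct {
	BlockID, TxIdx int
}

// 1st train: decode (if needed) and run stateless checks; forward blocks in order.
func statelessTrain(in <-chan Block, out chan<- Block) {
	for b := range in {
		// decoding and stateless checks would happen here
		out <- b
	}
	close(out)
}

// 2nd train: the stateful ExecuteBlock stage, one block at a time.
func executeTrain(in <-chan Block, receipts chan<- Receipt) {
	for b := range in {
		for i := range b.Txs {
			// PreprocessTx: fee charging, nonce bumping, value transfer
			// VMCall/VMCreate: entrypoint into the VM
			// PostprocessTx: gas refund, emit receipt downstream
			receipts <- Receipt{BlockID: b.ID, TxIdx: i}
		}
		// commit this block's changeset before starting the next block
	}
	close(receipts)
}

// 3rd train: persists receipts; can be parallelized as resources allow.
func receiptTrain(receipts <-chan Receipt, done chan<- struct{}) {
	for r := range receipts {
		fmt.Printf("persist receipt block=%d tx=%d\n", r.BlockID, r.TxIdx)
	}
	close(done)
}

func main() {
	blocks := make(chan Block)
	checked := make(chan Block)
	receipts := make(chan Receipt)
	done := make(chan struct{})

	go statelessTrain(blocks, checked)
	go executeTrain(checked, receipts)
	go receiptTrain(receipts, done)

	blocks <- Block{ID: 1, Txs: []string{"tx0", "tx1"}}
	close(blocks)
	<-done
}
```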

Testing performed to validate your change

unit test for concurrency control

github-actions bot commented Dec 18, 2025

The latest Buf updates on your PR. Results from workflow Buf / buf (pull_request).

| Build | Format | Lint | Breaking | Updated (UTC) |
|---|---|---|---|---|
| ✅ passed | ✅ passed | ✅ passed | ✅ passed | Dec 23, 2025, 3:43 AM |

codecov bot commented Dec 18, 2025

Codecov Report

❌ Patch coverage is 30.76923% with 45 lines in your changes missing coverage. Please review.
✅ Project coverage is 43.72%. Comparing base (cfa7ea6) to head (8b2aa3f).

| Files with missing lines | Patch % | Lines |
|---|---|---|
| giga/executor/executor.go | 0.00% | 17 Missing ⚠️ |
| giga/executor/tracks/sinks.go | 0.00% | 14 Missing ⚠️ |
| giga/executor/tracks/execution.go | 0.00% | 11 Missing ⚠️ |
| giga/executor/tracks/stateless.go | 86.95% | 2 Missing and 1 partial ⚠️ |
Additional details and impacted files

[Impacted file tree graph]

```diff
@@           Coverage Diff           @@
##             main    #2631   +/-   ##
=======================================
  Coverage   43.71%   43.72%
=======================================
  Files        1902     1906    +4
  Lines      158677   158742   +65
=======================================
+ Hits        69373    69407   +34
- Misses      82914    82943   +29
- Partials     6390     6392    +2
```
| Flag | Coverage Δ |
|---|---|
| sei-chain | 45.68% <30.76%> (-0.01%) ⬇️ |
| sei-cosmos | 38.20% <ø> (-0.01%) ⬇️ |
| sei-db | 69.06% <ø> (ø) |
| sei-tendermint | 47.33% <ø> (+0.02%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|---|---|
| giga/executor/tracks/stateless.go | 86.95% <86.95%> (ø) |
| giga/executor/tracks/execution.go | 0.00% <0.00%> (ø) |
| giga/executor/tracks/sinks.go | 0.00% <0.00%> (ø) |
| giga/executor/executor.go | 0.00% <0.00%> (ø) |

... and 19 files with indirect coverage changes


@masih (Collaborator) commented Dec 18, 2025

I can't find any context.Context usage in this highly concurrent pipeline. That usually means footguns in highly concurrent Go implementations.

The questions I would ask are:

  • how do started goroutines stop? the answer cannot be "when the process is killed".
  • how does an ongoing unit of work stop if the upstream no longer wants it done?
  • what chains together a graceful shutdown of the application?
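
For reference, a minimal sketch (not from this PR) of the usual answer to these questions: thread a context.Context through each train's receive loop, so every goroutine has a defined exit path on both upstream channel close and cancellation.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// run drains its input until the channel closes (normal shutdown) or
// ctx is cancelled (the upstream no longer wants the work done).
func run(ctx context.Context, in <-chan int) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("stopping:", ctx.Err())
			return
		case v, ok := <-in:
			if !ok {
				return // upstream closed the channel: graceful completion
			}
			fmt.Println("processed", v)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	in := make(chan int)
	go run(ctx, in)

	in <- 1
	cancel() // cancellation chains through the context tree for a graceful shutdown
	time.Sleep(10 * time.Millisecond) // give the goroutine time to observe ctx.Done()
}
```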

@codchen codchen force-pushed the tony/executor-tracks branch from 5dc7490 to 8b2aa3f on December 23, 2025, 03:43
Comment on lines +22 to +28
```go
go func() {
	for block := range t.blocks {
		rcs := t.schedulerFn(block, t.receipts)
		t.commitFn(rcs)
		t.changeSets <- rcs
	}
}()
```

Code scanning / CodeQL note: Spawning a Go routine may be a possible source of non-determinism.
Comment on lines +18 to +22
```go
go func() {
	for receipt := range t.receipts {
		t.sinkFn(receipt)
	}
}()
```

Code scanning / CodeQL note: Spawning a Go routine may be a possible source of non-determinism.
Comment on lines +40 to +44
```go
go func() {
	for changeSet := range t.changeSets {
		t.sinkFn(changeSet)
	}
}()
```

Code scanning / CodeQL note: Spawning a Go routine may be a possible source of non-determinism.
Comment on lines +37 to +53
```go
go func() {
	for input := range t.inputs {
		completionSignal := make(chan struct{}, 1)
		completionSignals.Store(input.GetID(), completionSignal)
		output := t.processFn(input)
		prevCompletionSignal, ok := completionSignals.Load(input.GetID() - 1)
		// in practice it's almost impossible for ok == false, but we'll handle it anyway
		for !ok {
			time.Sleep(10 * time.Millisecond)
			prevCompletionSignal, ok = completionSignals.Load(input.GetID() - 1)
		}
		<-prevCompletionSignal.(chan struct{})
		t.outputs <- output
		completionSignal <- struct{}{}
		completionSignals.Delete(input.GetID() - 1)
	}
}()
```

Code scanning / CodeQL note: Spawning a Go routine may be a possible source of non-determinism.
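
As a point of comparison (not what this PR does), the ordered hand-off above can also be expressed without the sleep-and-retry loop: a single reordering goroutine buffers results that arrive early and releases them strictly in ID order. All names here (Result, reorder) are hypothetical.

```go
package main

import "fmt"

type Result struct {
	ID  int
	Val string
}

// reorder emits results in ascending ID order starting from next,
// buffering any that arrive early. This preserves deterministic
// output order without polling or per-item signal channels.
func reorder(in <-chan Result, out chan<- Result, next int) {
	pending := make(map[int]Result)
	for r := range in {
		pending[r.ID] = r
		for {
			ready, ok := pending[next]
			if !ok {
				break
			}
			delete(pending, next)
			out <- ready
			next++
		}
	}
	close(out)
}

func main() {
	in := make(chan Result)
	out := make(chan Result)
	go reorder(in, out, 0)

	// results may arrive out of order...
	go func() {
		in <- Result{ID: 1, Val: "b"}
		in <- Result{ID: 0, Val: "a"}
		in <- Result{ID: 2, Val: "c"}
		close(in)
	}()

	// ...but are emitted in ID order: 0, 1, 2
	for r := range out {
		fmt.Println(r.ID, r.Val)
	}
}
```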