| "run-timeout-minutes": "15", | ||
| "touch-web-assets": "true", | ||
| "restore-build-cache-only": "true", | ||
| "increase-timeouts-on-schedule": "true", |
Why is there an additional qualifier for timeout increases?
That's how it worked before: we increase the default timeouts for fuzz/race tests and pass them as env vars to the bash scripts that run them.
There was not an extra qualifier before. We simply checked if we had a scheduled run, then increased timeouts.
| "touch-web-assets": "true", | ||
| "restore-build-cache-only": "true", | ||
| "increase-timeouts-on-schedule": "true", | ||
| "print-races": "true", |
Why do we need an extra variable to ensure that races are printed?
that's how it already worked: https://github.com/smartcontractkit/chainlink/blob/develop/.github/workflows/ci-core.yml#L419C1-L435C47
But the code you linked does not include an extra parameter to indicate whether races should be printed. It checks whether we are running races via `matrix.type.cmd == 'go_core_race_tests'`, and if so it prints them. If we need a different means of checking whether we are in race mode now, then so be it, but I don't think we should add extra params for each action that is implied.
| "print-races": "true", | |
| "race": "true", |
| "logs-artifact-name": f"go_core_tests_integration_logs_shard_{i}", | ||
| }) | ||
|
|
||
| def add_core_deployment_rows(): |
Why don't we need touch-web-assets for this one? Isn't that necessary to compile the core binary?
Because these tests run in the deployments folder, which is a separate Go module that doesn't compile the core/web package.
What is the rationale for including tests from that module here? This is otherwise a consistent set of variations on the core tests.
@jmank88 Roughly speaking it is the same, around 17 minutes, because the 3 packages mentioned in the PR result in a long tail. If we do go the sharding way, then I will go after the teams that own them and request that they speed them up so that we can reap the benefits of sharding. Until then the only benefit will be less flakiness.
Huh, I only expected a speedup. Why would there be less flakiness? I don't think it is true that:
Too many packages is not inherently a problem because concurrency is already limited. There has to be more to it than that. Is database contention the problem?
It's hard to get a speedup if we shard at the package level while we have packages that run for 11 minutes. If those are sped up, then we will see a big speedup.
You mean limited by
I think it is one of the problems. Since every job will now start its own DB, issues related to DB contention should also have lower impact.
Indirectly, yes. Both package and test level concurrency.
IMHO it is important to be certain about this. If avoiding DB contention is the main difference, then there are simpler approaches that we can try, and some of those will benefit local development as well rather than addressing only CI.
| "pkg/c", | ||
| "pkg/d", | ||
| "pkg/e", | ||
| "pkg/f", |
not seeing a duplicate case
## Approach
This PR shards `go_core_tests`, `go_core_tests_integration`, and `go_core_deployment_tests` in CI to reduce flakiness caused by running too many test packages in parallel in a single job.
The sharding logic is intentionally simple and deterministic. Each existing test runner first computes the exact package list it already intends to run today, then passes that package list to a small Go helper. The helper assigns each package to a shard by hashing the package path with FNV-1a and taking hash % shard_count. This gives a stable, zero-maintenance partitioning scheme: no manual package lists, no balancing tables, and new test packages are picked up automatically.
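A minimal sketch of that assignment scheme (the function and package paths here are hypothetical, not the actual helper in this PR):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardOf hashes a package import path with FNV-1a and maps it to a
// shard index in [0, shardCount). The result depends only on the path,
// so the partitioning is stable across runs and machines with no
// maintained package lists.
func shardOf(pkgPath string, shardCount uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(pkgPath)) // Write on a hash.Hash never returns an error
	return h.Sum32() % shardCount
}

func main() {
	// Example paths only; each runner would filter its real package
	// list down to the paths where shardOf(path, n) == its shard index.
	pkgs := []string{"core/services/job", "core/web", "core/chains/evm"}
	const shards = 4
	for _, p := range pkgs {
		fmt.Printf("%-20s -> shard %d\n", p, shardOf(p, shards))
	}
}
```

Since the hash is a pure function of the path, every package lands in exactly one shard and new packages are partitioned automatically.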
## Testing
Correctness is guarded in two ways. First, a consistency check verifies that for a given package universe, the union of all shards covers every package exactly once. Second, I compared the executed test sets between unsharded and sharded runs by extracting package + test name from `gotestsum --jsonfile` output and diffing the sorted results. For `go_core_tests`, `go_core_tests_integration`, and `go_core_deployment_tests`, the executed test sets matched exactly between the non-sharded and sharded runs.
Results:
- `go_core_tests` ✅
- `go_core_deployment_tests` ✅
- `go_core_integration_tests` ✅

## Rejected approaches
### Sharding based on regex/test names
We considered naive sharding by test-name regex, but rejected it because Go executes tests at package granularity and the suite is not evenly distributed by test-name prefix. In practice, leading-letter buckets were highly imbalanced, and the dominant runtime/flakiness hotspots are concentrated in a small number of heavy packages. Package-path hashing gives a simpler, future-proof, zero-maintenance partitioning mechanism that aligns with go test execution semantics.
## Follow-up work
### Speeding up outlier packages
In order to fully unlock the workflow speedup, some of the deployment tests will need to be made faster by the respective teams. Once that is done, a plausible p50 of `CI Core` might land in the zone of 10-11 minutes. The current execution swim lane is shown below:

Slowest packages that need speeding up:
### Sharding fuzz tests
As can be seen in the graph above, the next far outlier is the fuzz tests. Once we shard them (2 shards should be enough), we should be able to cut workflow execution time down even further.