Labels: enhancement (New feature or request), infrastructure (CI, benchmarks, and tooling)
Description
Problem
The benchmarks/ directory exists with ASV configuration and benchmark files for 13 modules, but:
- Benchmarks are never run in CI — there is no performance regression detection.
- Benchmark results may be stale.
- No coverage for recently added features (dask viewshed, cost_distance dask, emerging_hotspots).
- The CI matrix (.github/workflows/test.yml) runs pytest on 3 OS × 3 Python versions but has no performance tracking.
For a library whose primary value proposition is performance (numba JIT, GPU, dask), not detecting regressions is a critical gap.
Proposed Fix
- Add an ASV-based benchmark CI job on a fixed Ubuntu runner (consistent hardware for comparison).
- Create/update benchmarks for the top 5 performance-sensitive functions: slope, proximity, zonal.stats, cost_distance, focal_stats.
- Each benchmark should test at least numpy and dask backends at representative sizes (e.g., 2000×2000 for numpy, 10000×10000 for dask).
- Store benchmark results as CI artifacts and compare against the base branch.
- Add a performance label to PRs that touch numba/CUDA kernels for automatic benchmark runs.
- Fail CI on >20% regression (configurable threshold).
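For the fixed-runner CI job, a sketch along these lines could work; the workflow filename, trigger, and asv invocation are illustrative assumptions, not existing project configuration:

```yaml
# Hypothetical .github/workflows/benchmarks.yml
name: benchmarks
on: [pull_request]

jobs:
  asv:
    # Pin a specific image rather than ubuntu-latest so timings stay comparable
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # asv needs history to compare against the base
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install asv
      - run: asv machine --yes  # record machine info non-interactively
      # Benchmark the merge base and the PR head; asv continuous exits
      # non-zero (failing the job) when a benchmark slows past the factor
      - run: asv continuous --factor 1.2 origin/main HEAD
```

Pinning the runner image only reduces, not eliminates, hosted-runner noise, which is why a relative comparison against the base branch on the same machine is preferable to comparing against stored absolute timings.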
Impact
A library that bills itself as "fast" and "scalable" needs to prove it on every commit. Without regression detection, a well-intentioned refactoring could introduce a 10× slowdown in a numba kernel and nobody would notice until users complain.