Closed
Labels
backend-coverage (Adding missing dask/cupy/dask+cupy backend support); enhancement (New feature or request); gpu (CuPy / CUDA GPU support); high-priority; proximity tools (Proximity, allocation, direction, cost distance)
Description
Problem
proximity(), allocation(), and direction() in xrspatial/proximity.py only support numpy and dask+numpy. The numpy path uses a GDAL-ported line-sweep algorithm; the dask path uses scipy.spatial.cKDTree. Neither has GPU support.
These are among the most computationally expensive raster operations. For large rasters (e.g., 30,000 × 30,000 Landsat scenes), proximity computation is a major bottleneck that would benefit enormously from GPU parallelism.
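For reference, the semantics of the three tools can be written as a brute-force NumPy sketch. The `brute_force` helper and its angle convention are illustrative only, not the xrspatial API:

```python
import numpy as np

def brute_force(targets_arr):
    """Reference semantics, brute force: proximity = distance to the
    nearest nonzero cell, allocation = the value of that cell,
    direction = an angle toward it (one possible convention)."""
    h, w = targets_arr.shape
    ys, xs = np.nonzero(targets_arr)          # target cell coordinates
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance from every cell to every target: shape (h, w, n_targets).
    d = np.sqrt((yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2)
    nearest = d.argmin(axis=-1)
    proximity = d.min(axis=-1)
    allocation = targets_arr[ys[nearest], xs[nearest]]
    direction = np.degrees(np.arctan2(yy - ys[nearest], xx - xs[nearest]))
    return proximity, allocation, direction
```

This is O(cells × targets), so it only serves as a correctness oracle; the GDAL-ported line sweep and the proposed GPU paths exist precisely to avoid this cost.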
Proposed Approach
CuPy backend (bounded max_distance):
- A CUDA kernel (`@cuda.jit`) where each thread computes the nearest target within a local window of radius `max_distance / cellsize`. This is embarrassingly parallel and GPU-friendly.
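A CPU sketch of the per-thread body such a kernel might run. The `windowed_nearest` helper is hypothetical; on the GPU, each thread would execute this loop for its own `(y, x)` cell:

```python
import math
import numpy as np

def windowed_nearest(targets, y, x, radius):
    """Per-cell body of the proposed kernel: scan a (2*radius+1)^2
    window around (y, x) for the nearest target (nonzero cell)."""
    h, w = targets.shape
    best = np.inf
    for j in range(max(0, y - radius), min(h, y + radius + 1)):
        for i in range(max(0, x - radius), min(w, x + radius + 1)):
            if targets[j, i]:
                d = math.hypot(j - y, i - x)
                if d < best:
                    best = d
    return best  # np.inf when no target lies within the window
```

The bounded window is what makes the problem embarrassingly parallel: each thread touches only O(radius²) cells and writes one output value, with no inter-thread communication.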
CuPy backend (unbounded distance):
- Implement a parallel Euclidean Distance Transform (EDT) on GPU, which is O(n) and highly parallelisable (row-column decomposition).
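A minimal NumPy sketch of the row-column decomposition, assuming an illustrative `edt_two_pass` helper. Phase 2 is brute force here for clarity; the GPU version would use the O(n) lower-envelope scan per row:

```python
import numpy as np

def edt_two_pass(targets):
    """Two-phase EDT: (1) per-column vertical distances via two sweeps,
    (2) per-row combination of vertical distances with horizontal offsets."""
    h, w = targets.shape
    INF = h + w  # larger than any possible vertical distance
    # Phase 1: distance to the nearest target within each column.
    g = np.full((h, w), INF, dtype=np.float64)
    g[targets] = 0
    for y in range(1, h):                  # top-down sweep
        g[y] = np.minimum(g[y], g[y - 1] + 1)
    for y in range(h - 2, -1, -1):         # bottom-up sweep
        g[y] = np.minimum(g[y], g[y + 1] + 1)
    # Phase 2: dt[y, x] = min over x' of (x - x')^2 + g[y, x']^2.
    xs = np.arange(w)
    dx2 = (xs[:, None] - xs[None, :]) ** 2
    dist2 = (dx2[None, :, :] + (g ** 2)[:, None, :]).min(axis=2)
    return np.sqrt(dist2)
```

Both phases are independent per column and per row respectively, which is why this decomposition maps well onto GPU thread blocks.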
dask+cupy backend:
- Use `map_overlap` with the CuPy kernel, with `depth` derived from `max_distance / cellsize`.
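A rough sketch of the halo mechanics, using a brute-force CPU stand-in for the CuPy kernel. The `block_proximity` helper is hypothetical, and a real implementation would also clip results beyond `max_distance`:

```python
import math
import numpy as np
import dask.array as da

def block_proximity(block):
    """Stand-in for the per-block GPU kernel: brute-force distance to the
    nearest nonzero cell, computed entirely within the padded block."""
    h, w = block.shape
    ys, xs = np.nonzero(block)
    if len(ys) == 0:
        return np.full(block.shape, np.inf)
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = (yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2
    return np.sqrt(d2.min(axis=-1))

max_distance, cellsize = 3.0, 1.0
depth = math.ceil(max_distance / cellsize)  # halo width in cells
arr = da.from_array(np.eye(8), chunks=(4, 4))
# depth-cell halo guarantees every target within max_distance is visible
# to the block that needs it; boundary=0 pads the outer edge with
# non-target cells.
result = da.map_overlap(block_proximity, arr, depth=depth,
                        boundary=0, dtype=float).compute()
```

Because the search radius is bounded, a halo of `depth` cells is sufficient for exactness; the unbounded case cannot be tiled this way, which is why it needs the EDT approach instead.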
Current README Status
| Function | NumPy | Dask | CuPy | Dask GPU |
|---|---|---|---|---|
| Proximity | ✅ | ✅ | ❌ | ❌ |
| Allocation | ✅ | ✅ | ❌ | ❌ |
| Direction | ✅ | ✅ | ❌ | ❌ |