
Add GPU (CuPy) backends for proximity, allocation, and direction #901

@brendancol

Description


Problem

proximity(), allocation(), and direction() in xrspatial/proximity.py currently support only the numpy and dask+numpy backends. The numpy path uses a line-sweep algorithm ported from GDAL; the dask path uses scipy.spatial.cKDTree. Neither has GPU support.
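For context, the semantics all three functions share (each cell gets the distance to the nearest non-zero target cell) can be stated as a brute-force numpy reference. This is a hypothetical `proximity_brute` helper for illustration, not the library's line-sweep implementation:

```python
import numpy as np

def proximity_brute(img, cellsize=1.0):
    """Brute-force reference for proximity() semantics: distance from
    every cell to the nearest non-zero (target) cell, scaled by cell size.
    O(cells * targets) -- only for small test rasters."""
    ys, xs = np.nonzero(img)
    out = np.empty(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.hypot(ys - y, xs - x).min() * cellsize
    return out

img = np.zeros((4, 4))
img[1, 1] = 1  # single target cell
print(proximity_brute(img)[0, 0])  # distance from (0, 0) to (1, 1): sqrt(2)
```

allocation() and direction() follow the same nearest-target search but return the target's value and the angle to it, respectively.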

These are among the most computationally expensive raster operations. For large rasters (e.g., 30,000 × 30,000 Landsat scenes), proximity computation is a major bottleneck that would benefit enormously from GPU parallelism.

Proposed Approach

CuPy backend (bounded max_distance):

  • CUDA kernel (@cuda.jit) where each thread computes the nearest target within a local window of radius max_distance / cellsize. This is embarrassingly parallel and GPU-friendly.
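The per-thread logic can be sketched on the CPU as follows (hypothetical `window_proximity`; in the real kernel the two outer loops become the `@cuda.jit` thread grid, one thread per output cell):

```python
import math
import numpy as np

def window_proximity(img, max_distance, cellsize=1.0):
    """CPU sketch of the proposed per-thread CUDA logic: each output cell
    (y, x) scans only a local window of radius max_distance / cellsize
    for the nearest non-zero target cell. Cells with no target in range
    stay NaN."""
    h, w = img.shape
    r = int(math.ceil(max_distance / cellsize))
    out = np.full(img.shape, np.nan)
    for y in range(h):          # on the GPU: one thread per (y, x)
        for x in range(w):
            best = math.inf
            for wy in range(max(0, y - r), min(h, y + r + 1)):
                for wx in range(max(0, x - r), min(w, x + r + 1)):
                    if img[wy, wx] != 0:
                        best = min(best, math.hypot(wy - y, wx - x) * cellsize)
            if best <= max_distance:
                out[y, x] = best
    return out
```

Because no thread reads another thread's output, the kernel needs no synchronization, which is what makes the bounded case embarrassingly parallel.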

CuPy backend (unbounded distance):

  • Implement a parallel Euclidean Distance Transform (EDT) on GPU, which is O(n) and highly parallelisable (row-column decomposition).
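One standard choice for this is the Felzenszwalb-Huttenlocher transform: a 1D squared-distance pass over every column, then every row, each line independent of the others, which is exactly the row-column decomposition that maps to one GPU thread (or block) per line. A numpy sketch with hypothetical `edt_1d` / `edt_2d` helpers:

```python
import math
import numpy as np

INF = 1e20  # stands in for "no target on this line yet"

def edt_1d(f):
    """Felzenszwalb-Huttenlocher 1D squared distance transform (O(n)).
    Computes, for each index q, min over p of (q - p)^2 + f[p] by
    maintaining the lower envelope of parabolas."""
    n = len(f)
    d = np.empty(n)
    v = np.zeros(n, dtype=int)        # parabola locations in the envelope
    z = np.full(n + 1, INF)           # envelope breakpoints
    z[0] = -INF
    k = 0
    for q in range(1, n):
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k] = s
        z[k + 1] = INF
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

def edt_2d(img, cellsize=1.0):
    """Exact unbounded EDT via separable 1D passes: columns, then rows."""
    f = np.where(img != 0, 0.0, INF)
    g = np.apply_along_axis(edt_1d, 0, f)    # column pass (parallel per column)
    d2 = np.apply_along_axis(edt_1d, 1, g)   # row pass (parallel per row)
    return np.sqrt(d2) * cellsize
```

On the GPU, each `edt_1d` call becomes independent per-line work, with a transpose (or strided access) between the two passes.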

dask+cupy backend:

  • Use map_overlap with the CuPy kernel, with depth derived from max_distance / cellsize.
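A minimal sketch of the overlap wiring, using plain numpy chunks as a stand-in for CuPy and a hypothetical `chunk_proximity` per-block function (the real version would launch the CUDA kernel on each CuPy chunk):

```python
import math
import numpy as np
import dask.array as da

def chunk_proximity(block, max_distance=3.0, cellsize=1.0):
    """Per-chunk bounded proximity (numpy stand-in for the CuPy kernel)."""
    ys, xs = np.nonzero(block)
    out = np.full(block.shape, np.nan)
    for y in range(block.shape[0]):
        for x in range(block.shape[1]):
            if ys.size:
                d = np.hypot(ys - y, xs - x).min() * cellsize
                if d <= max_distance:
                    out[y, x] = d
    return out

max_distance, cellsize = 3.0, 1.0
depth = int(math.ceil(max_distance / cellsize))  # halo width in cells

img = np.zeros((8, 8))
img[3, 3] = 1
arr = da.from_array(img, chunks=(4, 4))

# Each chunk sees a halo of `depth` cells, so any target within
# max_distance of a chunk edge is still visible to that chunk.
result = arr.map_overlap(chunk_proximity, depth=depth, boundary=0).compute()
```

Deriving `depth` from `max_distance / cellsize` keeps the halo exactly as wide as the search radius, so the result matches the single-array computation without exchanging whole neighboring chunks.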

Current README Status

| Function   | NumPy | Dask | CuPy | Dask GPU |
|------------|:-----:|:----:|:----:|:--------:|
| Proximity  | ✅    | ✅   | ❌   | ❌       |
| Allocation | ✅    | ✅   | ❌   | ❌       |
| Direction  | ✅    | ✅   | ❌   | ❌       |

Metadata


    Labels

    backend-coverage: Adding missing dask/cupy/dask+cupy backend support
    enhancement: New feature or request
    gpu: CuPy / CUDA GPU support
    high-priority
    proximity tools: Proximity, allocation, direction, cost distance
