Further OSP modules
Besides BSP scheduling, OSP offers several tools and algorithms for further related problems.
Partitioning is essentially a simpler version of scheduling in which the vertices are only assigned to processors, but not to (super)steps. It is a relevant problem in a variety of applications where the time dimension of our model can be omitted.
Note that a partitioning can always be obtained from a BSP schedule by stripping away the time dimension. The BSP schedulers in include/osp/bsp/scheduler/LoadBalanceScheduler are designed specifically with this aim, i.e., to ensure that the derived partitioning has desirable properties.
In include/osp/partitioning, OSP also provides a module for the classical hypergraph partitioning problem, where the partitioning is subject to a balance constraint, and its objective is to minimize the cut cost of the partitioning [1, 2].
In include/osp/pebbling, the toolbox also offers algorithms for a more complex variant of the BSP scheduling problem where the vertical I/O costs (incurred due to limited fast memory) are also captured accurately. This is called the pebbling problem in OSP, and the MBSP model in our paper [3]. The model can be obtained either as an extension of Multiprocessor Red-Blue pebbling with weights, or as an extension of BSP with vertical I/O costs in a two-level memory hierarchy.
This model results in a considerably more complex scheduling problem. OSP provides two ways to devise pebbling schedules. On the one hand, there are two ILP formulations that address the problem holistically: a full ILP for smaller DAGs, and a divide-and-conquer approach that remains viable for slightly larger DAGs. On the other hand, a pebbling schedule can be obtained via a two-stage approach: one specifies a starting BSP schedule, a fast-memory limit, and a memory-management (e.g., cache eviction) strategy, and OSP then computes the pebbling schedule one would obtain by running this schedule with the given memory-management strategy.
Sparse triangular system solving (SpTRSV) is a prominent application of DAG scheduling, and one of the first applications of OSP [4]. Hence, OSP also includes specific SpTRSV functionality: an efficient kernel implementation of the problem (in include/osp/auxiliary/sptrsv_simulator), a reordering tool to improve locality, and a test suite for evaluating schedulers.
There are also two separate tools to visualize BSP schedules for smaller graphs.
The Sankey tool is located in tools/sankey_visualization, and also has a separate readme file in third/SankeyPlots/README.md.
The GraphViz tool is in tools/graphviz_visualization. It takes a BspSchedule in .dot format, and is invoked with the Python script plot_graphviz.py.
[1] Papp, P. A., Anegg, G., & Yzelman, A-J. N. (2023). Partitioning hypergraphs is hard: Models, inapproximability, and applications. In Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures (pp. 415-425). (full preprint)
[2] Çatalyürek, Ü. V., & Aykanat, C. (2001). A fine-grain hypergraph model for 2D decomposition of sparse matrices. In 15th IEEE International Parallel and Distributed Processing Symposium (pp. 1199-1204).
[3] Papp, P. A., Böhnlein, T., & Yzelman, A-J. N. (2025). Multiprocessor Scheduling with Memory Constraints: Fundamental Properties and Finding Optimal Solutions. In Proceedings of the 54th International Conference on Parallel Processing (pp. 238-247). (full preprint)
[4] Böhnlein, T., Papp, P. A., Steiner, R. S., & Yzelman, A-J. N. (2025). Efficient Parallel Scheduling for Sparse Triangular Solvers. In 2025 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW) (pp. 1263-1265). (full preprint)