Utilities

Dom Heinzeller edited this page Dec 26, 2025 · 3 revisions

Spack-stack provides several utilities that complement the Spack extension described elsewhere in this documentation. They streamline development workflows for spack-stack and catch common issues before they become hard-to-identify problems.

Duplicate checker

The utility located in util/show_duplicate_packages.py parses the spack.lock file of a concretized environment and detects duplicate packages. Usage is as follows:

# In an active environment ($SPACK_ENV set), after concretization:
${SPACK_STACK_DIR}/util/show_duplicate_packages.py

If any duplicates are identified, the script exits with return code 1. The -i option can be given multiple times to exclude specific package names from the check. See the GitHub Actions workflows for examples.
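The core of the duplicate check can be sketched as follows. This is a minimal illustration, not the actual script: the spack.lock structure and the ignore behavior are simplified assumptions.

```python
import json

def find_duplicates(lock_data, ignore=()):
    """Group concrete specs by package name; more than one hash
    per name (unless the name is ignored) counts as a duplicate."""
    by_name = {}
    for hash_, spec in lock_data["concrete_specs"].items():
        by_name.setdefault(spec["name"], set()).add(hash_)
    return {name: hashes for name, hashes in by_name.items()
            if len(hashes) > 1 and name not in ignore}

# Toy spack.lock-like data: two different hashes for "zlib"
lock = json.loads("""{"concrete_specs": {
    "aaa111": {"name": "zlib"},
    "bbb222": {"name": "zlib"},
    "ccc333": {"name": "cmake"}}}""")

print(sorted(find_duplicates(lock)))                   # ['zlib']
print(sorted(find_duplicates(lock, ignore={"zlib"})))  # []
```

A real spack.lock contains full specs per hash, but the grouping-by-name logic is the essence of the check.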

Permissions checker

The utility located in util/check_permissions.sh can be run inside any spack-stack environment directory intended for multiple users (i.e., on an HPC or cloud platform). It returns errors if the environment directory is inaccessible to non-owning users and groups (i.e., if o+rx is not set), or if any directories or files within it have permissions that make them inaccessible to other users.
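The underlying check can be sketched as follows. This is a simplified stand-in for the shell script, assuming the rule that directories need o+rx and files need o+r:

```python
import os, stat, tempfile

def world_inaccessible(path):
    """Return paths under `path` that other users cannot reach:
    directories need o+rx, files need o+r (a simplified rule)."""
    bad = []
    for root, dirs, files in os.walk(path):
        for entry in [root] + [os.path.join(root, f) for f in files]:
            mode = os.stat(entry).st_mode
            if stat.S_ISDIR(mode):
                needed = stat.S_IROTH | stat.S_IXOTH
            else:
                needed = stat.S_IROTH
            if mode & needed != needed:
                bad.append(entry)
    return bad

# Demo on a throwaway tree: one world-readable dir, one private dir
top = tempfile.mkdtemp()
os.chmod(top, 0o755)
private = os.path.join(top, "private")
os.mkdir(private, 0o700)          # no o+rx bits set
print(world_inaccessible(top))    # flags the private directory
```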

LDD checker (Linux only)

The util/ldd_check.py utility should be run for new installations to ensure that no shared library or executable that uses shared libraries is missing a shared library dependency. The script searches the install/ subdirectory of a given base path and runs ldd on all shared objects. The base path to be searched can be specified as a lone positional argument and defaults to the current directory; in practice, it should be $SPACK_ENV for the environment in question. If the script returns a warning for a given file, this may indicate that Spack's RPATH substitution has not been properly applied. In some instances, a missing library dependency is not a problem, such as a library that is intended to be found through $LD_LIBRARY_PATH after, say, a compiler or MPI environment module is loaded. Although such paths should probably also be RPATH-ified, these harmless missing dependencies can be ignored with the --ignore option, which takes a Python regular expression to be excluded from consideration (see the example below), or can be permanently whitelisted by modifying the whitelist variable at the top of the ldd_check.py script itself (in which case please submit a PR). This utility is available for Linux only.

cd ${SPACK_ENV} && ../../util/ldd_check.py

or

# Check for missing shared dependencies, but ignore missing libfoo*
util/ldd_check.py $SPACK_ENV --ignore '^libfoo.+'
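The parsing step can be sketched as follows. This is an illustration only, assuming a simplified ignore semantics (regex search against the library name) and canned ldd output:

```python
import re

def missing_deps(ldd_output, ignore=None):
    """Extract library names reported as 'not found' by ldd,
    dropping any that match the optional ignore regex."""
    missing = [line.split()[0] for line in ldd_output.splitlines()
               if "not found" in line]
    if ignore:
        pat = re.compile(ignore)
        missing = [m for m in missing if not pat.search(m)]
    return missing

# Canned ldd output with two unresolved dependencies
sample = """\
\tlinux-vdso.so.1 (0x00007fff00000000)
\tlibnetcdf.so.19 => not found
\tlibfoo.so.2 => not found
\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0000000000)
"""
print(missing_deps(sample))                     # both missing libs
print(missing_deps(sample, ignore=r"^libfoo"))  # libfoo filtered out
```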

libirc checker (Linux only)

The util/check_libirc.sh utility should be run for new installations built with Intel oneAPI compilers (icx, icpx, ifort, or ifx). In an active environment, after a successful spack install, execute the following command to check whether any of the shared libraries or executables links against libirc.so. See https://github.com/JCSDA/spack-stack/issues/1436 for background on why libirc.so must be avoided. If libirc.so is linked into a shared library or executable in a spack-stack environment, please create an issue in the spack-stack GitHub repository (https://github.com/JCSDA/spack-stack/issues). For downstream applications, see here.

cd ${SPACK_ENV} && ../../util/check_libirc.sh
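The essence of the check is a scan of ldd output for libirc.so, which can be sketched as follows (an illustration with canned ldd output, not the actual shell script):

```python
def links_libirc(ldd_output):
    """True if any resolved dependency line mentions libirc.so."""
    return any("libirc.so" in line for line in ldd_output.splitlines())

# Canned ldd output: one clean binary, one linked against libirc.so
clean = "\tlibc.so.6 => /lib/libc.so.6 (0x00007f0000000000)"
tainted = clean + "\n\tlibirc.so => /opt/intel/lib/libirc.so (0x00007f0000001000)"

print(links_libirc(clean))    # False
print(links_libirc(tainted))  # True
```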

Parallel install

The util/parallel_install.sh utility runs parallel installations by launching multiple spack install instances as background processes. It can be run as an executable or sourced; the latter causes the launched jobs to be associated with the current shell environment. It takes the number of spack install instances to launch and the number of threads per instance as arguments, in that order, and passes any additional optional arguments to each spack install instance. For instance, util/parallel_install.sh 4 8 --fail-fast will run four instances of spack install -j8 --fail-fast &. Output files are automatically saved under the current Spack environment directory, $SPACK_ENV.

Note: The parallel_install.sh utility runs all installation instances on a single node; be respectful of other users and of system usage policies, such as compute limits on HPC login nodes.
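The fan-out pattern can be sketched as follows. This is a toy stand-in: a harmless Python no-op replaces the real spack install -j&lt;threads&gt; command, and the instance count is illustrative.

```python
import subprocess, sys

def parallel_install(instances, threads):
    """Launch `instances` background processes and wait for all of
    them; each stands in for one `spack install -j<threads>` run."""
    # Stand-in command; the real utility would run:
    #   spack install -j<threads> <extra args> &
    cmd = [sys.executable, "-c", "pass"]
    procs = [subprocess.Popen(cmd) for _ in range(instances)]
    return [p.wait() for p in procs]   # collect exit codes

print(parallel_install(4, 8))  # four instances, all exit 0
```

Launching all processes before waiting on any of them is what makes the instances run concurrently, mirroring the background-job approach of the shell script.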

Fetch Cargo and Go dependencies

The utilities fetch_cargo_deps.py, install_rust.sh, and fetch_go_deps.py are described in the Creating and maintaining Spack mirrors page.

Acorn utilities

The util/acorn/ directory provides scripts for spack-stack builds through PBS Pro on Acorn. To use them, copy them into the directory of the Spack environment you wish to build, set the number of nodes to use (make sure #PBS -l select=X and mpiexec -n X use the same value for X), and run qsub build.pbs. Note that the temporary directory specification uses a soft link whose target location depends on the node; this avoids compiling directly on LFS, which frequently fails when working with small files, such as when cloning git repositories. For parallel installations on Acorn, 2-6 is a reasonable range for the number of nodes (the MPI process analog), and 6-8 is a reasonable range for the number of threads (note that Y in #PBS -l ncpus=Y in build.pbs should match the -j argument for spack install in spackinstall.sh).
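The two consistency rules above (select=X must match mpiexec -n X, and ncpus=Y must match spack install -jY) can be verified with a small sketch like the following. The file contents and parsing patterns are illustrative assumptions, not the actual Acorn scripts:

```python
import re

def counts_consistent(pbs_text, install_text):
    """Check that #PBS -l select=X matches mpiexec -n X, and that
    #PBS -l ncpus=Y matches the -j argument of spack install."""
    select = int(re.search(r"#PBS -l select=(\d+)", pbs_text).group(1))
    nprocs = int(re.search(r"mpiexec -n (\d+)", pbs_text).group(1))
    ncpus  = int(re.search(r"#PBS -l ncpus=(\d+)", pbs_text).group(1))
    jobs   = int(re.search(r"spack install[^\n]*-j\s*(\d+)", install_text).group(1))
    return select == nprocs and ncpus == jobs

# Hypothetical build.pbs and spackinstall.sh fragments
pbs  = "#PBS -l select=4\n#PBS -l ncpus=8\nmpiexec -n 4 ./spackinstall.sh\n"
inst = "spack install -j8 --fail-fast\n"
print(counts_consistent(pbs, inst))  # True
```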

NRL batch install

The util/nrl/batch_install.sh script runs complete spack-stack workflows for creating and maintaining mirrors, building environments, populating binary caches, and deploying environments for development and operational use on a number of NRL systems.