
lumi-CPEtools

License information

The lumi-CPEtools packages are developed by the LUMI User Support Team and licensed under the GNU General Public License version 3.0, a copy of which can be found in the LICENSE file in the source repository.

User documentation (central installation)

Getting help

The tools in lumi-CPEtools are documented through manual pages that can be viewed on LUMI after loading the module. Start with man lumi-CPEtools.
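For example (a minimal sketch; the exact module load lines depend on the software stack and partition you use):

module load lumi-CPEtools
man lumi-CPEtools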

Commands provided:

  • xldd: An ldd-like program to show which versions of Cray PE libraries are used by an executable.

  • serial_check: Serial program that prints core and host allocation and affinity information.

  • omp_check: OpenMP program that prints core and host allocation and affinity information.

  • mpi_check: MPI program that prints core and host allocation and affinity information. It is also suitable for testing heterogeneous jobs.

  • hybrid_check: Hybrid MPI/OpenMP program that prints core and host allocation and affinity information. It is also suitable for testing heterogeneous jobs and encompasses the full functionality of serial_check, omp_check and mpi_check.

  • gpu_check (from version 1.1 on): A hybrid MPI/OpenMP program that prints information about thread and GPU binding/mapping on Cray EX Bard Peak nodes such as those in LUMI-G, based on the ORNL hello_jobstep program. (AMD GPU nodes only)

The various *_check programs are designed to test CPU and GPU binding in Slurm and are LUST's recommended way to experiment with those bindings; see the example below.
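As an illustration, a minimal sketch of checking the binding of a hybrid MPI/OpenMP job step (the Slurm options and thread count below are placeholders, not a recommendation; adjust them to your allocation):

module load lumi-CPEtools
export OMP_NUM_THREADS=2
srun --ntasks=4 --cpus-per-task=2 hybrid_check

Each MPI rank and OpenMP thread should report the host and core(s) it runs on, which makes it easy to spot a mismatch with the intended binding. The same recipe works with serial_check, omp_check, mpi_check and gpu_check, and xldd can be run on any executable, e.g., xldd hybrid_check.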

Acknowledgements

The code for hybrid_check and its derivatives serial_check, omp_check and mpi_check is inspired by the xthi program used in the 4-day LUMI comprehensive courses. The hybrid_check program has also been used successfully on other clusters, including non-Cray and non-HPE systems.

The gpu_check program (lumi-CPEtools 1.1 and later) builds upon the hello_jobstep code from ORNL. The program is written specifically for the HPE Cray EX Bard Peak nodes and will not work correctly on other AMD GPU systems or on NVIDIA GPU systems without rework.

The lumi-CPEtools code is developed by LUST in the lumi-CPEtools repository in the LUMI supercomputer organisation on GitHub.

Pre-installed modules (and EasyConfigs)

To access module help and find out for which stacks and partitions the module is installed, use module spider lumi-CPEtools/<version>.
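E.g., to check version 1.1 (discussed in the technical documentation below):

module spider lumi-CPEtools/1.1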

EasyConfig:

User-installable modules (and EasyConfigs)

Install with the EasyBuild-user module:

eb <easyconfig> -r
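
For example, with a hypothetical EasyConfig file name (actual names follow the lumi-CPEtools-<version>-<toolchain>.eb pattern; take the exact name from the EasyConfig list below):

eb lumi-CPEtools-1.1-cpeGNU-22.08.eb -r
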
To access module help after installation and see for which stacks and partitions the module is installed, use module spider lumi-CPEtools/<version>.

EasyConfig:

Technical documentation (central installation)

lumi-CPEtools is developed by LUST.

EasyBuild

The EasyConfig is our own development, as this is also our own tool. We provide full versions for each Cray PE, and a restricted version using the SYSTEM toolchain for the CrayEnv software stack.

Version 1.0

  • The EasyConfig is our own design.

Version 1.1

  • The EasyConfig builds upon the 1.0 one, with some important changes: there is now a tool that should only be installed in partition/G, so there are now separate makefile targets and additional variables for the Makefile (see the sketch below).
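
A minimal sketch of the idea (the target names here are illustrative assumptions, not the actual Makefile interface):

# on CPU partitions the EasyConfig builds only the CPU tools
make cpu
# on partition/G it additionally builds gpu_check
make cpu gpu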

Technical documentation (user EasyBuild installation)

EasyBuild

Version 1.1 for Open MPI

  • The EasyConfigs are similar to those for the Cray MPICH versions, but

    • Compilers need to be set manually in buildopts to use the Open MPI compiler wrappers.

    • Before building, some modules need to be unloaded again (which ones depends on the specific Open MPI module).

Archived EasyConfigs

The EasyConfigs below are additional EasyConfigs that are not directly available on the system for installation. Users are advised to use the newer ones; these archived ones are unsupported. They are still provided as a source of information should you need it, e.g., to understand the configuration that was used for earlier work on the system.