Documentation links

Note that documentation, and especially web-based documentation, is very fluid. Links change rapidly; they were correct when this page was developed, right after the course, but there is no guarantee that they are still correct when you read this. They will only be updated for the next course, on that course's pages.

This documentation page is far from complete but bundles a lot of links mentioned during the presentations, and some more.

Web documentation

Man pages

A selection of man pages explicitly mentioned during the course:

Via the module system

Most HPE Cray PE modules contain links to further documentation. Try, e.g., module help cce.

From the commands themselves

| PrgEnv            | C                    | C++                  | Fortran               |
|-------------------|----------------------|----------------------|-----------------------|
| PrgEnv-cray       | craycc --help        | crayCC --help        | crayftn --help        |
|                   | craycc --craype-help | crayCC --craype-help | crayftn --craype-help |
| PrgEnv-gnu        | gcc --help           | g++ --help           | gfortran --help       |
| PrgEnv-aocc       | clang --help         | clang++ --help       | flang --help          |
| PrgEnv-amd        | amdclang --help      | amdclang++ --help    | amdflang --help       |
| Compiler wrappers | cc --help            | CC --help            | ftn --help            |

For the PrgEnv-gnu compilers, the --help option prints only a brief summary, but it mentions further options that show help on specific topics.

Further commands that provide extensive help on the command line:

  • rocm-smi --help, which works even on the login nodes.

Documentation of other Cray EX systems

Note that these systems may be configured differently, and this especially applies to the scheduler. So not all documentation for those systems applies to LUMI. Yet these web sites do contain a lot of useful information.

  • Archer2 documentation. Archer2 is the national supercomputer of the UK, operated by EPCC. It is an AMD CPU-only cluster. Two important differences from LUMI are that (a) the cluster uses AMD Rome CPUs, with groups of 4 instead of 8 cores sharing L3 cache, and (b) the cluster uses Slingshot 10 instead of Slingshot 11, which has its own bugs and workarounds.

    It includes a page on cray-python referred to during the course.

  • ORNL Frontier User Guide and ORNL Crusher Quick-Start Guide. Frontier is the first US exascale cluster and is built up of nodes that are very similar to the LUMI-G nodes (same CPU and GPUs but a different storage configuration), while Crusher is the 192-node early access system for Frontier. One important difference is the configuration of the scheduler, which reserves 1 core in each CCD to have a more regular structure than LUMI.

  • KTH Dardel documentation. Dardel is the Swedish "baby-LUMI" system. Its CPU nodes use the AMD Rome CPU instead of AMD Milan, but its GPU nodes are the same as in LUMI.

  • Setonix User Guide. Setonix is a Cray EX system at Pawsey Supercomputing Centre in Australia. The CPU and GPU compute nodes are the same as on LUMI.