CP2K

License information

The source of CP2K is open and freely available for everybody under the GPL license.

(See also the LICENSE file in the CP2K GitHub repository.)

User documentation

CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. In general, it runs well on LUMI-C, and several of the simulation methods, such as linear-scaling DFT (LS-DFT) and RPA calculations, can utilize the GPUs on LUMI-G with some speed-up.

Installing CP2K

We provide automatic installation scripts for several versions of CP2K. The general installation procedure is described on the EasyBuild page. For example, the step-by-step procedure for installing CP2K 2023.1 with GPU support is:

  1. Load the LUMI software environment: module load LUMI/22.08.
  2. Select the LUMI-G partition: module load partition/G.
  3. Load the EasyBuild module: module load EasyBuild-user.

Then you can run the install command

$ eb CP2K-2023.1-cpeGNU-22.08-GPU.eb -r

The installation process is quite slow. It can take up to 1 hour to compile everything, but afterwards, you will have a module called "CP2K/2023.1-cpeGNU-22.08-GPU" installed in your home directory. Load the module to use it:

$ module load CP2K/2023.1-cpeGNU-22.08-GPU

The CP2K binary cp2k.psmp will now be in your PATH. Launch CP2K via the Slurm scheduler, e.g. srun cp2k.psmp. Please note that you must run module load LUMI/22.08 partition/G to see the CP2K module in the module system. The same applies to the Slurm batch scripts that you send to the compute nodes.
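For a quick sanity check after installation, you can run a small test case interactively. A minimal sketch, assuming a valid project number and an input file of your own (test.inp below is a placeholder):

$ module load LUMI/22.08 partition/G CP2K/2023.1-cpeGNU-22.08-GPU
$ srun --partition=small-g --account=project_465000XXX --time=00:10:00 \
    --nodes=1 --ntasks=1 --cpus-per-task=7 --gres=gpu:1 \
    cp2k.psmp -i test.inp -o test.out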

You can see other versions of CP2K that can be automatically installed in the same way by running the EasyBuild command

$ eb -S CP2K

or by checking the list further down this page, or the LUMI-EasyBuild-contrib repository on GitHub directly.

We build the CP2K executables with bindings to several external libraries activated: currently COSMA, SpLA, SpFFT, spglib, HDF5, LibXSMM, LibXC, FFTW3, Libvori, and hipFFT+hipBLAS from ROCm.

Example batch scripts

A typical CP2K batch job using 4 compute nodes on LUMI-C with 2 OpenMP threads per rank:

#!/bin/bash
#SBATCH -J H2O
#SBATCH -N 4
#SBATCH --partition=small
#SBATCH -t 00:10:00
#SBATCH --mem=200G
#SBATCH --exclusive --no-requeue
#SBATCH -A project_465000XXX
#SBATCH --ntasks-per-node=64
#SBATCH -c 2

# Pin OpenMP threads to physical cores
export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

module load LUMI/22.08
module load partition/C
module load CP2K/2022.1-cpeGNU-22.08
srun cp2k.psmp -i H2O-256.inp -o H2O-256.out
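Save the script to a file (here h2o.job, an arbitrary name) and submit it with sbatch; you can then follow the job in the queue:

$ sbatch h2o.job
$ squeue --me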

Running on LUMI-G requires careful binding of processes to CPUs and GPUs. Here, we run a batch job on 4 LUMI-G compute nodes with 8 MPI ranks per node (1 per GPU) and 6 OpenMP threads per rank.

#!/bin/bash
#SBATCH -J lsdft
#SBATCH -p small-g
#SBATCH -A project_465000XXX
#SBATCH --time=00:30:00
#SBATCH --nodes=4
#SBATCH --gres=gpu:8
#SBATCH --exclusive
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6

export OMP_PLACES=cores
export OMP_PROC_BIND=close
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Raise stack size limits to avoid crashes in larger calculations
ulimit -s unlimited
export OMP_STACKSIZE=512M

# Use the network interface closest to each GPU and enable GPU-aware MPI
export MPICH_OFI_NIC_POLICY=GPU
export MPICH_GPU_SUPPORT_ENABLED=1

module load LUMI/22.08
module load partition/G
module load CP2K/2023.1-cpeGNU-22.08-GPU
module load rocm/5.3.3

srun --cpu-bind=mask_cpu:7e000000000000,7e00000000000000,7e0000,7e000000,7e,7e00,7e00000000,7e0000000000 ./select_gpu.sh cp2k.psmp -i H2O-dft-ls.inp -o H2O-dft-ls.out
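Each hexadecimal mask in --cpu-bind=mask_cpu selects 6 of the 8 cores in the CCD that shares a NUMA domain with the GPU assigned to the corresponding rank; the first core of each CCD is always skipped, as it is reserved for the operating system. To check which core IDs a given mask selects, you can decode it with a small helper like this (an illustrative sketch, not part of the CP2K installation):

mask_to_cores() {
    # Convert a hex CPU mask (as used by --cpu-bind=mask_cpu)
    # into the list of core IDs whose bits are set.
    local mask=$((16#${1#0x}))
    local core cores=()
    for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && cores+=("$core")
    done
    echo "${cores[*]}"
}

$ mask_to_cores 7e000000000000   # the mask for local rank 0
49 50 51 52 53 54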

The select_gpu.sh helper script is useful to get the GPU-to-CPU binding correct on LUMI.

$ cat select_gpu.sh
#!/bin/bash

# Make only the GPU with the same index as the local MPI rank visible
# to this process, then replace the script with the actual command.
export ROCR_VISIBLE_DEVICES=$SLURM_LOCALID
exec "$@"

This script is useful for many applications using GPUs on LUMI, not only CP2K.

Tuning recommendations

  • In general, try to use parallelization with both MPI and OpenMP. Use at least OMP_NUM_THREADS=2; when running larger jobs (say, more than 16 compute nodes), it is often faster with OMP_NUM_THREADS=4 or 8.
  • When running on LUMI-G, run with 8 MPI ranks per compute node, where each rank has access to 1 GPU in the same NUMA zone. This also means that you have to set OMP_NUM_THREADS to 6 or 7 to utilize all available CPU cores (see the sketch after this list). Please note that using all 64 cores will not work, as the first core in each CCD is reserved for the operating system, so only 56 cores are available.
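As a concrete illustration of the 7-thread layout (a sketch based on the script above, not a tested configuration), the relevant batch-script lines would become

#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=7

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

together with a matching CPU binding that selects 7 cores per CCD, e.g. fe in place of 7e in each byte of the masks shown in the LUMI-G example above.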

User-installable modules (and EasyConfigs)

Install with the EasyBuild-user module:

eb <easyconfig> -r

To access module help after installation, and to be reminded of the stacks and partitions for which the module is installed, use module spider CP2K/<version>.
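For example, for the GPU build installed above:

$ module spider CP2K/2023.1-cpeGNU-22.08-GPU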

Technical documentation

EasyBuild

Brief descriptions of available EasyConfigs

  • CP2K-9.1-cpeGNU-21.08.eb: The EasyConfig file is a direct port of the CSCS one.
  • CP2K-9.1-cpeGNU-21.12.eb: Same as above, but compiled with Cray Programming Environment 21.12. No significant performance difference was observed. Links to a newer ELPA version (2021.11.001).
  • CP2K-2022.1-cpeGNU-22.08.eb: CP2K 2022.1 release compiled with Cray Programming Environment 22.08, built with PLUMED 2.8.0, Libxc 5.2.2, and Libvori 220621.
  • CP2K-2023.1-cpeGNU-22.08-GPU.eb: CP2K 2023.1 release compiled with AMD GPU support enabled for CP2K itself and several of the libraries (SpFFT, SpLA). Cray Programming Environment 22.08 is used together with the unsupported rocm/5.3.3 module installed by the LUMI Support Team, as CP2K requires at least ROCm 5.3.3.
  • CP2K-2023.1-cpeGNU-22.12-CPU.eb: A CPU-only build of CP2K release 2023.1 compiled with the GNU compilers and with support for PLUMED.
  • CP2K-2023.2-cpeGNU-22.12-CPU.eb: A CPU-only build of CP2K release 2023.2 compiled with the GNU compilers and with support for PLUMED.
  • CP2K-2024.1-cpeGNU-23.09-GPU.eb: CP2K 2024.1 release compiled with AMD GPU support enabled for CP2K itself and several of the libraries (SpFFT, SpLA). Cray Programming Environment 23.09 used together with the unsupported rocm/5.6.1 module installed by the LUMI Support Team.

Archived EasyConfigs

The EasyConfigs below are additional EasyConfigs that are not directly available on the system for installation. Users are advised to use the newer ones; these archived ones are unsupported. They are still provided as a source of information should you need it, e.g., to understand the configuration that was used for earlier work on the system.