
LAMMPS GPU and MLIAP in Expanse


Below are short instructions for compiling LAMMPS with GPU and ML-IAP support and for running it on XSEDE/Expanse.

  1. Download the version of LAMMPS you want and copy it to your Expanse area (one way to do this is sketched below).
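     For example (a sketch; the stable branch of the official repository is assumed here, but any release tarball works as well):
     git clone -b stable https://github.com/lammps/lammps.git ~/lammps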
  2. Check your project name:
     module load sdsc
     expanse-client user -p
  3. Request an interactive session on the GPU nodes:
     srun --partition=gpu-shared --pty --account=projectname --ntasks-per-node=10 --nodes=1 --mem=96G --gpus=1 -t 01:00:00 --wait=0 --export=ALL /bin/bash
  4. In your interactive session, load the necessary modules:
     module load gpu
     module load openmpi
     module load cuda10.2/toolkit
     module load cmake
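     Optionally, confirm that the GPU and the CUDA toolkit are visible before building (standard checks, not Expanse-specific):
     nvidia-smi
     nvcc --version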
  5. Go to your LAMMPS folder and build LAMMPS:
     mkdir build
     cd build
     cmake -D PKG_GPU=on -D GPU_API=cuda -D PKG_ML-SNAP=on -D PKG_ML-IAP=on -D CMAKE_LIBRARY_PATH=/cm/local/apps/cuda-driver/libs/current/lib64 -D GPU_ARCH=sm_70 ../cmake
     make -j 10
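     A quick way to check the result (the -h flag prints the build configuration, including installed packages such as GPU, ML-IAP, and ML-SNAP):
     ./lmp -h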
  6. If you need to enable another package, reconfigure and rebuild (a concrete example follows):
     cmake -D PKG_PACKAGENAME=on ../cmake
     cmake --build . (or make, or make -j 10)
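     For example, to add the MANYBODY package (a hypothetical choice; substitute whichever package you need):
     cmake -D PKG_MANYBODY=on ../cmake
     make -j 10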
  7. You can also delete compiled objects, libraries, and executables with:
     cmake --build . --target clean (or make clean)
  8. After building LAMMPS, you can quit the interactive session:
     exit
  9. Build your jobscript:
#!/bin/bash
#SBATCH --job-name="jobname"
#SBATCH --output="outputfile"
#SBATCH --partition=gpu-shared
#SBATCH --nodes=1
#SBATCH --account=projectname
#SBATCH --gpus=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=10
#SBATCH --mem=96G
#SBATCH --export=ALL
#SBATCH -t 48:00:00

## Note: Change the account projectname above to your allocation
## You can request at most --mem=96G per GPU
## You can use at most 10 CPUs per GPU (--cpus-per-task and OMP_NUM_THREADS)
## Each GPU node has 4 GPUs, ~384GB of memory, and 40 cores
## To use an entire node, use --partition=gpu and at most --mem=374G

## Environment
module purge
module load gpu
module load slurm
module load openmpi
module load cuda10.2/toolkit/10.2.89

export OMP_NUM_THREADS=10
export OMPI_MCA_btl_openib_allow_ib=true
srun --mpi=pmi2 -n 1 --cpus-per-task=1 ~/lammps/build/lmp -sf gpu -pk gpu 1 -in inputfile
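
## A possible full-node variant (a sketch, not tested here): with --partition=gpu, --gpus=4,
## --ntasks-per-node=4, and --cpus-per-task=10 above, the launch line could become:
## srun --mpi=pmi2 -n 4 --cpus-per-task=10 ~/lammps/build/lmp -sf gpu -pk gpu 4 -in inputfile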
  10. Try to run an example (inputfile); a minimal ML-IAP input sketch is shown below:
    sbatch jobscript
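     A minimal sketch of what inputfile might contain for an ML-IAP/SNAP run (assuming the Ta06A model and descriptor files shipped in the examples/mliap directory of the LAMMPS distribution; the data file name is hypothetical):
     units           metal
     atom_style      atomic
     read_data       data.example    # hypothetical structure file with Ta atoms and masses
     pair_style      mliap model linear Ta06A.mliap.model descriptor sna Ta06A.mliap.descriptor
     pair_coeff      * * Ta
     timestep        0.001
     fix             1 all nve
     run             100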
  11. Check your job status:
    squeue -u username