[gmx-users] Running on multiple GPUs
Searle Duay
searle.duay at uconn.edu
Fri Apr 20 20:12:06 CEST 2018
Hello,
I am trying to run a simulation with GROMACS 2018 on 2 GPUs of PSC
Bridges. I submitted the following SLURM batch script:
#!/bin/bash
#SBATCH -J p100_1n_2g
#SBATCH -o %j.out
#SBATCH -N 1
#SBATCH -n 32
#SBATCH --ntasks-per-node=32
#SBATCH -p GPU
#SBATCH --gres=gpu:p100:2
#SBATCH -t 01:00:00
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH --mail-user=searle.duay at uconn.edu
set echo
set -x
source /opt/packages/gromacs-GPU-2018/bin/GMXRC
module load intel/18.0.0.128 gcc/4.8.4 cuda/9.0 icc/16.0.3 mpi/intel_mpi
echo SLURM_NPROCS= $SLURM_NPROCS
cd $SCRATCH/prot_umbrella/gromacs/conv
gmx_mpi mdrun -deffnm umbrella0 -pf pullf-umbrella0.xvg -px pullx-umbrella0.xvg -v
exit
The job ran, but I noticed that it uses only one GPU on a node that has
two GPUs. I tried changing the command to:
mpirun -np $SLURM_NPROCS gmx_mpi mdrun -v -deffnm umbrella0 ...
But that fails with:
Fatal error:
Your choice of number of MPI ranks and amount of resources results in using 1
OpenMP threads per rank, which is most likely inefficient. The optimum is
usually between 2 and 6 threads per rank. If you want to run with this setup,
specify the -ntomp option. But we suggest to change the number of MPI ranks.
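For context, here is a sketch of the rank/thread split I was considering instead: one MPI rank per GPU, with the remaining cores given to OpenMP threads. The numbers come from my job request above (32 cores, 2 GPUs per node); I have not verified this is the optimal layout on Bridges:

```shell
# Hypothetical rank/thread split: one MPI rank per GPU (assumption,
# not a tested Bridges recipe).
CORES=32                 # cores requested per node (-n 32 above)
NGPUS=2                  # GPUs requested per node (--gres=gpu:p100:2)
NTOMP=$((CORES / NGPUS)) # OpenMP threads per rank

# The mdrun line this split would produce:
echo "mpirun -np $NGPUS gmx_mpi mdrun -ntomp $NTOMP -deffnm umbrella0 -v"
```

With these values it prints `mpirun -np 2 gmx_mpi mdrun -ntomp 16 -deffnm umbrella0 -v`, which would avoid the 1-thread-per-rank situation the error complains about.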
I am wondering what the right command is to use both GPUs available on
the node, or whether GROMACS automatically decides how many GPUs it
will use.
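In case it is relevant, I also looked at selecting the devices explicitly with mdrun's -gpu_id option rather than relying on auto-detection. A minimal sketch (the device IDs 0 and 1 and the two-rank split are my assumptions for this node):

```shell
# Hypothetical explicit GPU selection (assumption: the two P100s are
# devices 0 and 1 on the node). -gpu_id lists the device IDs mdrun may
# use; without it, mdrun detects and assigns GPUs on its own.
GPU_IDS="01"
echo "mpirun -np 2 gmx_mpi mdrun -gpu_id $GPU_IDS -deffnm umbrella0 -v"
```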
Thank you!
--
Searle Aichelle S. Duay
Ph.D. Student
Chemistry Department, University of Connecticut
searle.duay at uconn.edu