[gmx-users] GROMACS 2018 MDRun: Multiple Ranks/GPU Issue

Hollingsworth, Bobby louishollingsworth at g.harvard.edu
Fri Apr 6 22:14:42 CEST 2018

Hello all,

I'm tuning mdrun on a node with 24 Intel Skylake cores (2x12) and 2 V100
GPUs. MPI-enabled (no thread-MPI) GROMACS 2018 ("2018.0") was compiled with
GCC, CUDA, OpenMPI, and OpenBLAS. I am trying to assign the two GPUs to
four ranks. My run command is:

mpirun -np 4 mdrun_mpi_s_g -ntomp 6 -pme cpu -nb gpu -gputasks 0011 -deffnm
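
For what it's worth, my understanding of the -gputasks string (a sketch,
not GROMACS code: with -nb gpu -pme cpu there is one GPU task per rank,
and character i of the string names the device used by rank i) is:

```python
# Illustration of the task-to-device mapping implied by "-gputasks 0011"
# with 4 ranks and one GPU task (nonbonded) per rank.
gputasks = "0011"
n_ranks = 4
mapping = {rank: int(gputasks[rank]) for rank in range(n_ranks)}
print(mapping)  # ranks 0 and 1 share GPU 0; ranks 2 and 3 share GPU 1
```

So ranks 0/1 should share GPU 0 and ranks 2/3 should share GPU 1.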

However, I'm getting the error:

Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:

NOTE: You assigned the same GPU ID(s) to multiple ranks, which is a good
idea if you have measured the performance of alternatives.

Program:     mdrun_mpi_s_g, version 2018
Source file: src/gromacs/gpu_utils/gpu_utils.cu (line 127)
MPI rank:    1 (out of 4)

Fatal error:
cudaFuncGetAttributes failed: all CUDA-capable devices are busy or unavailable

The same configuration works with -np 2 and -ntomp 12 (one rank per GPU),
so the failure seems to occur only when a GPU is shared across ranks. Any
advice here? Thanks!
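
One thing I plan to check (an assumption on my part, not a confirmed
diagnosis): if the V100s are set to the "Exclusive Process" compute mode,
only one CUDA context can be opened per GPU, which would produce exactly
this "busy or unavailable" error as soon as two ranks target the same
device. The mode can be inspected, and reset by an admin, with nvidia-smi:

```shell
# Show the current compute mode of each GPU
nvidia-smi --query-gpu=index,compute_mode --format=csv
# If it reports "Exclusive_Process", resetting to the default
# (shared) mode requires root:
# sudo nvidia-smi -c DEFAULT
```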

Louis "Bobby" Hollingsworth
Ph.D. Student, Biological and Biomedical Sciences, Harvard University
B.S. Chemical Engineering, B.S. Biochemistry, B.A. Chemistry, Virginia Tech
Honors College '17
