[gmx-users] GROMACS 2018 MDRun: Multiple Ranks/GPU Issue
pall.szilard at gmail.com
Sat Apr 7 18:25:49 CEST 2018
Your GPUs are in process-exclusive compute mode, which makes it impossible
for multiple ranks to share a GPU; see the nvidia-smi --compute-mode option.
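A quick way to confirm the diagnosis is to query the compute mode of each device before launching mdrun. The sketch below assumes nvidia-smi is on the PATH of the compute node; the reset commands are shown as comments because they require root:

```shell
# Report the compute mode of each GPU; "Exclusive_Process" means only one
# process (rank) may open the device, which would explain the mdrun failure.
if command -v nvidia-smi >/dev/null 2>&1; then
    mode_report=$(nvidia-smi --query-gpu=index,compute_mode --format=csv)
else
    # Fallback so the snippet still runs on a node without NVIDIA tools.
    mode_report="nvidia-smi not found (not a GPU node)"
fi
echo "$mode_report"

# To let several ranks share a GPU, reset each device to the default
# (shared) compute mode as root, e.g.:
#   sudo nvidia-smi -i 0 -c DEFAULT
#   sudo nvidia-smi -i 1 -c DEFAULT
```

With both devices back in DEFAULT mode, the original `mpirun -np 4 ... -gputasks 0011` launch should be able to place two ranks on each GPU.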
On Fri, Apr 6, 2018 at 10:14 PM, Hollingsworth, Bobby
<louishollingsworth at g.harvard.edu> wrote:
> Hello all,
> I'm tuning mdrun on a node with 24 Intel Skylake cores (2x12) and 2 V100
> GPUs. MPI-enabled (no thread-MPI) GROMACS 2018 ("2018.0") is compiled with
> GCC, CUDA, OpenMPI, and OpenBLAS. I am trying to assign the two GPUs to
> four ranks. My run command is:
> mpirun -np 4 mdrun_mpi_s_g -ntomp 6 -pme cpu -nb gpu -gputasks 0011 -deffnm
> However, I'm getting the error:
> Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
> NOTE: You assigned the same GPU ID(s) to multiple ranks, which is a good
> idea if you have measured the performance of alternatives.
> Program: mdrun_mpi_s_g, version 2018
> Source file: src/gromacs/gpu_utils/gpu_utils.cu (line 127)
> MPI rank: 1 (out of 4)
> Fatal error:
> cudaFuncGetAttributes failed: all CUDA-capable devices are busy or unavailable
> The launch configuration works with -np 2 and -ntomp 12. Presumably, there
> is an issue with GPUs being split across ranks. Any advice here? Thanks!
> Louis "Bobby" Hollingsworth
> Ph.D. Student, Biological and Biomedical Sciences, Harvard University
> B.S. Chemical Engineering, B.S. Biochemistry, B.A. Chemistry, Virginia Tech
> Honors College '17
> Gromacs Users mailing list
> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.