[gmx-users] The problem of utilizing multiple GPU

sunyeping sunyeping at aliyun.com
Fri Sep 6 22:15:54 CEST 2019

Hello Szilárd Páll
Thank you for your reply. I tried your command:

  gmx mdrun -ntmpi 7 -npme 1 -nb gpu -pme gpu -bonded gpu -gpuid 0,2,4,6 -gputask 001122334

but got the following error information:

Using 7 MPI threads
Using 10 OpenMP threads per tMPI thread

Program:     gmx mdrun, version 2019.3
Source file: src/gromacs/taskassignment/taskassignment.cpp (line 255)
Function:    std::vector<std::vector<gmx::GpuTaskMapping> >::value_type gmx::runTaskAssignment(const std::vector<int>&, const std::vector<int>&, const gmx_hw_info_t&, const gmx::MDLogger&, const t_commrec*, const gmx_multisim_t*, const gmx::PhysicalNodeCommunicator&, const std::vector<gmx::GpuTask>&, bool, PmeRunMode)
MPI rank:    0 (out of 7)

Inconsistency in user input:
There were 7 GPU tasks found on node localhost.localdomain, but 4 GPUs were
available. If the GPUs are equivalent, then it is usually best to have a
number of tasks that is a multiple of the number of GPUs. You should
reconsider your GPU task assignment, number of ranks, or your use of the -nb,
-pme, and -npme options, perhaps after measuring the performance you can get.

Could you tell me how to correct this?
Best regards,

You have 2x Xeon Gold 6150, which is 2 x 18 = 36 cores; Intel CPUs
support 2 threads per core (Hyper-Threading), hence the 72 logical CPUs.

You will not be able to scale efficiently across 8 GPUs in a single
simulation with the current code. Performance will likely improve in
the next release, but due to PCI bus and PME scaling limitations, even
with GROMACS 2020 you are unlikely to see much benefit beyond 4 GPUs.

Try running on 3-4 GPUs with at least 2 ranks on each and one separate
PME rank. You might also want to use every second GPU rather than the
first four to avoid overloading the PCI bus; e.g.

gmx mdrun -ntmpi 7 -npme 1 -nb gpu -pme gpu -bonded gpu -gpuid 0,2,4,6 -gputask 001122334
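For what it's worth, the error above arises because -gputask lists 9
task-to-GPU entries while -ntmpi 7 -npme 1 creates only 7 GPU tasks
(6 PP + 1 PME), and entries 1 and 3 refer to GPUs outside the
0,2,4,6 restriction. A consistent invocation might look like the
sketch below (assuming GROMACS 2019's -gpu_id/-gputasks option
spellings, and that the single PME task maps to the last entry):

```shell
# Sketch, not a tuned setting: 7 thread-MPI ranks = 6 PP ranks + 1
# separate PME rank, i.e. 7 GPU tasks, so -gputasks needs 7 entries.
# Restrict mdrun to every second GPU; PP ranks share GPUs 0, 2, 4 in
# pairs, and the PME task gets GPU 6 to itself.
gmx mdrun -ntmpi 7 -npme 1 -nb gpu -pme gpu -bonded gpu \
          -gpu_id 0246 -gputasks 0022446
```

Every ID in -gputasks has to be one of the GPUs allowed by -gpu_id,
and the length of the string has to match the number of GPU tasks;
that is the consistency check in taskassignment.cpp that fired here.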


More information about the gromacs.org_gmx-users mailing list