[gmx-users] mpirun and gmx_mpi

Mahmood Naderan nt_mahmood at yahoo.com
Wed Jul 25 13:43:46 CEST 2018


The documentation states that


mpirun -np 4 gmx mdrun -ntomp 6 -nb gpu -gputasks 00

Starts gmx mdrun on a machine with two nodes, using four total ranks, each rank with six OpenMP threads, and both ranks on a node sharing GPU with ID 0.



Questions are:
1- Why is gmx_mpi not used here?
2- How were the two nodes specified in the command line? (see the sketch after these questions)
3- Four ranks in total, but only two ranks on the GPU?!
4- Is that using NVIDIA MPS, given that a single GPU device is shared between ranks?
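
For context on question 2, a minimal sketch of how nodes are usually given to an MPI launcher such as Open MPI; the hostfile name, host names, and slot counts below are illustrative assumptions, not something stated in the quoted documentation:

# hosts.txt (hypothetical): two nodes, two ranks (slots) per node
node01 slots=2
node02 slots=2

# launch four ranks across the two nodes listed in the hostfile
mpirun -np 4 --hostfile hosts.txt gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00

In practice a cluster scheduler (e.g. SLURM) often supplies the node list instead of an explicit hostfile.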


Regards,
Mahmood 

    On Wednesday, July 25, 2018, 1:05:10 AM GMT+4:30, Szilárd Páll <pall.szilard at gmail.com> wrote:  

That choice depends on whether you want to run across multiple compute nodes: the former cannot, while the latter, as its (default) name suffix indicates, is built against an MPI library and can run across nodes. Both can use GPUs as long as the programs were built with GPU support.
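
To make that concrete, a hedged sketch of the two cases, assuming the default binary names of a thread-MPI build (gmx) and an MPI build (gmx_mpi):

# Thread-MPI build: mdrun starts its own 4 ranks itself, single node only;
# -gputasks 0000 maps all four nonbonded GPU tasks to the GPU with ID 0
gmx mdrun -ntmpi 4 -ntomp 6 -nb gpu -gputasks 0000

# MPI build: ranks are created by the MPI launcher and may span nodes;
# -gputasks applies per node, so 00 maps the two ranks on each node to GPU 0
mpirun -np 4 gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00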

I recommend that you check the documentation:
http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
--
Szilárd
  

