[gmx-users] mpirun and gmx_mpi

Szilárd Páll pall.szilard at gmail.com
Wed Jul 25 15:21:19 CEST 2018


On Wed, Jul 25, 2018 at 1:43 PM Mahmood Naderan <nt_mahmood at yahoo.com>
wrote:

> It is stated that
>
>
> mpirun -np 4 gmx mdrun -ntomp 6 -nb gpu -gputasks 00
>
> Starts gmx mdrun
> <http://manual.gromacs.org/documentation/current/onlinehelp/gmx-mdrun.html#mdrun-mpi>
> on a machine with two nodes, using four total ranks, each rank with six
> OpenMP threads, and both ranks on a node sharing GPU with ID 0.
>
>
>
> Questions are:
>
> 1- Why gmx_mpi is not used?
>

Though not strictly wrong, that is a typo in the docs: the default binary suffix
of MPI builds is "_mpi", but it can be changed:
http://manual.gromacs.org/documentation/2018/install-guide/index.html#changing-the-names-of-gromacs-binaries-and-libraries
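
With a default MPI build the example would therefore typically read (just a
sketch; the actual binary name depends on the suffix chosen at build time):

  mpirun -np 4 gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00

The suffix itself is picked at configure time, e.g. with
cmake -DGMX_MPI=ON -DGMX_BINARY_SUFFIX=_mpi.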


>
> 2- How two nodes were specified in the command line?
>

How you launch an MPI program on multiple nodes depends on the cluster setup
and is not specific to GROMACS. For the details of launching MPI programs on
the hardware you have access to, consult the documentation of your cluster;
a couple of illustrative launch lines follow below.
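For illustration only (the host file name and scheduler options below are made
up; your site will differ):

  # Open MPI-style launch with an explicit host file listing the two nodes
  mpirun -np 4 --hostfile myhosts gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00

  # or, under SLURM, something along the lines of
  srun --nodes=2 --ntasks-per-node=2 gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 00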


>
> 3- Total four ranks and two ranks on GPU?!
>

Yes: with four ranks across two nodes, two ranks end up on each node, and those
two share the GPU with ID 0 on that node. What exactly is the question?
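
For comparison, assuming a node with two GPUs (IDs 0 and 1), the two ranks on
each node could instead be mapped to separate devices:

  mpirun -np 4 gmx_mpi mdrun -ntomp 6 -nb gpu -gputasks 01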

> 4- Is that using Nvidia MPS? Due to the single GPU device which is shared
> between ranks.
>

If you run MPS on the compute nodes, mdrun will use it. If you don't, the
ranks will still share the GPU, but execution will be somewhat less efficient.
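
Roughly, enabling MPS on a node looks like this (the exact procedure is
site-specific, so check with your admins):

  # start the MPS control daemon on the compute node
  nvidia-cuda-mps-control -d
  # ... run mdrun as usual ...
  # stop the daemon afterwards
  echo quit | nvidia-cuda-mps-control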

--
Szilárd


>
>
> Regards,
> Mahmood
>
>
> On Wednesday, July 25, 2018, 1:05:10 AM GMT+4:30, Szilárd Páll <
> pall.szilard at gmail.com> wrote:
>
>
> That choice depends on whether you want to run across multiple compute
> nodes; the former cannot, while the latter, whose suffix (by default)
> indicates that it is built against an MPI library, can run across nodes.
> Both can be used on GPUs as long as the programs were built with GPU support.
>
> I recommend that you check the documentation:
>
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#examples-for-mdrun-on-one-node
>
> --
> Szilárd
>
>

