[gmx-users] condition on MPI and GPU tasks

Szilárd Páll pall.szilard at gmail.com
Wed Jul 23 01:03:13 CEST 2014


On Wed, Jul 23, 2014 at 12:32 AM, Sikandar Mashayak
<symashayak at gmail.com> wrote:
> Hi,
>
> I am checking the GPU performance of GROMACS 5.0 on a single node of
> a cluster. The node has two 8-core Sandy Bridge Xeon E5-2670 CPUs
> and two NVIDIA K20X GPUs.
>
> My question - is there a restriction on how many MPI tasks can be
> used per GPU?

No, there is none. In fact, it is rarely optimal to run only one PP
rank per GPU, i.e. two PP ranks total on a dual-socket, dual-GPU
node. However, using two ranks per physical core (32 per node if you
have HT enabled) is not advantageous.
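
For example, on your node (16 cores, 2 GPUs) launches along these
lines should work; topol.tpr stands in for your actual run input:

  # one PP rank per GPU, 8 OpenMP threads each (rarely optimal):
  mpirun -np 2 mdrun_mpi -ntomp 8 -gpu_id 01 -s topol.tpr
  # two PP ranks per GPU, 4 OpenMP threads each (often faster):
  mpirun -np 4 mdrun_mpi -ntomp 4 -gpu_id 0011 -s topol.tpr

It is worth benchmarking a few rank/thread splits, the best one
depends on the system and interconnect.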

> I observe that I can only run mdrun with the same number of MPI
> tasks as GPUs. I use one OpenMP thread per MPI task. If I use more
> MPI tasks than GPUs, I get an error:
>
>
> Fatal error:
> Incorrect launch configuration: mismatching number of PP MPI processes and
> GPUs per node.
> mdrun_mpi was started with 4 PP MPI processes per node, but you provided 2
> GPUs.

You are not providing mdrun with the PP-rank-to-GPU mapping; please
consult the docs:
http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Heterogenous_parallelization.3a_using_GPUs
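
Concretely, with more PP ranks than GPUs per node you have to tell
mdrun which rank runs on which device via the -gpu_id string, one
digit per PP rank on the node. A sketch for your 4-rank, 2-GPU case
(the .tpr file name is a placeholder):

  # ranks 0,1 -> GPU 0; ranks 2,3 -> GPU 1; one OpenMP thread per rank
  mpirun -np 4 mdrun_mpi -ntomp 1 -gpu_id 0011 -s topol.tpr

With -ntomp 1 as in your runs, 12 of the 16 cores sit idle; raising
-ntomp to 4 would use the whole node.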

Cheers,
--
Szilárd

> Thanks,
> Sikandar

