[gmx-users] Launch hybrid MPI/openMP run on multiple nodes

Mark Abraham mark.j.abraham at gmail.com
Fri Jan 15 15:48:09 CET 2016


Hi,

Thus you should find out how your system administrators intended you to use
the resources.

Mark

On Fri, 15 Jan 2016 15:01 Szilárd Páll <pall.szilard at gmail.com> wrote:

> Hi,
>
> Your job scheduler and/or MPI launcher is most likely to blame. The fact
> that mdrun warns about the logical-core (hardware-thread) mismatch means
> that the OpenMP runtime thinks you should be using one thread per rank.
> This typically means that the MPI launcher or job scheduler set an
> affinity mask for each rank, and mdrun likely also skipped pinning
> threads because of this.
>
> This alone, however, should not cause nodes to be empty; rather, it
> would cause ranks to overlap and run on the same core.
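>
> A quick way to check is to print the affinity mask each rank inherits
> from the launcher. As a sketch, assuming taskset (from util-linux) is
> available on the compute nodes, using your original launch settings:
>
>   mpirun -ppn 20 -np 240 bash -c 'taskset -cp $$'
>
> If every rank reports a single core, the launcher pinned each rank to
> one core, which would match the warning you quote below.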
>
> In any case, you should pass the correct ranks-per-node and
> threads-per-rank settings to your launcher; to ensure correct rank
> placement, you'll have to either set up affinities through the scheduler
> or let mdrun do it (see the sketch below).
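>
> For illustration, here is a possible corrected launch; it is only a
> sketch, assuming Intel MPI as in your post (the exact environment
> variables may differ with your version). Note that with 24 nodes of 20
> cores each, 240 ranks means 10 ranks per node, not the 20 you passed to
> -ppn, which by itself would leave half the nodes empty:
>
>   # 2 OpenMP threads per rank; make each rank's affinity domain span them
>   export OMP_NUM_THREADS=2
>   export I_MPI_PIN_DOMAIN=omp
>   # 10 ranks/node x 24 nodes = 240 ranks; 240 x 2 threads = 480 cores
>   mpirun -ppn 10 -np 240 gmx_mpi mdrun -ntomp 2
>
> Alternatively, disable the launcher's pinning and let mdrun pin threads
> itself with "-pin on".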
>
> Cheers,
> --
> Szilárd
>
> On Fri, Jan 15, 2016 at 9:25 AM, Chunlei ZHANG
> <chunleizhang.pku at gmail.com> wrote:
>
> > Dear GMX developers and users,
> >
> > I have a cluster of 24 nodes, each with two 10-core Intel CPUs.
> > GROMACS 5.1 was compiled with Intel MPI (version 5.1.1) and MKL.
> >
> > I can successfully run a simulation using pure MPI (480 MPI processes),
> > but the performance is poor and the mdrun log file suggests using fewer
> > MPI processes.
> > I tried to launch 240 MPI processes, each using 2 OpenMP threads, with
> > the command:
> > mpirun -ppn 20 -np 240 gmx_mpi mdrun -ntomp 2
> >
> > But only a fraction of the nodes are running mdrun and the log file says:
> >
> > Number of logical cores detected (20) does not match the number
> > reported by OpenMP (1).
> > Consider setting the launch configuration manually!
> >
> > Does anyone know how to solve this problem?
> > Thanks in advance.
> >
> > Best,
> > Chunlei