[gmx-users] Gromacs 4.6.7 with MPI and OpenMP

Mark Abraham mark.j.abraham at gmail.com
Fri May 8 17:15:34 CEST 2015


On Fri, May 8, 2015 at 4:28 PM Malcolm Tobias <mtobias at wustl.edu> wrote:

>
> Mark,
>
> On Friday 08 May 2015 13:48:30 Mark Abraham wrote:
>
> > What kind of simulation are you testing with? A reaction-field water
> > box will have almost nothing to do on the CPU, so no real change with
> > #threads. Check with your users, but a PME test case is often more
> > appropriate.
>
> I have no idea, I have very little background with molecular dynamics.
> All I can say is I can run the very same input files on our old cluster and
> observe the expected behavior, but with this same system on our new cluster
> I'm not seeing the OpenMP threads launch like I'd expect.
>

OK, that sounds like a system configuration issue rather than an unsuitable
test case.


> > > > > Number of CPUs detected (16) does not match the number reported
> > > > > by OpenMP (1).
> ...
> >
> > OK. Well that 1 reported by mdrun is literally the return value from
> > calling omp_get_num_procs(), so the solution is to look for what part
> > of the ecosystem is setting that to 1 and give that a slap ;-) IIRC
> > the use of -ntomp 4 means mdrun will go and use 4 threads anyway, but
> > it'd be good to fix the wider context.
>
> I think this is likely the problem: for whatever reason, my build of
> GROMACS thinks there's only 1 core and is not launching the OpenMP
> threads.
>
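
For reference, a hybrid launch of 4.6 with real MPI usually looks something
like this (the mdrun_mpi binary name and the -deffnm files are placeholders
for however your MPI build and inputs are actually named):

  export OMP_NUM_THREADS=4
  mpirun -np 2 mdrun_mpi -ntomp 4 -deffnm topol

If the MPI launcher pins each rank to a single core, omp_get_num_procs()
will only see that one core, which is exactly the mismatch mdrun is
reporting.
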
> I've never been mistaken for a C programmer, but when I run my little
> Hello World OpenMP code it seems to behave as I expect. If I request 8
> of the CPU cores on a 16-core system:
>
> qsub -I -l nodes=1:ppn=8:gpus=2,walltime=24:00:00
>
> omp_get_num_procs reports 8 cores:
>
> [mtobias at gpu22 C]$ ./a.out
> hello world
> hello world
>  8
> hello world
>  8
>
> FWIW, I ran the same GROMACS run outside of the queuing system to verify
> that the CPUSETs were not causing the issue.
>
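
For concreteness, a minimal probe along the lines of your hello-world test
would look something like this (just a sketch; your actual program no doubt
differs):

  #include <omp.h>
  #include <stdio.h>

  int main(void)
  {
      /* The same call mdrun uses to count processors. */
      printf("omp_get_num_procs() = %d\n", omp_get_num_procs());

      /* One line per thread that actually starts. */
  #pragma omp parallel
      printf("hello from thread %d of %d\n",
             omp_get_thread_num(), omp_get_num_threads());

      return 0;
  }

built with something like gcc -fopenmp probe.c.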

MPI gets a chance to play with OMP_NUM_THREADS (and pinning!) too, so your
tests suggest the issue lies there. Your test program was presumably
MPI-unaware, so I would check its behaviour when run under MPI as above,
and then with 2 MPI procs, each with 4 cores.
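
A minimal MPI-aware version of the same probe might look like this (mpicc
and mpirun names assumed; adjust to your MPI stack):

  #include <mpi.h>
  #include <omp.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* If the launcher has pinned this rank to one core,
       * omp_get_num_procs() will typically come back as 1. */
      printf("rank %d of %d: procs = %d, max threads = %d\n",
             rank, size, omp_get_num_procs(), omp_get_max_threads());

  #pragma omp parallel
      printf("rank %d: hello from thread %d of %d\n",
             rank, omp_get_thread_num(), omp_get_num_threads());

      MPI_Finalize();
      return 0;
  }

Build it with mpicc -fopenmp, run it in the same ppn=8 allocation as
OMP_NUM_THREADS=4 mpirun -np 2 ./a.out, and see whether each rank still
reports 8 procs and 4 threads, or whether one of them collapses to 1.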

> > Sure, you need root access. You don't need it for running cmake when
> > that runs a pile of insecure code ;-)
>
> Fair point. I guess I've learned to trust scientific programmers too
> much over the years ;-)
>
> > YMMV but hyperthreads were generally not useful with GROMACS 4.6. That is
> > changing for newer hardware and GROMACS, however.
>
> We've got hyperthreading disabled on all of our systems.
>

OK. I think some ways of setting that up will report the 8 hardware threads
from ppn=8, rather than the 16 you see, but I've no idea how people
configure that. Your field, not mine ;-)

Mark


> Malcolm
>
> >
> > Mark
> >
> >
> > > Cheers,
> > > Malcolm
> --
> Malcolm Tobias
> 314.362.1594
>

