[gmx-users] Gromacs 4.6.7 with MPI and OpenMP
mtobias at wustl.edu
Mon May 11 16:30:56 CEST 2015
On Friday 08 May 2015 15:15:31 Mark Abraham wrote:
> > FWIW, I ran the same GROMACS run outside of the queuing system to verify
> > that the CPUSETs were not causing the issue.
> MPI gets a chance to play with OMP_NUM_THREADS (and pinning!), too, so your
> tests suggest the issue lies there. Your program presumably was
> MPI-unaware, so I would check its behaviour when run under MPI as above,
> and with 2 MPI procs, each with 4 cores.
Thanks, this is just the clue I was looking for. I extended my example so that each MPI process would report its OMP_NUM_THREADS value. For whatever reason, under my build of OpenMPI every process always reported OMP_NUM_THREADS as 1. Strangely, the rather ancient version of OpenMPI on the old cluster doesn't exhibit this behavior, nor do other MPI implementations on the new cluster, such as mvapich2.
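For anyone hitting the same symptom, a quick per-rank check along these lines can confirm whether the launcher is clobbering the variable (a sketch; `mpirun` flags are OpenMPI-style and may differ for your MPI implementation):

```shell
export OMP_NUM_THREADS=4
# Each rank should echo the same value; if any rank prints 1 (or empty),
# the MPI launcher is overriding or dropping the variable. The guard
# lets the check degrade to a single local process when mpirun is absent.
if command -v mpirun >/dev/null 2>&1; then
    mpirun -np 2 sh -c 'echo "pid $$ sees OMP_NUM_THREADS=$OMP_NUM_THREADS"'
else
    sh -c 'echo "pid $$ sees OMP_NUM_THREADS=$OMP_NUM_THREADS"'
fi
```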
The solution in my case was simply to rebuild GROMACS against mvapich2, and everything now appears to behave normally.
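For reference, rebuilding GROMACS 4.6.x against a different MPI stack looks roughly like this (a sketch: the install prefix, source path, and compiler wrapper names are assumptions for a typical mvapich2 install; `GMX_MPI=ON` is the standard GROMACS CMake switch for an MPI build):

```shell
# Point CMake at the mvapich2 compiler wrappers and enable MPI support.
CC=mpicc CXX=mpicxx cmake ../gromacs-4.6.7 \
    -DGMX_MPI=ON \
    -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6.7-mvapich2
make -j 8 && make install
```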
Thanks for everyone's help on this.