[gmx-developers] odd resource-division behaviour gromacs git master

Johannes Wagner johannes.wagner at h-its.org
Mon Aug 28 16:00:59 CEST 2017


Hey all,

I am seeing odd behaviour in how threads are divided between OpenMP and MPI with the GROMACS git master branch (checkout from 25.08.2017).

The setup: CentOS 7, 2x 8-core Xeon v4 with HT enabled, one GTX 1080 GPU, compiled with gcc 4.8.5, -DGMX_GPU=ON, and hwloc installed.
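For completeness, the configure step was roughly the following (a sketch; apart from -DGMX_GPU and the compiler everything was left at defaults):

    cmake .. -DGMX_GPU=ON \
        -DCMAKE_C_COMPILER=gcc \
        -DCMAKE_CXX_COMPILER=g++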

If I run a CPU-only job with -nb cpu, mdrun picks 1 MPI rank x 32 OpenMP threads and I get the following fatal error:

Fatal error:
Your choice of 1 MPI rank and the use of 32 total threads leads to the use of
32 OpenMP threads, whereas we expect the optimum to be with more MPI ranks
with 1 to 6 OpenMP threads. If you want to run with this many OpenMP threads,
specify the -ntomp option. But we suggest to increase the number of MPI ranks
(option -ntmpi).
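
As the error message suggests, I can presumably work around this by fixing the split by hand, something like the following (the 8x4 split just mirrors what the other box below picks automatically, and -deffnm md is a placeholder for my actual run):

    gmx mdrun -nb cpu -ntmpi 8 -ntomp 4 -deffnm md

But I would rather understand why the automatic choice differs in the first place.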

If I run the same system with CPU+GPU, it again picks 1 MPI rank x 32 OpenMP threads, but runs without complaint.

On a second machine (2x 8-core Xeon v3, HT enabled, 2x Titan X GPUs, also compiled with -DGMX_GPU=ON), the exact same runs, with or without -nb cpu, give 8 MPI ranks x 4 OpenMP threads and work with no errors.

Compiling GROMACS with -DGMX_GPU=OFF gives 32 MPI ranks x 1 OpenMP thread on both machines and works.
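
For reference, I am telling the two builds apart via the version header, which (as far as I remember) lists the GPU support and compiler that went into the binary:

    gmx --version
    # look for the "GPU support:" and "C compiler:" lines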

Does anyone have a clue what causes the different behaviour on the two machines? Is it single vs. dual GPU? Moreover, why do I get that fatal error at all? And why does the chosen MPI/OpenMP configuration depend on whether GPU support is compiled in, even though both runs are CPU-only?

I have not filed a bug report on Redmine yet, but can do so if that helps. Any hints appreciated!


cheers, Johannes

