Re: [gmx-developers] Oversubscribing on 4.6.2 with MPI / OpenMP
hess at kth.se
Thu Apr 25 09:43:59 CEST 2013
Hi
Yes, that is expected.
Combined MPI + OpenMP is always slower than either one alone, except close to the scaling limit.
Two OpenMP threads per MPI rank give the least overhead, especially with hyperthreading, although turning off hyperthreading is then probably faster.
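On your two 8-core nodes that would mean something like this (the per-node placement flag varies between MPI stacks, so take this only as a sketch):

export OMP_NUM_THREADS=2
mpiexec -np 8 -npernode 4 mdrun -ntomp 2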
Cheers,
Berk
----- Reply message -----
From: "Jochen Hub" <jhub at gwdg.de>
To: "Discussion list for GROMACS development" <gmx-developers at gromacs.org>
Subject: [gmx-developers] Oversubscribing on 4.6.2 with MPI / OpenMP
Date: Thu, Apr 25, 2013 09:37
On 4/24/13 9:53 PM, Mark Abraham wrote:
> I suspect -np 2 is not starting a process on each node like I suspect
> you think it should, because all the symptoms are consistent with that.
> Possibly the Host field in the .log file output is diagnostic here.
> Check how your MPI configuration works.
I fixed the issue with the MPI call. I now make sure that only one MPI
process is started per node (mpiexec -n 2 with -npernode=1 or -bynode). The
oversubscription warning no longer appears, so everything seems fine.
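For reference, the full launch is roughly:

export OMP_NUM_THREADS=8
mpiexec -n 2 -npernode=1 mdrun

(or -bynode instead of -npernode=1, depending on the MPI implementation).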
However, the performance is quite poor with MPI/OpenMP. Example:
(100 kAtoms, PME, Verlet, cut-offs at 1 nm, nstlist=10)
16 MPI processes: 6.8 ns/day
2 MPI processes, 8 OpenMP threads per MPI process: 4.46 ns/day
4 MPI processes with 4 OpenMP threads each does not improve things.
I use icc 13, and I have tried different MPI implementations (MVAPICH2 1.8,
Open MPI 1.33).
Is that expected?
Many thanks,
Jochen
>
> Mark
>
> On Apr 24, 2013 7:47 PM, "Jochen Hub" <jhub at gwdg.de
> <mailto:jhub at gwdg.de>> wrote:
>
> Hi,
>
> I have a problem related to the oversubscribing issue reported on
> Feb 5 in the user list - yet it seems different.
>
> I use the latest git 4.6.2 with icc 13 and MVAPICH2/1.8.
>
> I run on 2 nodes, each with 2 Xeon Harpertowns (E5472).
>
> export OMP_NUM_THREADS=1
> mpiexec -np 16 mdrun
>
> everything is fine - reasonable performance. With
>
> export OMP_NUM_THREADS=8
> mpiexec -np 2 mdrun
>
> I get the warning:
>
> WARNING: Oversubscribing the available 8 logical CPU cores with 16
> threads. This will cause considerable performance loss!
>
> And the simulation is indeed very slow.
>
> According to Berk's suggestion in the thread "MPI oversubscription"
> in the user list, I have added print statements to
> src/gmxlib/gmx_detect_hardware.c to check the sysconf(...)
> calls. I receive 8 in each MPI process:
>
> ret@_SC_NPROCESSORS_ONLN = 8
> ret@_SC_NPROCESSORS_ONLN = 8
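> A minimal standalone sketch of such a check (this is not the actual
> GROMACS code, only an illustration of the sysconf call):
>
>     #include <stdio.h>
>     #include <unistd.h>
>
>     int main(void)
>     {
>         /* logical CPUs currently online, as seen by this process */
>         long ret = sysconf(_SC_NPROCESSORS_ONLN);
>         printf("ret@_SC_NPROCESSORS_ONLN = %ld\n", ret);
>         return 0;
>     }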
>
> Here is some more information from the log file (the two nodes are
> apparently detected), so I am a bit lost.
>
> Can someone give a hint on how to solve this?
>
> Many thanks,
> Jochen
>
> Host: r104i1n8 pid: 12912 nodeid: 0 nnodes: 2
> Gromacs version: VERSION 4.6.2-dev
> Precision: single
> Memory model: 64 bit
> MPI library: MPI
> OpenMP support: enabled
> GPU support: disabled
> invsqrt routine: gmx_software_invsqrt(x)
> CPU acceleration: SSE4.1
> FFT library: fftw-3.3.1-sse2
> Large file support: enabled
> RDTSCP usage: disabled
> Built on: Wed Apr 24 17:07:56 CEST 2013
> Built by: nicjohub at r104i1n0 [CMAKE]
> Build OS/arch: Linux 2.6.16.60-0.97.1-smp x86_64
> Build CPU vendor: GenuineIntel
> Build CPU brand: Intel(R) Xeon(R) CPU E5472 @ 3.00GHz
> Build CPU family: 6 Model: 23 Stepping: 6
> Build CPU features: apic clfsh cmov cx8 cx16 lahf_lm mmx msr pdcm
> pse sse2 sse3 sse4.1 ssse3
> C compiler: /sw/comm/mvapich2/1.8-intel/bin/mpicc Intel
> icc (ICC) 13.0.1 20121010
> C compiler flags: -msse4.1 -std=gnu99 -Wall -ip
> -funroll-all-loops -O3 -DNDEBUG
>
> Detecting CPU-specific acceleration.
> Present hardware specification:
> Vendor: GenuineIntel
> Brand: Intel(R) Xeon(R) CPU E5472 @ 3.00GHz
> Family: 6 Model: 23 Stepping: 6
> Features: apic clfsh cmov cx8 cx16 lahf_lm mmx msr pdcm pse sse2
> sse3 sse4.1 ssse3
> Acceleration most likely to fit this hardware: SSE4.1
> Acceleration selected at GROMACS compile time: SSE4.1
>
>
>
>
--
---------------------------------------------------
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---------------------------------------------------
--
gmx-developers mailing list
gmx-developers at gromacs.org
http://lists.gromacs.org/mailman/listinfo/gmx-developers
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-developers-request at gromacs.org.