[gmx-developers] plans for mdrun features to deprecate in GROMACS 5.0

Szilárd Páll szilard.pall at cbr.su.se
Tue Sep 17 15:30:54 CEST 2013


One limitation of leaving OpenMP as the only option for PD runs is that
OpenMP scaling is far from stellar when running across multiple NUMA
domains, most notably (but not only) on AMD. While on a dual-socket
8-core Sandy Bridge with thousands of atoms/core you typically get
60-85% scaling across two sockets, on a dual 16-core/8-module AMD
Piledriver it's more like 20-60%, and on previous-generation AMD CPUs
it's even worse (not to mention quad-socket machines).

While some improvements to DD + multi-threading may be needed to
improve scaling at high thread counts per rank, this is quite feasible
even with MPI+OpenMP, whereas pushing OpenMP across NUMA regions will
hardly work.
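To illustrate the contrast, here is a sketch of the two launch modes for a
hypothetical dual-socket, 16-core node (the flags -ntmpi, -ntomp, and -pin
exist in mdrun 4.6; the core counts are assumptions for the example):

```shell
# Pure OpenMP: a single rank whose 16 threads span both NUMA domains --
# this is the case that scales poorly on multi-socket AMD machines.
mdrun -ntmpi 1 -ntomp 16 -pin on

# MPI+OpenMP hybrid: one thread-MPI rank (and thus one DD domain) per
# NUMA domain, each rank's 8 OpenMP threads pinned inside its socket.
mdrun -ntmpi 2 -ntomp 8 -pin on
```

The point being that with a rank per NUMA domain, OpenMP threads only ever
share a memory controller, while DD handles the cross-socket communication.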

I'm wondering, would it be feasible to provide a "relaxed" DD
code-path in combination with strongly limiting how small the domain
size can be (or is this similar to what Carsten suggests)?

--
Szilárd


On Mon, Sep 16, 2013 at 5:15 PM, XAvier Periole <x.periole at rug.nl> wrote:
>
> I'll have to look at what we did and get back to you guys ...
>
> I think I got stuck at running on one node with PD and got distracted by something else ... I'll need to get back to this in more detail.
>
> XAvier.
>
> On Sep 16, 2013, at 17:04, "Shirts, Michael (mrs5pt)" <mrs5pt at eservices.virginia.edu> wrote:
>
>>
>>> Berk, the use of OpenMP on a single node ... should work indeed. We tried this
>>> for REMD using one node per replica, each having exotic bonded terms, but we
>>> failed for reasons I forgot.
>>
>> Was this Hamiltonian replica exchange in 4.6?  If so, let me know about any
>> failures to see if it's a problem with insufficient documentation or an
>> underlying bug with the REMD/exotic bonded interaction that needs to be
>> fixed.
>>
>> Best,
>> ~~~~~~~~~~~~
>> Michael Shirts
>> Assistant Professor
>> Department of Chemical Engineering
>> University of Virginia
>> michael.shirts at virginia.edu
>> (434)-243-1821
>>
>>
>>
>> --
>> gmx-developers mailing list
>> gmx-developers at gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-developers
>> Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-developers-request at gromacs.org.
