[gmx-developers] plans for mdrun features to deprecate in GROMACS 5.0

Erik Lindahl erik.lindahl at scilifelab.se
Tue Sep 17 15:49:55 CEST 2013


Hi,

I actually don't think it is overly difficult to port anything to domain decomposition, particularly when performance isn't critical (which it isn't if particle decomposition is an alternative :-).

The problem is that it is initially slightly easier to support only PD, and that has meant that even some new code supported only PD (guilty as charged - our first version of generalized Born was PD-only). This in turn compounds the problem: the existence of PD means developers might add more code that works only with PD, and suddenly quite a few parts don't work properly with our main parallelization algorithm.

I'm a bit afraid that a "relaxed DD" would have exactly the same effect. I would therefore rather advocate some extra semi-stupid communication routines that might kill your performance compared to vanilla runs, but that _will_ work with normal DD.
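
To make that concrete, here is a rough sketch of the kind of routine I have in mind (plain MPI, not actual GROMACS code; the function name and arguments are made up for illustration). Every rank simply gathers the complete coordinate array, so an algorithm that assumes it can see the whole system keeps working under DD, at the cost of extra communication:

/* Hypothetical sketch, not GROMACS API: gather the full coordinate
 * array on every rank so that a formerly PD-only algorithm can run on
 * top of domain decomposition. Correct, but communication-heavy. */
#include <mpi.h>
#include <stdlib.h>

/* x_local:  3*n_local doubles owned by this domain
 * x_global: 3*n_global doubles, filled identically on all ranks */
void gather_all_coordinates(MPI_Comm comm,
                            const double *x_local, int n_local,
                            double *x_global)
{
    int  nranks;
    int *counts, *displs;

    MPI_Comm_size(comm, &nranks);
    counts = malloc(nranks*sizeof(int));
    displs = malloc(nranks*sizeof(int));

    /* Each domain owns a different number of atoms, so exchange counts first. */
    int sendcount = 3*n_local;
    MPI_Allgather(&sendcount, 1, MPI_INT, counts, 1, MPI_INT, comm);

    displs[0] = 0;
    for (int r = 1; r < nranks; r++)
    {
        displs[r] = displs[r-1] + counts[r-1];
    }

    /* The "semi-stupid" step: everyone gets everything. */
    MPI_Allgatherv(x_local, sendcount, MPI_DOUBLE,
                   x_global, counts, displs, MPI_DOUBLE, comm);

    free(counts);
    free(displs);
}

That obviously doesn't scale, but it keeps such a feature alive under DD until somebody writes proper distributed communication for it.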

Cheers,

Erik


On Sep 17, 2013, at 3:30 PM, Szilárd Páll <szilard.pall at cbr.su.se> wrote:

> One limitation of leaving OpenMP as the only option for PD runs is that
> OpenMP scaling is far from stellar when running across multiple NUMA
> domains, most notably (but not only) on AMD. While on a dual-socket
> 8-core Sandy Bridge with 1000s of atoms/core you typically get 60-85%
> scaling across two sockets, on a dual 16-core/8-module AMD Piledriver
> it's more like 20-60%, and on previous-generation AMD CPUs it's even
> worse (not to mention quad-socket machines).
> 
> While some improvements to DD + multi-threading may be needed to improve
> scaling at high thread counts per rank, this is quite feasible even with
> MPI+OpenMP, whereas pushing OpenMP across NUMA regions will hardly work.
> 
> I'm wondering: would it be feasible to provide a "relaxed" DD code path
> in combination with strongly limiting how small the domains can be (or
> is this similar to what Carsten suggests)?
> 
> --
> Szilárd
> 
> 
> On Mon, Sep 16, 2013 at 5:15 PM, XAvier Periole <x.periole at rug.nl> wrote:
>> 
>> I'll have to look at what we did and get back to you guys ...
>> 
>> I think I got stuck at running on one node with PD and got distracted by something else ... I'll need to get back to this in more detail.
>> 
>> XAvier.
>> 
>> On Sep 16, 2013, at 17:04, "Shirts, Michael (mrs5pt)" <mrs5pt at eservices.virginia.edu> wrote:
>> 
>>> 
>>>> Berk, the use of OpenMP on a single node ... should work indeed. We tried this
>>>> for REMD using one node per replica, each having exotic bonded terms, but we
>>>> failed for a reason I forget.
>>> 
>>> Was this Hamiltonian replica exchange in 4.6?  If so, let me know about any
>>> failures to see if it's a problem with insufficient documentation or an
>>> underlying bug with the REMD/exotic bonded interaction that needs to be
>>> fixed.
>>> 
>>> Best,
>>> ~~~~~~~~~~~~
>>> Michael Shirts
>>> Assistant Professor
>>> Department of Chemical Engineering
>>> University of Virginia
>>> michael.shirts at virginia.edu
>>> (434)-243-1821
>>> 
>>> 
>>> 



