[gmx-developers] plans for mdrun features to deprecate in GROMACS 5.0

Berk Hess hess at kth.se
Mon Sep 16 16:29:49 CEST 2013


On 09/16/2013 04:19 PM, Carsten Kutzner wrote:
> Hi Xavier,
>
> On Sep 16, 2013, at 3:55 PM, XAvier Periole <x.periole at rug.nl> wrote:
>> You might have guessed that dropping particle decomposition is a problem for me, since I took the time to file a bug report in which I explain in some detail how and why it is a problem.
>>
>> PD remains the only way to run a system with unusual bonded terms that span a large part of the box. DD and DLB simply do not go through, and there is then no way to run the system.
> There is functionality in groupcoord.c that might provide the framework for what
> you want to do using domain decomposition. It will work if just a small fraction of
> all atoms are involved in these long-range special interactions. The framework
> allows you to communicate a (small) group of atoms (defined by an index group)
> to all the nodes - then you can define your own potential function and forces
> for that group. It would involve a little bit of coding, though :)
> It will also work with bigger groups, but the scaling will suffer then. This
> functionality is used for essential dynamics / flooding and enforced
> rotation in combination with domain decomposition.
>
> Best,
>    Carsten
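What Carsten describes boils down to the three-step pattern sketched below: make
the positions of the small index group known on every rank, let every rank
evaluate your custom potential on the whole group, and then keep only the forces
of the atoms each rank actually owns. The sketch uses plain MPI and made-up names
purely for illustration; it is not the groupcoord.c API, which is what you would
use inside mdrun instead of raw MPI calls.

/*
 * Plain-MPI sketch of the pattern groupcoord.c provides inside mdrun:
 * gather the positions of a small index group to every rank, evaluate a
 * custom potential on the whole group, and keep only the forces on the
 * atoms each rank actually owns.  Names are illustrative only -- this is
 * NOT the GROMACS API.
 *
 * Compile e.g.:  mpicc -std=c99 -o group_restraint group_restraint.c -lm
 */
#include <math.h>
#include <stdio.h>
#include <mpi.h>

#define GROUP_SIZE 4   /* atoms in the special index group (kept small) */
#define DIM 3

/* Custom potential: harmonic restraint on the distance between the first
 * and last atom of the group, V = 0.5*k*(r - r0)^2.  Forces for ALL group
 * atoms are computed on every rank (here only two atoms get a force). */
static double group_potential(const double x[GROUP_SIZE][DIM],
                              double f[GROUP_SIZE][DIM],
                              double k, double r0)
{
    double dx[DIM], r = 0.0;
    for (int d = 0; d < DIM; d++)
    {
        dx[d] = x[GROUP_SIZE - 1][d] - x[0][d];
        r    += dx[d] * dx[d];
    }
    r = sqrt(r);
    double fscal = -k * (r - r0) / r;
    for (int i = 0; i < GROUP_SIZE; i++)
        for (int d = 0; d < DIM; d++)
            f[i][d] = 0.0;
    for (int d = 0; d < DIM; d++)
    {
        f[GROUP_SIZE - 1][d] =  fscal * dx[d];
        f[0][d]              = -fscal * dx[d];
    }
    return 0.5 * k * (r - r0) * (r - r0);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Toy "domain decomposition": group atom i is owned by rank i%nranks.
     * Each rank only fills in the positions of the atoms it owns; the
     * rest stay zero. */
    double xlocal[GROUP_SIZE][DIM] = {{0}};
    for (int i = 0; i < GROUP_SIZE; i++)
        if (i % nranks == rank)
            for (int d = 0; d < DIM; d++)
                xlocal[i][d] = 1.0 * i + 0.1 * d;   /* fake coordinates */

    /* Step 1: make the whole (small) group known on every rank.  Summing
     * works because each atom is non-zero on exactly one rank. */
    double xglobal[GROUP_SIZE][DIM];
    MPI_Allreduce(xlocal, xglobal, GROUP_SIZE * DIM,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Step 2: every rank evaluates the custom potential on the full group. */
    double fgroup[GROUP_SIZE][DIM];
    double epot = group_potential(xglobal, fgroup, 1000.0, 2.0);

    /* Step 3: each rank keeps only the forces of the atoms it owns (in
     * mdrun these would be added to the local force array); here we just
     * print them, so nothing is double-counted. */
    for (int i = 0; i < GROUP_SIZE; i++)
        if (i % nranks == rank)
            printf("rank %d owns group atom %d, fx = %g\n",
                   rank, i, fgroup[i][0]);
    if (rank == 0)
        printf("restraint energy = %g\n", epot);

    MPI_Finalize();
    return 0;
}
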
Also, with OpenMP thread parallelization you can still run in parallel within 
a single node without PD or DD. Although this limits the number of cores, it 
is an order of magnitude better than running on a single core. Even if some 
of the more exotic algorithms (or your own algorithm) are not OpenMP 
parallelized, the rest of the code will be, and your whole simulation should 
still scale reasonably.
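For example, with a thread-MPI build you could launch something like

  mdrun -ntmpi 1 -ntomp 16 -deffnm topol

(the thread count and file name are just placeholders for your setup): with a 
single rank there is no decomposition at all, and all parallelism within the 
node comes from OpenMP threads.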

Cheers,

Berk
>> I used PD in a few applications where the relative orientation of proteins was restrained using distance, angle and dihedral restraints. The restraints were subsequently treated using a 6D-WHAM algorithm to obtain the PMF … I guess this won't be possible anymore … very very very sad!
>>
>> When using a coarse-grained model, PD was also often the only way to run a system during equilibration. It was never clear why, but non-equilibrated systems have severe trouble running with DD.
>>
>> To conclude, removing particle decomposition is a disaster.
>>
>> XAvier.
>>
>> On Sep 10, 2013, at 10:35 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
>>
>>> Hi,
>>>
>>> The near-silence has been deafening, so these issues are regarded as
>>> closed. Anybody wanting a significant deviation from that plan will
>>> find the onus on them to do the work! :-)
>>>
>>> Cheers,
>>>
>>> Mark
>>>
>>> On Wed, Aug 14, 2013 at 5:06 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
>>>> Hi gmx-users and gmx-developers,
>>>>
>>>> There are a number of features of GROMACS that we plan to drop for 5.0
>>>> (scheduled for early 2014). We don’t like doing this, but if things
>>>> are broken or cause developers pain, then they will go unless there is
>>>> manpower to support them. We’d like to keep you informed and hear how
>>>> much pain any of this might cause. Some features will be dropped
>>>> entirely, and others are likely to be reduced to explicit support only
>>>> for some cases. Some discussion has already occurred here
>>>> http://redmine.gromacs.org/issues/1292.
>>>>
>>>> Things we plan to drop entirely:
>>>> * particle decomposition (see below)
>>>> * current QM support (this will be dropped, work on a replacement is
>>>> underway, planned for 5.0)
>>>> * writing of pair distance and/or time-averaged pair distance to
>>>> energy files during simulations with position/orientation restraints
>>>> * reaction-field-nec
>>>> * Encad-shift
>>>> * mdrun -ionize
>>>> * GCT
>>>> * mdrun -seppot
>>>> * mdrun -ffscan
>>>> * OpenMM support
>>>>
>>>> There are several algorithms (e.g. fancy kinds of restraints) that
>>>> have only ever worked with particle decomposition (if they work at
>>>> all...). We plan to support these only in serial.
>>>>
>>>> Things that will likely only work in serial (i.e. single-domain DD):
>>>> * ensemble- and time-averaged distance restraints
>>>> * L-BFGS energy minimization
>>>> * Generalized Born
>>>>
>>>> In some cases, “in serial” might mean “in parallel (with DD) with an
>>>> extra communication stage that will make it work, but might scale
>>>> poorly.” Or “in parallel but if things diffuse too far, the simulation
>>>> will crash.” If you have working examples of any of the above in
>>>> parallel, we would be most interested to hear from you. We’d like to
>>>> construct test cases that show what works now, so that later if we are
>>>> able to support some kind of parallelism, we can show that it still
>>>> works.
>>>>
>>>> Things that won’t support constraints (because the implementations are
>>>> broken or missing):
>>>> * L-BFGS energy minimization
>>>> * MTTK pressure coupling
>>>>
>>>> As always, what goes into GROMACS depends on people putting the work
>>>> in. If something above would affect you, then do speak up.
>>>> Contributions of working test cases are particularly valuable, but in
>>>> the end you might have to be the one to write the code to make the
>>>> test pass. You will have the option of continuing to use old code,
>>>> too!
>>>>
>>>> Cheers,
>>>>
>>>> Mark
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/grubmueller/kutzner
> http://www.mpibpc.mpg.de/grubmueller/sppexa
>



