[gmx-users] FEP and loss of performance

David Mobley dmobley at gmail.com
Wed Apr 6 15:31:42 CEST 2011


Hi,

This doesn't sound like normal behavior; it is not what I typically
observe. While there may be a small performance difference with the free
energy code turned on, it is usually at the level of a few percent,
certainly not a factor of more than 10.

You may want to provide an mdp file and topology, etc. so someone can
see if they can reproduce your problem.
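
If you can't share everything, even just the free energy section of the mdp
would help. For a plain decoupling run I would expect something along these
lines (the molecule name and lambda settings below are only placeholders;
substitute whatever you are actually using):

  free-energy      = yes
  init-lambda      = 0.0
  couple-moltype   = SOL        ; molecule type being (de)coupled
  couple-lambda0   = vdw-q      ; fully interacting at lambda = 0
  couple-lambda1   = none       ; fully decoupled at lambda = 1
  couple-intramol  = no
  sc-alpha         = 0.5        ; soft-core settings
  sc-power         = 1
  sc-sigma         = 0.3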

Thanks.


On Wed, Apr 6, 2011 at 7:59 AM, Luca Bellucci <lcbllcc at gmail.com> wrote:
> I followed your suggestions and tried to perform an MD run with GROMACS and
> NAMD for a dialanine peptide in a water box. The cubic box had a side of 40 A.
>
> GROMACS:
> With the free energy module there is a drop in GROMACS performance of about
> 10-20 fold.
> Standard MD:      Time:          6.693       6.693    100.0
> Free energy MD:   Time:    136.113    136.113    100.0
>
> NAMD:
> With the free energy module the drop in performance is not nearly as
> pronounced as in GROMACS.
> Standard MD   6.900000
> Free energy MD 9.600000
>
> I would like to point out that this kind of calculation is common; in fact,
> the GROMACS 4.5.3 manual states: "There is a special option system
> that couples all molecule types in the system. This can be useful for
> equilibrating a system [..] ".
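>
> In mdp terms I read this as something like the following (I have not tried
> exactly this; it is only how I understand the manual):
>
>   free-energy    = yes
>   couple-moltype = system
>
> i.e. every molecule type in the topology is coupled to lambda at once.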
>
> What I would like to understand is whether there is a way to resolve the
> drop in GROMACS performance for this kind of calculation.
>
> Luca
>
>
>
>> I don't know if it is possible or not. I think that you can enhance
>> your chances of developer attention if you develop a small and simple
>> test system that reproduces the slowdown and very explicitly state
>> your case for why you can't use some other method. I would suggest
>> posting that to the mailing list and, if you don't get any response,
>> post it as an enhancement request on the redmine page (or whatever has
>> taken over from bugzilla).
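>>
>> For example, something along these lines (an untested sketch using the
>> 4.5.x command-line tools; pick whatever force field and water model you
>> prefer):
>>
>>   pdb2gmx  -f dialanine.pdb -o conf.gro -p topol.top -water spc
>>   editconf -f conf.gro -o box.gro -c -d 1.0 -bt cubic
>>   genbox   -cp box.gro -cs spc216.gro -o solv.gro -p topol.top
>>   grompp   -f md.mdp -c solv.gro -p topol.top -o md.tpr
>>   mdrun    -deffnm md
>>
>> Run it once with free-energy = no and once with the decoupling options
>> switched on, and compare the timing summaries at the end of the log files.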
>>
>> Good luck,
>> Chris.
>>
>> -- original message --
>>
>>
>> Yes, I am testing the possibility of performing a Hamiltonian REMD.
>> Energy barriers can be overcome by increasing the system temperature or by
>> scaling the potential energy with a lambda value; in this sense the two
>> methods are "equivalent" (scaling the potential by lambda at temperature T
>> samples the same configurational distribution as the unscaled potential at
>> T/lambda). Both have advantages and disadvantages, and this is not the
>> right place to debate them. The main problem seems to be how to overcome
>> the loss of GROMACS performance in such a calculation. At the moment it
>> looks like an intrinsic code problem.
>> Is that possible?
>>
>> > >> Dear Chris and Justin,
>> > >>
>> > >> Thank you for your precious suggestions.
>> > >>
>> > >> This is a test that I performed on a single machine with 8 cores
>> > >> and GROMACS 4.5.4.
>> > >>
>> > >> I am trying to enhance the sampling of a protein using the
>> > >> decoupling scheme of the free energy module of GROMACS. However,
>> > >> when I decouple only the protein, the protein collapses. Because I
>> > >> simulate in NVT, I thought that this was an effect of the solvent.
>> > >> I was trying to decouple the solvent as well, to understand the
>> > >> system behavior.
>> > >>
>> >
>> > > Rather than suspect that the solvent is the problem, it's more likely
>> > > that decoupling an entire protein simply isn't stable.  I have never
>> > > tried anything that enormous, but the volume change in the system could
>> > > be unstable, along with any number of factors, depending on how you
>> > > approach it.
>> > >
>> > > If you're looking for better sampling, REMD is a much more robust
>> > > approach than trying to manipulate the interactions of huge parts of
>> > > your system using the free energy code.
>> >
>> > Presumably Luca is interested in some type of Hamiltonian exchange where
>> > lambda represents the interactions between the protein and the solvent?
>> > This can actually be a useful method for enhancing sampling. I think it's
>> > dangerous if we rely too heavily on "try something else". I still see no
>> > methodological reason a priori why there should be any actual slowdown,
>> > so that makes me think that it's an implementation thing, and there is
>> > at least the possibility that this is something that could be fixed as
>> > an enhancement.
>> >
>> > Chris.
>> >
>> >
>> > -Justin
>> >
>> > >  I expected a loss of performance, but not so drastic.
>> > >
>> > >  Luca
>> > >
>> > >>  Load balancing problems I can understand, but why would it take
>> > >>  longer in absolute time? I would have thought that some nodes would
>> > >>  simply be sitting idle, but this should not cause an increase in the
>> > >>  overall simulation time (15x at that!).
>> > >>
>> > >>  There must be some extra communication?
>> > >>
>> > >>  I agree with Justin that this seems like a strange thing to do, but
>> > >>  still I think that there must be some underlying coding issue
>> > >>  (probably one that only exists because of a reasonable assumption
>> > >>  that nobody would annihilate the largest part of their system).
>> > >>
>> > >>  Chris.
>> > >>
>> > >>  Luca Bellucci wrote:
>> > >>>  Hi Chris,
>> > >>>  thanks for the suggestions,
>> > >>>  in the previous mail there is a mistake because
>> > >>>  couple-moltype = SOL (for solvent) and not "Protein_chaim_P".
>> > >>>  Now the problem of the load balance seems reasonable, because
>> > >>>  the water box is large, ~9.0 nm.
>> > >>
>> > >>  Now your outcome makes a lot more sense.  You're decoupling all of
>> > >>  the solvent? I don't see how that is going to be physically stable
>> > >>  or terribly



-- 
David Mobley
dmobley at gmail.com
504-383-3662


