[gmx-users] Strong negative energy drift (losing energy) in explicit water AMBER protein simulation

Justin A. Lemkul jalemkul at vt.edu
Wed Jun 13 16:59:29 CEST 2012



On 6/13/12 10:48 AM, ms wrote:
> On 13/06/12 16:36, Justin A. Lemkul wrote:
>> Here, you're not preserving any of the previous state information.
>> You're picking up from 2 ns, but not passing a .cpt file to grompp - the
>> previous state is lost. Is that what you want? In conjunction with
>> "gen_vel = no" I suspect you could see some instabilities.
>
> This is interesting - I have to ask the guys who devised the group's standard
> procedure :)
>
>>> mpirun -np 8 mdrun_d -v -deffn 1AKI_production_GPU -s
>>> 1AKI_production_GPU.tpr
>>> -g 1AKI_production_GPU.log -c 1AKI_production_GPU.gro -o
>>> 1AKI_production_GPU.trr
>>> -g 1AKI_production_GPU.log -e 1AKI_production_GPU.edr
>>>
>>
>> As an aside, proper use of -deffnm (not -deffn) saves you all of this
>> typing :)
>>
>> mpirun -np 8 mdrun_d -v -deffnm 1AKI_production_GPU
>>
>> That's all you need.
>
> FFFFUUUU that's why -deffn didn't work! Silly me. Thanks!
>
>>> I am using Gromacs 4.5.5 compiled in double precision.
>>>
>>> I am very rusty with Gromacs, since I last dealt with molecular dynamics
>>> more than a year ago :), so I am probably missing something obvious. Any
>>> hint on where I should look to solve the problem? (Also, advice on whether
>>> the .mdp is indeed correct for CUDA simulations is welcome.)
>>>
>>
>> I see the same whenever I run on GPU, but my systems are always implicit
>> solvent. Do you get reasonable performance with an explicit solvent PME
>> system on GPU? I thought that was supposed to be really slow.
>>
>> Do you observe similar effects on CPU? My tests have always indicated
>> that equivalent systems on CPU are far more stable (energetically and
>> structurally) than on GPU. I have never had any real luck on GPU. I get
>> great performance, and then crashes ;)
>
> Sorry, perhaps I wasn't clear. This was on normal CPUs! I was trying to get the
> system working on CPU and to see how it behaved before diving into the misty
> GPU sea...
>

Ah, sorry - with everything being named "GPU", I was thrown off.  I guess I should
have known based on the energy terms.  When running on GPU, very little 
information is printed (something I've complained about before) - you only get 
Potential, Kinetic, Total, Temperature, and anything related to constraints.  I 
think it's due to limitations in OpenMM, not Gromacs (something that should be 
improved in upcoming versions).
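
On the restart point quoted above: just as a sketch (the file names here are
placeholders, not your actual ones), a continuation that preserves the previous
state would either hand the checkpoint to grompp via -t:

grompp -f production.mdp -c 1AKI_equilibrated.gro -t 1AKI_equilibrated.cpt \
       -p topol.top -o 1AKI_production.tpr

or, for an exact continuation of a run that already wrote a checkpoint, restart
mdrun directly from it:

mpirun -np 8 mdrun_d -v -deffnm 1AKI_production -cpi 1AKI_production.cpt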

A few things to look at based on the .mdp file:

1. No constraints?  Even with a 1-fs timestep, you probably need to be 
constraining at least the h-bonds (see the example .mdp lines after this list).

2. nstlist set to 2 is not going to give wrong results, but it's incredibly 
time-consuming to do neighbor searching that often.  A value of 5 or 10 is 
probably more appropriate.

3. COM removal of multiple groups can lead to bad energy conservation.

4. What happens when you use the Andersen thermostat?  That's not implemented 
yet for CPU calculations (though it was recently pushed into the 4.6 development 
branch).  Your comment regarding GPU is fine, but I would think grompp would 
complain.

5. Why not use dispersion correction?
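
As a rough sketch of points 1-3 and 5 (the values are only illustrative, not
tuned for your system), the relevant .mdp lines might look like:

constraints           = h-bonds    ; point 1: constrain bonds involving hydrogen
constraint-algorithm  = lincs
nstlist               = 10         ; point 2: neighbor-list update every 10 steps
comm-mode             = Linear
comm-grps             = System     ; point 3: one COM-removal group for the whole system
DispCorr              = EnerPres   ; point 5: long-range dispersion correction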

-Justin

-- 
========================================

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================




