[gmx-users] Strong negative energy drift (losing energy) in explicit water AMBER protein simulation
ms
devicerandom at gmail.com
Wed Jun 13 17:49:02 CEST 2012
On 13/06/12 16:59, Justin A. Lemkul wrote:
>
>
> On 6/13/12 10:48 AM, ms wrote:
>> On 13/06/12 16:36, Justin A. Lemkul wrote:
>>> Here, you're not preserving any of the previous state information.
>>> You're picking up from 2 ns, but not passing a .cpt file to grompp - the
>>> previous state is lost. Is that what you want? In conjunction with
>>> "gen_vel = no" I suspect you could see some instabilities.
>>
>> This is interesting - I have to ask the guys who devised the group's
>> standard procedure :)
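
(Following up on this point: if I understand correctly, continuing
properly would mean feeding the checkpoint to grompp, something like

  grompp -f production.mdp -c equilibration.gro -t equilibration.cpt \
         -p topol.top -o 1AKI_production_GPU.tpr

where the file names are just placeholders from my setup, so that
velocities and the full-precision state are carried over instead of
being lost.)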
>>
>>>> mpirun -np 8 mdrun_d -v -deffn 1AKI_production_GPU -s 1AKI_production_GPU.tpr
>>>> -g 1AKI_production_GPU.log -c 1AKI_production_GPU.gro
>>>> -o 1AKI_production_GPU.trr -e 1AKI_production_GPU.edr
>>>>
>>>
>>> As an aside, proper use of -deffnm (not -deffn) saves you all of this
>>> typing :)
>>>
>>> mpirun -np 8 mdrun_d -v -deffnm 1AKI_production_GPU
>>>
>>> That's all you need.
>>
>> FFFFUUUU, that's why -deffn didn't work! Silly me. Thanks!
>>
>>>> I am using Gromacs 4.5.5 compiled in double precision.
>>>>
>>>> I am very rusty with Gromacs, since I last dealt with molecular
>>>> dynamics more than a year ago :), so I am probably missing something
>>>> obvious. Any hint on where I should look to solve the problem?
>>>> (Also, advice on whether the .mdp is indeed correct for CUDA
>>>> simulations is welcome.)
>>>>
>>>
>>> I see the same whenever I run on GPU, but my systems are always implicit
>>> solvent. Do you get reasonable performance with an explicit solvent PME
>>> system on GPU? I thought that was supposed to be really slow.
>>>
>>> Do you observe similar effects on CPU? My tests have always indicated
>>> that equivalent systems on CPU are far more stable (energetically and
>>> structurally) than on GPU. I have never had any real luck on GPU. I get
>>> great performance, and then crashes ;)
>>
>> Sorry, perhaps I wasn't clear. This was on normal CPUs! I was trying
>> to get the system working on CPU and see how it behaved before diving
>> into the misty GPU sea...
>>
>
> Ah, sorry - with everything being named "GPU" it threw me off. I guess I
> should have known based on the energy terms. When running on GPU, very
> little information is printed (something I've complained about before) -
> you only get Potential, Kinetic, Total, Temperature, and anything
> related to constraints. I think it's due to limitations in OpenMM, not
> Gromacs (something that should be improved in upcoming versions).
>
> A few things to look at based on the .mdp file:
>
> 1. No constraints? Even with a 1-fs timestep, you probably need to be
> constraining at least the h-bonds.
Ok. We usually don't constrain with a 1-fs timestep, and since the gmx
website said that most restraints were unsupported, I didn't feel like
adding them. Will ask about this here.
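For the record, if we do end up adding them, I assume the relevant .mdp
lines would be something like

  constraints          = h-bonds
  constraint_algorithm = lincs

with LINCS being just my guess at a sensible default.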
> 2. nstlist set to 2 is not going to give wrong results, but it's
> incredibly time-consuming to do neighbor searching that often. A value
> of 5 or 10 is probably more appropriate.
I have to ask why we use this value as default - and thanks for the tip,
although it doesn't seem relevant right now :)
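(For later, I take it that just means something like

  nstlist = 10

in the .mdp.)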
> 3. COM removal of multiple groups can lead to bad energy conservation.
OK, good to know.
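So presumably the safer choice is a single group, i.e. something like

  comm_mode = Linear
  comm_grps = System

assuming plain linear COM removal is what we want here.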
> 4. What happens when you use the Andersen thermostat? That's not
> implemented yet for CPU calculations (though it was recently pushed into
> the 4.6 development branch). Your comment regarding GPU is fine, but I
> would think grompp would complain.
I am not sure what you mean. On the gmx website it reads:
"Temperature control: Supported only with the sd/sd1, bd,
md/md-vv/md-vv-avek integrators. OpenMM implements only the Andersen
thermostat. All values for tcoupl are thus accepted and equivalent to
andersen. Multiple temperature coupling groups are not supported, only
tc-grps=System will work."
So it seems that *every* choice of mine means "andersen" in that
context. Am I wrong?
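In any case, on the CPU side I suppose I should stick to a thermostat
that is actually implemented there, e.g. something like

  tcoupl  = v-rescale
  tc_grps = System
  tau_t   = 0.1
  ref_t   = 300

where the values are just my guesses at reasonable defaults.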
> 5. Why not use dispersion correction?
True, why not? :)
Will give it a shot.
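If I read the manual right, that should just be

  DispCorr = EnerPres

in the .mdp.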
thanks!
m.
--
Massimo Sandal, Ph.D.
http://devicerandom.org