[gmx-developers] Conserved energy drifts more with shorter time steps

David van der Spoel spoel at xray.bmc.uu.se
Wed Jun 3 14:31:52 CEST 2015


On 03/06/15 11:58, Berk Hess wrote:
> On 03/06/15 11:30, David van der Spoel wrote:
>> On 03/06/15 11:22, Berk Hess wrote:
>>> On 03/06/15 11:14, David van der Spoel wrote:
>>>> On 03/06/15 10:36, Berk Hess wrote:
>>>>> On 03/06/15 10:29, David van der Spoel wrote:
>>>>>> On 03/06/15 10:02, Berk Hess wrote:
>>>>>>> On 03/06/15 9:57, David van der Spoel wrote:
>>>>>>>> On 02/06/15 15:24, Berk Hess wrote:
>>>>>>>>> On 2015-06-02 15:20, Anton Feenstra wrote:
>>>>>>>>>> On 02-06-15 15:14, Berk Hess wrote:
>>>>>>>>>>> On 2015-06-02 15:08, David van der Spoel wrote:
>>>>>>>>>>>> On 02/06/15 14:30, Berk Hess wrote:
>>>>>>>>>>>>> On 2015-06-02 14:26, David van der Spoel wrote:
>>>>>>>>>>>>>> On 02/06/15 14:05, Berk Hess wrote:
>>>>>>>>>>>>>>> I assume David is looking at the conserved energy quantity.
>>>>>>>>>>>>>> Indeed, averages and drift over 900 ps, 1000 molecules:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Energy           Average  Err.Est.     RMSD  Tot-Drift
>>>>>>>>>>>>>> -----------------------------------------------------------------
>>>>>>>>>>>>>> Total Energy     20460.7       2.5  778.213    12.4265  (kJ/mol)
>>>>>>>>>>>>>> Conserved En.    -4384.7       380  778.283    2665.29  (kJ/mol)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But I just noticed that verlet-buffer-tolerance is not
>>>>>>>>>>>>>>> mentioned, so I assume it is the default. The drift estimate is
>>>>>>>>>>>>>>> an overestimate, and it is tighter for shorter pair-list
>>>>>>>>>>>>>>> lifetimes. I guess that is the explanation. Set it very low, so
>>>>>>>>>>>>>>> you are sure the energy drift is not affected by a (too small)
>>>>>>>>>>>>>>> buffer.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This may be the default:
>>>>>>>>>>>>>>    verlet-buffer-tolerance        = 0.005
>>>>>>>>>>>>>> Should we set it to 1e-5 or something like that?
>>>>>>>>>>>>> It depends on how low a drift you want to be able to measure:
>>>>>>>>>>>>> you need to set the tolerance lower than that drift.
>>>>>>>>>>>> Ok, will try. What about the time step dependence? Any clue?
>>>>>>>>>>> I explained that before, see 4 steps above.
>>>>>>>>>>>
>>>>>>>>>>> Berk
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Berk
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Berk
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Jun 2, 2015 1:45 PM, "Shirts, Michael R. (mrs5pt)"
>>>>>>>>>>>>>>> <mrs5pt at eservices.virginia.edu> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> How do we run more NVE than with the settings below?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     tcoupl                         = Nose-Hoover
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Would make it not NVE.
>>>>>>>>>>>>>> Turning tcoupl off would of course not keep the temperature
>>>>>>>>>>>>>> constant: since there is a drift in the conserved energy, the
>>>>>>>>>>>>>> temperature will change.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> However, the main question was: why does the "conservation" of
>>>>>>>>>>>>>> the conserved energy get worse when the time step is reduced?
>>>>>>>>>>>>>> Integration should presumably become more accurate, or not? Or
>>>>>>>>>>>>>> does one need double precision for short time steps?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Yes, at least that is what I saw for my and Berk's 1999 JCC paper.
>>>>>>>>>> At single precision, the drift would not go down below a certain
>>>>>>>>>> time step. I have no idea anymore which time step precisely, and
>>>>>>>>>> I'm sure this would be system specific.
>>>>>>>>>>
>>>>>>>>>> I also recall the drift indeed going up at (extremely) short time
>>>>>>>>>> steps. My intuition at the time was that arithmetic errors would
>>>>>>>>>> accumulate faster when calculating many very small deltas.
>>>>>>>>> In that case it was due to the constraint algorithms. The velocity
>>>>>>>>> was corrected with the coordinate difference divided by the time
>>>>>>>>> step, so bit noise is amplified inversely proportionally to the
>>>>>>>>> time step. I fixed this some years ago for LINCS and SHAKE, but
>>>>>>>>> SETTLE still has this property.
>>>>>>>>>
>>>>>>>>> David is not using constraints. So I think the issue here is, as I
>>>>>>>>> mailed before, that the pair-list buffer estimate is less accurate,
>>>>>>>>> i.e. more of an overestimate, for longer pair-list lifetimes
>>>>>>>>> (larger time steps with fixed nstlist), which leads to less drift.
>>>>>>>>>
>>>>>>>>> Berk
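
As an illustration of the 1/dt amplification Berk describes above, here is a
minimal toy sketch in C (made-up numbers, not the actual LINCS/SETTLE code):
a fixed single-precision rounding error eps in the constrained coordinate
turns into a velocity error of eps/dt when the correction is applied as
(x_constrained - x_unconstrained)/dt, so the bit noise grows as the time
step shrinks.

    #include <stdio.h>

    int main(void)
    {
        const double eps   = 1.2e-7;                /* ~1 ulp at x ~ 1 nm */
        const double dts[] = { 2e-3, 5e-4, 5e-5 };  /* time steps in ps   */

        for (int k = 0; k < 3; k++)
        {
            /* velocity correction noise = coordinate noise / time step */
            printf("dt = %-6g ps  ->  velocity noise ~ %g nm/ps\n",
                   dts[k], eps/dts[k]);
        }
        return 0;
    }
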
>>>>>>>> Unfortunately setting verlet-buffer-tolerance to 1e-6 (instead of
>>>>>>>> the default 0.005) does not help. We get the following drift
>>>>>>>> (kJ/mol over the 900 ps run):
>>>>>>>>
>>>>>>>> Dt (fs)  Single(5e-3)  Double(5e-3)  Single(1e-6)
>>>>>>>> 1.0          240           166           191
>>>>>>>> 0.5           46           -12            68
>>>>>>>> 0.2          170            -7           137
>>>>>>>> 0.1          328            -6           353
>>>>>>>> 0.05         564            -4           573
>>>>>>>>
>>>>>>>> Seems this is still a precision issue, or not?
>>>>>>>>
>>>>>>> Yes, this still seems like a precision issue.
>>>>>>> But from this I can't see how large the drift is. What is the energy
>>>>>>> drift per atom per ns?
>>>>>>> The drift could be so small that you are affected by bit noise in the
>>>>>>> coordinate update.
>>>>>> 14 atoms per molecule, 1000 molecules, 900 ps, so dividing by 12600
>>>>>> (14000 atoms x 0.9 ns): that means that for the 0.05 fs time step the
>>>>>> drift is ~0.04 kJ/mol/atom/ns.
>>>>>>
>>>>>> But why would the 0.5 fs time step be 10 times more accurate than the
>>>>>> 0.05 fs one? Would that be because one does many different additions
>>>>>> instead of one big one? But why would that not cancel out?
>>>>>> We see this for 6 different systems.
>>>>>>
>>>>> My guess is the following, but it is just a guess:
>>>>> x_{i+1} = x_i + dt*v_{i+1/2}
>>>>> results in incrementing only a few bits of x_{i+1}. Each increment gives
>>>>> a bit-rounding error, and a 10x smaller time step gives 10x more bit
>>>>> rounding. But I think that should lead to a diffusive error, so sqrt(10)
>>>>> times larger drift.
>>>>> I have a master student working on investigating exactly this, but
>>>>> progress is slow.
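
To make the guess above concrete, here is a small stand-alone C experiment
(a toy sketch, not GROMACS code; the velocity and interval are arbitrary):
it integrates x += v*dt in single precision over a fixed 1 ps interval with
different time steps and compares against a double-precision reference,
showing how more, smaller increments accumulate more rounding error in x.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double v     = 0.123456;              /* constant velocity, nm/ps */
        const double dts[] = { 5e-4, 5e-5, 5e-6 };  /* time steps in ps         */

        for (int k = 0; k < 3; k++)
        {
            long   nstep = (long)(1.0/dts[k] + 0.5);  /* integrate over 1 ps    */
            float  xs    = 1.0f;                      /* single-precision coord */
            double xd    = 1.0;                       /* double-precision ref   */

            for (long i = 0; i < nstep; i++)
            {
                xs += (float)(v*dts[k]);
                xd += v*dts[k];
            }
            printf("dt = %-6g ps  steps = %-7ld  |x_float - x_double| = %g\n",
                   dts[k], nstep, fabs((double)xs - xd));
        }
        return 0;
    }
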
>>>> OK, good to know. Did he/she try to just make the update algorithm
>>>> double?
>>> That's one aspect he should look into.
>>>>
>>>> By the way, if you grep for invdt in mdlib/* there are quite a few real
>>>> variables invdt (1/dt) in the constraint code, but that is another
>>>> issue.
>>> That's not an issue. We don't need the time step accurate to more than
>>> single precision. The issue is that we are adding increments that only
>>> affect the last few bits of the value you're incrementing, so we could
>>> use an invdt with even fewer bits than float without affecting the
>>> accuracy.
>>
>> Looking at the update code
>>            vn             = gstat[ga].u[d] + accel[ga][d]*dt + vnrel;
>>            v[n][d]        = vn;
>>            xprime[n][d]   = x[n][d]+vn*dt;
>> Making vn double precision could be sufficient; I'll give it a go.
> How would that help?
> The problem is that the increment vn*dt is very small compared to x. You
> need more bits in x, not in vn or dt.

The problem is that we compute the new velocity in double (since dt is
double), but then round it to float in vn, and then we multiply vn by dt
again (a double operation). So if we remove the rounding in the
intermediate step it may help. We're testing this now.
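
For reference, here is a minimal sketch of the variant we are testing
(hypothetical routine and argument names, not the actual GROMACS update
loop; u, accel and vnrel stand in for the terms in the snippet quoted
above): the new velocity and the position increment are kept in double, and
rounding to float happens only when storing back into the single-precision
arrays.

    #include <stdio.h>

    /* hypothetical per-atom update in mixed precision */
    static void update_atom(float x[3], float v[3], const double u[3],
                            const double accel[3], const double vnrel[3],
                            double dt)
    {
        for (int d = 0; d < 3; d++)
        {
            double vn = u[d] + accel[d]*dt + vnrel[d];  /* keep vn in double */

            v[d] = (float)vn;                 /* velocity stored as float    */
            x[d] = (float)(x[d] + vn*dt);     /* increment formed in double  */
        }
    }

    int main(void)
    {
        float  x[3]     = { 1.0f, 2.0f, 3.0f };
        float  v[3]     = { 0.0f, 0.0f, 0.0f };
        double u[3]     = { 0.1, 0.0, 0.0 };
        double a[3]     = { 0.0, 0.0, 0.0 };
        double vnrel[3] = { 0.01, 0.02, 0.03 };

        update_atom(x, v, u, a, vnrel, 5e-5);   /* 0.05 fs step in ps */
        printf("x = %.9g %.9g %.9g\n", x[0], x[1], x[2]);
        return 0;
    }

Whether this actually reduces the drift is exactly what Berk questions above,
since x itself is still stored in single precision.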


>
> Berk
>>> Berk
>>>>> Berk


-- 
David van der Spoel, Ph.D., Professor of Biology
Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone:	+46184714205.
spoel at xray.bmc.uu.se    http://folding.bmc.uu.se

