[gmx-users] Gromacs Version 5.0.2 - Bug #1603

Mark Abraham mark.j.abraham at gmail.com
Wed Oct 8 18:42:14 CEST 2014

On Wed, Oct 8, 2014 at 4:29 PM, Siva Dasetty <sdasett at g.clemson.edu> wrote:

> Dear All,
> I am using gpu enabled gromacs version 5.0.2 and I am checking if my
> simulation is still affected by bug #1603
> http://redmine.gromacs.org/issues/1603.
> Below is the PP-PME load balancing part of my log file
>  PP/PME load balancing changed the cut-off and PME settings:
>            particle-particle                    PME
>             rcoulomb  rlist            grid      spacing   1/beta
>    initial  1.000 nm  1.092 nm     108 108 120   0.119 nm  0.320 nm
>    final    1.482 nm  1.574 nm      72  72  80   0.178 nm  0.475 nm
>  cost-ratio           3.00             0.30
>  (note that these numbers concern only part of the total PP and PME load)
> The release notes say,
> If, for your simulation, the final rcoulomb value (1.368 here) is different
> from the initial one (1.000 here), then so was the LJ cutoff for
> short-ranged interactions, and the model physics was not what you asked
> for.
> Does that mean the initial and final values are supposed to be the same?

No. The point of the tuning is to change the rcoulomb value to maximize
performance while maintaining the quality of the electrostatic
approximation you chose in the .mdp file. If you were using one of the
affected versions (5.0 or 5.0.1), then the normal-but-not-guaranteed change
of rcoulomb led to an inappropriate change of rvdw. That is why a changed
rcoulomb is a relevant diagnostic for whether a simulation run with the
broken versions was affected in practice.
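As a sketch of that diagnostic, the snippet below parses the PP/PME load-balancing summary from an md.log (using the line layout quoted above) and reports whether the tuning changed rcoulomb. The function name and the parsing are my own illustration, not a GROMACS API:

```python
import re

def rcoulomb_change(log_text):
    """Return (initial, final) rcoulomb in nm parsed from the PP/PME
    load-balancing summary of a GROMACS md.log, or None if the summary
    is absent. Relies on the 'initial ... nm' / 'final ... nm' line
    format shown in the log excerpt above."""
    pattern = re.compile(r"^\s*(initial|final)\s+([0-9.]+) nm", re.M)
    values = {m.group(1): float(m.group(2))
              for m in pattern.finditer(log_text)}
    if "initial" in values and "final" in values:
        return values["initial"], values["final"]
    return None

log = """\
 PP/PME load balancing changed the cut-off and PME settings:
           particle-particle                    PME
            rcoulomb  rlist            grid      spacing   1/beta
   initial  1.000 nm  1.092 nm     108 108 120   0.119 nm  0.320 nm
   final    1.482 nm  1.574 nm      72  72  80   0.178 nm  0.475 nm
"""

initial, final = rcoulomb_change(log)
# On 5.0/5.0.1 (but not fixed versions), a changed rcoulomb
# implied rvdw was also, wrongly, changed.
print("tuning changed rcoulomb:", final != initial)  # → True here
```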

But the normal change of rcoulomb neither confirms nor denies that the bug
is fixed in 5.0.2. To verify, you could check that the NPT density from
5.0.2 agrees with a CPU-only (or 4.6.x GPU) calculation, for example.
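For that comparison, one could extract Density from each run's .edr file with `gmx energy` and compare the time averages of the resulting .xvg files. The helper below is a minimal sketch; the embedded strings stand in for real .xvg output, and the column layout (time, density) is the usual `gmx energy` convention:

```python
def mean_density(xvg_text):
    """Average the second column of a .xvg (time, density) file as
    written by `gmx energy`; '#' and '@' lines are comments/metadata."""
    values = [float(line.split()[1])
              for line in xvg_text.splitlines()
              if line and not line.startswith(("#", "@"))]
    return sum(values) / len(values)

# Illustrative stand-ins for the two runs' energy.xvg files; real values
# come from `gmx energy -f ener.edr` selecting the Density term.
gpu_502 = "@ title\n0  998.2\n10 1001.5\n20  999.8\n"
cpu_ref = "@ title\n0  999.0\n10 1000.9\n20  999.7\n"

diff = abs(mean_density(gpu_502) - mean_density(cpu_ref))
print("NPT density difference (kg/m^3):", diff)
```

Agreement within the statistical error of the two averages would indicate the 5.0.2 GPU run is reproducing the correct model physics.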
