[gmx-users] Dynamic updating of nstlist in gromacs 5 screws up simulation results
Mark Abraham
mark.j.abraham at gmail.com
Wed Jun 24 22:47:29 CEST 2015
... and first check that your observation of a difference is statistically
significant. Two trajectories will be different, even from the same GROMACS
version.
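
For whatever observable distinguishes the two runs, a quick check could be
something like this (assuming you have the time series in .xvg files, e.g. from
gmx energy or gmx gyrate; file names here are placeholders):

  gmx analyze -f obs_v504.xvg -ee errest_v504.xvg
  gmx analyze -f obs_v464.xvg -ee errest_v464.xvg

Compare the averages against the block-averaged error estimates; only a
difference well outside those error bars is worth chasing.
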
Mark
On Wed, Jun 24, 2015 at 10:42 PM Szilárd Páll <pall.szilard at gmail.com>
wrote:
> Hi,
>
> I think it is not very likely that the nstlist tuning would cause very
> different results - especially not as a direct effect of the less
> frequent pair search. When nstlist is updated, rlist also gets
> recalculated; I've just checked, and the buffer estimates have not
> changed between 4.6 and 5.0.
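>
> To illustrate with the values from the log below (a sketch of the relevant
> .mdp settings, not the exact input):
>
>   cutoff-scheme           = Verlet
>   verlet-buffer-tolerance = 0.005   ; max. allowed pair-interaction drift per atom
>   nstlist                 = 10      ; grompp value; mdrun may increase it at startup
>
> When mdrun raises nstlist to 20, it re-derives rlist from the same
> tolerance - hence "Changing nstlist from 10 to 20, rlist from 1 to 1.029"
> in the log: a slightly larger buffer, not missed interactions.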
>
> You can set nstlist on the command line (starting with v5.0) using the
> "-nstlist" option; in v4.6 it is not exposed on the mdrun command-line
> interface, but it is supported through the GMX_NSTLIST environment
> variable.
>
> I suggest trying v5.0 with -nstlist 10, and possibly also v4.6 with the
> increased nstlist=20.
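>
> Something along these lines should do it (an untested sketch; file names are
> placeholders, adjust the binary names to your installations):
>
>   # v5.0: pin the pair-search interval to the v4.6 default
>   gmx mdrun -deffnm md_v50 -nstlist 10
>
>   # v4.6: try the larger interval via the environment variable
>   GMX_NSTLIST=20 mdrun -deffnm md_v46
>
> If the two versions then agree within error bars, the nstlist/rlist tuning is
> worth a closer look; if they still differ, the cause is elsewhere.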
>
> Cheers,
> --
> Szilárd
>
>
> On Wed, Jun 24, 2015 at 7:53 PM, Michael Daily <mdaily.work at gmail.com>
> wrote:
> > Hi,
> >
> > A colleague of mine and I have recently discovered that gromacs 4.6.4 and
> > 5.0.4 give very different results in the simulation of a biomimetic polymer
> > using an AMBER-based force field. She found that gromacs 5 mdrun is
> > dynamically changing nstlist from 10 to 20 or 40 depending on the
> > simulation, as documented below. Is it possible that this is causing the
> > favored conformation of the polymer to change? Is there a way in GMX 5 to
> > prevent this dynamic updating from happening?
> >
> > ### log file ###
> > ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> > H. J. C. Berendsen, D. van der Spoel and R. van Drunen
> > GROMACS: A message-passing parallel molecular dynamics implementation
> > Comp. Phys. Comm. 91 (1995) pp. 43-56
> > -------- -------- --- Thank You --- -------- --------
> >
> >
> > NOTE: Error occurred during GPU detection:
> > CUDA driver version is insufficient for CUDA runtime version
> > Can not use GPU acceleration, will fall back to CPU kernels.
> >
> > Changing nstlist from 10 to 20, rlist from 1 to 1.029
> >
> > Input Parameters:
> > integrator = md
> > tinit = 0
> > dt = 0.002
> > nsteps = 100000000
> > init-step = 0
> > simulation-part = 1
> > comm-mode = Linear
> > nstcomm = 100
> > bd-fric = 0
> > ld-seed = 2891701100
> > emtol = 10
> > emstep = 0.01
> > niter = 20
> > fcstep = 0
> > nstcgsteep = 1000
> > nbfgscorr = 10
> > rtpi = 0.05
> > nstxout = 5000
> > nstvout = 5000
> > nstfout = 5000
> > nstlog = 5000
> > nstcalcenergy = 100
> > nstenergy = 5000
> > nstxout-compressed = 5000
> > compressed-x-precision = 1000
> > cutoff-scheme = Verlet
> > nstlist = 20
> > ns-type = Grid
> > pbc = xyz
> > periodic-molecules = FALSE
> > verlet-buffer-tolerance = 0.005
> > rlist = 1.029
> > rlistlong = 1.029
> > nstcalclr = 10
> > coulombtype = PME
> > coulomb-modifier = Potential-shift
> > rcoulomb-switch = 0
> > rcoulomb = 1
> > epsilon-r = 1
> > epsilon-rf = inf
> > vdw-type = Cut-off
> > vdw-modifier = Potential-shift
> > rvdw-switch = 0
> > rvdw = 1
> > DispCorr = No
> > table-extension = 1
> > fourierspacing = 0.12
> > fourier-nx = 52
> > fourier-ny = 52
> > fourier-nz = 52
> > pme-order = 4
> > ewald-rtol = 1e-05
> > ewald-rtol-lj = 0.001
> > lj-pme-comb-rule = Geometric
> > ewald-geometry = 0
> > epsilon-surface = 0
> > implicit-solvent = No
> > gb-algorithm = Still
> > nstgbradii = 1
> > rgbradii = 1
> > gb-epsilon-solvent = 80
> > gb-saltconc = 0
> > gb-obc-alpha = 1
> > gb-obc-beta = 0.8
> > gb-obc-gamma = 4.85
> > gb-dielectric-offset = 0.009
> > sa-algorithm = Ace-approximation
> > sa-surface-tension = 2.05016
> > tcoupl = Nose-Hoover
> > nsttcouple = 5
> > nh-chain-length = 1
> > print-nose-hoover-chain-variables = FALSE
> > pcoupl = Parrinello-Rahman
> > pcoupltype = Isotropic
> > nstpcouple = 10
> > tau-p = 1
> > compressibility (3x3):
> > compressibility[ 0]={ 4.50000e-05, 0.00000e+00, 0.00000e+00}
> > compressibility[ 1]={ 0.00000e+00, 4.50000e-05, 0.00000e+00}
> > compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 4.50000e-05}
> > ref-p (3x3):
> > ref-p[ 0]={ 1.00000e+00, 0.00000e+00, 0.00000e+00}
> > ref-p[ 1]={ 0.00000e+00, 1.00000e+00, 0.00000e+00}
> > ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 1.00000e+00}
> > refcoord-scaling = No
> > posres-com (3):
> > posres-com[0]= 0.00000e+00
> > posres-com[1]= 0.00000e+00
> > posres-com[2]= 0.00000e+00
> > posres-comB (3):
> > posres-comB[0]= 0.00000e+00
> > posres-comB[1]= 0.00000e+00
> > posres-comB[2]= 0.00000e+00
> > QMMM = FALSE
> > QMconstraints = 0
> > QMMMscheme = 0
> > MMChargeScaleFactor = 1
> > qm-opts:
> > ngQM = 0
> > constraint-algorithm = Lincs
> > continuation = FALSE
> > Shake-SOR = FALSE
> > shake-tol = 0.0001
> > lincs-order = 4
> > lincs-iter = 1
> > lincs-warnangle = 30
> > nwall = 0
> > wall-type = 9-3
> > wall-r-linpot = -1
> > wall-atomtype[0] = -1
> > wall-atomtype[1] = -1
> > wall-density[0] = 0
> > wall-density[1] = 0
> > wall-ewald-zfac = 3
> > pull = no
> > rotation = FALSE
> > interactiveMD = FALSE
> > disre = No
> > disre-weighting = Conservative
> > disre-mixed = FALSE
> > dr-fc = 1000
> > dr-tau = 0
> > nstdisreout = 100
> > orire-fc = 0
> > orire-tau = 0
> > nstorireout = 100
> > free-energy = no
> > cos-acceleration = 0
> > deform (3x3):
> > deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> > deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> > deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> > simulated-tempering = FALSE
> > E-x:
> > n = 0
> > E-xt:
> > n = 0
> > E-y:
> > n = 0
> > E-yt:
> > n = 0
> > E-z:
> > n = 0
> > E-zt:
> > n = 0
> > swapcoords = no
> > adress = FALSE
> > userint1 = 0
> > userint2 = 0
> > userint3 = 0
> > userint4 = 0
> > userreal1 = 0
> > userreal2 = 0
> > userreal3 = 0
> > userreal4 = 0
> > grpopts:
> > nrdf: 291.972 31383 29.9972 29.9972
> > ref-t: 300 300 300 300
> > tau-t: 0.2 0.2 0.2 0.2
> > annealing: No No No No
> > annealing-npoints: 0 0 0 0
> > acc: 0 0 0
> > nfreeze: N N N
> > energygrp-flags[ 0]: 0
> >
> > Initializing Domain Decomposition on 24 ranks
> > Dynamic load balancing: auto
> > Will sort the charge groups at every domain (re)decomposition
> > Initial maximum inter charge-group distances:
> > two-body bonded interactions: 0.425 nm, LJ-14, atoms 11 17
> > multi-body bonded interactions: 0.425 nm, Proper Dih., atoms 11 17
> > Minimum cell size due to bonded interactions: 0.467 nm
> > Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.772 nm
> > Estimated maximum distance required for P-LINCS: 0.772 nm
> > This distance will limit the DD cell size, you can override this with -rcon
> > Guess for relative PME load: 0.20
> > Will use 18 particle-particle and 6 PME only ranks
> > This is a guess, check the performance at the end of the log file
> > Using 6 separate PME ranks, as guessed by mdrun
> > Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
> > Optimizing the DD grid for 18 cells with a minimum initial size of 0.965 nm
> > The maximum allowed number of cells is: X 5 Y 5 Z 4
> > Domain decomposition grid 3 x 2 x 3, separate PME ranks 6
> > PME domain decomposition: 3 x 2 x 1
> > Interleaving PP and PME ranks
> > This rank does only particle-particle work.
> >
> > Domain decomposition rank 0, coordinates 0 0 0
> >
> > Using 24 MPI threads
> > Using 1 OpenMP thread per tMPI thread
> >
> > Detecting CPU SIMD instructions.
> > Present hardware specification:
> > Vendor: GenuineIntel
> > Brand: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
> > Family: 6 Model: 44 Stepping: 2
> > Features: aes apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid
> > pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
> > SIMD instructions most likely to fit this hardware: SSE4.1
> > SIMD instructions selected at GROMACS compile time: SSE4.1
> >
> >
> > No GPUs detected
> >
> > Will do PME sum in reciprocal space for electrostatic interactions.
> >
> > ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> > U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G.
> > Pedersen
> > A smooth particle mesh Ewald method
> > J. Chem. Phys. 103 (1995) pp. 8577-8592
> > -------- -------- --- Thank You --- -------- --------
> >
> > Will do ordinary reciprocal space Ewald sum.
> > Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
> > Cut-off's: NS: 1.029 Coulomb: 1 LJ: 1
> > System total charge: 0.000
> > Generated table with 1014 data points for Ewald.
> > Tabscale = 500 points/nm
> > Generated table with 1014 data points for LJ6.
> > Tabscale = 500 points/nm
> > Generated table with 1014 data points for LJ12.
> > Tabscale = 500 points/nm
> > Generated table with 1014 data points for 1-4 COUL.
> > Tabscale = 500 points/nm
> > Generated table with 1014 data points for 1-4 LJ6.
> > Tabscale = 500 points/nm
> > Generated table with 1014 data points for 1-4 LJ12.
> > Tabscale = 500 points/nm
> >
> > Using SSE4.1 4x4 non-bonded kernels
> >
> > Using Lorentz-Berthelot Lennard-Jones combination rule
> >
> > Potential shift: LJ r^-12: -1.000e+00 r^-6: -1.000e+00, Ewald -1.000e-05
> > Initialized non-bonded Ewald correction tables, spacing: 9.33e-04 size: 2176
> >
> > Removing pbc first time
> > Pinning threads with an auto-selected logical core stride of 1
> >
> > Initializing Parallel LINear Constraint Solver
> >
> > ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> > B. Hess
> > P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
> > J. Chem. Theory Comput. 4 (2008) pp. 116-122
> > -------- -------- --- Thank You --- -------- --------
> >
> > The number of constraints is 152
> > There are inter charge-group constraints,
> > will communicate selected coordinates each lincs iteration
> >
> > ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> > S. Miyamoto and P. A. Kollman
> > SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid Water Models
> > J. Comp. Chem. 13 (1992) pp. 952-962
> > -------- -------- --- Thank You --- -------- --------
> >
> >
> > Linking all bonded interactions to atoms
> > There are 16397 inter charge-group exclusions,
> > will use an extra communication step for exclusion forces for PME
> >
> > The initial number of communication pulses is: X 1 Y 1 Z 1
> > The initial domain decomposition cell size is: X 1.66 nm Y 2.49 nm Z 1.44 nm
> >
> > The maximum allowed distance for charge groups involved in interactions is:
> > non-bonded interactions 1.029 nm
> > (the following are initial values, they could change due to box deformation)
> > two-body bonded interactions (-rdd) 1.029 nm
> > multi-body bonded interactions (-rdd) 1.029 nm
> > atoms separated by up to 5 constraints (-rcon) 1.437 nm
> >
> > When dynamic load balancing gets turned on, these settings will change to:
> > The maximum number of communication pulses is: X 1 Y 1 Z 1
> > The minimum size for domain decomposition cells is 1.029 nm
> > The requested allowed shrink of DD cells (option -dds) is: 0.80
> > The allowed shrink of domain decomposition cells is: X 0.62 Y 0.41 Z 0.72
> > The maximum allowed distance for charge groups involved in interactions is:
> > non-bonded interactions 1.029 nm
> > two-body bonded interactions (-rdd) 1.029 nm
> > multi-body bonded interactions (-rdd) 1.029 nm
> > atoms separated by up to 5 constraints (-rcon) 1.029 nm
> >
> >
> > Making 3D domain decomposition grid 3 x 2 x 3, home cell index 0 0 0
> >
> > Center of mass motion removal mode is Linear
> > We have the following groups for center of mass motion removal:
> > 0: rest
> > There are: 15861 Atoms
> > Charge group distribution at step 0: 853 904 881 863 893 859 857 913 880
> > 866 892 881 893 888 889 884 888 877
> >
> > Constraining the starting coordinates (step 0)
> >
> > Constraining the coordinates at t0-dt (step 0)
> > RMS relative constraint deviation after constraining: 0.00e+00
> > Initial temperature: 299.753 K
> >
> > Started mdrun on rank 0 Wed Mar 4 19:25:32 2015
> > Step Time Lambda
> > 0 0.00000 0.00000
> >
> >
> > --
> > ====================================
> > Michael D. Daily
> > Postdoctoral research associate
> > Pacific Northwest National Lab (PNNL)
> > 509-375-4581