[gmx-users] mdrun error messages
Andy Chao
achao at energiaq.com
Wed Jul 16 06:04:55 CEST 2014
Dear GROMACS Users:
As I mentioned, I got the following log output when I ran the "mdrun"
command. GROMACS is installed on a virtual machine. Is there any solution
to this problem?
Thanks!
Andy
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------
Changing rlist from 1.05 to 1 for non-bonded 4x4 atom kernels
Input Parameters:
integrator = steep
nsteps = 200
init-step = 0
cutoff-scheme = Verlet
ns_type = Grid
nstlist = 10
ndelta = 2
nstcomm = 100
comm-mode = Linear
nstlog = 1000
nstxout = 0
nstvout = 0
nstfout = 0
nstcalcenergy = 100
nstenergy = 1000
nstxtcout = 0
init-t = 0
delta-t = 0.001
xtcprec = 1000
fourierspacing = 0.12
nkx = 48
nky = 48
nkz = 48
pme-order = 4
ewald-rtol = 1e-05
ewald-geometry = 0
epsilon-surface = 0
optimize-fft = FALSE
ePBC = xyz
bPeriodicMols = FALSE
bContinuation = FALSE
bShakeSOR = FALSE
etc = No
bPrintNHChains = FALSE
nsttcouple = -1
epc = No
epctype = Isotropic
nstpcouple = -1
tau-p = 1
ref-p (3x3):
ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress (3x3):
compress[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compress[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
refcoord-scaling = No
posres-com (3):
posres-com[0]= 0.00000e+00
posres-com[1]= 0.00000e+00
posres-com[2]= 0.00000e+00
posres-comB (3):
posres-comB[0]= 0.00000e+00
posres-comB[1]= 0.00000e+00
posres-comB[2]= 0.00000e+00
verlet-buffer-drift = 0.005
rlist = 1
rlistlong = 1
nstcalclr = 10
rtpi = 0.05
coulombtype = PME
coulomb-modifier = Potential-shift
rcoulomb-switch = 0
rcoulomb = 1
vdwtype = Cut-off
vdw-modifier = Potential-shift
rvdw-switch = 0
rvdw = 1
epsilon-r = 1
epsilon-rf = inf
tabext = 1
implicit-solvent = No
gb-algorithm = Still
gb-epsilon-solvent = 80
nstgbradii = 1
rgbradii = 1
gb-saltconc = 0
gb-obc-alpha = 1
gb-obc-beta = 0.8
gb-obc-gamma = 4.85
gb-dielectric-offset = 0.009
sa-algorithm = Ace-approximation
sa-surface-tension = 2.05016
DispCorr = No
bSimTemp = FALSE
free-energy = no
nwall = 0
wall-type = 9-3
wall-atomtype[0] = -1
wall-atomtype[1] = -1
wall-density[0] = 0
wall-density[1] = 0
wall-ewald-zfac = 3
pull = no
rotation = FALSE
disre = No
disre-weighting = Conservative
disre-mixed = FALSE
dr-fc = 1000
dr-tau = 0
nstdisreout = 100
orires-fc = 0
orires-tau = 0
nstorireout = 100
dihre-fc = 0
em-stepsize = 0.01
em-tol = 10
niter = 20
fc-stepsize = 0
nstcgsteep = 1000
nbfgscorr = 10
ConstAlg = Lincs
shake-tol = 0.0001
lincs-order = 4
lincs-warnangle = 30
lincs-iter = 1
bd-fric = 0
ld-seed = 1993
cos-accel = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
adress = FALSE
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
grpopts:
nrdf: 22677
ref-t: 0
tau-t: 0
anneal: No
ann-npoints: 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0
efield-x:
n = 0
efield-xt:
n = 0
efield-y:
n = 0
efield-yt:
n = 0
efield-z:
n = 0
efield-zt:
n = 0
bQMMM = FALSE
QMconstraints = 0
QMMMscheme = 0
scalefactor = 1
qm-opts:
ngQM = 0
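(For reference, a minimal energy-minimization .mdp consistent with the parameter dump above would look roughly like the snippet below. This is reconstructed from the log, not the original input file; judging by the "Changing rlist from 1.05 to 1" line earlier, the original rlist was 1.05.)
integrator    = steep     ; steepest-descent minimization
nsteps        = 200
emtol         = 10.0      ; em-tol above
emstep        = 0.01      ; em-stepsize above
cutoff-scheme = Verlet
nstlist       = 10
ns_type       = grid
rlist         = 1.05
coulombtype   = PME
rcoulomb      = 1.0
rvdw          = 1.0
pbc           = xyz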
Using 1 MPI thread
Using 1 OpenMP thread
Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: GenuineIntel
Brand: Intel(R) Xeon(R) CPU E5-1603 0 @ 2.80GHz
Family: 6 Model: 45 Stepping: 7
Features: aes apic avx clfsh cmov cx8 cx16 lahf_lm mmx msr pclmuldq popcnt
pse sse2 sse3 sse4.1 sse4.2 ssse3
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: SSE4.1
Binary not matching hardware - you might be losing performance.
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: SSE4.1
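(The "Binary not matching hardware" message above is a performance hint, not a fatal error: the binary was compiled for SSE4.1 kernels while this Xeon E5-1603 supports AVX_256, so mdrun runs correctly but more slowly than it could. Assuming a GROMACS 4.6-series source build, reconfiguring with the matching acceleration would look roughly like the following; the option is named GMX_SIMD in later releases, so check cmake -LH for the exact variable your version uses:
cmake .. -DGMX_CPU_ACCELERATION=AVX_256
make
make install)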
Will do PME sum in reciprocal space.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G.
Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------
Will do ordinary reciprocal space Ewald sum.
Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
Cut-off's: NS: 1 Coulomb: 1 LJ: 1
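(As a cross-check, inferred rather than quoted from the log: the Ewald splitting parameter beta is chosen so that the direct-space term has decayed to ewald-rtol at rcoulomb, i.e. erfc(beta * rcoulomb) = ewald-rtol. With rcoulomb = 1 nm and ewald-rtol = 1e-05 this gives beta ≈ 3.12 nm^-1, so 1/beta ≈ 0.320 nm, matching the Gaussian width reported above.)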
System total charge: 0.000
Generated table with 1000 data points for Ewald.
Tabscale = 500 points/nm
Generated table with 1000 data points for LJ6.
Tabscale = 500 points/nm
Generated table with 1000 data points for LJ12.
Tabscale = 500 points/nm
Generated table with 1000 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1000 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1000 data points for 1-4 LJ12.
Tabscale = 500 points/nm
Using SSE4.1 4x4 non-bonded kernels
Using geometric Lennard-Jones combination rule
Potential shift: LJ r^-12: 1.000 r^-6 1.000, Ewald 1.000e-05
Initialized non-bonded Ewald correction tables, spacing: 6.60e-04 size: 3033
Removing pbc first time
Pinning threads with an auto-selected logical core stride of 1
Initializing LINear Constraint Solver
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
-------- -------- --- Thank You --- -------- --------