[gmx-users] my log file to the mdrun error message that on Tue 15, July...
Szilárd Páll
pall.szilard at gmail.com
Wed Jul 16 14:53:41 CEST 2014
Hi,
I don't see anything obviously wrong with your setup, but there are two
peculiarities I suggest looking into (a few example commands are sketched below):
- you seem to be running in a virtualized environment (at least the
hostname suggests so); check whether the "flags" line in /proc/cpuinfo
contains "rdtscp", and if it does not, try rebuilding with GMX_USE_RDTSCP=OFF;
- your build host was capable only of SSE4.1; you could try rebuilding with
GMX_CPU_ACCELERATION=AVX_256 as the log message indicates, although I suspect
this may not be the cause, since an AVX-capable CPU can execute SSE4.1
instructions too.
If neither of the above helps, you could try "dropping" down to SSE2 and see
if that works.
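For reference, here is a minimal sketch of how you could check the CPU flag
and reconfigure the build. The source and build directory paths are only
placeholders (adjust them to your own checkout); the CMake options shown are
the GROMACS 4.6 ones named above (GMX_CPU_ACCELERATION, GMX_USE_RDTSCP):

    # Check whether the (virtualized) CPU exposes the rdtscp instruction;
    # no output means the flag is missing.
    grep rdtscp /proc/cpuinfo

    # Reconfigure and rebuild GROMACS 4.6.x in an out-of-source build directory
    # (placeholder paths):
    cd ~/gromacs-4.6.5/build
    cmake .. -DGMX_CPU_ACCELERATION=AVX_256 \
             -DGMX_USE_RDTSCP=OFF      # only needed if rdtscp is absent
    make -j 4 && make install

After rebuilding, re-run mdrun and check the header of the new log file to
confirm the acceleration and RDTSCP settings it was built with.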
cheers,
--
Szilárd
On Wed, Jul 16, 2014 at 4:02 AM, Andy Chao <achao at energiaq.com> wrote:
> Dear GROMACS Users:
>
> Here is my log file.
>
> Please let me know how to fix this problem.
>
> Thanks!
>
> Andy
>
> Log file opened on Tue Jul 15 21:01:52 2014
> Host: server-Virtual-Machine pid: 10019 nodeid: 0 nnodes: 1
> Gromacs version: VERSION 4.6.5
> Precision: single
> Memory model: 32 bit
> MPI library: thread_mpi
> OpenMP support: enabled
> GPU support: disabled
> invsqrt routine: gmx_software_invsqrt(x)
> CPU acceleration: SSE4.1
> FFT library: fftw-3.3.3-sse2-avx
> Large file support: enabled
> RDTSCP usage: enabled
> Built on: Sun Dec 15 03:59:22 UTC 2013
> Built by: buildd at roseapple [CMAKE]
> Build OS/arch: Linux 3.2.0-37-generic i686
> Build CPU vendor: GenuineIntel
> Build CPU brand: Intel(R) Xeon(R) CPU E5530 @ 2.40GHz
> Build CPU family: 6 Model: 26 Stepping: 5
> Build CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr
> nonstop_tsc pdcm popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
> C compiler: /usr/bin/i686-linux-gnu-gcc GNU gcc-4.8.real
> (Ubuntu/Linaro 4.8.2-10ubuntu1) 4.8.2
> C compiler flags: -msse4.1 -Wextra -Wno-missing-field-initializers
> -Wno-sign-compare -Wall -Wno-unused -Wunused-value -Wno-unused-parameter
> -Wno-array-bounds -Wno-maybe-uninitialized -Wno-strict-overflow
> -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast -O3
> -DNDEBUG
>
>
> :-) G R O M A C S (-:
>
> Gromacs Runs On Most of All Computer Systems
>
> :-) VERSION 4.6.5 (-:
>
> Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
> Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
> Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans,
> Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
> Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
> Michael Shirts, Alfons Sijbers, Peter Tieleman,
>
> Berk Hess, David van der Spoel, and Erik Lindahl.
>
> Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> Copyright (c) 2001-2012,2013, The GROMACS development team at
> Uppsala University & The Royal Institute of Technology, Sweden.
> check out http://www.gromacs.org for more information.
>
> This program is free software; you can redistribute it and/or
> modify it under the terms of the GNU Lesser General Public License
> as published by the Free Software Foundation; either version 2.1
> of the License, or (at your option) any later version.
>
> :-) mdrun (-:
>
>
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
> GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
> molecular simulation
> J. Chem. Theory Comput. 4 (2008) pp. 435-447
> -------- -------- --- Thank You --- -------- --------
>
>
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
> Berendsen
> GROMACS: Fast, Flexible and Free
> J. Comp. Chem. 26 (2005) pp. 1701-1719
> -------- -------- --- Thank You --- -------- --------
>
>
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> E. Lindahl and B. Hess and D. van der Spoel
> GROMACS 3.0: A package for molecular simulation and trajectory analysis
> J. Mol. Mod. 7 (2001) pp. 306-317
> -------- -------- --- Thank You --- -------- --------
>
>
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> H. J. C. Berendsen, D. van der Spoel and R. van Drunen
> GROMACS: A message-passing parallel molecular dynamics implementation
> Comp. Phys. Comm. 91 (1995) pp. 43-56
> -------- -------- --- Thank You --- -------- --------
>
>
> Changing rlist from 1.05 to 1 for non-bonded 4x4 atom kernels
>
> Input Parameters:
> integrator = steep
> nsteps = 200
> init-step = 0
> cutoff-scheme = Verlet
> ns_type = Grid
> nstlist = 10
> ndelta = 2
> nstcomm = 100
> comm-mode = Linear
> nstlog = 1000
> nstxout = 0
> nstvout = 0
> nstfout = 0
> nstcalcenergy = 100
> nstenergy = 1000
> nstxtcout = 0
> init-t = 0
> delta-t = 0.001
> xtcprec = 1000
> fourierspacing = 0.12
> nkx = 48
> nky = 48
> nkz = 48
> pme-order = 4
> ewald-rtol = 1e-05
> ewald-geometry = 0
> epsilon-surface = 0
> optimize-fft = FALSE
> ePBC = xyz
> bPeriodicMols = FALSE
> bContinuation = FALSE
> bShakeSOR = FALSE
> etc = No
> bPrintNHChains = FALSE
> nsttcouple = -1
> epc = No
> epctype = Isotropic
> nstpcouple = -1
> tau-p = 1
> ref-p (3x3):
> ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> compress (3x3):
> compress[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> compress[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> compress[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> refcoord-scaling = No
> posres-com (3):
> posres-com[0]= 0.00000e+00
> posres-com[1]= 0.00000e+00
> posres-com[2]= 0.00000e+00
> posres-comB (3):
> posres-comB[0]= 0.00000e+00
> posres-comB[1]= 0.00000e+00
> posres-comB[2]= 0.00000e+00
> verlet-buffer-drift = 0.005
> rlist = 1
> rlistlong = 1
> nstcalclr = 10
> rtpi = 0.05
> coulombtype = PME
> coulomb-modifier = Potential-shift
> rcoulomb-switch = 0
> rcoulomb = 1
> vdwtype = Cut-off
> vdw-modifier = Potential-shift
> rvdw-switch = 0
> rvdw = 1
> epsilon-r = 1
> epsilon-rf = inf
> tabext = 1
> implicit-solvent = No
> gb-algorithm = Still
> gb-epsilon-solvent = 80
> nstgbradii = 1
> rgbradii = 1
> gb-saltconc = 0
> gb-obc-alpha = 1
> gb-obc-beta = 0.8
> gb-obc-gamma = 4.85
> gb-dielectric-offset = 0.009
> sa-algorithm = Ace-approximation
> sa-surface-tension = 2.05016
> DispCorr = No
> bSimTemp = FALSE
> free-energy = no
> nwall = 0
> wall-type = 9-3
> wall-atomtype[0] = -1
> wall-atomtype[1] = -1
> wall-density[0] = 0
> wall-density[1] = 0
> wall-ewald-zfac = 3
> pull = no
> rotation = FALSE
> disre = No
> disre-weighting = Conservative
> disre-mixed = FALSE
> dr-fc = 1000
> dr-tau = 0
> nstdisreout = 100
> orires-fc = 0
> orires-tau = 0
> nstorireout = 100
> dihre-fc = 0
> em-stepsize = 0.01
> em-tol = 10
> niter = 20
> fc-stepsize = 0
> nstcgsteep = 1000
> nbfgscorr = 10
> ConstAlg = Lincs
> shake-tol = 0.0001
> lincs-order = 4
> lincs-warnangle = 30
> lincs-iter = 1
> bd-fric = 0
> ld-seed = 1993
> cos-accel = 0
> deform (3x3):
> deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
> adress = FALSE
> userint1 = 0
> userint2 = 0
> userint3 = 0
> userint4 = 0
> userreal1 = 0
> userreal2 = 0
> userreal3 = 0
> userreal4 = 0
> grpopts:
> nrdf: 22677
> ref-t: 0
> tau-t: 0
> anneal: No
> ann-npoints: 0
> acc: 0 0 0
> nfreeze: N N N
> energygrp-flags[ 0]: 0
> efield-x:
> n = 0
> efield-xt:
> n = 0
> efield-y:
> n = 0
> efield-yt:
> n = 0
> efield-z:
> n = 0
> efield-zt:
> n = 0
> bQMMM = FALSE
> QMconstraints = 0
> QMMMscheme = 0
> scalefactor = 1
> qm-opts:
> ngQM = 0
> Using 1 MPI thread
> Using 1 OpenMP thread
>
> Detecting CPU-specific acceleration.
> Present hardware specification:
> Vendor: GenuineIntel
> Brand: Intel(R) Xeon(R) CPU E5-1603 0 @ 2.80GHz
> Family: 6 Model: 45 Stepping: 7
> Features: aes apic avx clfsh cmov cx8 cx16 lahf_lm mmx msr pclmuldq popcnt
> pse sse2 sse3 sse4.1 sse4.2 ssse3
> Acceleration most likely to fit this hardware: AVX_256
> Acceleration selected at GROMACS compile time: SSE4.1
>
>
> Binary not matching hardware - you might be losing performance.
> Acceleration most likely to fit this hardware: AVX_256
> Acceleration selected at GROMACS compile time: SSE4.1
>
> Will do PME sum in reciprocal space.
>
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G.
> Pedersen
> A smooth particle mesh Ewald method
> J. Chem. Phys. 103 (1995) pp. 8577-8592
> -------- -------- --- Thank You --- -------- --------
>
> Will do ordinary reciprocal space Ewald sum.
> Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
> Cut-off's: NS: 1 Coulomb: 1 LJ: 1
> System total charge: 0.000
> Generated table with 1000 data points for Ewald.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for LJ6.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for LJ12.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for 1-4 COUL.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for 1-4 LJ6.
> Tabscale = 500 points/nm
> Generated table with 1000 data points for 1-4 LJ12.
> Tabscale = 500 points/nm
>
> Using SSE4.1 4x4 non-bonded kernels
>
> Using geometric Lennard-Jones combination rule
>
> Potential shift: LJ r^-12: 1.000 r^-6 1.000, Ewald 1.000e-05
> Initialized non-bonded Ewald correction tables, spacing: 6.60e-04 size: 3033
>
> Removing pbc first time
> Pinning threads with an auto-selected logical core stride of 1
>
> Initializing LINear Constraint Solver
>
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
> LINCS: A Linear Constraint Solver for molecular simulations
> J. Comp. Chem. 18 (1997) pp. 1463-1472
> -------- -------- --- Thank You --- -------- --------
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.