[gmx-users] Normal Mode Analysis -- Expected Output

Bryan Roessler bryanroessler at gmail.com
Sat Apr 13 00:25:11 CEST 2013


My problem was here:

    nsteps  = 100000

which should read

    nsteps  = 1

I had assumed that the number of steps would correspond to the number of
calculated eigenvectors, but the entire calculation is in fact performed in a
single step.
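
For reference, a minimal set of run-control settings for the normal-mode run
(just a sketch; the non-bonded settings and everything else stay as in the
parameter dump quoted below):

    ; nm.mdp -- run-control settings for normal mode analysis
    integrator  = nm   ; build the Hessian rather than integrate over time
    nsteps      = 1    ; the whole Hessian is computed in a single "step"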

Thanks,

Bryan


On Thu, Apr 11, 2013 at 12:33 PM, David van der Spoel
<spoel at xray.bmc.uu.se> wrote:

> On 2013-04-11 17:57, Bryan Roessler wrote:
>
>> Hello,
>>
>> I am running a normal mode analysis on a ~1500AA protein with the
>> following
>> mdp parameters:
>>
>> Log file opened on Tue Apr  9 09:55:00 2013
>> Host: uv1  pid: 128985  nodeid: 0  nnodes:  64
>> Gromacs version:    VERSION 4.6.1
>> Precision:          double
>> Memory model:       64 bit
>> MPI library:        MPI
>> OpenMP support:     disabled
>> GPU support:        disabled
>> invsqrt routine:    gmx_software_invsqrt(x)
>> CPU acceleration:   AVX_256
>> FFT library:        fftw-3.3.2-sse2
>> Large file support: enabled
>> RDTSCP usage:       enabled
>> Built on:           Fri Mar 15 09:20:59 CDT 2013
>> Built by:           asndcy at uv [CMAKE]
>> Build OS/arch:      Linux 3.0.58-0.6.6-default x86_64
>> Build CPU vendor:   GenuineIntel
>> Build CPU brand:    Intel(R) Xeon(R) CPU E5-2667 0 @ 2.90GHz
>> Build CPU family:   6   Model: 45   Stepping: 7
>> Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
>> nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1
>> sse4.2 ssse3 tdt x2apic
>> C compiler:         /opt/sgi/mpt/mpt-2.07/bin/mpicc GNU gcc (GCC) 4.7.2
>> C compiler flags:   -mavx   -Wextra -Wno-missing-field-initializers
>> -Wno-sign-compare -Wall -Wno-unused -Wunused-value -Wno-unknown-pragmas
>> -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3
>> -DNDEBUG
>>
>>
>>                           :-)  G  R  O  M  A  C  S  (-:
>>
>>                     Good gRace! Old Maple Actually Chews Slate
>>
>>                              :-)  VERSION 4.6.1  (-:
>>
>>          Contributions from Mark Abraham, Emile Apol, Rossen Apostolov,
>>             Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar,
>>
>>       Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph
>> Junghans,
>>          Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff,
>>             Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz,
>>                  Michael Shirts, Alfons Sijbers, Peter Tieleman,
>>
>>                 Berk Hess, David van der Spoel, and Erik Lindahl.
>>
>>         Copyright (c) 1991-2000, University of Groningen, The Netherlands.
>>           Copyright (c) 2001-2012,2013, The GROMACS development team at
>>          Uppsala University & The Royal Institute of Technology, Sweden.
>>              check out http://www.gromacs.org for more information.
>>
>>           This program is free software; you can redistribute it and/or
>>         modify it under the terms of the GNU Lesser General Public License
>>          as published by the Free Software Foundation; either version 2.1
>>               of the License, or (at your option) any later version.
>>
>>      :-)  /opt/asn/apps/gromacs_4.6.1/bin/mdrun_mpi_d (double
>> precision)  (-:
>>
>>
>> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
>> B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
>> GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
>> molecular simulation
>> J. Chem. Theory Comput. 4 (2008) pp. 435-447
>> -------- -------- --- Thank You --- -------- --------
>>
>>
>> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
>> D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J.
>> C.
>> Berendsen
>> GROMACS: Fast, Flexible and Free
>> J. Comp. Chem. 26 (2005) pp. 1701-1719
>> -------- -------- --- Thank You --- -------- --------
>>
>>
>> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
>> E. Lindahl and B. Hess and D. van der Spoel
>> GROMACS 3.0: A package for molecular simulation and trajectory analysis
>> J. Mol. Mod. 7 (2001) pp. 306-317
>> -------- -------- --- Thank You --- -------- --------
>>
>>
>> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
>> H. J. C. Berendsen, D. van der Spoel and R. van Drunen
>> GROMACS: A message-passing parallel molecular dynamics implementation
>> Comp. Phys. Comm. 91 (1995) pp. 43-56
>> -------- -------- --- Thank You --- -------- --------
>>
>>
>> Changing rlist from 1.47 to 1.4 for non-bonded 4x4 atom kernels
>>
>> Input Parameters:
>>     integrator           = nm
>>     nsteps               = 100000
>>     init-step            = 0
>>     cutoff-scheme        = Verlet
>>     ns_type              = Grid
>>     nstlist              = 10
>>     ndelta               = 2
>>     nstcomm              = 100
>>     comm-mode            = Linear
>>     nstlog               = 1000
>>     nstxout              = 500
>>     nstvout              = 500
>>     nstfout              = 500
>>     nstcalcenergy        = 100
>>     nstenergy            = 500
>>     nstxtcout            = 0
>>     init-t               = 0
>>     delta-t              = 0.002
>>     xtcprec              = 1000
>>     fourierspacing       = 0.12
>>     nkx                  = 160
>>     nky                  = 160
>>     nkz                  = 216
>>     pme-order            = 4
>>     ewald-rtol           = 1e-05
>>     ewald-geometry       = 0
>>     epsilon-surface      = 0
>>     optimize-fft         = TRUE
>>     ePBC                 = xyz
>>     bPeriodicMols        = FALSE
>>     bContinuation        = FALSE
>>     bShakeSOR            = FALSE
>>     etc                  = No
>>     bPrintNHChains       = FALSE
>>     nsttcouple           = -1
>>     epc                  = No
>>     epctype              = Isotropic
>>     nstpcouple           = -1
>>     tau-p                = 1
>>     ref-p (3x3):
>>        ref-p[    0]={ 1.00000e+00,  0.00000e+00,  0.00000e+00}
>>        ref-p[    1]={ 0.00000e+00,  1.00000e+00,  0.00000e+00}
>>        ref-p[    2]={ 0.00000e+00,  0.00000e+00,  1.00000e+00}
>>     compress (3x3):
>>        compress[    0]={ 4.50000e-05,  0.00000e+00,  0.00000e+00}
>>        compress[    1]={ 0.00000e+00,  4.50000e-05,  0.00000e+00}
>>        compress[    2]={ 0.00000e+00,  0.00000e+00,  4.50000e-05}
>>     refcoord-scaling     = No
>>     posres-com (3):
>>        posres-com[0]= 0.00000e+00
>>        posres-com[1]= 0.00000e+00
>>        posres-com[2]= 0.00000e+00
>>     posres-comB (3):
>>        posres-comB[0]= 0.00000e+00
>>        posres-comB[1]= 0.00000e+00
>>        posres-comB[2]= 0.00000e+00
>>     verlet-buffer-drift  = 0.005
>>     rlist                = 1.4
>>     rlistlong            = 1.4
>>     nstcalclr            = 10
>>     rtpi                 = 0.05
>>     coulombtype          = PME
>>     coulomb-modifier     = Potential-shift
>>     rcoulomb-switch      = 1.2
>>     rcoulomb             = 1.4
>>     vdwtype              = Cut-off
>>     vdw-modifier         = Potential-shift
>>     rvdw-switch          = 1.2
>>     rvdw                 = 1.4
>>     epsilon-r            = 1
>>     epsilon-rf           = inf
>>     tabext               = 1
>>     implicit-solvent     = No
>>     gb-algorithm         = Still
>>     gb-epsilon-solvent   = 80
>>     nstgbradii           = 1
>>     rgbradii             = 1
>>     gb-saltconc          = 0
>>     gb-obc-alpha         = 1
>>     gb-obc-beta          = 0.8
>>     gb-obc-gamma         = 4.85
>>     gb-dielectric-offset = 0.009
>>     sa-algorithm         = Ace-approximation
>>     sa-surface-tension   = 2.05016
>>     DispCorr             = No
>>     bSimTemp             = FALSE
>>     free-energy          = no
>>     nwall                = 0
>>     wall-type            = 9-3
>>     wall-atomtype[0]     = -1
>>     wall-atomtype[1]     = -1
>>     wall-density[0]      = 0
>>     wall-density[1]      = 0
>>     wall-ewald-zfac      = 3
>>     pull                 = no
>>     rotation             = FALSE
>>     disre                = No
>>     disre-weighting      = Conservative
>>     disre-mixed          = FALSE
>>     dr-fc                = 1000
>>     dr-tau               = 0
>>     nstdisreout          = 100
>>     orires-fc            = 0
>>     orires-tau           = 0
>>     nstorireout          = 100
>>     dihre-fc             = 0
>>     em-stepsize          = 0.01
>>     em-tol               = 10
>>     niter                = 20
>>     fc-stepsize          = 0
>>     nstcgsteep           = 1000
>>     nbfgscorr            = 10
>>     ConstAlg             = Lincs
>>     shake-tol            = 0.0001
>>     lincs-order          = 4
>>     lincs-warnangle      = 30
>>     lincs-iter           = 1
>>     bd-fric              = 0
>>     ld-seed              = 1993
>>     cos-accel            = 0
>>     deform (3x3):
>>        deform[    0]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
>>        deform[    1]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
>>        deform[    2]={ 0.00000e+00,  0.00000e+00,  0.00000e+00}
>>     adress               = FALSE
>>     userint1             = 0
>>     userint2             = 0
>>     userint3             = 0
>>     userint4             = 0
>>     userreal1            = 0
>>     userreal2            = 0
>>     userreal3            = 0
>>     userreal4            = 0
>> grpopts:
>>     nrdf:       71907
>>     ref-t:           0
>>     tau-t:           0
>> anneal:          No
>> ann-npoints:           0
>>     acc:           0           0           0
>>     nfreeze:           N           N           N
>>     energygrp-flags[  0]: 0
>>     efield-x:
>>        n = 0
>>     efield-xt:
>>        n = 0
>>     efield-y:
>>        n = 0
>>     efield-yt:
>>        n = 0
>>     efield-z:
>>        n = 0
>>     efield-zt:
>>        n = 0
>>     bQMMM                = FALSE
>>     QMconstraints        = 0
>>     QMMMscheme           = 0
>>     scalefactor          = 1
>> qm-opts:
>>     ngQM                 = 0
>>
>> Non-default thread affinity set, disabling internal thread affinity
>> Using 64 MPI processes
>>
>> Detecting CPU-specific acceleration.
>> Present hardware specification:
>> Vendor: GenuineIntel
>> Brand:  Intel(R) Xeon(R) CPU E5-4640 0 @ 2.40GHz
>> Family:  6  Model: 45  Stepping:  7
>> Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
>> pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
>> tdt x2apic
>> Acceleration most likely to fit this hardware: AVX_256
>> Acceleration selected at GROMACS compile time: AVX_256
>>
>> Will do PME sum in reciprocal space.
>>
>> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
>> U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G.
>> Pedersen
>> A smooth particle mesh Ewald method
>> J. Chem. Phys. 103 (1995) pp. 8577-8592
>> -------- -------- --- Thank You --- -------- --------
>>
>> Will do ordinary reciprocal space Ewald sum.
>> Using a Gaussian width (1/beta) of 0.448228 nm for Ewald
>> Cut-off's:   NS: 1.4   Coulomb: 1.4   LJ: 1.4
>> System total charge: 19.000
>> Generated table with 4800 data points for Ewald.
>> Tabscale = 2000 points/nm
>> Generated table with 4800 data points for LJ6.
>> Tabscale = 2000 points/nm
>> Generated table with 4800 data points for LJ12.
>> Tabscale = 2000 points/nm
>> Generated table with 4800 data points for 1-4 COUL.
>> Tabscale = 2000 points/nm
>> Generated table with 4800 data points for 1-4 LJ6.
>> Tabscale = 2000 points/nm
>> Generated table with 4800 data points for 1-4 LJ12.
>> Tabscale = 2000 points/nm
>>
>> Using AVX-256 4x4 non-bonded kernels
>>
>> Using Lorentz-Berthelot Lennard-Jones combination rule
>>
>> Potential shift: LJ r^-12: 0.018 r^-6 0.133, Ewald 1.000e-05
>> Initialized non-bonded Ewald correction tables, spacing: 7.81e-04 size:
>> 3076
>>
>> Removing pbc first time
>> Initiating Normal Mode Analysis
>> Started Normal Mode Analysis on node 0 Sun Apr  7 09:55:01 2013
>>
>>
>> However, my NMA has been running for about 4 days on 64 Xeon nodes with
>> 120 GB of available memory, and GROMACS has not generated any output.
>>
>> What should I expect to see, and how would I adjust my mdp parameters to
>> increase the frequency of output of the normal-mode analysis? How long
>> would a run like this be expected to take?
>>
>> Thank you,
>>
>> Bryan
>>
> Do you have water too? Otherwise it might be good to turn off PME; the
> problem then becomes sparse. With water you are unlikely to get anything
> useful.
>
> Your calculation should use about 5-10 GB of core memory, but I'm not sure
> whether NM works in parallel.
> --
> David van der Spoel, Ph.D., Professor of Biology
> Dept. of Cell & Molec. Biol., Uppsala University.
> Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
> spoel at xray.bmc.uu.se    http://folding.bmc.uu.se
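
Regarding the expected output asked about above: with integrator = nm, the
main product of the run is the Hessian matrix, which mdrun writes to an .mtx
file; the eigenvalues and eigenvectors are then computed separately with
g_nmeig. A sketch of that workflow, assuming the common _d suffix for
double-precision binaries (binary and file names will vary per installation):

    # Build the run input and compute the Hessian
    grompp_d -f nm.mdp -c minimized.gro -p topol.top -o nm.tpr
    mdrun_d -s nm.tpr -mtx nm.mtx

    # Diagonalize the Hessian; -first/-last select which eigenpairs to compute
    g_nmeig_d -f nm.mtx -s nm.tpr -first 1 -last 50 \
              -of eigenfreq.xvg -ol eigenval.xvg -v eigenvec.trr

The frequencies and eigenvalues end up in the .xvg files and the eigenvectors
in eigenvec.trr.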
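
On the memory side, a rough back-of-envelope from the log above: with
nrdf = 71907, the Hessian has on the order of 72,000 x 72,000 elements, so
storing it densely in double precision would take about

    # Dense-Hessian size in GB, using the numbers from the log above
    echo "71907^2 * 8 / 10^9" | bc -l    # ~41.4

i.e. roughly 40 GB, which is presumably why turning off PME (making the
problem sparse, as suggested above) matters for a system this size.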


