[gmx-users] problems with the output of pullx

Alfredo E. Cardenas alfredo at ices.utexas.edu
Mon Mar 5 17:15:26 CET 2018


Hi all,
I want to update my own post for any user who runs into similar issues in the future. The issue I described before was that large spikes appeared in the values reported in the pullx file, but when I calculated the same restrained distance using "gmx distance" no such spikes were observed. The problem was the pbcatom (the reference atom for the treatment of PBC inside a group). I thought I didn't have that problem because I was only restraining the z coordinate, and the peptide I was pulling inside the membrane never came near the walls in the z direction. The problem was instead the pbcatom chosen for the membrane group: the pbcatom GROMACS picked was a hydrogen in the choline headgroup region, and that certainly increases the chance that some lipids in the other leaflet get imaged to the wrong side of the box and create havoc during the pulling calculation. Once I explicitly assigned a different pbcatom in the mdp file (for example, the terminal methyl carbon of one of the lipids), the spikes in the pullx file no longer show up.
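For reference, this is the kind of mdp change that fixed it; a minimal sketch, where atom number 4050 is only a placeholder for whichever atom you pick near the bilayer center (here, a lipid terminal methyl carbon):

  ; explicit PBC reference atom for pull group 1 (MEMB); without this,
  ; GROMACS defaults to the middle atom (number-wise) of the group,
  ; which in my case landed on a choline hydrogen.
  ; 4050 is a placeholder index; use the atom number from your own system
  pull-group1-pbcatom  = 4050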
By the way, the problem was described in an earlier post:
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-developers/2010-April/004198.html

Thanks,
Alfredo



> On Feb 24, 2018, at 4:43 PM, alfredo <alfredo at ices.utexas.edu> wrote:
> 
> Hi Mark,
> 
> Thanks for your comment. No, that is not the problem. At that location the center of mass of the peptide is deep inside the membrane, the separation between the two pulling groups is 0.4 nm, and the cell dimension along z is more than 10 nm. I am only pulling along the z direction. The puzzle to me is that when I extract the center-of-mass separation along z between the same two groups using gmx traj, the spikes don't show up at the times where they appear in the pullx file.
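> 
> For completeness, this is roughly the cross-check; a sketch, with the tpr/trr
> file names assumed to match the -deffnm of the run quoted below:
> 
>   # COM coordinates of each group, per frame (select PROT and MEMB when prompted)
>   gmx traj -s constraint8.3.tpr -f constraint8.3.trr -n index.ndx -com -ox com.xvg
>   # or directly: per-component COM distance between the two pull groups
>   gmx distance -s constraint8.3.tpr -f constraint8.3.trr -n index.ndx \
>       -select 'com of group "PROT" plus com of group "MEMB"' -oxyz dist.xvg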
> 
> Alfredo
> 
> 
> 
> On 2018-02-24 11:57, Mark Abraham wrote:
>> Hi,
>> My (thoroughly uneducated) guess is that the spikes are related to the pull
>> distance approaching half of the dimensions of the cell. Not all flavours
>> of pulling can handle this. Might that be the issue?
>> Mark
>> On Sat, Feb 24, 2018, 17:55 alfredo <alfredo at ices.utexas.edu> wrote:
>>> Hi,
>>> Updating my post. The problem has been observed on two different machine
>>> systems (most recently on the Skylake nodes at TACC). I assumed it had to
>>> be some communication bug affecting coordinates and forces in the pull
>>> part of the code, probably showing up in my case because of the large
>>> size of the peptide I am pulling inside the membrane. For now I am
>>> planning to extract coordinates from the trr file and compute the pulling
>>> harmonic forces from them, but that is not an ideal solution; a sketch of
>>> the idea follows below.
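>>> A minimal sketch of that workaround, assuming dist.xvg holds time and z
>>> separation columns (from gmx traj or gmx distance), and taking k = 1000
>>> kJ mol^-1 nm^-2 and the 0.4 nm reference from the mdp below, so that
>>> F = -k*(z - z0):
>>>
>>>   # skip xvg header lines; print time and the harmonic restraint force
>>>   awk '!/^[@#]/ { print $1, -1000.0*($2 - 0.4) }' dist.xvg > pullf_check.xvg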
>>> Thanks
>>> Alfredo
>>> On 2018-02-22 10:17, Alfredo E. Cardenas wrote:
>>> > Hi,
>>> > I am using GROMACS to get the PMF of a peptide of about 20 amino
>>> > acids moving inside a bilayer membrane. After pulling the peptide
>>> > into the membrane, I am now using pull-coord1-type = umbrella
>>> > and pull-coord1-geometry = distance to sample configurations in each
>>> > window of the umbrella simulations along the z axis (the axis
>>> > perpendicular to the membrane surface). Runs finish OK, with no error
>>> > messages. The problem is that when I looked at the contents of the
>>> > pullx file I observed spikes (of 5 Angstroms or more) in the z
>>> > coordinate separating the center of mass of the peptide from the
>>> > membrane center. But when I extract the z coordinates of the centers
>>> > of mass of the two groups and compute the difference, the values look
>>> > reasonable, with no spikes.
>>> >
>>> > Here is an example (it starts out fine):
>>> >     time (ps)      from pullx     from traj analysis
>>> >
>>> >    200000.000      0.475923002      0.475919992
>>> >    200010.000      0.498394012      0.498389989
>>> >    200020.000      0.527589977      0.527589977
>>> >    200030.000      0.491834015      0.493739992
>>> >    200040.000      0.485377997      0.485379994
>>> >    200050.000      0.488474995      0.488469988
>>> >    200060.000      0.507991016      0.507990003
>>> >    200070.000      0.475095987      0.475100011
>>> >    200080.000      0.465889990      0.465889990
>>> >    200090.000      0.515878975      0.515879989
>>> >    200100.000      0.501435995      0.501429975
>>> >    200110.000      0.505191982      0.505190015
>>> >
>>> > Here is a bad section:
>>> >
>>> >    214000.000      0.427343011      0.601450026
>>> >    214010.000      0.484564990      0.545799971
>>> >    214020.000      0.530139029      0.603110015
>>> >    214030.000      0.176231995      0.650319993
>>> >    214040.000      0.342045009      0.637109995
>>> >    214050.000      0.181202993      0.636659980
>>> >    214060.000      0.338808000      0.595300019
>>> >    214070.000      0.442301005      0.547529995
>>> >    214080.000      0.396046013      0.565050006
>>> >    214090.000      0.431407988      0.538460016
>>> >    214100.000      0.402586013      0.568250000
>>> >    214110.000      0.438223004      0.575810015
>>> >
>>> > Then good again:
>>> >
>>> >    230000.000      0.477869004      0.477869987
>>> >    230010.000      0.511840999      0.511839986
>>> >    230020.000      0.469146013      0.469150007
>>> >    230030.000      0.480194002      0.480190009
>>> >    230040.000      0.525618017      0.525619984
>>> >    230050.000      0.498955995      0.498950005
>>> >    230060.000      0.489230990      0.489230007
>>> >    230070.000      0.531931996      0.531930029
>>> >    230080.000      0.535376012      0.535380006
>>> >    230090.000      0.488822013      0.488830000
>>> >    230100.000      0.510704994      0.510699987
>>> >    230110.000      0.524537981      0.524540007
>>> >    230120.000      0.513199985      0.513189971
>>> >
>>> > This problem happens in most of the umbrella windows I have examined,
>>> > sometimes several times during the long trajectories I am running. The
>>> > pullf output also has those spikes.
>>> >
>>> > Here is the mdp file I am using:
>>> >
>>> > integrator              = md
>>> > dt                      = 0.002
>>> > nsteps                  = 50000000
>>> > nstlog                  = 10000
>>> > nstxout                 = 5000
>>> > nstvout                 = 5000
>>> > nstfout                 = 5000
>>> > nstcalcenergy           = 500
>>> > nstenergy               = 500
>>> > ;
>>> > cutoff-scheme           = Verlet
>>> > nstlist                 = 20
>>> > rlist                   = 1.2
>>> > coulombtype             = pme
>>> > rcoulomb                = 1.2
>>> > vdwtype                 = Cut-off
>>> > vdw-modifier            = Force-switch
>>> > rvdw_switch             = 1.0
>>> > rvdw                    = 1.2
>>> > ;
>>> > tcoupl                  = Nose-Hoover
>>> > tc_grps                 = PROT   MEMB   SOL_ION
>>> > tau_t                   = 1.0    1.0    1.0
>>> > ref_t                   = 303.15 303.15 303.15
>>> > ;
>>> > pcoupl                  = Parrinello-Rahman
>>> > pcoupltype              = semiisotropic
>>> > tau_p                   = 5.0
>>> > compressibility         = 4.5e-5  4.5e-5
>>> > ref_p                   = 1.0     1.0
>>> > ;
>>> > constraints             = h-bonds
>>> > constraint_algorithm    = LINCS
>>> > continuation            = yes
>>> > ;
>>> > nstcomm                 = 500
>>> > comm_mode               = linear
>>> > comm_grps               = PROT_MEMB   SOL_ION
>>> > ;
>>> > refcoord_scaling        = com
>>> > ;
>>> > pull                    = yes
>>> > pull-coord1-type        = umbrella
>>> > pull-coord1-geometry    = distance
>>> > pull-coord1-dim         = N N Y
>>> > pull-ngroups            = 2
>>> > pull-ncoords            = 1
>>> > pull-coord1-groups      = 1 2
>>> > pull-group1-name        = MEMB
>>> > pull-group2-name        = PROT
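>>> > ; note: no pull-group*-pbcatom is set here, so the default (0) applies
>>> > ; and GROMACS uses the middle atom (number-wise) of each group as the
>>> > ; PBC reference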
>>> > pull-coord1-init        = 0.400
>>> > pull-coord1-k           = 1000        ; kJ mol^-1 nm^-2
>>> > pull-nstxout            = 500        ; every 1 ps
>>> > pull-nstfout            = 500        ; every 1 ps
>>> >
>>> >
>>> > I am not sure what is wrong here. It seems like a bug to me.
>>> >
>>> >
>>> > Here is the header of the log file:
>>> >
>>> > Log file opened on Thu Feb 15 11:09:05 2018
>>> > Host: sb202  pid: 254782  rank ID: 0  number of ranks:  112
>>> >                       :-) GROMACS - mdrun_mpi, 2016.4 (-:
>>> >
>>> >
>>> >
>>> > GROMACS:      mdrun_mpi, version 2016.4
>>> >
>>> >
>>> > Command line:
>>> >   mdrun_mpi -v -cpi state8.2.cpt -deffnm constraint8.3 -cpo
>>> > state8.3.cpt -noappend
>>> >
>>> > GROMACS version:    2016.4
>>> > Precision:          single
>>> > Memory model:       64 bit
>>> > MPI library:        MPI
>>> > OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
>>> > GPU support:        disabled
>>> > SIMD instructions:  AVX_256
>>> > FFT library:        fftw-3.3.3-sse2
>>> > RDTSCP usage:       enabled
>>> > TNG support:        enabled
>>> > Hwloc support:      hwloc-1.11.0
>>> > Tracing support:    disabled
>>> > Built on:           Fri Jan 26 09:28:05 MST 2018
>>> > Built by:           aecarde at skybridge-login11 [CMAKE]
>>> > Build OS/arch:      Linux 3.10.0-514.26.1.1chaos.ch6_1.x86_64 x86_64
>>> > Build CPU vendor:   Intel
>>> > Build CPU brand:    Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
>>> > Build CPU family:   6   Model: 45   Stepping: 7
>>> > Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf mmx msr
>>> > nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3
>>> > sse4.1 sse4.2 ssse3 tdt x2apic
>>> > C compiler:         /opt/intel/16.0/bin/intel64/icc Intel
>>> > 16.0.2.20160204
>>> > C compiler flags:    -mavx    -std=gnu99  -O3 -DNDEBUG -ip
>>> > -funroll-all-loops -alias-const -ansi-alias
>>> > C++ compiler:       /opt/intel/16.0/bin/intel64/icpc Intel
>>> > 16.0.2.20160204
>>> > C++ compiler flags:  -mavx    -std=c++0x   -O3 -DNDEBUG -ip
>>> > -funroll-all-loops -alias-const -ansi-alias
>>> >
>>> >
>>> > Running on 7 nodes with total 112 cores, 112 logical cores
>>> >   Cores per node:           16
>>> >   Logical cores per node:   16
>>> >
>>> >
>>> > Thanks for any help
>>> >
>>> > Alfredo
>>> >
>>> >
>>> > Alfredo E. Cardenas, PhD
>>> > Institute of Computational Engineering and Sciences (ICES)
>>> > 1 University Station, C0200
>>> > University of Texas
>>> > Austin, TX 78712
>>> > Office: (512)232-5164
>>> > alfredo at ices.utexas.edu


