[gmx-users] Domain decomposition

Alexander Alexander alexanderwien2k at gmail.com
Tue Jul 26 19:17:21 CEST 2016


On Tue, Jul 26, 2016 at 6:07 PM, Justin Lemkul <jalemkul at vt.edu> wrote:

>
>
> On 7/26/16 11:27 AM, Alexander Alexander wrote:
>
>> Thanks.
>>
>> Yes, indeed, it is a free energy calculation. No problem showed up in the
>> first 6 windows, where the harmonic restraints were applied to my amino
>> acid, but the DD problem came up immediately in the first window of
>> removing the charge. Please find the mdp file below.
>> And if I use -ntmpi 1, it takes ages to finish; besides, my GROMACS would
>> need to be recompiled with thread-MPI.
>>
>>
> I suspect you have inconsistent usage of couple-intramol.  Your
> long-distance LJC pairs should be a result of "couple-intramol = no", in
> which case you get explicit intramolecular exclusions and pair interactions
> that occur at longer distances than normal 1-4 interactions.  If you ran
> other systems without getting any problem, you probably had
> "couple-intramol = yes", in which case all nonbonded interactions are
> treated the same way and the bonded topology is the same.
>

Actually, I have always had "couple-intramol = no" in all my other
calculations (a single amino acid in water solution), and no problem has
shown up. But in FEP calculations of the charged amino acid, where I also
have an ion to neutralize the system and "ion + amino acid" is used as the
couple-moltype, this problem emerges. As you may have noticed, the ion here
(CL) is always one of the atoms involved in the problem. I hope
"couple-intramol = yes" can solve the problem for the charged amino acid.

>
>> Another question: are this many pull restraints really necessary on my
>> molecule (a single amino acid) before removing the charge and vdW?
>>
>>
> You're decoupling a single amino acid?  What purpose do the pull
> restraints even serve?  CA-HA, etc. should be bonded in a single amino
> acid, so why are you applying a pull restraint to them?  I really don't
> understand.
>

I want to make sure sudden conformational changes of the amino acid do not
occur during the perturbation, in particular when the charge is turned off.
Applying harmonic restraints to keep the geometry the same during FEP is a
well-established procedure, e.g., Deng, Y.; Roux, B. J. Chem. Theory Comput.
2006, 2 (5), 1255. I might reduce the number of restraints to only 1 or 2
pairs.
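
If I reduce it to a single pair, the pull block would shrink to something
like this (a sketch following the same pattern as in the mdp below):

pull                     = yes
pull-ngroups             = 2
pull-ncoords             = 1
pull-group1-name         = CA
pull-group2-name         = HA

pull-coord1-groups       = 1 2
pull-coord1-type         = umbrella
pull-coord1-geometry     = distance
pull-coord1-dim          = Y Y Y
pull-coord1-init         = 0
pull-coord1-start        = yes
pull-coord1-k            = 0.0    ; restraint off in state A
pull-coord1-kB           = 1000   ; restraint on in state B, switched via restraint-lambdas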

The overall task is to calculate the binding free energy of the amino acid
to a metal surface, although at this stage I am still dealing with the amino
acid in water only, without the surface.

Regards,
Alex

>
> -Justin
>
>
> Best regards,
>> Alex
>>
>> define                   = -DFLEXIBLE
>> integrator               = steep
>> nsteps                   = 500000
>> emtol                    = 250
>> emstep                   = 0.001
>>
>> nstenergy                = 500
>> nstlog                   = 500
>> nstxout-compressed       = 1000
>>
>> constraint-algorithm     = lincs
>> constraints              = h-bonds
>>
>> cutoff-scheme            = Verlet
>> rlist                    = 1.32
>>
>> coulombtype              = PME
>> rcoulomb                 = 1.30
>>
>> vdwtype                  = Cut-off
>> rvdw                     = 1.30
>> DispCorr                 = EnerPres
>>
>> free-energy              = yes
>> init-lambda-state        = 6
>> calc-lambda-neighbors    = -1
>> restraint-lambdas        = 0.0 0.2 0.4 0.6 0.8 1.0 1.0 1.0 1.0 1.0 1.0 1.0
>> 1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.0
>> coul-lambdas             = 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 1.0
>> 1.0 1.0 1.00 1.0 1.0 1.0 1.0 1.0 1.0 1.0
>> vdw-lambdas              = 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1
>> 0.2 0.3 0.35 0.4 0.5 0.6 0.7 0.8 0.9 1.0
>> couple-moltype           = Protein_chain_A
>> couple-lambda0           = vdw-q
>> couple-lambda1           = none
>> couple-intramol          = no
>> nstdhdl                  = 100
>> sc-alpha                 = 0.5
>> sc-coul                  = no
>> sc-power                 = 1
>> sc-sigma                 = 0.3
>> dhdl-derivatives         = yes
>> separate-dhdl-file       = yes
>> dhdl-print-energy        = total
>>
>> pull                     = yes
>> pull-ngroups             = 9
>> pull-ncoords             = 6
>> pull-group1-name         = CA
>> pull-group2-name         = HA
>> pull-group3-name         = N
>> pull-group4-name         = C
>> pull-group5-name         = O1
>> pull-group6-name         = O2
>> pull-group7-name         = CZ
>> pull-group8-name         = NH1
>> pull-group9-name         = NH2
>>
>> pull-coord1-groups       = 1 2
>> pull-coord1-type         = umbrella
>> pull-coord1-dim          = Y Y Y
>> pull-coord1-init         = 0
>> pull-coord1-start        = yes
>> pull-coord1-geometry     = distance
>> pull-coord1-k            = 0.0
>> pull-coord1-kB           = 1000
>>
>> pull-coord2-groups       = 1 3
>> pull-coord2-type         = umbrella
>> pull-coord2-dim          = Y Y Y
>> pull-coord2-init         = 0
>> pull-coord2-start        = yes
>> pull-coord2-geometry     = distance
>> pull-coord2-k            = 0.0
>> pull-coord2-kB           = 1000
>>
>> pull-coord3-groups       = 4 5
>> pull-coord3-type         = umbrella
>> pull-coord3-dim          = Y Y Y
>> pull-coord3-init         = 0
>> pull-coord3-start        = yes
>> pull-coord3-geometry     = distance
>> pull-coord3-k            = 0.0
>> pull-coord3-kB           = 1000
>>
>> pull-coord4-groups       = 4 6
>> pull-coord4-type         = umbrella
>> pull-coord4-dim          = Y Y Y
>> pull-coord4-init         = 0
>> pull-coord4-start        = yes
>> pull-coord4-geometry     = distance
>> pull-coord4-k            = 0.0
>> pull-coord4-kB           = 1000
>>
>> pull-coord5-groups       = 7 8
>> pull-coord5-type         = umbrella
>> pull-coord5-dim          = Y Y Y
>> pull-coord5-init         = 0
>> pull-coord5-start        = yes
>> pull-coord5-geometry     = distance
>> pull-coord5-k            = 0.0
>> pull-coord5-kB           = 1000
>>
>> pull-coord6-groups       = 7 9
>> pull-coord6-type         = umbrella
>> pull-coord6-dim          = Y Y Y
>> pull-coord6-init         = 0
>> pull-coord6-start        = yes
>> pull-coord6-geometry     = distance
>> pull-coord6-k            = 0.0
>> pull-coord6-kB           = 1000
>>
>> On Tue, Jul 26, 2016 at 2:21 PM, Justin Lemkul <jalemkul at vt.edu> wrote:
>>
>>
>>>
>>> On 7/26/16 8:17 AM, Alexander Alexander wrote:
>>>
>>>> Hi,
>>>>
>>>> Thanks for your response.
>>>> I do not know which two atoms have a bonded interaction comparable to
>>>> the cell size. However, based on this line in the log file, "two-body
>>>> bonded interactions: 3.196 nm, LJC Pairs NB, atoms 24 28", I thought 24
>>>> and 28 are the pair, whose coordinates are as below:
>>>>
>>>> 1ARG   HH22   24   0.946   1.497   4.341
>>>> 2CL    CL     28   1.903   0.147   0.492
>>>>
>>>> Indeed their geometric distance is quite large, but I think that is
>>>> normal. I manually changed the coordinates of the CL atom to bring it
>>>> closer to the other one, hoping to solve the problem, and tested again,
>>>> but the problem is still here.
>>>>
>>> You'll need to provide a full .mdp file for anyone to be able to tell
>>> anything. It looks like you're doing a free energy calculation, based on
>>> the numbers in LJC, and depending on the settings, free energy
>>> calculations may involve very long bonded interactions that make it
>>> difficult (or even impossible) to use DD, in which case you must use
>>> mdrun -ntmpi 1 to disable DD and rely only on OpenMP.
>>>
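>>> (With a thread-MPI build that would be, for example, "gmx mdrun -ntmpi 1
>>> -ntomp 16 -deffnm min1.6"; with an MPI build, the equivalent single-rank
>>> run is something like "mpirun -np 1 gmx_mpi mdrun -ntomp 16 -deffnm
>>> min1.6". The thread count of 16 is only an illustration; match it to the
>>> cores available to the job.)
>>>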
>>>> Here it also says "minimum initial size of 3.516 nm", but all of my
>>>> cell dimensions are larger than this as well.
>>>>
>>> "Cell size" refers to a DD cell, not the box vectors of your system. Note
>>> that your system is nearly the same size as your limiting interactions,
>>> which may suggest that your box is too small to avoid periodicity
>>> problems, but that's an entirely separate issue.
>>>
>>> -Justin
>>>
>>>
>>>
>>>>
>>>> Thanks,
>>>> Regards,
>>>> Alex
>>>>
>>>> On Tue, Jul 26, 2016 at 12:12 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> So you know your cell dimensions, and mdrun is reporting that it can't
>>>>> decompose because you have a bonded interaction that is almost the
>>>>> length of one of the cell dimensions. How big should that interaction
>>>>> distance be, and what might you do about it?
>>>>>
>>>>> Probably mdrun should be smarter about pbc and use better periodic
>>>>> image
>>>>> handling during DD setup, but you can fix that yourself before you call
>>>>> grompp.
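>>>>>
>>>>> (For instance, if the ion and the amino acid sit in the same
>>>>> [moleculetype], something along the lines of "gmx trjconv -s previous.tpr
>>>>> -f conf.gro -pbc whole -o conf_whole.gro" would make that molecule whole
>>>>> across the periodic boundary before grompp; the file names here are
>>>>> placeholders.)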
>>>>>
>>>>> Mark
>>>>>
>>>>>
>>>>> On Tue, Jul 26, 2016 at 11:46 AM Alexander Alexander <alexanderwien2k at gmail.com> wrote:
>>>>>
>>>>>> Dear gromacs users,
>>>>>>
>>>>>> For more than a week now I have been struggling with the fatal error
>>>>>> from domain decomposition, and I have not succeeded yet. It is all the
>>>>>> more painful because I have to test different numbers of CPUs to see
>>>>>> which one works, on a cluster with a long queuing time; that means
>>>>>> spending two or three days in the queue just to see the fatal error
>>>>>> again after two minutes.
>>>>>>
>>>>>> These are the dimensions of the cell "3.53633, 4.17674, 4.99285", and
>>>>>> below is the log file of my test submitted on 2 nodes with 128 cores in
>>>>>> total. I even reduced to 32 CPUs and changed from "gmx_mpi mdrun" to
>>>>>> "gmx mdrun", but the problem still survives.
>>>>>>
>>>>>> Please do not refer me to this link (
>>>>>> http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
>>>>>> ), as I know what the problem is but I cannot solve it:
>>>>>>
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Regards,
>>>>>> Alex
>>>>>>
>>>>>>
>>>>>>
>>>>>> Log file opened on Fri Jul 22 00:55:56 2016
>>>>>> Host: node074  pid: 12281  rank ID: 0  number of ranks:  64
>>>>>>
>>>>>> GROMACS:      gmx mdrun, VERSION 5.1.2
>>>>>> Executable:
>>>>>> /home/fb_chem/chemsoft/lx24-amd64/gromacs-5.1.2-mpi/bin/gmx_mpi
>>>>>> Data prefix:  /home/fb_chem/chemsoft/lx24-amd64/gromacs-5.1.2-mpi
>>>>>> Command line:
>>>>>>   gmx_mpi mdrun -ntomp 1 -deffnm min1.6 -s min1.6
>>>>>>
>>>>>> GROMACS version:    VERSION 5.1.2
>>>>>> Precision:          single
>>>>>> Memory model:       64 bit
>>>>>> MPI library:        MPI
>>>>>> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
>>>>>> GPU support:        disabled
>>>>>> OpenCL support:     disabled
>>>>>> invsqrt routine:    gmx_software_invsqrt(x)
>>>>>> SIMD instructions:  AVX_128_FMA
>>>>>> FFT library:        fftw-3.2.1
>>>>>> RDTSCP usage:       enabled
>>>>>> C++11 compilation:  disabled
>>>>>> TNG support:        enabled
>>>>>> Tracing support:    disabled
>>>>>> Built on:           Thu Jun 23 14:17:43 CEST 2016
>>>>>> Built by:           reuter at marc2-h2 [CMAKE]
>>>>>> Build OS/arch:      Linux 2.6.32-642.el6.x86_64 x86_64
>>>>>> Build CPU vendor:   AuthenticAMD
>>>>>> Build CPU brand:    AMD Opteron(TM) Processor 6276
>>>>>> Build CPU family:   21   Model: 1   Stepping: 2
>>>>>> Build CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm
>>>>>> misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp
>>>>>> sse2
>>>>>> sse3 sse4a sse4.1 sse4.2 ssse3 xop
>>>>>> C compiler:         /usr/lib64/ccache/cc GNU 4.4.7
>>>>>> C compiler flags:    -mavx -mfma4 -mxop    -Wundef -Wextra
>>>>>> -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith
>>>>>> -Wall
>>>>>> -Wno-unused -Wunused-value -Wunused-parameter  -O3 -DNDEBUG
>>>>>> -funroll-all-loops  -Wno-array-bounds
>>>>>>
>>>>>> C++ compiler:       /usr/lib64/ccache/c++ GNU 4.4.7
>>>>>> C++ compiler flags:  -mavx -mfma4 -mxop    -Wundef -Wextra
>>>>>> -Wno-missing-field-initializers -Wpointer-arith -Wall
>>>>>> -Wno-unused-function -O3 -DNDEBUG -funroll-all-loops  -Wno-array-bounds
>>>>>> Boost version:      1.55.0 (internal)
>>>>>>
>>>>>>
>>>>>> Running on 2 nodes with total 128 cores, 128 logical cores
>>>>>>   Cores per node:           64
>>>>>>   Logical cores per node:   64
>>>>>> Hardware detected on host node074 (the node of MPI rank 0):
>>>>>>   CPU info:
>>>>>>     Vendor: AuthenticAMD
>>>>>>     Brand:  AMD Opteron(TM) Processor 6276
>>>>>>     Family: 21  model:  1  stepping:  2
>>>>>>     CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm
>>>>>> misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp
>>>>>> sse2
>>>>>> sse3 sse4a sse4.1 sse4.2 ssse3 xop
>>>>>>     SIMD instructions most likely to fit this hardware: AVX_128_FMA
>>>>>>     SIMD instructions selected at GROMACS compile time: AVX_128_FMA
>>>>>> Initializing Domain Decomposition on 64 ranks
>>>>>> Dynamic load balancing: off
>>>>>> Will sort the charge groups at every domain (re)decomposition
>>>>>> Initial maximum inter charge-group distances:
>>>>>>     two-body bonded interactions: 3.196 nm, LJC Pairs NB, atoms 24 28
>>>>>>   multi-body bonded interactions: 0.397 nm, Ryckaert-Bell., atoms 5 13
>>>>>> Minimum cell size due to bonded interactions: 3.516 nm
>>>>>> Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.218 nm
>>>>>> Estimated maximum distance required for P-LINCS: 0.218 nm
>>>>>> Guess for relative PME load: 0.19
>>>>>> Will use 48 particle-particle and 16 PME only ranks
>>>>>> This is a guess, check the performance at the end of the log file
>>>>>> Using 16 separate PME ranks, as guessed by mdrun
>>>>>> Optimizing the DD grid for 48 cells with a minimum initial size of 3.516 nm
>>>>>> The maximum allowed number of cells is: X 1 Y 1 Z 1
>>>>>>
>>>>>> -------------------------------------------------------
>>>>>> Program gmx mdrun, VERSION 5.1.2
>>>>>> Source code file:
>>>>>> /home/alex/gromacs-5.1.2/src/gromacs/domdec/domdec.cpp,
>>>>>> line: 6987
>>>>>>
>>>>>> Fatal error:
>>>>>> There is no domain decomposition for 48 ranks that is compatible with
>>>>>> the given box and a minimum cell size of 3.51565 nm
>>>>>> Change the number of ranks or mdrun option -rdd
>>>>>> Look in the log file for details on the domain decomposition
>>>>>> For more information and tips for troubleshooting, please check the
>>>>>> GROMACS website at http://www.gromacs.org/Documentation/Errors
>>>>>> -------------------------------------------------------
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 629
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==================================================

