[gmx-users] Is it possible to run implicit solvent simulations in parallel?

Ozlem Ulucan ulucan.oz at googlemail.com
Wed May 4 16:50:54 CEST 2011


Thank you very much for the suggestions; I will try both and compare the
results.

On Wed, May 4, 2011 at 4:13 PM, Per Larsson <per.larsson at sbc.su.se> wrote:

> Hi!
>
> If you are running implicit solvent with no cutoffs, i.e. using the special
> all-vs-all kernels, then particle decomposition will be used. This exact
> combination (GB, all-vs-all, DD) is quite tricky to implement and is not
> supported at the moment, IIRC.
> This could be documented better, sorry.
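>
> (In mdp terms, the all-vs-all path is what the settings already in your
> file select, since every cut-off is zero:
>
>   rlist    = 0
>   rcoulomb = 0
>   rvdw     = 0
>   rgbradii = 0
>
> so mdrun silently falls back to particle decomposition.)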
>
> You could try changing constraints from all-bonds to h-bonds, meaning you
> will have only local constraints, which should allow you to run with
> particle decomposition. Or use a cut-off and domain decomposition.
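>
> As a minimal sketch of the first option, only one mdp line needs to change:
>
>   constraints = h-bonds   ; constrain only bonds involving hydrogen,
>                           ; so every constraint stays local to one node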
>
> /Per
>
>
>
> On 4 May 2011, at 16.05, Ozlem Ulucan wrote:
>
> Dear Justin, this was only a test run, and I ran the simulations on my
> multi-core workstation (4 cores, actually). MPI is no longer required in
> that situation. Since I did not set the -nt option to 1, this counts as a
> parallel run. So the command I sent in my previous e-mail was the parallel
> run; for the serial run I set -nt to 1.
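>
> For reference, the two invocations were of this form (the -deffnm name is
> just a placeholder):
>
>   mdrun -nt 1 -deffnm md_implicit    # serial: one thread
>   mdrun -nt 4 -deffnm md_implicit    # parallel: all four cores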
>
>
> Dear Justin, as I said, I am using a workstation with 4 processors. I have
> approximately 2200 atoms in my system, which means slightly more than 550
> atoms per processor. I set all the cut-offs to 0. I really need to run this
> system in parallel. Any suggestions to make it work?
>
>
> Here is my run input file :
>
> ;
> ;    File 'mdout.mdp' was generated
> ;    By user: onbekend (0)
> ;    On host: onbekend
> ;    At date: Sun May  1 16:19:29 2011
> ;
>
> ; VARIOUS PREPROCESSING OPTIONS
> ; Preprocessor information: use cpp syntax.
> ; e.g.: -I/home/joe/doe -I/home/mary/roe
> include                  =
> ; e.g.: -DPOSRES -DFLEXIBLE (note these variable names are case sensitive)
> define                   =
>
> ; RUN CONTROL PARAMETERS
> integrator               = SD
> ; Start time and timestep in ps
> tinit                    = 0
> dt                       = 0.002
> nsteps                   = 500000
> ; For exact run continuation or redoing part of a run
> init_step                = 0
> ; Part index is updated automatically on checkpointing (keeps files separate)
> simulation_part          = 1
> ; mode for center of mass motion removal
> comm-mode                = Angular
> ; number of steps for center of mass motion removal
> nstcomm                  = 10
> ; group(s) for center of mass motion removal
> comm-grps                = system
>
> ; LANGEVIN DYNAMICS OPTIONS
> ; Friction coefficient (amu/ps) and random seed
> bd-fric                  = 0
> ld-seed                  = 1993
>
> ; ENERGY MINIMIZATION OPTIONS
> ; Force tolerance and initial step-size
> emtol                    = 10.0
> emstep                   = 0.01
> ; Max number of iterations in relax_shells
> niter                    = 20
> ; Step size (ps^2) for minimization of flexible constraints
> fcstep                   = 0
> ; Frequency of steepest descents steps when doing CG
> nstcgsteep               = 1000
> nbfgscorr                = 10
>
> ; TEST PARTICLE INSERTION OPTIONS
> rtpi                     = 0.05
>
> ; OUTPUT CONTROL OPTIONS
> ; Output frequency for coords (x), velocities (v) and forces (f)
> nstxout                  = 1000
> nstvout                  = 1000
> nstfout                  = 0
> ; Output frequency for energies to log file and energy file
> nstlog                   = 1000
> nstcalcenergy            = -1
> nstenergy                = 1000
> ; Output frequency and precision for .xtc file
> nstxtcout                = 0
> xtc-precision            = 500
> ; This selects the subset of atoms for the .xtc file. You can
> ; select multiple groups. By default all atoms will be written.
> xtc-grps                 = Protein
> ; Selection of energy groups
> energygrps               = Protein
>
> ; NEIGHBORSEARCHING PARAMETERS
> ; nblist update frequency
> nstlist                  = 0
> ; ns algorithm (simple or grid)
> ns_type                  = simple
> ; Periodic boundary conditions: xyz, no, xy
> pbc                      = no
> periodic_molecules       = no
> ; nblist cut-off
> rlist                    = 0
> ; long-range cut-off for switched potentials
> rlistlong                = -1
>
> ; OPTIONS FOR ELECTROSTATICS AND VDW
> ; Method for doing electrostatics
> coulombtype              = cut-off
> rcoulomb-switch          = 0
> rcoulomb                 = 0
> ; Relative dielectric constant for the medium and the reaction field
> epsilon_r                = 1
> epsilon_rf               = 1
> ; Method for doing Van der Waals
> vdw-type                 = Cut-off
> ; cut-off lengths
> rvdw-switch              = 0
> rvdw                     = 0
> ; Apply long range dispersion corrections for Energy and Pressure
> DispCorr                 = No
> ; Extension of the potential lookup tables beyond the cut-off
> table-extension          = 1
> ; Separate tables between energy group pairs
> energygrp_table          =
> ; Spacing for the PME/PPPM FFT grid
> fourierspacing           = 0.12
> ; FFT grid size, when a value is 0 fourierspacing will be used
> fourier_nx               = 0
> fourier_ny               = 0
> fourier_nz               = 0
> ; EWALD/PME/PPPM parameters
> pme_order                = 4
> ewald_rtol               = 1e-05
> ewald_geometry           = 3d
> epsilon_surface          = 0
> optimize_fft             = yes
>
> ; IMPLICIT SOLVENT ALGORITHM
> implicit_solvent         = GBSA
>
> ; GENERALIZED BORN ELECTROSTATICS
> ; Algorithm for calculating Born radii
> gb_algorithm             = OBC
> ; Frequency of calculating the Born radii inside rlist
> nstgbradii               = 1
> ; Cutoff for Born radii calculation; the contribution from atoms
> ; between rlist and rgbradii is updated every nstlist steps
> rgbradii                 = 0
> ; Dielectric coefficient of the implicit solvent
> gb_epsilon_solvent       = 80
> ; Salt concentration in M for Generalized Born models
> gb_saltconc              = 0
> ; Scaling factors used in the OBC GB model. Default values are OBC(II)
> gb_obc_alpha             = 1
> gb_obc_beta              = 0.8
> gb_obc_gamma             = 4.85
> gb_dielectric_offset     = 0.009
> sa_algorithm             = Ace-approximation
> ; Surface tension (kJ/mol/nm^2) for the SA (nonpolar surface) part of GBSA
> ; The value -1 will set default value for Still/HCT/OBC GB-models.
> sa_surface_tension       = -1
>
> ; OPTIONS FOR WEAK COUPLING ALGORITHMS
> ; Temperature coupling
> tcoupl                   = v-rescale
> nsttcouple               = -1
> nh-chain-length          = 10
> ; Groups to couple separately
> tc-grps                  = Protein
> ; Time constant (ps) and reference temperature (K)
> tau-t                    = 0.1
> ref-t                    = 300
> ; Pressure coupling
> Pcoupl                   = Parrinello-Rahman
> Pcoupltype               = isotropic
> nstpcouple               = -1
> ; Time constant (ps), compressibility (1/bar) and reference P (bar)
> tau-p                    = 1
> compressibility          = 4.5e-5
> ref-p                    = 1.0
> ; Scaling of reference coordinates, No, All or COM
> refcoord_scaling         = No
> ; Random seed for Andersen thermostat
> andersen_seed            = 815131
>
> ; OPTIONS FOR QMMM calculations
> QMMM                     = no
> ; Groups treated Quantum Mechanically
> QMMM-grps                =
> ; QM method
> QMmethod                 =
> ; QMMM scheme
> QMMMscheme               = normal
> ; QM basisset
> QMbasis                  =
> ; QM charge
> QMcharge                 =
> ; QM multiplicity
> QMmult                   =
> ; Surface Hopping
> SH                       =
> ; CAS space options
> CASorbitals              =
> CASelectrons             =
> SAon                     =
> SAoff                    =
> SAsteps                  =
> ; Scale factor for MM charges
> MMChargeScaleFactor      = 1
> ; Optimization of QM subsystem
> bOPT                     =
> bTS                      =
>
> ; SIMULATED ANNEALING
> ; Type of annealing for each temperature group (no/single/periodic)
> annealing                =
> ; Number of time points to use for specifying annealing in each group
> annealing_npoints        =
> ; List of times at the annealing points for each group
> annealing_time           =
> ; Temp. at each annealing point, for each group.
> annealing_temp           =
>
> ; GENERATE VELOCITIES FOR STARTUP RUN
> gen-vel                  = no
> gen-temp                 = 300
> gen-seed                 = 173529
>
> ; OPTIONS FOR BONDS
> constraints              = all-bonds
> ; Type of constraint algorithm
> constraint-algorithm     = Lincs
> ; Do not constrain the start configuration
> continuation             = no
> ; Use successive overrelaxation to reduce the number of shake iterations
> Shake-SOR                = no
> ; Relative tolerance of shake
> shake-tol                = 0.0001
> ; Highest order in the expansion of the constraint coupling matrix
> lincs-order              = 4
> ; Number of iterations in the final step of LINCS. 1 is fine for
> ; normal simulations, but use 2 to conserve energy in NVE runs.
> ; For energy minimization with constraints it should be 4 to 8.
> lincs-iter               = 1
> ; Lincs will write a warning to the stderr if in one step a bond
> ; rotates over more degrees than
> lincs-warnangle          = 30
> ; Convert harmonic bonds to morse potentials
> morse                    = no
>
> ; ENERGY GROUP EXCLUSIONS
> ; Pairs of energy groups for which all non-bonded interactions are excluded
> energygrp_excl           =
>
> ; WALLS
> ; Number of walls, type, atom types, densities and box-z scale factor for Ewald
> nwall                    = 0
> wall_type                = 9-3
> wall_r_linpot            = -1
> wall_atomtype            =
> wall_density             =
> wall_ewald_zfac          = 3
>
> ; COM PULLING
> ; Pull type: no, umbrella, constraint or constant_force
> pull                     = no
>
> ; NMR refinement stuff
> ; Distance restraints type: No, Simple or Ensemble
> disre                    = No
> ; Force weighting of pairs in one distance restraint: Conservative or Equal
> disre-weighting          = Conservative
> ; Use sqrt of the time averaged times the instantaneous violation
> disre-mixed              = no
> disre-fc                 = 1000
> disre-tau                = 0
> ; Output frequency for pair distances to energy file
> nstdisreout              = 100
> ; Orientation restraints: No or Yes
> orire                    = no
> ; Orientation restraints force constant and tau for time averaging
> orire-fc                 = 0
> orire-tau                = 0
> orire-fitgrp             =
> ; Output frequency for trace(SD) and S to energy file
> nstorireout              = 100
> ; Dihedral angle restraints: No or Yes
> dihre                    = no
> dihre-fc                 = 1000
>
> ; Free energy control stuff
> free-energy              = no
> init-lambda              = 0
> delta-lambda             = 0
> foreign_lambda           =
> sc-alpha                 = 0
> sc-power                 = 0
> sc-sigma                 = 0.3
> nstdhdl                  = 10
> separate-dhdl-file       = yes
> dhdl-derivatives         = yes
> dh_hist_size             = 0
> dh_hist_spacing          = 0.1
> couple-moltype           =
> couple-lambda0           = vdw-q
> couple-lambda1           = vdw-q
> couple-intramol          = no
>
> ; Non-equilibrium MD stuff
> acc-grps                 =
> accelerate               =
> freezegrps               =
> freezedim                =
> cos-acceleration         = 0
> deform                   =
>
> ; Electric fields
> ; Format is number of terms (int) and for all terms an amplitude (real)
> ; and a phase angle (real)
> E-x                      =
> E-xt                     =
> E-y                      =
> E-yt                     =
> E-z                      =
> E-zt                     =
>
> ; User defined thingies
> user1-grps               =
> user2-grps               =
> userint1                 = 0
> userint2                 = 0
> userint3                 = 0
> userint4                 = 0
> userreal1                = 0
> userreal2                = 0
> userreal3                = 0
> userreal4                = 0
>
>
> Regards,
>
> Ozlem
>
> On Wed, May 4, 2011 at 3:44 PM, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:
>
>> On 4/05/2011 11:23 PM, Justin A. Lemkul wrote:
>>
>>>
>>>
>>> Ozlem Ulucan wrote:
>>>
>>>>
>>>> Dear Gromacs Users,
>>>>
>>>> I have been trying to simulate a protein in implicit solvent. When I
>>>> used a single processor by setting -nt to 1, I did not encounter any
>>>> problem. But when I tried to run the simulations using more than one
>>>> processor, I got the following error:
>>>>
>>>> Fatal error:
>>>> Constraint dependencies further away than next-neighbor
>>>> in particle decomposition. Constraint between atoms 2177--2179 evaluated
>>>> on node 3 and 3, but atom 2177 has connections within 4 bonds
>>>> (lincs_order)
>>>> of node 1, and atom 2179 has connections within 4 bonds of node 3.
>>>> Reduce the # nodes, lincs_order, or
>>>> try domain decomposition.
>>>>
>>>> I set the lincs_order parameter in the .mdp file to different values, but
>>>> it did not help. I have some questions regarding the information above.
>>>>
>>>
>> See the comments about lincs_order in section 7.3.18 of the manual.
>> Obviously, only smaller values of lincs_order can help (but if this is not
>> obvious, please consider how informative "it did not help" is :-))
>>
>>
>>>> 1) Is it possible to run implicit solvent simulations in parallel?
>>>>
>>>>
>>> Yes.
>>>
>>>> 2) As far as I know, gromacs uses domain decomposition by default. Why
>>>> do my simulations use particle decomposition, which I did not ask for?
>>>>
>>>>
>>> Without seeing the exact commands you gave, there is no plausible
>>> explanation. DD is used by default.
>>>
>>
>> Not quite true, unfortunately. With the cutoffs set to zero, the use of
>> the all-against-all GB loops is triggered, and that silently requires PD. It
>> should write something to the log file.
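>>
>> As a sketch (values indicative only, not tuned for your system), restoring
>> finite cut-offs should let DD be used instead:
>>
>>   nstlist  = 10    ; positive again, since there is now a real neighbour list
>>   rlist    = 1.0
>>   rgbradii = 1.0   ; I believe grompp expects this to equal rlist with GB
>>   rcoulomb = 1.0
>>   rvdw     = 1.0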
>>
>>
>>
>>> -Justin
>>>
>>>> Any suggestions are appreciated very much.
>>>> I am using gromacs-4.5.4 with the CHARMM force field and the OBC implicit
>>>> solvent model. If you need further information (e.g., a run input file),
>>>> let me know.
>>>>
>>>
>> A run input file would have helped me avoid guessing above about those
>> cutoffs :-)
>>
>> The real issue is that not all systems can be effectively parallelized by
>> a given implementation. How many processors and atoms are we talking about?
>> If there aren't hundreds of atoms per processor, then parallelism is not
>> going to be worthwhile.
>>
>> Mark
>>