[gmx-users] REMD slows down drastically

Mark Abraham mark.j.abraham at gmail.com
Mon Feb 24 17:14:08 CET 2014


Adding replicas cannot of itself slow things down, though it will increase
the cost linearly. Don't try to run them all on the same amount of hardware
as a smaller calculation! You are shooting yourself in the foot if you do
not have at least one processor per replica (= MPI rank).
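For example, with 125 replicas you need at least 125 MPI ranks launched as one
multi-simulation. A minimal sketch (assuming a GROMACS 4.6-style MPI binary
named mdrun_mpi and per-replica input files remd_0.tpr ... remd_124.tpr; the
binary name, file prefix and exchange interval are assumptions to adapt):

  # One MPI rank per replica: 125 ranks for 125 replicas.
  # -multi treats the 125 .tpr files as one coupled multi-simulation;
  # -replex attempts a replica exchange every 500 MD steps.
  # (Binary name, file prefix and interval are placeholders - adjust to your setup.)
  mpirun -np 125 mdrun_mpi -multi 125 -replex 500 -deffnm remd_

If you have fewer physical cores than replicas, the replicas time-share the
hardware and the whole calculation slows down roughly in proportion.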

Mark


On Mon, Feb 24, 2014 at 8:32 AM, Singam Karthick <sikart21 at yahoo.in> wrote:

> Dear members,
> I am trying to run an REMD simulation of a poly-alanine (12-residue) system. I
> used the REMD temperature generator to get a temperature range with an exchange
> probability of 0.3, which gave 125 replicas. When I try to simulate all 125
> replicas, the simulation slows down drastically (70 picoseconds took around
> 17 hours). Could anyone please tell me how to solve this issue?
>
> Following is the MDP file
>
> title           = G4Ga3a4a5 production.
> ;define         = ;-DPOSRES     ; position restrain the protein
> ; Run parameters
> integrator      = md            ; leap-frog integrator
> nsteps          = 12500000      ; 12500000 * 2 fs = 25 ns
> dt              = 0.002         ; 2 fs
> ; Output control
> nstxout         = 0             ; do not save coordinates to .trr
> nstvout         = 10000         ; save velocities every 20 ps
> nstxtcout       = 500           ; save xtc coordinates every 1 ps
> nstenergy       = 500           ; save energies every 1 ps
> nstlog          = 100           ; update log file every 0.2 ps
> ; Bond parameters
> continuation    = yes           ; Restarting after NVT
> constraint_algorithm = lincs    ; holonomic constraints
> constraints     = hbonds        ; bonds involving H atoms constrained
> lincs_iter      = 1             ; accuracy of LINCS
> lincs_order     = 4             ; also related to accuracy
> morse           = no
> ; Neighborsearching
> ns_type         = grid          ; search neighboring grid cells
> nstlist         = 5             ; 10 fs
> rlist           = 1.0           ; short-range neighborlist cutoff (in nm)
> rcoulomb        = 1.0           ; short-range electrostatic cutoff (in nm)
> rvdw            = 1.0           ; short-range van der Waals cutoff (in nm)
> ; Electrostatics
> coulombtype     = PME           ; Particle Mesh Ewald for long-range electrostatics
> pme_order       = 4             ; cubic interpolation
> fourierspacing  = 0.16          ; grid spacing for FFT
> ; Temperature coupling is on
> tcoupl          = V-rescale     ; modified Berendsen thermostat
> tc-grps         =  protein SOL Cl       ; three coupling groups - more accurate
> tau_t                 = 0.1 0.1  0.1 ; time constant, in ps
> ref_t                 = XXXXX  XXXXX  XXXXX    ; reference temperature, one for each group, in K
> ; Pressure coupling is on
> pcoupl          = Parrinello-Rahman     ; Pressure coupling on in NPT
> pcoupltype      = isotropic     ; uniform scaling of box vectors
> tau_p           = 2.0           ; time constant, in ps
> ref_p           = 1.0           ; reference pressure, in bar
> compressibility = 4.5e-5        ; isothermal compressibility of water, bar^-1
> ; Periodic boundary conditions
> pbc             = xyz           ; 3-D PBC
> ; Dispersion correction
> DispCorr        = EnerPres      ; account for cut-off vdW scheme
>
> regards
> singam

