[gmx-users] simulations in vacuum in parallel
Qinghua Liao
fantasticqhl at gmail.com
Thu Apr 12 19:23:09 CEST 2012
Dear gmx users,
I tried to run simulations of a small peptide in vacuum and found that they fail in parallel, even with only 8 cores.
My system has only a few hundred atoms, so the problem probably comes from the domain decomposition. When I choose the particle
decomposition method instead, I can use 4 threads but not 8 for the small system, and for a slightly larger system I can use only 8 threads.
Is this normal? Is there a solution to this problem?
Thanks very much!
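In case the exact invocation matters, the runs are launched essentially like this (a sketch assuming GROMACS 4.5.x; the file names below are only placeholders):

grompp -f vacuum.mdp -c peptide.gro -p topol.top -o vacuum.tpr

# default domain decomposition, fails for this small system:
mdrun -deffnm vacuum -nt 8

# particle decomposition, works with 4 threads but not with 8:
mdrun -deffnm vacuum -nt 4 -pd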
The following lines are my mdp file for the vacuum simulation:
title = PDXN of Abeta in H2O
;cpp = /lib/cpp ; preprocessor of the current machine
define = ;-DPOSRES
integrator = md ; molecular dynamics algorithm
tinit = 0.0 ; start time and timestep in ps
dt = 0.002 ; time step in ps
nsteps = 500000000 ; number of steps for 1000ns run
emtol = 100 ; convergence criterion
emstep = 0.05 ; initial step size
nstlist = 10 ; step frequency for updating neighbour list
ns_type = grid ;simple ; method for neighbour searching (?)
nstxout = 5000 ; frequency for writing coords to output .trr file
nstvout = 0 ; frequency for writing velocities to output...should be same as nstxout
nstfout = 0 ; frequency for writing forces to output
nstlog = 5000 ; frequency for writing energies to log file
nstenergy = 5000 ; frequency for writing energies to energy file
nstxtcout = 5000 ; frequency for writing coords to xtc traj
xtc_grps = system ; group(s) whose coords are to be written in xtc traj
energygrps = system ; group(s) whose energy is to be written in energy file
pbc = no ; no periodic boundary conditions (vacuum)
rlist = 0 ; cutoff lengths (nm)
epsilon_r = 1.0 ; Dielectric constant (DC) for twin-range or DC of reaction field
niter = 100 ; Some thingies for future use
fourierspacing = 0.16
fourier_nx = 30
fourier_ny = 30
fourier_nz = 30
coulombtype = Cut-off ; truncation for minimisation, with large cutoff
rcoulomb = 0
rcoulomb-switch = 0
vdw-type = Cut-off ; truncation for minimisation, with large cutoff
rvdw-switch = 0
rvdw = 0 ; cut-off lengths
;pme_order = 6 ; EWALD/PME/PPPM parameters
;ewald_rtol = 1e-05
;ewald_geometry = 3d
epsilon_surface = 0
optimize_fft = yes
; Free energy control stuff
free_energy = yes
init_lambda = 0.0
delta_lambda = 0
sc_alpha =0.5
sc-power =1.0
sc-sigma = 0.3
comm_mode = angular
nstcomm = 10 ; number of steps for centre of mass motion removal (in vacuo only!)
Tcoupl = V-rescale
tc_grps = system ; MVN_Protein ;SOL_Ion ; Non-Protein
tau_t = 0.01
ref_t = 300
Pcoupl = no ; Parrinello-Rahman ; Pressure coupling
;Pcoupltype = Isotropic
;tau_p = 1.0 1.0 1.0
;ref_p = 1.0 1.0 1.0
;compressibility = 4.5e-5 ; compressibility
;
annealing = no ; SIMULATED ANNEALING CONTROL
;zero_temp_time = 0 ; Time at which temperature should be zero (ps)
gen_vel = yes
gen_temp = 300
gen_seed = -1
constraints = all-bonds ; OPTIONS FOR BOND CONSTRAINTS
constraint-algorithm = Lincs ; Type of constraint algorithm
lincs_order = 4 ; Highest order in the expansion of the constraint coupling matrix
lincs_iter = 1
lincs_warnangle = 30 ; Lincs will write a warning to the stderr if in one step a bond rotates over more degrees than this angle
unconstrained-start = no ; Do not constrain the start configuration
;Shake-SOR = no ; Use successive overrelaxation to reduce the number of shake iterations
;shake-tol = 1e-04 ; Relative tolerance of shake
morse = no ; Convert harmonic bonds to morse potentials
--
Best Regards,
Qinghua