[gmx-users] The size of the domain decomposition grid (1) does not match the number of nodes (8).
chris.neale at utoronto.ca
Sun Jan 17 06:27:21 CET 2010
Hello,
I have a run that was working fine in parallel with mdrun -pd (particle
decomposition), but when I then switched to domain decomposition I got:
Fatal error:
The size of the domain decomposition grid (1) does not match the
number of nodes (8). The total number of nodes is 8
while running like this:
/scinet/gpc/mpi/openmpi/1.3.2-intel-v11.0-ofed/bin/mpirun \
    -mca btl_sm_num_fifos 7 \
    -np $(wc -l $PBS_NODEFILE | gawk '{print $1}') \
    -mca btl self,sm \
    -machinefile $PBS_NODEFILE \
    /scratch/cneale/GPC/exe/intel/gromacs-4.0.5_berkpdfix/exec/bin/mdrun_openmpi \
    -deffnm md1 -dlb yes -npme -1 -cpt 1 -maxh 47.5 -cpi md1.cpt \
    -px coord.xvg -pf force.xvg -rdd 2.5
If I then add -dd 2 2 2 to the mdrun command line, it runs fine (see the sketch below).
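For reference, the working invocation is the same mpirun line as above with the
grid given explicitly, i.e. the mdrun arguments become:

mdrun_openmpi -deffnm md1 -dlb yes -npme -1 -cpt 1 -maxh 47.5 \
    -cpi md1.cpt -px coord.xvg -pf force.xvg -rdd 2.5 -dd 2 2 2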
I found a couple of things about this on the mailing list, but nothing
that seemed entirely related to this case, e.g.:
http://oldwww.gromacs.org/pipermail/gmx-developers/2008-September/002721.html
I'm not sure there's a question here beyond the standard "does it seem
as if I forgot to do something?".
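One more data point: the "Minimum cell size due to bonded interactions: 2.500 nm"
line in the log below matches the -rdd 2.5 that I pass on the command line. As a
diagnostic sketch (just a guess on my part; -dd defaults to 0 0 0, i.e. automatic,
and -rdd defaults to 0, i.e. determined from the initial coordinates, at least
according to mdrun -h for 4.0.x), I could rerun without the -rdd flag and compare
what the automatic grid chooser does:

# same mpirun prefix as above; only the mdrun arguments change
mdrun_openmpi -deffnm md1 -dlb yes -npme -1 -cpt 1 -maxh 47.5 \
    -cpi md1.cpt -px coord.xvg -pf force.xvg
# then compare the reported "Minimum cell size due to bonded interactions"
# and the chosen "Domain decomposition grid" against the -rdd 2.5 run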
Here's the log file:
Log file opened on Sat Jan 16 23:33:54 2010
Host: gpc-f105n032 pid: 24876 nodeid: 0 nnodes: 8
The Gromacs distribution was built Fri Jan 8 20:42:07 EST 2010 by
cneale at gpc-f101n084 (Linux 2.6.18-128.7.1.el5 x86_64)
:-) G R O M A C S (-:
GROtesk MACabre and Sinister
:-) VERSION 4.0.5 (-:
Written by David van der Spoel, Erik Lindahl, Berk Hess, and others.
Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2008, The GROMACS development team,
check out http://www.gromacs.org for more information.
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
:-)
/scratch/cneale/GPC/exe/intel/gromacs-4.0.5_berkpdfix/exec/bin/mdrun_openmpi
(-:
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------
parameters of the run:
integrator = sd
nsteps = 8750000
init_step = 0
ns_type = Grid
nstlist = 5
ndelta = 2
nstcomm = 1
comm_mode = Linear
nstlog = 0
nstxout = 8750000
nstvout = 8750000
nstfout = 8750000
nstenergy = 250
nstxtcout = 250
init_t = 0
delta_t = 0.004
xtcprec = 1000
nkx = 54
nky = 56
nkz = 96
pme_order = 4
ewald_rtol = 1e-05
ewald_geometry = 0
epsilon_surface = 0
optimize_fft = TRUE
ePBC = xyz
bPeriodicMols = FALSE
bContinuation = FALSE
bShakeSOR = FALSE
etc = No
epc = Berendsen
epctype = Semiisotropic
tau_p = 4
ref_p (3x3):
ref_p[ 0]={ 1.00000e+00, 0.00000e+00, 0.00000e+00}
ref_p[ 1]={ 0.00000e+00, 1.00000e+00, 0.00000e+00}
ref_p[ 2]={ 0.00000e+00, 0.00000e+00, 1.00000e+00}
compress (3x3):
compress[ 0]={ 4.50000e-05, 0.00000e+00, 0.00000e+00}
compress[ 1]={ 0.00000e+00, 4.50000e-05, 0.00000e+00}
compress[ 2]={ 0.00000e+00, 0.00000e+00, 4.50000e-05}
refcoord_scaling = No
posres_com (3):
posres_com[0]= 0.00000e+00
posres_com[1]= 0.00000e+00
posres_com[2]= 0.00000e+00
posres_comB (3):
posres_comB[0]= 0.00000e+00
posres_comB[1]= 0.00000e+00
posres_comB[2]= 0.00000e+00
andersen_seed = 815131
rlist = 1
rtpi = 0.05
coulombtype = PME
rcoulomb_switch = 0
rcoulomb = 1
vdwtype = Cut-off
rvdw_switch = 0
rvdw = 1
epsilon_r = 1
epsilon_rf = 1
tabext = 1
implicit_solvent = No
gb_algorithm = Still
gb_epsilon_solvent = 80
nstgbradii = 1
rgbradii = 2
gb_saltconc = 0
gb_obc_alpha = 1
gb_obc_beta = 0.8
gb_obc_gamma = 4.85
sa_surface_tension = 2.092
DispCorr = No
free_energy = no
init_lambda = 0
sc_alpha = 0
sc_power = 0
sc_sigma = 0.3
delta_lambda = 0
nwall = 0
wall_type = 9-3
wall_atomtype[0] = -1
wall_atomtype[1] = -1
wall_density[0] = 0
wall_density[1] = 0
wall_ewald_zfac = 3
pull = umbrella
pull_geometry = position
pull_dim (3):
pull_dim[0]=0
pull_dim[1]=0
pull_dim[2]=1
pull_r1 = 1
pull_r0 = 1.5
pull_constr_tol = 1e-06
pull_nstxout = 250
pull_nstfout = 250
pull_ngrp = 1
pull_group 0:
atom (6656):
atom[0,...,6655] = {293,...,6948}
weight: not available
pbcatom = 3620
vec (3):
vec[0]= 0.00000e+00
vec[1]= 0.00000e+00
vec[2]= 0.00000e+00
init (3):
init[0]= 0.00000e+00
init[1]= 0.00000e+00
init[2]= 0.00000e+00
rate = 0
k = 0
kB = 0
pull_group 1:
atom (287):
atom[0,...,286] = {0,...,286}
weight: not available
pbcatom = 143
vec (3):
vec[0]= 0.00000e+00
vec[1]= 0.00000e+00
vec[2]= 0.00000e+00
init (3):
init[0]= 0.00000e+00
init[1]= 0.00000e+00
init[2]=-3.90000e+00
rate = 0
k = 500
kB = 500
disre = No
disre_weighting = Conservative
disre_mixed = FALSE
dr_fc = 1000
dr_tau = 0
nstdisreout = 100
orires_fc = 0
orires_tau = 0
nstorireout = 100
dihre-fc = 1000
em_stepsize = 0.01
em_tol = 10
niter = 20
fc_stepsize = 0
nstcgsteep = 1000
nbfgscorr = 10
ConstAlg = Lincs
shake_tol = 0.0001
lincs_order = 6
lincs_warnangle = 30
lincs_iter = 1
bd_fric = 0
ld_seed = 640338
cos_accel = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
grpopts:
nrdf: 68669
ref_t: 300
tau_t: 1
anneal: No
ann_npoints: 0
acc: 0 0 0
nfreeze: N N N
energygrp_flags[ 0]: 0
efield-x:
n = 0
efield-xt:
n = 0
efield-y:
n = 0
efield-yt:
n = 0
efield-z:
n = 0
efield-zt:
n = 0
bQMMM = FALSE
QMconstraints = 0
QMMMscheme = 0
scalefactor = 1
qm_opts:
ngQM = 0
Reading checkpoint file md1.cpt
file generated by:
/scratch/cneale/GPC/exe/intel/gromacs-4.0.5_berkpdfix/exec/bin/mdrun_openmpi
file generated at: Sat Jan 16 21:46:51 2010
GROMACS build time: Fri Jan 8 20:42:07 EST 2010
GROMACS build user: cneale at gpc-f101n084
GROMACS build machine: Linux 2.6.18-128.7.1.el5 x86_64
simulation part #: 1
step: 2889467
time: 11557.868164
Initializing Domain Decomposition on 8 nodes
Dynamic load balancing: yes
Will sort the charge groups at every domain (re)decomposition
Minimum cell size due to bonded interactions: 2.500 nm
Maximum distance for 7 constraints, at 120 deg. angles, all-trans: 1.071 nm
Estimated maximum distance required for P-LINCS: 1.071 nm
Domain decomposition grid 1 x 1 x 1, separate PME nodes 0
-------------------------------------------------------
Program mdrun_openmpi, VERSION 4.0.5
Source code file: domdec.c, line: 5894
Fatal error:
The size of the domain decomposition grid (1) does not match the
number of nodes (8). The total number of nodes is 8
-------------------------------------------------------
"Sometimes Life is Obscene" (Black Crowes)