[gmx-users] mdrun: fatal error with 8 CPUs

sophie.vilarem at laposte.net
Thu Oct 30 11:35:01 CET 2003


Hi,

I get a fatal error (Fatal error: ci = -2147483648 should
be in 0 .. 1727 [FILE nsgrid.c, LINE 210]) when I try to run
GROMACS on 8 CPUs on a Xeon cluster (dual-processor nodes,
MPICH-1.2.5). The same system runs without any problem on 1,
2, and 4 CPUs. Can anybody help me, please?
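
In case it helps with the diagnosis: -2147483648 is INT_MIN,
and as far as I understand a grid cell index can come out with
that value when a particle coordinate has become NaN or
infinite, because the x86 float-to-int conversion returns
INT_MIN for anything it cannot represent. A minimal C sketch
of that cast (just an illustration, not GROMACS code):

    #include <limits.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        float x = NAN;     /* e.g. a coordinate after a blow-up */
        int ci = (int) x;  /* out of range: undefined in ISO C;
                              on x86 the conversion yields INT_MIN */
        printf("ci = %d, INT_MIN = %d\n", ci, INT_MIN);
        return 0;
    }

On x86 this prints ci = -2147483648, exactly the value in the
error message, so perhaps the coordinates only blow up with
the 8-CPU decomposition?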

Thanks a lot!
Sophie Vilarem


Here are the command lines and the output:
grompp -np 8 -shuffle
mpirun -np 8 -machinefile hostfile \
  /works/work6/theogone/Gromacs/usr/local/Gromacs/i686-pc-linux-gnu/bin/mdrun \
  -g pc2_8

Parts of the output:

[...]


creating statusfile for 8 nodes...
 
Back Off! I just backed up mdout.mdp to ./#mdout.mdp.1#
Warning: as of GMX v 2.0 unit of compressibility is truly 1/bar
checking input for internal consistency...
calling /lib/cpp...
processing topology...
Generated 3 of the 3 non-bonded parameter combinations
Excluding 3 bonded neighbours for PE6000 1
Moltype    PE6000    #atoms
CPU   0         1     12000
CPU   1         0         0
CPU   2         0         0
CPU   3         0         0
CPU   4         0         0
CPU   5         0         0
CPU   6         0         0
CPU   7         0         0
Made a shuffling table with 1 entries [molecules]
processing coordinates...
Shuffling coordinates...
Entering shuffle_xv
double-checking input for internal consistency...
Cleaning up constraints and constant bonded interactions with
dummy particles
renumbering atomtypes...
converting bonded parameters...
#      BONDS:   17997
#     ANGLES:   23992
#     RBDIHS:   29985
#   DUMMY3FD:   29990
#  DUMMY3FAD:   10
Setting particle type to Dummy for dummy atoms
initialising group options...
processing index file...
Analysing residue names:
Opening library file
/works/work6/theogone/Gromacs/usr/local/Gromacs/share/gromacs/top/aminoacids.dat
There are:     1      OTHER residues
There are:     0    PROTEIN residues
There are:     0        DNA residues
Analysing Other...
Making dummy/rest group for Acceleration containing 12000 elements
Making dummy/rest group for Freeze containing 12000 elements
Making dummy/rest group for Energy Mon. containing 12000 elements
Making dummy/rest group for VCM containing 12000 elements
Number of degrees of freedom in T-Coupling group System is
17997.00
Making dummy/rest group for User1 containing 12000 elements
Making dummy/rest group for User2 containing 12000 elements
Making dummy/rest group for XTC containing 12000 elements
Making dummy/rest group for Or. Res. Fit containing 12000 elements
T-Coupling       has 1 element(s): System
Energy Mon.      has 1 element(s): rest
Acceleration     has 1 element(s): rest
Freeze           has 1 element(s): rest
User1            has 1 element(s): rest
User2            has 1 element(s): rest
VCM              has 1 element(s): rest
XTC              has 1 element(s): rest
Or. Res. Fit     has 1 element(s): rest
Checking consistency between energy and charge groups...
 
Back Off! I just backed up deshuf.ndx to ./#deshuf.ndx.1#
splitting topology...
There are 6000 charge group borders and 12000 shake borders
There are 6000 total borders
Division over nodes in atoms:
  1500  1500  1500  1500  1500  1500  1500  1500
writing run input file...
 
Back Off! I just backed up topol.tpr to ./#topol.tpr.1#
 
gcq#106: "Step Aside, Butch" (Pulp Fiction)
 
NNODES=8, MYRANK=1, HOSTNAME=lx05
NODEID=1 argc=3
NNODES=8, MYRANK=0, HOSTNAME=lx05
NODEID=0 argc=3
                         :-)  G  R  O  M  A  C  S  (-:
 
NNODES=8, MYRANK=4, HOSTNAME=lx07
NODEID=4 argc=3
NNODES=8, MYRANK=6, HOSTNAME=lx08
NODEID=6 argc=3
NNODES=8, MYRANK=3, HOSTNAME=lx06
NODEID=3 argc=3
NNODES=8, MYRANK=7, HOSTNAME=lx08
NODEID=7 argc=3
NNODES=8, MYRANK=5, HOSTNAME=lx07
NODEID=5 argc=3
NNODES=8, MYRANK=2, HOSTNAME=lx06
NODEID=2 argc=3

[...]

Back Off! I just backed up pc2_86.log to ./#pc2_86.log.1#
 
Back Off! I just backed up pc2_81.log to ./#pc2_81.log.1#
 
Back Off! I just backed up pc2_82.log to ./#pc2_82.log.1#
 
Back Off! I just backed up pc2_84.log to ./#pc2_84.log.1#
 
Back Off! I just backed up pc2_85.log to ./#pc2_85.log.1#
 
Back Off! I just backed up pc2_83.log to ./#pc2_83.log.1#
 
Back Off! I just backed up pc2_87.log to ./#pc2_87.log.1#
 
Back Off! I just backed up pc2_80.log to ./#pc2_80.log.1#
Reading file topol.tpr, VERSION 3.1.4 (single precision)
Reading file topol.tpr, VERSION 3.1.4 (single precision)
 
Back Off! I just backed up ener.edr to ./#ener.edr.1#
starting mdrun 'pe'
5000 steps,      5.0 ps.
 
 
Back Off! I just backed up traj.trr to ./#traj.trr.1#
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Error on node 0, will try to stop all the nodes
[0] MPI Abort by user Aborting program !
[0] Aborting program!
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Error on node 3, will try to stop all the nodes
[3] MPI Abort by user Aborting program !
[3] Aborting program!
Error on node 2, will try to stop all the nodes
[2] MPI Abort by user Aborting program !
[2] Aborting program!
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Error on node 5, will try to stop all the nodes
[5] MPI Abort by user Aborting program !
[5] Aborting program!
Fatal error: ci = -2147483648 should be in 0 .. 1727 [FILE
nsgrid.c, LINE 210]
Error on node 7, will try to stop all the nodes
[7] MPI Abort by user Aborting program !
[7] Aborting program!
Error on node 6, will try to stop all the nodes
[6] MPI Abort by user Aborting program !
[6] Aborting program!
Error on node 1, will try to stop all the nodes
[1] MPI Abort by user Aborting program !
[1] Aborting program!
Error on node 4, will try to stop all the nodes
[4] MPI Abort by user Aborting program !
[4] Aborting program!
End of run

