[gmx-users] "Range checking error" issue

Matteo Guglielmi matteo.guglielmi at epfl.ch
Sat May 26 20:03:54 CEST 2007


Mark Abraham wrote:
> Matteo Guglielmi wrote:
>> I did what you suggested, and even more: I decreased the timestep to 0.001.
>>
>> Well the problem doesn't go away.
>>
>> I've been reading mails from this discussion list... I'm not the only
>> one who's experiencing this kind of problem (the ci variable range
>> error)... actually a few gmx users suggest it only happens in parallel
>> runs, which is my case, BTW.
>
> I've seen it most often when equilibration hasn't happened properly,
> but you'd think 1.5ns would be enough to avoid a crash.
>
>> I'm running parallel runs on different clusters... it's happening
>> everywhere.
>
> Do serial runs do it?
>
> Have you looked at your trajectory visually to see where things start
> going wrong?
>
> Mark
>
I have not tried any serial runs yet... the trajectory looks fine... but the
log files might explain why the parallel runs keep crashing.

I usually run GROMACS in double precision on a dual Xeon 5140 machine
(2 CPUs x 2 cores).

I compiled GROMACS with the Intel 9.1 compilers:

[matteo at lcbcpc02 ~]$ icc -V
Intel(R) C Compiler for Intel(R) EM64T-based applications, Version
9.1    Build 20070320 Package ID: l_cc_c_9.1.049
Copyright (C) 1985-2007 Intel Corporation.  All rights reserved.

[matteo at lcbcpc02 ~]$ icpc -V
Intel(R) C++ Compiler for Intel(R) EM64T-based applications, Version
9.1    Build 20070320 Package ID: l_cc_c_9.1.049
Copyright (C) 1985-2007 Intel Corporation.  All rights reserved.

[matteo at lcbcpc02 ~]$ ifort -V
Intel(R) Fortran Compiler for Intel(R) EM64T-based applications, Version
9.1    Build 20070320 Package ID: l_fc_c_9.1.045
Copyright (C) 1985-2007 Intel Corporation.  All rights reserved.

#### compilation setup details ###
export F77='ifort'
export CC='icc'
export CFLAGS='-axT -unroll -ip -O3'
export FFLAGS='-axT -unroll -ip -O3'
export CPPFLAGS="-I${HOME}/Software/fftw-3.1.2/include"
export LDFLAGS="-L${HOME}/Software/fftw-3.1.2/lib"
# first pass: serial tools, double precision, FFTW3
./configure --enable-double --with-fft=fftw3 --program-suffix='' \
            --enable-fortran --enable-threads \
            --prefix=${HOME}/Software/gromacs-3.3.1
make
make install
make distclean
# second pass: rebuild and install only mdrun with MPI support
./configure --enable-mpi --enable-double --with-fft=fftw3 \
            --program-suffix='' --enable-fortran --enable-threads \
            --prefix=${HOME}/Software/gromacs-3.3.1
make mdrun
make install-mdrun
#################################

I use OpenMPI 1.2.1, which was compiled with the same Intel compilers shown
above.

All my systems were energy-minimized beforehand with:
emtol = 70
integrator = steep
constraints = none

#### md0.log ###
[matteo at lcbcpc02 ~]$ cat md0.log | grep Grid
   ns_type              = Grid
Grid: 10 x 12 x 13 cells

#### md1.log ###
[matteo at lcbcpc02 ~]$ cat md1.log | grep Grid
Grid: 10 x 12 x 13 cells
Grid: 10 x 12 x 12 cells

[matteo at lcbcpc02 ~]$ tail -40 md1.log
   Rel. Constraint Deviation:  Max    between atoms     RMS
       Before LINCS         0.009211   7638   7640   0.001295
        After LINCS         0.000000  10707  10709   0.000000

   Rel. Constraint Deviation:  Max    between atoms     RMS
       Before LINCS         0.008704   9341   9344   0.001308
        After LINCS         0.000000  12623  12625   0.000000

   Rel. Constraint Deviation:  Max    between atoms     RMS
       Before LINCS         0.009517   8162   8163   0.001276
        After LINCS         0.000000   8355   8357   0.000000

   Rel. Constraint Deviation:  Max    between atoms     RMS
       Before LINCS         0.009805   7638   7640   0.001325
        After LINCS         0.000000  10207  10210   0.000000

   Rel. Constraint Deviation:  Max    between atoms     RMS
       Before LINCS         0.009253  12747  12750   0.001259
        After LINCS         0.000000   8377   8379   0.000000

Grid: 10 x 12 x 12 cells
-------------------------------------------------------
Program mdrun_mpi, VERSION 3.3.1
Source code file: nsgrid.c, line: 226

Range checking error:
Explanation: During neighborsearching, we assign each particle to a grid
based on its coordinates. If your system contains collisions or parameter
errors that give particles very high velocities you might end up with some
coordinates being +-Infinity or NaN (not-a-number). Obviously, we cannot
put these on a grid, so this is usually where we detect those errors.
Make sure your system is properly energy-minimized and that the potential
energy seems reasonable before trying again.

Variable ci has value 1476. It should have been within [ 0 .. 1440 ]
Please report this to the mailing list (gmx-users at gromacs.org)
-------------------------------------------------------

"Ease Myself Into the Body Bag" (P.J. Harvey)
#########################

The same holds for md2.log and md3.log!

So the neighbour-search grid seems to shrink from:

10 x 12 x 13 (= 1560 cells, ci valid in [0 .. 1559])

to:

10 x 12 x 12 (= 1440 cells, ci valid in [0 .. 1439]),

which puts the reported "Variable ci has value 1476..." out of bounds, even
though 1476 would have been a legal index on the larger grid.


Is this normal?


Any help is greatly appreciated,
MG.



### Full input file #####
; PREPROCESSING
title                    = 2masn
cpp                      = /usr/bin/cpp
;include                  = -I./
define                   = -DPOSRES

; RUN CONTROL
integrator               = md
tinit                    = 0
dt                       = 0.001
nsteps                   = 1500000
init_step                = 0
comm_mode                = Angular
nstcomm                  = 1
comm_grps                = Pore

; ENERGY MINIMIZATION
;emtol                    = 100
;emstep                   = 0.01

; OUTPUT CONTROL
nstxout                  = 500000
nstvout                  = 500000
nstfout                  = 500000
nstcheckpoint            = 1000
nstlog                   = 5000
nstxtcout                = 5000
xtc_grps                 = System
energygrps               = Pore Membrane Ions Water
nstenergy                = 5000

; NEIGHBOR SEARCHING
nstlist                  = 5
ns_type                  = grid
pbc                      = xyz
rlist                    = 1.0

; ELECTROSTATICS
coulombtype              = PME
rcoulomb                 = 1.0

; VDW
vdwtype                  = Cut-off
rvdw                     = 1.4

; EWALD
fourierspacing           = 0.09
fourier_nx               =
fourier_ny               =
fourier_nz               =
pme_order                = 4
ewald_rtol               = 1e-5
ewald_geometry           = 3d
optimize_fft             = yes

; TEMPERATURE COUPLING
tcoupl                   = berendsen
tc_grps                  = Solute Solvent
tau_t                    = 0.1    0.4
ref_t                    = 300    300

; PRESSURE COUPLING
pcoupl                   = berendsen
pcoupltype               = anisotropic
tau_p                    = 5.0      5.0      5.0      5.0    5.0    5.0
compressibility          = 4.53e-5  4.53e-5  4.53e-5  0.0    0.0    0.0
ref_p                    = 1.025    1.025    1.025    1.025  1.025  1.025

; SIMULATED ANNEALING
;annealing                = single   single
;annealing_npoints        = 2        2
;annealing_time           = 0   500  0   500
;annealing_temp           = 200 300  200 300

; VELOCITY GENERATION
gen_vel                  = yes
gen_temp                 = 300
gen_seed                 = 173529

; BONDS
constraints              = hbonds
constraint_algorithm     = lincs
unconstrained_start      = no
lincs_order              = 4
lincs_iter               = 2
lincs_warnangle          = 30
#############################


