[gmx-users] Signal:11 info.si_errno:0(Success) si_code:1(SEGV_MAPERR)

Mike Hanby mhanby at uab.edu
Mon Jan 22 18:58:26 CET 2007


Howdy, a user of a cluster that I manage sent me the output file from their
Gromacs 3.3.1 job. The job runs for approximately 11 hours and then
terminates abruptly with a SEGV_MAPERR. What could cause this: Gromacs
itself, or a system configuration / resource limit?

The output of the log file is below; thanks for any suggestions. Mike

#####################################################################################
 STARTED AT: Wed Jan 17 11:40:03 CST 2007

NSLOTS: 16
  :-)  /share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi  (-:

Option     Filename  Type         Description
------------------------------------------------------------
  -s production-Npt-300K_16CPU.tpr  Input        Generic run input: tpr tpb tpa xml
  -o production-Npt-300K_16CPU.trr  Output       Full precision trajectory: trr trj
  -x production-Npt-300K_16CPU.xtc  Output, Opt! Compressed trajectory (portable xdr format)
  -c production-Npt-300K_16CPU.gro  Output       Generic structure: gro g96 pdb xml
  -e       ener.edr  Output       Generic energy: edr ene

Getting Loaded...
Reading file production-Npt-300K_16CPU.tpr, VERSION 3.3.1 (single precision)
Loaded with Money

Back Off! I just backed up ener.edr to ./#ener.edr.1#
starting mdrun 'Protein in water'
2500000 steps,   2500.0 ps.

...... steps ......

step 650170, will finish at Fri Jan 19 06:52:24 2007

step 650180, will finish at Fri Jan 19 06:52:21 2007
Signal:11 info.si_errno:0(Success) si_code:1(SEGV_MAPERR)
Failing at addr:0x7c3fefc
[0] func:/share/apps/openmpi/intel/openmpi-1.1.2-64/lib/libopal.so.0 [0x2a966102a5]
[1] func:/lib64/tls/libpthread.so.0 [0x3793b0c430]
[2] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi(spread_q_bsplines+0x1c0) [0x45c80c]
[3] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi(do_pme+0xb5a) [0x45dfea]
[4] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi(force+0x9ac) [0x43eac2]
[5] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi(do_force+0xa10) [0x46e69a]
[6] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi(do_md+0x1a42) [0x429bfe]
[7] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi(mdrunner+0xc0f) [0x427f87]
[8] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi(main+0x2d6) [0x42b62a]
[9] func:/lib64/tls/libc.so.6(__libc_start_main+0xdb) [0x379321c3fb]
[10] func:/share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi [0x418b8a]
*** End of error message ***
11 additional processes aborted (not shown)

 ENDED AT: Wed Jan 17 22:54:32 CST 2007
#####################################################################################
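
The backtrace points into spread_q_bsplines, reached via do_pme, i.e. the
PME charge-spreading step. A minimal sketch for digging further, assuming
addr2line from binutils is available on the cluster and mdrun_ompi is a
non-stripped, non-PIE binary (file/line output additionally requires a
build with -g); the path and address below are taken verbatim from frame
[2] of the backtrace:

  # Translate the faulting frame's return address into a function name
  # (and, with a -g build, a file:line) inside the mdrun binary:
  addr2line -f -e /share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi 0x45c80c

  # If core dumps are enabled for the job (e.g. "ulimit -c unlimited"
  # before mpirun), gdb can show the full crashing frame with arguments:
  gdb /share/apps/gromacs/intel/gromacs-3.3.1-s64-openmpi/bin/mdrun_ompi core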
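
Separately, a mid-run crash in the PME spreading routine is often a symptom
of the simulated system blowing up (non-finite coordinates lead to
out-of-range grid indices) rather than of a broken installation. One quick
check, assuming the 3.3-era gmxcheck tool is in the same install; the
trajectory filename is taken from the run options above:

  # Scan the trajectory for readability and sane frame contents; a blow-up
  # typically shows up as garbage or truncated frames near the crash time:
  gmxcheck -f production-Npt-300K_16CPU.trr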


