[gmx-users] Coredump when using PME
Robert Bjornson
rbjornson at gmail.com
Tue Dec 27 23:22:16 CET 2005
Hi,
I'm experiencing a core dump when running GROMACS 3.3 on a system of
72,000 atoms. The core dump only occurs when I use PME for
electrostatics.
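For reference, the electrostatics block of the .mdp for the failing run
looks roughly like this (the values below are illustrative placeholders,
not my exact settings); the non-PME run differs only in the coulombtype
line:

  coulombtype      = PME     ; particle-mesh Ewald for long-range electrostatics
  rcoulomb         = 0.9     ; real-space cutoff in nm (placeholder value)
  fourierspacing   = 0.12    ; PME grid spacing in nm (placeholder value)
  pme_order        = 4       ; cubic B-spline interpolation
  ewald_rtol       = 1e-5    ; relative direct-space strength at rcoulomb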
Here is the stack trace (unfortunately, no debugging symbols were compiled in):
(gdb) where
#0 0x0000000000457877 in spread_q_bsplines ()
#1 0x000000000045b4f8 in do_pme ()
#2 0x0000000000437d81 in force ()
#3 0x000000000046bb6a in do_force ()
#4 0x000000000041d329 in do_md ()
#5 0x000000000041ba14 in mdrunner ()
#6 0x0000000000420057 in main ()
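If it would help, I can rebuild mdrun with debugging symbols and reproduce
the crash to get a symbolic backtrace with line numbers, along these lines
(a rough sketch assuming the usual autoconf build of 3.3; the .tpr name is
just a placeholder):

  ./configure CFLAGS="-g -O0"     # build with debug info, optimization off
  make && make install
  ulimit -c unlimited             # make sure the core file gets written
  mdrun -s topol.tpr              # reproduce the crash
  gdb `which mdrun` core          # load the binary together with the core
  (gdb) bt full                   # full backtrace with local variables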
I also noticed some odd behavior in the stderr output. I have no idea
whether it is related, but the non-PME run doesn't show this behavior:
==================
Step 20, time 0.04 (ps) LINCS WARNING
relative constraint deviation after LINCS:
max 0.096362 (between atoms 523 and 524) rms 0.004632
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle   previous, current, constraint length
    524    525   66.5     0.1431    0.1395       0.1430
    524    526   37.7     0.1502    0.1430       0.1500
Back Off! I just backed up step19.pdb to ./#step19.pdb.1#
Wrote pdb files with previous and current coordinates
Step 21, time 0.042 (ps) LINCS WARNING
relative constraint deviation after LINCS:
max 0.120046 (between atoms 524 and 526) rms 0.005158
bonds that rotated more than 30 degrees:
 atom 1 atom 2  angle   previous, current, constraint length
    524    525   64.7     0.1398    0.1573       0.1430
    524    526   55.8     0.1433    0.1680       0.1500
Back Off! I just backed up step20.pdb to ./#step20.pdb.1#
Sorry couldn't backup step20.pdb to ./#step20.pdb.1#
Back Off! I just backed up step20.pdb to ./#step20.pdb.1#
Sorry couldn't backup step20.pdb to ./#step20.pdb.2#
Back Off! I just backed up step21.pdb to ./#step21.pdb.1#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
Back Off! I just backed up step21.pdb to ./#step21.pdb.2#
Sorry couldn't backup step21.pdb to ./#step21.pdb.2#
Wrote pdb files with previous and current coordinates
Wrote pdb files with previous and current coordinates
-----------------------------------------------------------------------------
Some background data:
I'm running on 8 processors using LAM-MPI and PBS Pro (although the
problem also occurs for sequential runs). The only difference between
the two runs is the use of PME in the run that fails. PME also failed
in a similar way on a different input set.
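For completeness, the parallel runs are launched roughly like this (file
names here are placeholders; the serial test just runs mdrun directly,
without mpirun):

  grompp -np 8 -f pme.mdp -c conf.gro -p topol.top -o pme.tpr   # preprocess for 8 nodes
  mpirun -np 8 mdrun -np 8 -s pme.tpr                           # run under LAM-MPI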
Does anyone have any idea why this might be happening? I can provide
the input/output files or the core file, but I didn't want to broadcast
them to everyone.
Thanks very much for any assistance,
Rob Bjornson