[gmx-users] Re: Segmentation fault, mdrun_mpi

Taudt aaron.taudt at itb.uni-stuttgart.de
Tue Nov 13 16:52:26 CET 2012


Hi,
I got a similar error for my system:
...
[n020110:27321] *** Process received signal ***
[n020110:27321] Signal: Segmentation fault (11)
[n020110:27321] Signal code:  (128)
[n020110:27321] Failing at address: (nil)
[n020110:27321] [ 0] /lib64/libpthread.so.0 [0x38bac0eb70]
[n020110:27321] [ 1]
/opt/bwgrid/mpi/openmpi/1.4.3-intel-12.0/lib/libmpi.so.0 [0x2b7964e5a44a]
[n020110:27321] [ 2]
/opt/bwgrid/mpi/openmpi/1.4.3-intel-12.0/lib/libmpi.so.0 [0x2b7964e58acd]
...

This error was reproducible at the beginning of each simulation, and it only
occurred when I used more than 16 cores. Everything works fine with 16 or
fewer cores. The GROMACS log file doesn't give any error messages at all (it
just ends).
In my case the problem seems to be the number of PME cores. When I dedicate
half of the total cores to the PME calculation, everything works fine again
(tested with up to 256 cores total).
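For reference, a sketch of how I set this up (the binary names, core counts
and file names below are examples for my cluster, not a recommendation):

```shell
# Explicitly dedicate half of the 64 MPI ranks to PME with mdrun's -npme
# flag, instead of letting mdrun guess the split itself:
mpirun -np 64 mdrun_mpi -npme 32 -deffnm topol

# Alternatively, g_tune_pme can search for a good PME rank count
# automatically by running short benchmarks:
g_tune_pme -np 64 -s topol.tpr
```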



--
View this message in context: http://gromacs.5086.n6.nabble.com/Segmentation-fault-mdrun-mpi-tp5001601p5002923.html
Sent from the GROMACS Users Forum mailing list archive at Nabble.com.


