[gmx-users] Segfault using Verlet-list with Gromacs 4.6.5 - MPI only

Mark Abraham mark.j.abraham at gmail.com
Tue Apr 15 12:17:16 CEST 2014


On Apr 15, 2014 1:01 AM, "Sébastien Côté" <sebastien.cote.4 at umontreal.ca>
wrote:
>
> Dear gmx developers,
> I am trying to use the Verlet list for MD simulations with MPI-only
> parallelization in Gromacs 4.6.5. However, I am getting a segfault
> during the initialization of my system, before any time step is
> executed. The simulation runs perfectly when I use either the Group
> list or Gromacs 4.6.1 instead.
> Is there a known problem with the Verlet list in Gromacs 4.6.5 on some
> architectures?
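For context, the two schemes the poster is switching between are selected by the `cutoff-scheme` option in the .mdp file. A minimal fragment is sketched below; apart from `cutoff-scheme`, the values shown are illustrative assumptions, not the poster's actual settings:

```
; cutoff-scheme = group  reproduces the poster's working run;
; cutoff-scheme = Verlet is the scheme that segfaults for them in 4.6.5.
cutoff-scheme   = Verlet    ; or: group
nstlist         = 10
rlist           = 1.0
coulombtype     = PME
rcoulomb        = 1.0
rvdw            = 1.0
```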

No known problem on the information given. Can you please open an issue at
http://redmine.gromacs.org, including a tarball of the log, tpr, and the
files to build the tpr?
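A sketch of packaging the requested files for the Redmine issue. The file names (md.log, topol.tpr, grompp.mdp, conf.gro, topol.top) are assumptions standing in for the poster's actual run inputs; the `touch` line only creates placeholders so the sketch runs end to end:

```shell
# Work in a scratch directory; in practice you would cd to the
# directory containing your actual simulation files instead.
mkdir -p /tmp/redmine-report && cd /tmp/redmine-report

# Placeholder files so this sketch is self-contained; substitute the
# real log, tpr, and the inputs used to build the tpr.
touch md.log topol.tpr grompp.mdp conf.gro topol.top

# Bundle everything into one tarball to attach to the issue.
tar czf verlet-segfault-report.tar.gz \
    md.log topol.tpr grompp.mdp conf.gro topol.top

# List the archive contents as a sanity check.
tar tzf verlet-segfault-report.tar.gz
```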

Mark

> Thanks for your help,
> Sebastien
> Below, you can find more specific information:
> The error message thrown by the system:
> starting mdrun 'bla'
> 100 steps,      0.2 ps.
> [node-c3-42:09559] *** Process received signal ***
> [node-c3-42:09559] Signal: Segmentation fault (11)
> [node-c3-42:09559] Signal code:  (128)
> [node-c3-42:09559] Failing at address: (nil)
> [node-c3-42:09559] [ 0] /lib64/libpthread.so.0() [0x3a05c0f500]
> [node-c3-42:09559] [ 1] /home/apps/Logiciels/gromacs/gromacs-4.6.5-no-plumed/bin/../lib/libmd_mpi.so.8(nbnxn_kernel_simd_4xn_tab_comb_lb_energrp+0x2223) [0x7f676aedc983]
> [node-c3-42:09559] [ 2] /home/apps/Logiciels/gromacs/gromacs-4.6.5-no-plumed/bin/../lib/libmd_mpi.so.8(nbnxn_kernel_simd_4xn+0x4b1) [0x7f676aea9901]
> [node-c3-42:09559] [ 3] /home/apps/intel/composerxe-2011.4.191/mkl/../compiler/lib/intel64/libiomp5.so(__kmp_invoke_microtask+0x93) [0x7f6768380323]
> [node-c3-42:09559] *** End of error message ***
> [node-c3-42:09554] *** Process received signal ***
> The log file finishes as follows:
> Center of mass motion removal mode is Linear
> We have the following groups for center of mass motion removal:
>   0:  Protein  1:  non-Protein
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> G. Bussi, D. Donadio and M. Parrinello
> Canonical sampling through velocity rescaling
> J. Chem. Phys. 126 (2007) pp. 014101
> -------- -------- --- Thank You --- -------- --------
> System info in the log file:
> Log file opened on Mon Apr 14 18:51:35 2014
> Host: node-c3-42  pid: 9551  nodeid: 0  nnodes:  12
> Gromacs version:    VERSION 4.6.5
> Precision:          single
> Memory model:       64 bit
> MPI library:        MPI
> OpenMP support:     enabled
> GPU support:        disabled
> invsqrt routine:    gmx_software_invsqrt(x)
> CPU acceleration:   SSE4.1
> FFT library:        MKL
> Large file support: enabled
> RDTSCP usage:       enabled
> Built on:           Mon Apr 14 09:57:30 EDT 2014
> Built by:           rqchpbib at briaree1 [CMAKE]
> Build OS/arch:      Linux 2.6.32-71.el6.x86_64 x86_64
> Build CPU vendor:   GenuineIntel
> Build CPU brand:    Intel(R) Xeon(R) CPU X5650  @ 2.67GHz
> Build CPU family:   6   Model: 44   Stepping: 2
> Build CPU features: aes apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
> C compiler:         /RQusagers/apps/intel/composerxe-2011.4.191/bin/intel64/icc Intel icc (ICC) 12.0.4 20110427
> C compiler flags:   -msse4.1 -mkl=sequential -std=gnu99 -Wall -ip -funroll-all-loops -O3 -DNDEBUG
> Linked with Intel MKL version 10.3.4.
>
>

