[gmx-users] gmx 4.6.2 segmentation fault (core dump)
Szilárd Páll
szilard.pall at cbr.su.se
Mon Jun 3 16:28:11 CEST 2013
I can't reproduce it on a very similar hardware and software
configuration. Can you provide a tpr? Is the segfault reproducible
with multiple configurations? Does mdrun work without GPUs (-nb cpu or
GMX_DISABLE_GPU_DETECTION=1 env. var)?
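For example, either of these should force a CPU-only run (a minimal
sketch, reusing the gpu.tpr from your invocation below):

  # keep GPU detection, but run the non-bonded kernels on the CPU
  mdrun -s gpu.tpr -nb cpu

  # or hide the GPUs from the detection code entirely
  GMX_DISABLE_GPU_DETECTION=1 mdrun -s gpu.tpr

If both of those run to completion, that points at the CUDA code path.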
--
Szilárd
On Mon, Jun 3, 2013 at 4:12 PM, Johannes Wagner
<johannes.wagner at h-its.org> wrote:
> Hi, thanks for the prompt replies.
>
> ~/programs/gromacs-4.6.2/bin$ ./mdrun -version
>
> Program: ./mdrun
> Gromacs version: VERSION 4.6.2
> Precision: single
> Memory model: 64 bit
> MPI library: thread_mpi
> OpenMP support: enabled
> GPU support: enabled
> invsqrt routine: gmx_software_invsqrt(x)
> CPU acceleration: AVX_256
> FFT library: fftw-3.3.2-sse2
> Large file support: enabled
> RDTSCP usage: enabled
> Built on: Fri May 31 18:44:32 CEST 2013
> Built by: xxxxx at xxxx-its.org [CMAKE]
> Build OS/arch: Linux 3.8.8-100.fc17.x86_64 x86_64
> Build CPU vendor: GenuineIntel
> Build CPU brand: Intel(R) Core(TM) i5-3550 CPU @ 3.30GHz
> Build CPU family: 6 Model: 58 Stepping: 9
> Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c htt lahf_lm mmx msr nonstop_tsc pcid pclmuldq pdcm popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> C compiler: /usr/lib64/ccache/cc GNU cc (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2)
> C compiler flags: -mavx -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wall -Wno-unused -Wunused-value -march=core-avx-i -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast -O3 -DNDEBUG
> C++ compiler: /usr/lib64/ccache/c++ GNU c++ (GCC) 4.7.2 20120921 (Red Hat 4.7.2-2)
> C++ compiler flags: -mavx -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wall -Wno-unused -Wunused-value -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast -O3 -DNDEBUG
> CUDA compiler: nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2012 NVIDIA Corporation;Built on Fri_Sep_21_17:28:58_PDT_2012;Cuda compilation tools, release 5.0, V0.2.1221
> CUDA driver: 5.0
> CUDA runtime: 5.0
>
>
> ~/programs/gromacs-4.6.2/bin/mdrun -o gpu.log -s gpu.tpr -v
>
> Back Off! I just backed up md.log to ./#md.log.1#
> Reading file gpu.tpr, VERSION 4.6.2 (single precision)
> Using 1 MPI thread
> Using 4 OpenMP threads
>
> 1 GPU detected:
> #0: NVIDIA GeForce GT 640, compute cap.: 3.0, ECC: no, stat: compatible
>
> 1 GPU auto-selected for this run: #0
>
>
> Back Off! I just backed up traj.xtc to ./#traj.xtc.1#
>
> Back Off! I just backed up ener.edr to ./#ener.edr.1#
> starting mdrun 'triazole'
> 100000 steps, 100.0 ps.
> Segmentation fault (core dumped)
>
>
>
> .log file:
>
> Using a Gaussian width (1/beta) of 0.320163 nm for Ewald
> Cut-off's: NS: 1.004 Coulomb: 1 LJ: 1
> System total charge: 0.000
> Generated table with 1002 data points for Ewald.
> Tabscale = 500 points/nm
> Generated table with 1002 data points for LJ6.
> Tabscale = 500 points/nm
> Generated table with 1002 data points for LJ12.
> Tabscale = 500 points/nm
> Generated table with 1002 data points for 1-4 COUL.
> Tabscale = 500 points/nm
> Generated table with 1002 data points for 1-4 LJ6.
> Tabscale = 500 points/nm
> Generated table with 1002 data points for 1-4 LJ12.
> Tabscale = 500 points/nm
>
> Using CUDA 8x8 non-bonded kernels
>
> Potential shift: LJ r^-12: 1.000 r^-6 1.000, Ewald 1.000e-05
> Initialized non-bonded Ewald correction tables, spacing: 6.52e-04 size: 1536
>
> Removing pbc first time
> Pinning threads with an auto-selected logical core stride of 1
> Center of mass motion removal mode is Linear
> We have the following groups for center of mass motion removal:
> 0: rest
>
> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
> G. Bussi, D. Donadio and M. Parrinello
> Canonical sampling through velocity rescaling
> J. Chem. Phys. 126 (2007) pp. 014101
> -------- -------- --- Thank You --- -------- --------
>
> There are: 18206 Atoms
> Initial temperature: 299.372 K
>
> Started mdrun on node 0 Mon Jun 3 16:09:59 2013
>
> Step Time Lambda
> 0 0.00000 0.00000
>
> Energies (kJ/mol)
> Bond Angle Proper Dih. Ryckaert-Bell. LJ-14
> 2.04376e+04 2.58927e+04 9.76653e+00 1.50764e+02 -5.89359e+02
> Coulomb-14 LJ (SR) Coulomb (SR) Coul. recip. Potential
> -2.69867e+05 -3.86496e+04 -4.94959e+04 3.97160e+03 -3.08139e+05
> Kinetic En. Total Energy Temperature Pressure (bar)
> 6.80836e+04 -2.40055e+05 2.99864e+02 -1.42021e+02
>
>
>
>
> That's basically it; not much information, hence my email to this list…
>
> cheers, Johannes
>
>
> --
> Dipl. Phys. Johannes Wagner
> PhD Student, MBM Group
>
> Klaus Tschira Lab (KTL)
> Max Planck Partner Institute for Computational Biology (PICB)
> 320 YueYang Road
> 200031 Shanghai, China
>
> phone: +86-21-54920475
> email: Johannes at picb.ac.cn
>
> and
>
> Heidelberg Institute for Theoretical Studies
> HITS gGmbH
> Schloß-Wolfsbrunnenweg 35
> 69118 Heidelberg
> Germany
>
> phone: +49-6221-533254
> fax: +49-6221-533298
> email: johannes.wagner at h-its.org
>
> http://www.h-its.org
> _________________________________________________
>
> Amtsgericht Mannheim / HRB 337446
> Managing Directors:
> Dr. h.c. Klaus Tschira
> Prof. Dr.-Ing. Andreas Reuter
>
> On 03.06.2013, at 16:01, Szilárd Páll <szilard.pall at cbr.su.se> wrote:
>
>> Thanks for reporting this.
>>
>> The best would be a redmine bug with a tpr, the command-line invocation
>> for reproduction, as well as the log output, so we can see which software
>> and hardware configuration you are using.
>>
>> Cheers,
>> --
>> Szilárd
>>
>>
>> On Mon, Jun 3, 2013 at 2:46 PM, Johannes Wagner
>> <johannes.wagner at h-its.org> wrote:
>>> Hi there,
>>> trying to set up gmx-4.6.2, compiled with cuda 5.0.35 and gcc 4.7.2 on Fedora Linux, but it only gives me a "segmentation fault (core dump)" on mdrun startup. The same compile options on gmx 4.6.1 give me a working mdrun. Did anyone encounter the same problem?
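>>>
>>> For reference, the configure step was roughly the following (a minimal
>>> sketch; the CUDA install path is an assumption, not the exact one used):
>>>
>>>   # select gcc 4.7.2 and the CUDA 5.0 toolkit, with GPU support enabled
>>>   CC=gcc CXX=g++ cmake .. -DGMX_GPU=ON \
>>>       -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-5.0
>>>   make && make install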
>>>
>>> Thanks, Johannes
>>>
>