[gmx-users] GPU running problem with GMX-4.6 beta2

Szilárd Páll szilard.pall at cbr.su.se
Mon Dec 17 18:59:21 CET 2012


Hi Albert,

Thanks for the testing.

Two last questions:
- Which version are you using: the beta2 release or the latest git? If it's
the former, getting the latest git might help.
- Do you happen to be using GMX_GPU_ACCELERATION=None (you shouldn't!)?
A bug triggered only with this setting has been fixed recently.
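
For reference, a quick way to check which acceleration setting your build was configured with is to grep the CMake cache (the build directory path below is just a placeholder; substitute your own):

```shell
# Look up the acceleration setting recorded at configure time.
# "/path/to/gromacs-build" is a placeholder for your actual build directory.
grep -i ACCELERATION /path/to/gromacs-build/CMakeCache.txt
```

If that shows the setting above, reconfigure with a real acceleration value and rebuild before re-testing.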

If the above doesn't help, please file a bug report and attach a tpr file so
we can reproduce the problem.

Cheers,

--
Szilárd



On Mon, Dec 17, 2012 at 6:21 PM, Albert <mailmd2011 at gmail.com> wrote:

> On 12/17/2012 06:08 PM, Szilárd Páll wrote:
>
>> Hi,
>>
>> How about GPU emulation or CPU-only runs? Also, please try setting the
>> number of therads to 1 (-ntomp 1).
>>
>>
>> --
>> Szilárd
>>
>>
> hello:
>
> I am running in GPU emulation mode with the GMX_EMULATE_GPU=1 env. var
> set (and with -ntomp 12 to match the GPU setup more closely), and it failed with this log:
>
> Back Off! I just backed up step33b.pdb to ./#step33b.pdb.2#
>
> Back Off! I just backed up step33c.pdb to ./#step33c.pdb.2#
>
> Wrote pdb files with previous and current coordinates
> [CUDANodeA:20753] *** Process received signal ***
> [CUDANodeA:20753] Signal: Segmentation fault (11)
> [CUDANodeA:20753] Signal code: Address not mapped (1)
> [CUDANodeA:20753] Failing at address: 0x106ae6a00
>
> [1]    Segmentation fault            mdrun_mpi -v -s nvt.tpr -c nvt.gro -g
> nvt.log -x nvt.xtc -ntomp 12
>
>
>
>
> I also tried setting the number of threads to 1 (-ntomp 1); it failed with
> the following messages:
>
>
> Back Off! I just backed up step33c.pdb to ./#step33c.pdb.1#
>
> Wrote pdb files with previous and current coordinates
> [CUDANodeA:20740] *** Process received signal ***
> [CUDANodeA:20740] Signal: Segmentation fault (11)
> [CUDANodeA:20740] Signal code: Address not mapped (1)
> [CUDANodeA:20740] Failing at address: 0x1f74a96ec
> [CUDANodeA:20740] [ 0] /lib64/libpthread.so.0(+0xf2d0) [0x2b351d3022d0]
> [CUDANodeA:20740] [ 1] /opt/gromacs-4.6/lib/libmd_mpi.so.6(+0x11020f)
> [0x2b351a99c20f]
> [CUDANodeA:20740] [ 2] /opt/gromacs-4.6/lib/libmd_mpi.so.6(+0x111c94)
> [0x2b351a99dc94]
> [CUDANodeA:20740] [ 3] /opt/gromacs-4.6/lib/libmd_mpi.so.6(gmx_pme_do+0x1d2e)
> [0x2b351a9a1bae]
> [CUDANodeA:20740] [ 4] /opt/gromacs-4.6/lib/libmd_mpi.so.6(do_force_lowlevel+0x1eef)
> [0x2b351a97262f]
> [CUDANodeA:20740] [ 5] /opt/gromacs-4.6/lib/libmd_mpi.so.6(do_force_cutsVERLET+0x1756)
> [0x2b351aa04736]
> [CUDANodeA:20740] [ 6] /opt/gromacs-4.6/lib/libmd_mpi.so.6(do_force+0x3bf)
> [0x2b351aa0a0df]
> [CUDANodeA:20740] [ 7] mdrun_mpi(do_md+0x8133) [0x4334c3]
> [CUDANodeA:20740] [ 8] mdrun_mpi(mdrunner+0x19e9) [0x411639]
> [CUDANodeA:20740] [ 9] mdrun_mpi(main+0x17db) [0x4373db]
> [CUDANodeA:20740] [10] /lib64/libc.so.6(__libc_start_main+0xfd)
> [0x2b351d52ebfd]
> [CUDANodeA:20740] [11] mdrun_mpi() [0x407f09]
> [CUDANodeA:20740] *** End of error message ***
>
> [1]    Segmentation fault            mdrun_mpi -v -s nvt.tpr -c nvt.gro -g
> nvt.log -x nvt.xtc -ntomp 1
>
>
>
>
> --
> gmx-users mailing list    gmx-users at gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the www
> interface or send it to gmx-users-request at gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
