[gmx-users] strange GPU performance

Szilárd Páll pall.szilard at gmail.com
Tue Jul 12 15:10:38 CEST 2016


How's that related to what Mark and I said?
--
Szilárd


On Mon, Jul 11, 2016 at 4:02 PM, Albert <mailmd2011 at gmail.com> wrote:
> yes.
>
> But the job failed from time to time:
>
> vol 0.87! imb F 34% step 33600, will finish Sat Jul 23 17:26:12 2016
>
> -------------------------------------------------------
> Program gmx mdrun, VERSION 5.1.2
> Source code file:
> /home/albert/Downloads/gromacs/gromacs-5.1.2/src/gromacs/mdlib/nbnxn_cuda/nbnxn_cuda.cu,
> line: 688
>
> Fatal error:
> cudaStreamSynchronize failed in cu_blockwait_nb: unknown error
>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Halting parallel program gmx mdrun on rank 0 out of 4
>
> -------------------------------------------------------
> Program gmx mdrun, VERSION 5.1.2
> Source code file:
> /home/albert/Downloads/gromacs/gromacs-5.1.2/src/gromacs/mdlib/nbnxn_cuda/nbnxn_cuda.cu,
> line: 688
>
> Fatal error:
> cudaStreamSynchronize failed in cu_blockwait_nb: unknown error
>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Halting parallel program gmx mdrun on rank 1 out of 4
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
> with errorcode 1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
>
>
>
>
> On 07/11/2016 03:41 PM, Mark Abraham wrote:
>>
>> Hi,
>>
>> Why did you specify 2 MPI ranks with 8 OpenMP threads per rank on a
>> node with 10 cores and 2 GPUs? You want something that fills all the cores
>> (and hyperthreads), e.g.
>>
>> mpirun -np 2 gmx_mpi mdrun
>>
>> or
>>
>> mpirun -np 4 gmx_mpi mdrun
>>
>> There are likely further improvements available along the lines Szilard
>> suggests.
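
For illustration only, a more explicit launch on such a node (10 physical cores, 20 hyperthreads, 2 GPUs) could also spell out the thread and GPU assignment by hand; the -ntomp and -gpu_id values below are a sketch and were not part of the original advice:

mpirun -np 2 gmx_mpi mdrun -ntomp 10 -gpu_id 01

or, with two PP ranks sharing each GPU:

mpirun -np 4 gmx_mpi mdrun -ntomp 5 -gpu_id 0011

Here -ntomp sets the OpenMP threads per MPI rank and -gpu_id lists one GPU id per PP rank, so both variants fill all 20 hardware threads.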

