[gmx-users] problem: gromacs run on gpu

Szilárd Páll pall.szilard at gmail.com
Fri Jul 7 15:16:38 CEST 2017


You've got a pretty strange beast there: 4 CPU sockets with 24 cores each,
one very fast GPU, and two rather slow ones (about 3x slower than the first).

If you want to do a single run on this machine, I suggest trying to
partition the ranks across the GPUs so that you get a decent balance, e.g.
you can try one of the following (see the command sketch after the list):
- 12 ranks with 8 threads each, 6 ranks using GPU 0 and 3 each using GPUs 1 and 2, or
- 16 ranks with 6-12 threads each, 10/3/3 ranks on GPUs 0/1/2, respectively.
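
As a rough sketch of the first option (assuming Open MPI and the new_gpu.tpr
input from your log below; in 5.1 the -gpu_id string assigns one GPU id per
PP rank on the node, so here six ranks map to GPU 0 and three each to GPUs
1 and 2):

  # 12 PP ranks x 8 OpenMP threads = 96 cores, ranks split 6/3/3 over GPUs 0/1/2
  mpirun -np 12 gmx_mpi mdrun -ntomp 8 -gpu_id 000000111222 -nb gpu -v -deffnm new_gpu

and analogously for the second option:

  # 16 PP ranks x 6 OpenMP threads = 96 cores, ranks split 10/3/3 over GPUs 0/1/2
  mpirun -np 16 gmx_mpi mdrun -ntomp 6 -gpu_id 0000000000111222 -nb gpu -v -deffnm new_gpu

The exact rank/thread counts are worth benchmarking on your system.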





--
Szilárd

On Fri, Jul 7, 2017 at 1:13 PM, leila karami <karami.leila1 at gmail.com>
wrote:

> Dear Gromacs users,
>
> I installed GROMACS 5.1.3 with GPU support on a Rocks cluster system.
>
> After running the command:
>
> gmx_mpi mdrun -nb gpu -v -deffnm old_gpu
>
> I got the following output:
> =============================================================
> GROMACS:      gmx mdrun, VERSION 5.1.3
> Executable:   /home/karami_leila1/513/gromacs/bin/gmx_mpi
> Data prefix:  /home/karami_leila1/513/gromacs
> Command line:
>   gmx_mpi mdrun -nb gpu -v -deffnm new_gpu
>
>
> Running on 1 node with total 96 cores, 192 logical cores, 3 compatible GPUs
> Hardware detected on host cschpc.ut.ac.ir (the node of MPI rank 0):
>   CPU info:
>     Vendor: GenuineIntel
>     Brand:  Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz
>     SIMD instructions most likely to fit this hardware: AVX2_256
>     SIMD instructions selected at GROMACS compile time: AVX2_256
>   GPU info:
>     Number of GPUs detected: 3
>     #0: NVIDIA TITAN X (Pascal), compute cap.: 6.1, ECC:  no, stat: compatible
>     #1: NVIDIA Tesla K40c, compute cap.: 3.5, ECC:  no, stat: compatible
>     #2: NVIDIA Tesla K40c, compute cap.: 3.5, ECC:  no, stat: compatible
>
> Reading file new_gpu.tpr, VERSION 5.1.3 (single precision)
> Using 1 MPI process
> Using 192 OpenMP threads
>
> 3 compatible GPUs are present, with IDs 0,1,2
> 1 GPU auto-selected for this run.
> Mapping of GPU ID to the 1 PP rank in this node: 0
>
>
> NOTE: potentially sub-optimal launch configuration, gmx_mpi started with less
>       PP MPI process per node than GPUs available.
>       Each PP MPI process can use only one GPU, 1 GPU per node will be used.
>
>
> -------------------------------------------------------
> Program gmx mdrun, VERSION 5.1.3
> Source code file: /root/gromacs_source/gromacs-5.1.3/src/programs/mdrun/resource-division.cpp, line: 571
>
> Fatal error:
> Your choice of 1 MPI rank and the use of 192 total threads leads to the use
> of 192 OpenMP threads, whereas we expect the optimum to be with more MPI
> ranks with 2 to 6 OpenMP threads. If you want to run with this many OpenMP
> threads, specify the -ntomp option. But we suggest to increase the number
> of MPI ranks.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Halting program gmx mdrun
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> =============================================================
>
> How to resolve this problem?
>
> Any help will be highly appreciated.

