[gmx-users] running MD on gpu (Fatal error)

Mark Abraham mark.j.abraham at gmail.com
Tue Mar 7 00:17:57 CET 2017


Hi,

Somehow your execution environment has, e.g., OMP_NUM_THREADS set to 192.
That most likely comes from your MPI launch command or your job scheduler,
but we can't see enough of the GROMACS log output to say more.
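
A minimal way to check and work around this from a bash shell (the rank
and thread counts below are illustrative, not prescriptive; adjust them
to your node, and note that several ranks sharing one GPU must be mapped
to it explicitly):

  # See whether the environment is forcing 192 OpenMP threads
  echo $OMP_NUM_THREADS

  # Either unset it so mdrun can choose sensible defaults...
  unset OMP_NUM_THREADS
  gmx_mpi mdrun -nb gpu -v -deffnm gpu_md

  # ...or keep one rank and cap the OpenMP thread count with -ntomp
  gmx_mpi mdrun -nb gpu -ntomp 6 -v -deffnm gpu_md

  # ...or, as the error message suggests, use more MPI ranks with
  # fewer threads each, e.g. 4 ranks x 6 threads, all on GPU 0
  mpirun -np 4 gmx_mpi mdrun -nb gpu -ntomp 6 -gpu_id 0000 -v -deffnm gpu_md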

Mark

On Mon, 6 Mar 2017 19:57 Andrew Bostick <andrew.bostick1 at gmail.com> wrote:

> Dear Gromacs users,
>
> I am running MD on gpu using following command line:
>
> gmx_mpi mdrun -nb gpu -v -deffnm gpu_md
>
> But I encountered the following:
>
> GROMACS version:    VERSION 5.1.3
> Precision:          single
> Memory model:       64 bit
> MPI library:        MPI
> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
> "gpu_md.log" 319L, 14062C                                     1,1
> Top
>    userint2                       = 0
>    userint3                       = 0
>    userint4                       = 0
>    userreal1                      = 0
>    userreal2                      = 0
>    userreal3                      = 0
>    userreal4                      = 0
> grpopts:
>    nrdf:     16999.9      999171
>    ref-t:         300         300
>    tau-t:           1           1
> annealing:          No          No
> annealing-npoints:           0           0
>    acc:            0           0           0
>    nfreeze:           N           N           N
>    energygrp-flags[  0]: 0 0 0 0
>    energygrp-flags[  1]: 0 0 0 0
>    energygrp-flags[  2]: 0 0 0 0
>    energygrp-flags[  3]: 0 0 0 0
>
> Using 1 MPI process
> Using 192 OpenMP threads
>
>
> -------------------------------------------------------
> Program gmx mdrun, VERSION 5.1.3
> Source code file:
> /root/gromacs_source/gromacs-5.1.3/src/programs/mdrun/resource-division.cpp,
> line: 571
>
> Fatal error:
> Your choice of 1 MPI rank and the use of 192 total threads leads to the use
> of 192 OpenMP threads, whereas we expect the optimum to be with more MPI
> ranks with 1 to 6 OpenMP threads. If you want to run with this many OpenMP
> threads, specify the -ntomp option. But we suggest to increase the number
> of MPI ranks.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
>
> What is the reason for this error?
>
> How can I fix it?
>
> Best,
> Andrew