[gmx-users] running MD on gpu (Fatal error)
Andrew Bostick
andrew.bostick1 at gmail.com
Mon Mar 6 19:57:19 CET 2017
Dear Gromacs users,
I am running MD on GPU using the following command line:
gmx_mpi mdrun -nb gpu -v -deffnm gpu_md
But I encountered the following error:
GROMACS version: VERSION 5.1.3
Precision: single
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 32)
"gpu_md.log" 319L, 14062C 1,1
Top
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
grpopts:
nrdf: 16999.9 999171
ref-t: 300 300
tau-t: 1 1
annealing: No No
annealing-npoints: 0 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0 0 0 0
energygrp-flags[ 1]: 0 0 0 0
energygrp-flags[ 2]: 0 0 0 0
energygrp-flags[ 3]: 0 0 0 0
Using 1 MPI process
Using 192 OpenMP threads
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
Source code file:
/root/gromacs_source/gromacs-5.1.3/src/programs/mdrun/resource-division.cpp,
line: 571
Fatal error:
Your choice of 1 MPI rank and the use of 192 total threads leads to the use
of 192 OpenMP threads, whereas we expect the optimum to be with more MPI
ranks with 1 to 6 OpenMP threads. If you want to run with this many OpenMP
threads, specify the -ntomp option. But we suggest to increase the number
of MPI ranks.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
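For reference, the two remedies the error message itself suggests correspond to command lines like the following (a sketch only; the rank and thread counts assume a single 192-core node launched with `mpirun`, and should be adapted to the actual hardware):

```
# Option 1: keep 1 MPI rank and explicitly accept 192 OpenMP threads
gmx_mpi mdrun -nb gpu -ntomp 192 -v -deffnm gpu_md

# Option 2 (what the error recommends): more MPI ranks with 1-6
# OpenMP threads each, e.g. 32 ranks x 6 threads = 192 total
mpirun -np 32 gmx_mpi mdrun -nb gpu -ntomp 6 -v -deffnm gpu_md
```

Note that with multiple ranks sharing a single GPU, GROMACS 5.1 may need the ranks mapped to the GPU explicitly via `-gpu_id`; the optimal rank/thread split depends on the node.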
What is the reason for this error?
How can I fix it?
Best,
Andrew