[gmx-users] problem: gromacs run on gpu
leila karami
karami.leila1 at gmail.com
Fri Jul 7 13:13:55 CEST 2017
Dear Gromacs users,
I installed GROMACS 5.1.3 with GPU support on a Rocks cluster system.
After running the command:
gmx_mpi mdrun -nb gpu -v -deffnm new_gpu
I encountered the following error:
=============================================================
GROMACS: gmx mdrun, VERSION 5.1.3
Executable: /home/karami_leila1/513/gromacs/bin/gmx_mpi
Data prefix: /home/karami_leila1/513/gromacs
Command line:
gmx_mpi mdrun -nb gpu -v -deffnm new_gpu
Running on 1 node with total 96 cores, 192 logical cores, 3 compatible GPUs
Hardware detected on host cschpc.ut.ac.ir (the node of MPI rank 0):
CPU info:
Vendor: GenuineIntel
Brand: Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz
SIMD instructions most likely to fit this hardware: AVX2_256
SIMD instructions selected at GROMACS compile time: AVX2_256
GPU info:
Number of GPUs detected: 3
#0: NVIDIA TITAN X (Pascal), compute cap.: 6.1, ECC: no, stat: compatible
#1: NVIDIA Tesla K40c, compute cap.: 3.5, ECC: no, stat: compatible
#2: NVIDIA Tesla K40c, compute cap.: 3.5, ECC: no, stat: compatible
Reading file new_gpu.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 192 OpenMP threads
3 compatible GPUs are present, with IDs 0,1,2
1 GPU auto-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0
NOTE: potentially sub-optimal launch configuration, gmx_mpi started with less PP MPI process per node than GPUs available.
Each PP MPI process can use only one GPU, 1 GPU per node will be used.
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
Source code file: /root/gromacs_source/gromacs-5.1.3/src/programs/mdrun/resource-division.cpp, line: 571
Fatal error:
Your choice of 1 MPI rank and the use of 192 total threads leads to the use
of 192 OpenMP threads, whereas we expect the optimum to be with more MPI
ranks with 2 to 6 OpenMP threads. If you want to run with this many OpenMP
threads, specify the -ntomp option. But we suggest to increase the number
of MPI ranks.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
Halting program gmx mdrun
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
=============================================================
How can I resolve this problem?
Any help will be highly appreciated.
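From the error message, I guess I either have to start more MPI ranks (with 2 to 6 OpenMP threads each) or pass -ntomp explicitly. Below is just my guess at the launch commands, assuming one PP rank per GPU and that the TITAN X can be mixed with the two K40c cards; the rank and thread counts are my own choice, not something from the documentation:

# Guess 1: one MPI rank per GPU, 6 OpenMP threads per rank
mpirun -np 3 gmx_mpi mdrun -ntomp 6 -gpu_id 012 -nb gpu -v -deffnm new_gpu

# Guess 2: keep a single rank and set the thread count explicitly, as the error message mentions
gmx_mpi mdrun -ntomp 192 -nb gpu -v -deffnm new_gpu

Is either of these the correct way to launch mdrun on this machine?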