[gmx-users] Running Gromacs on GPUs

Giannis Gl igaldadas at gmail.com
Thu May 11 13:00:19 CEST 2017


Dear Gromacs users,

I am trying to run a multiple walkers metadynamics simulation on Gromacs
5.1.4 using a machine that has 12 CPUs and 4 GPUs.
I have compiled Gromacs using the following scheme:

cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON
-DGMX_MPI=on -DGMX_GPU=on -DGMX_USE_OPENCL=on

and the command that I use to run the simulation is as follows:

mpirun -np 4 gmx_mpi mdrun -s topol -multi 4 -gpu_id 0123 -x traj

However, I get the following error with or without the plumed flag:

Running on 1 node with total 6 cores, 12 logical cores, 4 compatible GPUs
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.4
Source code file:
/usr/local/gromacs-5.1.4/src/gromacs/gmxlib/gmx_detect_hardware.cpp, line:
1203

Fatal error:
The OpenCL implementation only supports using exactly one PP rank per node
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

Halting parallel program gmx mdrun on rank 1 out of 4
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

I tried the following command, but I still get the same error:

mpirun -np 4 gmx_mpi mdrun -s topol -multi 4 -gpu_id 0123 -ntomp 1
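(For context, the fatal error says the OpenCL build supports only one PP rank per node, so any four-rank launch on a single node fails regardless of -ntomp. A hedged sketch of what that constraint would allow, spreading the four walkers over four nodes with one rank and one GPU each, is below; the hostfile name and its contents are assumptions, not something from my setup:)

```shell
# Hedged sketch, not a confirmed fix: place exactly one PP rank per node,
# as the OpenCL error demands, with each rank using its node's GPU 0.
# "hosts" is a hypothetical Open MPI hostfile listing four machines.
mpirun -np 4 --map-by ppr:1:node --hostfile hosts \
    gmx_mpi mdrun -s topol -multi 4 -gpu_id 0 -x traj
```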

Has anyone had the same issue at some point, and if so, how did you solve
it?

Best wishes,
Yiannis
