[gmx-users] Trouble running 4.6.4. on cpu+gpu but not on gpu alone

rajat desikan rajatdesikan at gmail.com
Tue Dec 10 10:30:59 CET 2013


Dear all,
I recently installed GROMACS 4.6.4 on our cluster. Each node has 12 CPU
cores and 2 GPUs. The build details are given below.

I am able to run GROMACS on the 2 GPUs alone. However, a job that uses both
CPUs and GPUs fails with a fatal error (given below).

Gromacs build:

cmake .. -DGMX_CPU_ACCELERATION=SSE2 -DGMX_GPU=ON -DGMX_BUILD_OWN_FFTW=ON
-DGMX_MPI=ON -DGMX_OPENMP=ON -DGMX_PREFER_STATIC_LIBS=ON
-DCMAKE_INSTALL_PREFIX=/
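
(Side note: the mdrun log below warns that the binary was compiled with SSE2
acceleration although the machine supports SSE4.1. My guess is that rebuilding
with the acceleration flag changed, e.g.

cmake .. -DGMX_CPU_ACCELERATION=SSE4.1 <other flags as above>

would take care of that, but I assume it is unrelated to the crash.)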


*Job 1) Pure GPUs: running*

node=1:gpus=2

mpirun -np 2 mdrun_mpi ./

This runs fine on the 2 GPU cards of the same node.


*Job 2) CPUs+GPUs: crashed*


node=1:ppn=10:gpus=2

mpirun -np 12 mdrun_mpi


This fails with the following fatal error:

“Using 12 MPI processes
Using 2 OpenMP threads per MPI process

Compiled acceleration: SSE2 (Gromacs could use SSE4.1 on this machine, which is better)

2 GPUs detected on host cn1.local:
  #0: NVIDIA Tesla M2090, compute cap.: 2.0, ECC:  no, stat: compatible
  #1: NVIDIA Tesla M2090, compute cap.: 2.0, ECC:  no, stat: compatible

2 GPUs auto-selected for this run.
Mapping of GPUs to the 12 PP ranks in this node: #0, #1

-------------------------------------------------------
Program mdrun_mpi, VERSION 4.6.4
Source code file: /home/rajat/softback/gromacs-4.6.4/src/gmxlib/gmx_detect_hardware.c, line: 372

Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes and GPUs per node.
mdrun_mpi was started with 12 PP MPI processes per node, but only 2 GPUs were detected.
For more information and tips for troubleshooting, please check the GROMACS”
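
If I read the error correctly, mdrun wants the number of PP MPI ranks per node
to match the number of GPUs it detects. Is the intended way to use the
remaining cores something like the lines below? (This is only my guess from the
mdrun 4.6 help; -ntomp and -gpu_id are standard mdrun options, and the thread
count is just what seems to fit a 12-core, 2-GPU node.)

mpirun -np 2 mdrun_mpi -ntomp 6              # 2 PP ranks (one per GPU), 6 OpenMP threads each
mpirun -np 2 mdrun_mpi -ntomp 6 -gpu_id 01   # same, with the GPU-to-rank mapping given explicitly

Or is there a supported way to keep 12 MPI ranks per node and still use the 2 GPUs?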

-- 
Rajat Desikan (Ph.D Scholar)
Prof. K. Ganapathy Ayappa's Lab (no 13),
Dept. of Chemical Engineering,
Indian Institute of Science, Bangalore

