[gmx-users] parallelizing GROMACS 2018.4

praveen kumar praveenche at gmail.com
Fri Nov 23 15:45:06 CET 2018


Dear all,
I have successfully installed GROMACS 2018.4 on my local PC and at the HPC
center (without GPU) using the following commands:
CMAKE_PREFIX_PATH=/home/sappidi/software/fftw-3.3.8 \
/home/sappidi/software/cmake-3.13.0/bin/cmake .. \
    -DCMAKE_INCLUDE_PATH=/home/sappidi/software/fftw-3.3.8/include \
    -DCMAKE_LIBRARY_PATH=/home/sappidi/software/fftw-3.3.8/lib \
    -DGMX_GPU=OFF \
    -DGMX_MPI=ON \
    -DGMX_OPENMP=ON \
    -DGMX_X11=ON \
    -DCMAKE_INSTALL_PREFIX=/home/sappidi/software/gromacs-2018.4 \
    -DCMAKE_CXX_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicxx \
    -DCMAKE_C_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicc
make && make install
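
To double-check that the MPI build was picked up, I looked at the version
header of the installed binary (same install prefix as above); as far as I
understand, it should list the MPI library used:

/home/sappidi/software/gromacs-2018.4/bin/gmx_mpi --version | grep -i "MPI library"
# as far as I understand, a real MPI build should report "MPI library: MPI"
# rather than "MPI library: thread_mpi"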
The sample job runs perfectly without using mpirun, but when I try to run on
multiple processors on a single node or across multiple nodes, I get the
following error message:

"Fatal error:
Your choice of number of MPI ranks and amount of resources results in using
20
OpenMP threads per rank, which is most likely inefficient. The optimum is
usually between 1 and 6 threads per rank. If you want to run with this
setup,
specify the -ntomp option. But we suggest to change the number of MPI
ranks."

I have tried to rectify the problem in several ways but could not succeed.
The sample job script for my HPC run is shown below.

#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=1:ppn=20
#PBS -j oe
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
cd $PBS_O_WORKDIR
export I_MPI_FABRICS=shm:dapl
export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/


/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 20 -machinefile $PBS_NODEFILE \
    /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -v \
    -s NVT1.tpr -deffnm test9
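
Following the -ntomp hint in the error message, this is the variant of the
script I was planning to try next (I am not sure it is correct; I also notice
the I_MPI_* exports are Intel MPI settings even though gmx_mpi was built
against OpenMPI, so I have left them out here):

#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=1:ppn=20
#PBS -j oe
cd $PBS_O_WORKDIR

# pin one OpenMP thread per MPI rank: 20 ranks x 1 thread = 20 cores
export OMP_NUM_THREADS=1

/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 20 -machinefile $PBS_NODEFILE \
    /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -ntomp 1 -v \
    -s NVT1.tpr -deffnm test9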

I am wondering what could be the reason.

Thanks in advance,
Praveen

