[gmx-users] parallelizing gromacs2018.4

praveen kumar praveenche at gmail.com
Fri Nov 23 15:45:06 CET 2018

Dear all,
I have successfully installed GROMACS 2018.4 on my local PC and at the HPC center
(without GPU support) using these commands:
/home/sappidi/software/cmake-3.13.0/bin/cmake ..
-DGMX_X11=ON -DCMAKE_INSTALL_PREFIX=/home/sappidi/software/gromacs-2018.4
make && make install
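For context, the configure line above does not pass -DGMX_MPI=ON, which is the CMake option that produces the MPI-enabled gmx_mpi binary invoked later in the job script; a default build produces the thread-MPI gmx binary instead. A minimal sketch of an MPI-enabled configure step, reusing the paths from above (the flag choice is an assumption about the intended build, not something shown in the original commands):

```shell
# Sketch: reconfigure with MPI support enabled (-DGMX_MPI=ON is assumed
# to be the intent, since the job script runs gmx_mpi under mpirun).
/home/sappidi/software/cmake-3.13.0/bin/cmake .. \
    -DGMX_MPI=ON \
    -DGMX_X11=ON \
    -DCMAKE_INSTALL_PREFIX=/home/sappidi/software/gromacs-2018.4
make && make install
```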
The sample job runs perfectly without mpirun, but when I try to run on multiple
processors on a single node or across multiple nodes, I get the following error
message:

"Fatal error:
Your choice of number of MPI ranks and amount of resources results in using
OpenMP threads per rank, which is most likely inefficient. The optimum is
usually between 1 and 6 threads per rank. If you want to run with this setup,
specify the -ntomp option. But we suggest to change the number of MPI ranks."

I have tried to fix the problem in several ways but could not.
My sample job script for the HPC run is shown below:

#PBS -N test
#PBS -q mini
#PBS -l nodes=1:ppn=20
#PBS -j oe
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
export I_MPI_FABRICS=shm:dapl
export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/

/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 20 -machinefile $PBS_NODEFILE \
    /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -v \
    -s NVT1.tpr -deffnm test9
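The error is about the product of MPI ranks and OpenMP threads per rank not fitting the resources: with nodes=1:ppn=20, mdrun wants ranks * threads-per-rank to equal the 20 cores requested. A minimal sketch of that arithmetic (the particular split of 10 ranks x 2 threads is a hypothetical example, not a recommendation from the error message):

```shell
# Sketch, assuming nodes=1:ppn=20 as in the PBS script above:
# the number of MPI ranks times OpenMP threads per rank should
# match the 20 cores available on the node.
NRANKS=10   # hypothetical number of MPI ranks (mpirun -np)
NTOMP=2     # hypothetical OpenMP threads per rank (mdrun -ntomp)
echo $((NRANKS * NTOMP))   # prints 20, matching ppn=20
```

With such a split, the mpirun line would pass the matching values, e.g. "mpirun -np 10 ... gmx_mpi mdrun -ntomp 2 ...", so mdrun no longer has to guess the thread count per rank.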

I am wondering what could be the reason.

Thanks in advance
