[gmx-users] GROMACS 4.6.7 not running on more than 16 MPI threads

Agnivo Gosai agnivogromacs14 at gmail.com
Fri Feb 27 01:29:57 CET 2015

Dr. Abraham and other users

Thanks for the detailed explanation.

I compiled using the Intel MPI C and C++ compilers, as is evident from the cmake
flags in my first mail on this topic.
I am a novice in this area, and after you pointed out the mix of MPI versions
I searched online and found that "mpiexec" is associated with MPICH. I suspect
that the PBS scheduler of my university cluster uses that MPI implementation to
distribute jobs across the compute nodes.
So, is there any way around this problem? If I recompile GROMACS, what specific
things should I be careful of, and which flags should I use?
I looked through the installation guide but could not figure out a solution. At
present I am forced to use only one node and 16 processes (np = 16 in mpirun).
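For what it is worth, a minimal rebuild sketch along these lines might look as
follows; the module name, tarball path, and install prefix are assumptions for
illustration, not taken from my actual cluster, so they would need adjusting:

```shell
# Hypothetical sketch: configure GROMACS 4.6.x against the same MPI
# implementation whose launcher the scheduler actually uses.
# Module name and paths below are assumptions; adjust for your site.
module load openmpi                      # or whichever MPI module the site provides
tar xzf gromacs-4.6.7.tar.gz
cd gromacs-4.6.7
mkdir build && cd build
cmake .. \
  -DGMX_MPI=ON \
  -DCMAKE_C_COMPILER=mpicc \
  -DCMAKE_CXX_COMPILER=mpicxx \
  -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6.7
make -j 8 && make install
```

The key point, as I understand it, is that the mpicc/mpicxx wrappers used at
configure time must come from the same MPI implementation as the mpirun/mpiexec
used at job-submission time.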

Regarding compilation with OpenMPI instead of Intel MPI: is it compatible with
MPICH? Also, since MPICH is already available on the cluster, why does it cause
problems with the present installation of GROMACS?
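In case it helps others diagnose the same mismatch, one way to check which MPI
implementation each launcher on a cluster belongs to is to ask it directly
(these are standard commands; the output of course varies per site):

```shell
# Identify which MPI implementation each launcher belongs to.
which mpirun mpiexec     # where each launcher lives on $PATH
mpirun --version         # Open MPI and MPICH both report their identity here
mpiexec --version
mpicc -show              # MPICH / Intel MPI: print the underlying compiler and libraries
# (Open MPI spells the same query "mpicc --showme")
```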

I did not find an installation guide for it on the GROMACS website, but I
found this link:


I think it is well explained and should work with GROMACS 4.6.7. Any
suggestions?

Thanks & Regards
Agnivo Gosai
Grad Student, Iowa State University.

More information about the gromacs.org_gmx-users mailing list