[gmx-users] Regarding Gromacs 5.0.3 parallel computation

Bikash Ranjan Sahoo bikash.bioinformatics at gmail.com
Wed Dec 10 07:22:38 CET 2014


Dear All,
    I have installed GROMACS 5.0.3 on our cluster and would like to thank
Dr. Mark for his valuable suggestions and guidance. I am now facing problems
with computation speed in 5.0.3. A comparative run of the same system in
GROMACS 4.5.5 and 5.0.3 on the same cluster, using an equal number of
nodes, was extremely slow for the latter. The commands I used for
installation are pasted below.


cmake .. -DCMAKE_INSTALL_PREFIX=/user1/GROMACS-5.0.3 -DGMX_MPI=ON
-DGMX_THREAD_MPI=ON -DGMX_PREFER_STATIC_LIBS=ON -DGMX_BUILD_OWN_FFTW=ON
-DGMX_X11=OFF -DGMX_CPU_ACCELERATION=SSE4.1
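
For reference, a minimal configure line for the 5.0 series on an
SSE4.1-capable cluster might look roughly like the one below. This is only a
sketch of what I understand should be equivalent: I believe
-DGMX_CPU_ACCELERATION was renamed to -DGMX_SIMD in the 5.0 series, and that
-DGMX_THREAD_MPI is ignored when -DGMX_MPI=ON, but I am not certain whether
either of these affects performance.

cmake .. -DCMAKE_INSTALL_PREFIX=/user1/GROMACS-5.0.3 -DGMX_MPI=ON \
    -DGMX_BUILD_OWN_FFTW=ON -DGMX_X11=OFF -DGMX_SIMD=SSE4.1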

I tried a small simulation in GROMACS 4.5.5 using 30 cores for 200 ps. The
computation time was 4.56 minutes. The command used was "dplace -c
0-29 mdrun -v -s md.tpr -c md.gro -nt 30 &".

Next I ran the same system using GROMACS 5.0.3. The command used was
"dplace -c 0-29 mpirun -np 30 mdrun_mpi -v -s md.tpr -c md.gro". The
simulation was extremely slow and took 37 minutes to complete the same 200
ps of MD (an alternative invocation I could try is sketched after this
paragraph).
Even the energy minimization of a small protein, which converges in 4.5.5 in
a few seconds, takes a long time in 5.0.3. Kindly suggest where the problem
might be. Is there a problem in my installation procedure (the cmake
command)?
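
For completeness, an alternative 5.0.3 invocation I could try, letting mdrun
handle the pinning instead of dplace (assuming one OpenMP thread per MPI rank
and the mdrun_mpi binary name from my build), would be roughly:

mpirun -np 30 mdrun_mpi -v -s md.tpr -c md.gro -ntomp 1 -pin on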

Thanking You
In anticipation of your reply.
Bikash, Osaka, Japan


P.S. The "dplace 0-29" prefix is for serial assignment of CPUs on my
cluster. Kindly ignore it if you are using qsub.

