[gmx-users] gromacs 5.1.2 MPI performance
Mark Abraham
mark.j.abraham at gmail.com
Wed Sep 14 10:41:02 CEST 2016
Hi,
On Thu, Sep 8, 2016 at 12:09 PM Stephen Chan <hcsinfo.2009 at googlemail.com>
wrote:
> Hello,
>
> I am compiling an MPI version of GROMACS 5.1.2 on a computer cluster.
> The compilation seems OK. However, when running MPI jobs, I ran into some
> issues:
>
> 1) mpirun -n 112 gmx_mpi grompp -f 0.mdp -o 0.tpr -n -c input.pdb
> Certainly using 112 cores for a simple task doesn't make sense. The
> problem is that the output 0.tpr was overwritten ~16 times. Is this a sign
> that my GROMACS build doesn't have MPI enabled?
>
>
Don't ask for 112 ranks for a serial task like grompp: each rank runs it
independently, so the same output file gets written and overwritten over and
over. Run grompp on its own and save the parallel launch for mdrun.
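A quick sanity check on both points, as a sketch (file names copied from your
own commands; the exact wording of the version header can vary between
builds):

    # a real-MPI build reports an MPI library here, not thread_mpi
    gmx_mpi --version | grep -i "MPI library"

    # grompp is serial: run it once, with no mpirun wrapper
    gmx_mpi grompp -f 0.mdp -o 0.tpr -n -c input.pdb

Then reserve mpirun for mdrun, which is the only step that parallelizes.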
> 2) Next, I got a domain decomposition error (again...!) when running:
> mpirun -n 112 gmx_mpi mdrun -s 0.tpr -x 1.xtc -g 1.log -v -c 1.gro >&
> 1.info
> The issue went away when I used only 8 cores, but then the speed drops to
> 10+ ns/day, which is far from ideal. I'm expecting ~100 ns/day with 112
> cores. Is there any other way to boost the speed?
>
>
We can't tell from the information provided. Parallelization via domain
decomposition isn't magic, and the reason it didn't succeed is given in the
log file, probably immediately before your error message. Likely your
system is just too small or unsuitably shaped for that many domains. Running
hybrid MPI + OpenMP with a small number of OpenMP threads per rank may be
fruitful, because it reduces the number of domains needed.
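For example, a sketch that keeps all 112 cores busy while using only 28
domains (the 28 x 4 split is only a guess; tune it against what the log file
reports for your system and network):

    # 28 MPI ranks, each running 4 OpenMP threads = 112 cores
    mpirun -n 28 gmx_mpi mdrun -ntomp 4 -s 0.tpr -x 1.xtc -g 1.log -c 1.gro -v

The log also prints the domain decomposition grid mdrun chose, which tells
you how far the rank count can be pushed.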
Mark
> It would be nice if someone could give me some hints.
>
> Regards,
> Stephen