[gmx-users] gromacs 5.1.2 MPI performance

yunshi11 . yunshi09 at gmail.com
Fri Sep 9 09:41:41 CEST 2016


Can you tell everyone your system size (number of atoms)?

112 cores could be 7 X 16 or 14 X 8, which is indeed an awkward split for
domain decomposition. Have you tried 4 X 8, 6 X 8, or 12 X 8 (i.e. 32, 48,
or 96 cores)? These look more natural to me.
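
For reference, this is roughly how I would launch mdrun on one of those
rounder core counts, e.g. 96 cores as 12 nodes X 8 cores (the node layout
and file names below are only placeholders for your setup):

    # one MPI rank per core; 96 ranks decompose more evenly than 112
    mpirun -n 96 gmx_mpi mdrun -s 0.tpr -x 1.xtc -g 1.log -c 1.gro -v

You can also give mdrun a fixed number of separate PME ranks with -npme,
and the performance table at the end of the .log file will show where the
time is going.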

On Thu, Sep 8, 2016 at 9:08 PM, Stephen Chan <hcsinfo.2009 at googlemail.com>
wrote:

> Hello,
>
> I am compiling an MPI version of gromacs 5.1.2 on a computer cluster. The
> compilation seems OK. However, when running MPI jobs, I ran into some
> issues:
>
> 1) mpirun -n 112 gmx_mpi grompp -f 0.mdp -o 0.tpr -n -c input.pdb
> Certainly, using 112 cores for such a simple task doesn't make sense. The
> problem is that the output 0.tpr was overwritten ~16 times. Is this a sign
> that my gromacs build hasn't enabled MPI?
>
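
Regarding 1): grompp is a serial preprocessing tool, so launching it under
mpirun simply starts 112 independent copies that all write the same 0.tpr,
which is why you see it overwritten; it says nothing about whether MPI is
enabled in your build. Something along these lines should do, assuming the
default topol.top and index.ndx are what you intend:

    # run the preprocessor once, on a single core
    # (or prefix with "mpirun -n 1" if your MPI setup requires it)
    gmx_mpi grompp -f 0.mdp -c input.pdb -n -o 0.tpr

Only mdrun itself needs to be started through mpirun.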
> 2) Next, I got a domain decomposition error (again...!) when running:
> mpirun -n 112 gmx_mpi mdrun -s 0.tpr -x 1.xtc -g 1.log -v -c 1.gro >&
> 1.info
> The issue goes away if I use only 8 cores, but then the speed drops to 10+
> ns/day, which is far from ideal; I'm expecting ~100 ns/day with 112 cores.
> Are there any other ways to boost the speed?
>
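
Regarding 2): the domain decomposition error means the box cannot be split
into that many domains with your cut-offs; the minimum cell size it needs
is printed in the error message and in the .log file. Besides dropping the
rank count, one thing I would try, assuming your nodes have 8 cores each
and your build has OpenMP enabled (the default), is hybrid MPI + OpenMP so
that fewer, larger domains are needed:

    # 14 MPI ranks X 8 OpenMP threads = 112 cores, with far fewer domains
    mpirun -n 14 gmx_mpi mdrun -ntomp 8 -s 0.tpr -x 1.xtc -g 1.log -c 1.gro

Whether this scales well depends on your system size and the interconnect,
which is why the system size matters.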
> It would be nice if someone could give me some hints.
>
> Regards,
> Stephen
>

