[gmx-users] Comparing Gromacs versions

Mark Abraham mark.j.abraham at gmail.com
Fri May 17 14:31:39 CEST 2013


On Fri, May 17, 2013 at 2:21 PM, Djurre de Jong-Bruinink <
djurredejong at yahoo.com> wrote:

> > You are doing simulations with a lot of water (and perhaps with charge
> > groups), and that is the case where an unbuffered group scheme has the
> > best performance. How much you like the physics is another story.
>
>
> Thank you for your answer. I didn't realize a system like this is already
> in the "lot of water" regime (it makes sense though, ~95% of the particles
> are water). I could lower the water content a bit by reducing the
> solute-to-box distance (e.g. from 1.5 to 1.2 or 1.0 nm), but that only
> saves a few percent. In practice, does this mean that for any system
> containing soluble proteins the group scheme will still be faster?
>

This is what the group kernels were built for. But beyond a certain number
of cores (as Szilard said), by construction, the group scheme will stop
scaling, and the group and Verlet performance curves will certainly cross.
PME performance also starts to die at large MPI process counts because of
the global inter-PME-node communication, which you can see in the .log
file timing breakdowns. I suspect you are not being fair to the Verlet
scheme at high core counts: by turning off OpenMP, you force the run to use
a high MPI process count.
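For illustration, a minimal sketch of the kind of hybrid MPI+OpenMP setup
meant above (the rank/thread counts are hypothetical examples, not
recommendations; tune them for your hardware and check the .log timing
breakdown):

```shell
# In the .mdp file, select the buffered Verlet scheme
# (supported since GROMACS 4.6):
#   cutoff-scheme = Verlet

# Example: on a 16-core node, instead of 16 MPI ranks with OpenMP off,
# try fewer ranks with several OpenMP threads each, e.g.
mpirun -np 4 mdrun_mpi -ntomp 4 -deffnm topol

# With thread-MPI (single node, no mpirun needed), the equivalent is:
mdrun -ntmpi 4 -ntomp 4 -deffnm topol
```

Fewer MPI ranks reduces the global PME communication cost, while the
OpenMP threads keep all cores busy within each rank.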

Mark


> Groetnis,
> Djurre de Jong
> --
>


