[gmx-users] Scaling/performance on Gromacs 4
Mark Abraham
Mark.Abraham at anu.edu.au
Tue Feb 21 01:02:00 CET 2012
On 21/02/2012 8:11 AM, Floris Buelens wrote:
> Poor scaling with MPI on many-core machines can also be due to uneven
> distribution of processes across cores, or to processes being
> wastefully migrated between cores. You might be able to fix this with
> some esoteric mpirun configuration options (--bind-to-core worked for
> me with OpenMPI), but the surest option is to switch to GROMACS 4.5
> and run using thread-level parallelisation, bypassing MPI entirely.
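>
> A minimal sketch of both approaches (binary names and the .tpr file
> are placeholders; --bind-to-core is an OpenMPI 1.4+ option, -nt is
> the GROMACS 4.5 thread-MPI option):
>
>   # MPI run with each rank pinned to a core (OpenMPI):
>   mpirun --bind-to-core -np 64 mdrun_mpi -s topol.tpr
>
>   # GROMACS 4.5 built-in thread parallelisation, no MPI library:
>   mdrun -nt 64 -s topol.tpr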
That can avoid problems arising from MPI performance, but not those
arising from PP-vs-PME load imbalance, or from load imbalance within
the PP ranks. The end of the .log file will show whether these latter
effects are strong contributors. Carsten's suggested solution is one
good option.
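As a rough illustration (assuming the default log name md.log; the
exact log wording varies between versions), you can pull the
load-balance report from the end of the log, and if the PP/PME split
is the problem, override the automatic split with mdrun's -npme flag:

  # Load-balance statistics are printed near the end of the run log:
  tail -n 80 md.log | grep -i load

  # Illustrative only: dedicate 16 of 64 ranks to PME instead of
  # letting mdrun guess the split (-npme -1, the default, guesses):
  mpirun -ssi rpi tcp C mdrun_mpi -npme 16 -s topol.tpr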
Mark
>
>
> ------------------------------------------------------------------------
> From: Sara Campos <srrcampos at gmail.com>
> To: gmx-users at gromacs.org
> Sent: Monday, 20 February 2012, 17:12
> Subject: [gmx-users] Scaling/performance on Gromacs 4
>
> Dear GROMACS users
>
> My group has had access to a quad-processor, 64-core machine (4 x
> Opteron 6274 @ 2.2 GHz, 16 cores each), and I ran some performance
> tests with the following specifications:
>
> System size: 299787 atoms
> Number of MD steps: 1500
> Electrostatics treatment: PME
> Gromacs version: 4.0.4
> MPI: LAM
> Command run: mpirun -ssi rpi tcp C mdrun_mpi ...
>
> #CPUs   Time (s)   Steps/s
>    64    195.000      7.69
>    32    192.000      7.81
>    16    275.000      5.45
>     8    381.000      3.94
>     4    751.000      2.00
>     2   1001.000      1.50
>     1   2352.000      0.64
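>
> For reference, speedup T(1)/T(n) and efficiency (speedup/n) computed
> from these timings:
>
>   16 cores: 2352/275  =  8.6x -> 53%
>   32 cores: 2352/192  = 12.3x -> 38%
>   64 cores: 2352/195  = 12.1x -> 19%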
>
> The scaling is not good, but the weirdest part is that 64 processors
> perform the same as 32. I have seen the scaling plots from Dr. Hess
> in the GROMACS 4 paper in JCTC, and I do not understand why this is
> happening. Can anyone help?
>
> Thanks in advance,
> Sara