[gmx-users] MPI Scaling Issues

Mark Abraham Mark.Abraham at anu.edu.au
Fri Feb 3 23:48:16 CET 2012


On 4/02/2012 5:20 AM, Christoph Klein wrote:
> Hi all,
>
> I am running a water/surfactant system of just under 100,000 atoms 
> using MPI on a local cluster and am not getting the scaling I was 
> hoping for. The cluster consists of 8-core Xeon nodes and I'm running 
> GROMACS 4.5 with mpich2-gnu. I've run a few 100 ps benchmarks and get 
> the following results:
>
> Threads:  8    16   24   32   40   48   56   64
> hr/ns:    15   18   53   54   76   117  98   50

Are you sure you have the right performance metric there (and not 
ns/day or something)?

> Each set of 8 threads is assigned to one node, and the 8-thread run 
> was performed without MPI. I have tried all permissible -npme values 
> on runs with 16 threads; in every case the results were worse than 
> when I didn't specify -npme at all.
> The fact that I am getting negative scaling leads me to believe that 
> something is wrong with my setup. Any tips on what I could try?

The simplest explanation is that your network (or the MPI settings for 
it) is not up to the job. mdrun communicates between nodes every MD 
step, so very low interconnect latency is required, and gigabit 
Ethernet is not good enough for scaling across nodes.
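
If you want to see what your interconnect is actually delivering, a 
simple MPI ping-pong between two nodes will tell you. Here is a minimal 
sketch (plain MPI, nothing GROMACS-specific; the repetition count and 
one-byte message are arbitrary choices, and it must be run with exactly 
two ranks, one on each node):

/* pingpong.c -- rough MPI round-trip latency check.
 * Build:  mpicc pingpong.c -o pingpong
 * Run:    mpiexec -np 2 ./pingpong   (one rank per node)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int    rank, i, nreps = 10000;
    char   buf = 0;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < nreps; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes everything back */
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* one-way latency = half the average round-trip time */
        printf("avg one-way latency: %.2f us\n",
               0.5e6 * (t1 - t0) / nreps);
    }
    MPI_Finalize();
    return 0;
}

If the reported one-way latency is in the tens of microseconds or 
worse, poor scaling across nodes is what you should expect.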

You could also try installing OpenMPI.
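
If the interconnect turns out to be OK, it is still worth getting the 
PME/PP split right. On 16 cores that would look something like this (a 
sketch only; I'm assuming your MPI-enabled binary is called mdrun_mpi 
and your run input is topol.tpr, plus whatever hostfile option your 
MPICH2 installation wants):

  mpiexec -np 16 mdrun_mpi -s topol.tpr -npme 4 -deffnm bench_16

GROMACS 4.5 also ships g_tune_pme, which scans different numbers of 
PME-only nodes for you and reports the fastest setting; see 
g_tune_pme -h for the exact options in your installation.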

Mark