[gmx-users] MPI Scaling Issues

Christoph Klein ctk3b at virginia.edu
Fri Feb 3 19:20:03 CET 2012

Hi all,

I am running a water/surfactant system with just under 100000 atoms using
MPI on a local cluster and not getting the scaling I was hoping for. The
cluster consists of 8 core xeon nodes and I'm running gromacs 4.5 with
mpich2-gnu. I've tried running a few benchmarks using 100ps runs and get
the following results:

Threads:  8    16   24   32   40   48   56   64
hr/ns:    15   18   53   54   76   117  98   50
Each set of 8 threads is assigned to one node; the 8-thread run was performed
without MPI. On the 16-thread runs I have tried every permissible value of
-npme, and in each case the results were worse than when I left it
unspecified.
The fact that I am seeing negative scaling makes me suspect something is
wrong with my setup. Any tips on what I could try?
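For reference, here is a sketch of how I'm launching the runs (the binary name mdrun_mpi and the run name topol are placeholders for my actual setup):

```shell
# Two nodes, 16 MPI ranks, letting mdrun choose PME ranks automatically:
mpirun -np 16 mdrun_mpi -deffnm topol

# The same run with an explicit number of PME-only ranks, e.g. 4:
mpirun -np 16 mdrun_mpi -deffnm topol -npme 4
```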
Many thanks,
Christoph Klein
University of Virginia
B.S. Chemical Engineering
