[gmx-developers] Gromacs 3.3.1 parallel benchmarking

Carsten Kutzner ckutzne at gwdg.de
Tue Aug 15 10:29:34 CEST 2006


Hi Michael,

in principle you should be able to get the same scaling
with 3.3.x as with 3.0. The changes in the 3.3 version
that affect parallelism are mainly in the PME part, and
they should give you *better* scaling than the old
version if you use PME. But I suppose you use cut-off
electrostatics with the DPPC benchmark?
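
For reference, the electrostatics method is set with the
coulombtype option in the .mdp file; a plain cut-off run
versus a PME run would look roughly like this (the radii
and spacing below are placeholder values, not the ones
from the benchmark input):

  ; plain cut-off electrostatics
  coulombtype     = Cut-off
  rcoulomb        = 1.0

  ; particle-mesh Ewald
  coulombtype     = PME
  rcoulomb        = 1.0
  fourierspacing  = 0.12
  pme_order       = 4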

The network switch you are using can also have an
impact. I have done benchmarks with DPPC recently on a
similar setup (but with PME) and found that Ethernet
flow control has to be activated in the switch to get
good scaling on more than two nodes. That is probably
also important for cut-off simulations, at least if you
run on a large number of nodes.
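
On the node side you can check (and often set) the NIC's
flow control with ethtool; how to enable it on the switch
itself depends on the vendor. For example, assuming the
interface is called eth0:

  # show the current pause-frame (flow control) settings
  ethtool -a eth0

  # try to enable RX/TX flow control on the NIC
  ethtool -A eth0 rx on tx on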

Is your setup such that all nodes are connected to the
same switch? If one group of nodes is connected to switch
A, which in turn is connected to switch B where the other
nodes reside, you get additional latency, and possibly
other network traffic, that can ruin your scaling as
well.
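
A quick sanity check is to compare round-trip times
between a pair of nodes on the same switch and a pair
that has to cross the inter-switch link (the node names
below are just placeholders):

  # same switch
  ping -c 10 node02

  # across the inter-switch link
  ping -c 10 node17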

Carsten


Michael Haverty wrote:
> Hello all,
>   I'm doing some benchmarking of gromacs 3.3.1 on SUSE
> 9 systems using Intel Xeon processors on Gigabit
> ethernet, but have been unable to reproduce the
> scaling at 
> http://www.gromacs.org/gromacs/benchmark/scaling-benchmarks.html
> for Gromacs 3.0.0 and am trying to diagnose why.  I'm
> getting sublinear scaling on distributed
> single-processor 3.4 GHz Intel Xeons with gigabit
> connections.  I'm compiling with the 9.x versions of the
> Intel compilers and have tried a wide variety of FFT and
> BLAS libraries, with no success in reproducing the
> linear scaling shown in the online benchmarking
> results for the "large DPPC membrane system".
>   Have any changes in the code been implemented since
> 3.0.0 that would likely change this scaling behavior
> and/or has anyone done similar parallel benchmarking
> with 3.3.1?  We'd like to start using this code for
> systems of up to hundreds of millions of atoms, but are
> currently limited by this poor scaling.
>   Thanks for any input or suggestions you can provide!
> Mike
> 

-- 
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne



