Re: [gmx-developers] Gromacs 3.3.1 parallel benchmarking
friedel at ipfdd.de
Tue Aug 15 09:55:54 CEST 2006
I am also trying to do some benchmarking, on a SLES 9 system with AMD Opteron dual-core processors (2.2 GHz), but there are still some crashes when running the DPPC test with 2 or 8 processors; I am trying to find the subroutine where the crash happens. I am using an InfiniBand network with the MPICH derivative MVAPICH. Nevertheless, the test with 16 processors achieved 13.335 GFlops, versus 1.175 GFlops with one processor.
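From the two GFlops figures quoted above, the speedup and parallel efficiency of the 16-process run follow directly. A minimal sketch (the numbers come from the message; the helper function is mine, for illustration only):

```python
def parallel_efficiency(flops_n, flops_1, n):
    """Return (speedup, efficiency) of an n-process run vs. a serial run."""
    speedup = flops_n / flops_1
    return speedup, speedup / n

# 16 processors: 13.335 GFlops; 1 processor: 1.175 GFlops
speedup, eff = parallel_efficiency(13.335, 1.175, 16)
print(f"speedup = {speedup:.2f}, efficiency = {eff:.1%}")
# -> speedup = 11.35, efficiency = 70.9%
```

So despite the crashes at intermediate process counts, the 16-process run retains roughly 71% parallel efficiency.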
Dr. Peter Friedel
Hohe Str. 6, 01069 Dresden
Institut für Polymerforschung Dresden e.V.
email: friedel at ipfdd.de
>>> mghav at yahoo.com 15.08.06 1.05 >>>
I'm doing some benchmarking of Gromacs 3.3.1 on SUSE 9 systems using Intel Xeon processors over Gigabit Ethernet, but have been unable to reproduce the scaling results published for Gromacs 3.0.0 and am trying to diagnose why. I'm getting sublinear scaling on distributed single-processor 3.4 GHz Intel Xeons with gigabit connections. I'm compiling with the 9.x versions of the Intel compilers and have tried a wide variety of FFT and BLAS libraries, with no success in reproducing the linear scaling shown in the online benchmarking results for the "large DPPC membrane system".
Have any changes been made to the code since 3.0.0 that would be likely to alter this scaling behavior, and/or has anyone done similar parallel benchmarking with 3.3.1? We'd like to start using this code for systems of up to hundreds of millions of atoms, but are currently limited by this poor scaling.
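One way to quantify "sublinear" here is to ask what serial fraction an observed speedup implies under Amdahl's law. This is an illustrative sketch, not a model of Gromacs internals; the 16-process speedup of about 11.35 is taken from the Opteron/InfiniBand numbers earlier in this thread:

```python
def serial_fraction(speedup, n):
    """Serial fraction s implied by speedup S on n processors.

    Amdahl's law: S = 1 / (s + (1 - s)/n)  =>  s = (n/S - 1) / (n - 1)
    """
    return (n / speedup - 1.0) / (n - 1.0)

s = serial_fraction(11.35, 16)
print(f"implied serial fraction: {s:.1%}")
# -> implied serial fraction: 2.7%
```

On slower interconnects such as Gigabit Ethernet, communication overhead effectively inflates this fraction, which is consistent with scaling degrading faster there than on InfiniBand.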
Thanks for any input or suggestions you can provide!