[gmx-users] GROMACS with MPI/GAMMA
Tony Ladd
ladd at che.ufl.edu
Thu Dec 7 16:02:56 CET 2006
I recently ran GROMACS benchmarks (villin and DPPC) using Gigabit Ethernet
with the TCP and GAMMA protocols. GAMMA gives significantly better scaling
than TCP, as can be seen from the sample timings below. The serial results
are in the usual ns/day units, and the speedup is measured with respect to
the serial time.
The nodes were dual-core P4D (3.0 GHz) with Intel PRO/1000 NICs; the switch
was an Extreme Networks x450a-48t. I also compared with dual-Opteron 275
nodes (4 cores per node) and an Infiniband interconnect. Multi-node runs
use all the cores on each node.
Serial performance (ns/day):
            Intel P4D   Opteron 275
DPPC        0.209       0.257
Villin      10.10       11.49
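For concreteness, here is a minimal sketch (in Python, assuming only that
GROMACS reports performance in ns/day) of how the speedup figures below
follow from the serial rates above; the parallel rate in the example is
back-calculated from the reported numbers, not a separate measurement.

# Minimal sketch: for a fixed simulation length, speedup with respect to
# the serial time equals the ratio of parallel to serial rates (ns/day).

def speedup(parallel_ns_per_day: float, serial_ns_per_day: float) -> float:
    """Speedup relative to the serial run."""
    return parallel_ns_per_day / serial_ns_per_day

# Illustrative values: the serial DPPC rate on the P4D nodes is 0.209 ns/day
# (table above); a parallel rate of ~2.70 ns/day corresponds to the 12.9x
# speedup reported for MPI/GAMMA on 16 CPUs in the table below.
print(round(speedup(2.70, 0.209), 1))  # 12.9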
Speedup DPPC:
                 Intel P4D                   Opteron 275
CPUs    LAM     OpenMPI   MPI/GAMMA          Infiniband
  2     2.33    2.35      2.33               2.33
  4     4.27    4.31      4.34               4.39
  6      -       -        5.74                -
  8     7.52    7.52      7.85               8.23
 12     9.50    7.88      10.5               9.42
 16     11.2    7.67      12.9               10.4
 20      -      7.85      14.3                -
 24     8.17    9.24      15.4               12.7
 32     4.44    11.0      16.2               13.4
 40      -      9.17      15.0                -
 48      -      6.70      13.2               13.8
 64      -      4.17      10.0               11.5
Speedup Villin:
                 Intel P4D                   Opteron 275
CPUs    LAM     OpenMPI   MPI/GAMMA          Infiniband
  2     1.94    1.87      2.05               1.95
  4     2.52    2.33      2.73               3.27
  6      -       -        2.85                -
  8     2.85    2.47      3.46               4.24
 12     2.52    2.15      3.33               4.00
 16     1.94    1.79      2.74               4.00
 20      -      1.40      2.20                -
 24     0.32    1.08      1.73               3.00
 32      -      0.66      1.11               2.00
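
One way to read the tables is in terms of parallel efficiency (speedup
divided by CPU count); the short Python snippet below just does that
arithmetic on the 32-CPU row of the DPPC table above.

# Parallel efficiency = speedup / CPUs, taken from the 32-CPU row of the
# DPPC table above (LAM, OpenMPI and MPI/GAMMA on the P4D cluster,
# Infiniband on the Opteron cluster).
dppc_speedup_32cpu = {
    "LAM/TCP": 4.44,
    "OpenMPI/TCP": 11.0,
    "MPI/GAMMA": 16.2,
    "Infiniband": 13.4,
}

for interconnect, s in dppc_speedup_32cpu.items():
    print(f"{interconnect:12s} speedup {s:5.2f}  efficiency {s / 32:.0%}")

# MPI/GAMMA retains ~51% efficiency at 32 CPUs, compared with ~14% (LAM)
# and ~34% (OpenMPI) over TCP, and ~42% for Infiniband.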
-------------------------------
Tony Ladd
Chemical Engineering
University of Florida
PO Box 116005
Gainesville, FL 32611-6005
Tel: 352-392-6509
FAX: 352-392-9513
Email: tladd at che.ufl.edu
Web: http://ladd.che.ufl.edu