[gmx-users] Scaling Benchmarks

Jens Krüger mercutio at uni-paderborn.de
Mon Aug 1 17:35:04 CEST 2005


Hello,

we want to add some new numbers to the scaling section. We used GROMACS
3.2.1 on the new PC2 HPCLine, with dual Intel Xeon 3.2 GHz EM64T nodes and
InfiniBand HCA PCIe with Scali MPI
(http://wwwcs.uni-paderborn.de/pc2/). For the DPPC benchmark system,
using -shuffle and -sort, we obtained the following results:

CPU                 1      2      4      8     12     16     20     24     32
ps/24h            143    322    640   1269   1834   2367   2814   3212   3945
rel. scal. (%)    100    113    112    111    107    103     98     94     86

The performance on each node is as expected, and the fast network yields
superb scaling.
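
For reference, each run was set up roughly as sketched below. This is
only an outline, not our actual job script: the input file names and the
MPI launcher call are placeholders and depend on the local setup. grompp
prepares the run input for N processors with -shuffle and -sort, and
mdrun is then started in parallel:

#! /bin/sh
# Rough outline of one benchmark run on N CPUs (N=8 here);
# input file names are placeholders, and the mdrun binary name
# depends on how GROMACS was installed.
N=8
grompp -np $N -shuffle -sort -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
mpirun -np $N mdrun -np $N -s topol.tpr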

How far along is the support for the 64-bit Xeon (EM64T) in the latest
CVS versions, and can one expect a performance gain?


Best wishes,

Jens Krüger




The following settings were used during compilation:

FFTW

#! /bin/sh
export F77=ifort
export CC=icc
export MPICC="mpicc -ccl $CC"
export CFLAGS="-O2 -no_cpprt"
# Note: the line below overrides the CFLAGS setting above.
export CFLAGS="-O3 -ip"
export FFLAGS="-O3 -ip"

./configure \
    --enable-float \
    --enable-type-prefix \
    --enable-mpi \
    --prefix=/opt/pc2/fftw
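
After configure, the library is built and installed in the usual way.
With --enable-float and --enable-type-prefix this should give the
single-precision libraries (libsfftw, libsrfftw) under /opt/pc2/fftw,
which the GROMACS configure below picks up through CPPFLAGS and LDFLAGS:

make
make install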


GROMACS

#! /bin/sh
#export F77=ifort
export CC=icc
export CFLAGS="-O2 -no_cpprt"
# Point GROMACS at the FFTW installation built above.
export CPPFLAGS=-I/opt/pc2/fftw/include
export LDFLAGS=-L/opt/pc2/fftw/lib
export MPICC="mpicc -ccl $CC"
./configure \
    --enable-mpi \
    --prefix=/opt/pc2/gromacs
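
The GROMACS build then follows the standard steps as well (again only a
sketch; with --enable-mpi the 3.x Makefiles should also provide
mdrun-only targets for adding a parallel mdrun to an existing serial
installation):

make
make install
# alternatively, for an MPI-only mdrun:
#   make mdrun
#   make install-mdrun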
--
========================================================
Jens Krüger
mercutio at uni-paderborn.de
work:
Warburger Str. 100 * 33098 Paderborn * Tel.: 05251-602183
private:
Ellersteg 3 * 33100 Paderborn-Dahl * Tel.: 0176-20042288
========================================================




