[gmx-users] No improvement in scaling on introducing flow control
himanshu khandelia
hkhandelia at gmail.com
Thu Oct 25 13:00:16 CEST 2007
Hi,
We tried turning on flow control on the switches of our local cluster
(www.dcsc.sdu.dk) but were unable to achieve any improvement in
scale-up whatsoever. I was wondering if you folks could shed some
light on how we should proceed. (We have not installed the all-to-all
patch yet.)
The cluster architecture is as follows:
##########
* Computing nodes
160x Dell PowerEdge 1950 1U rackmountable servers with 2x 2.66 GHz
Intel Woodcrest CPUs, 4 GB RAM, 2x 160 GB HDD (7200 rpm, 8 MB buffer,
SATA150), 2x Gigabit Ethernet
40x Dell PowerEdge 1950 1U rackmountable servers with 2x 2.66 GHz
Intel Woodcrest CPUs, 8 GB RAM, 2x 160 GB HDD (7200 rpm, 8 MB buffer,
SATA150), 2x Gigabit Ethernet
##########
* Switches
9 D-link SR3324
2 D-link SRi3324
The switches are organised in two stacks, each connected to the
infrastructure switch with an 8 Gb/s LACP trunk.
##########
* Firmware Build on the switches: 3.00-B16
There are newer firmware builds available, but according to the update
logs, there is no change to the IEEE flow control protocol in the
newer firmware.
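For what it's worth, IEEE 802.3x flow control only helps if both link
partners negotiate it, so we also plan to verify the NIC side on the
nodes, not just the switch ports. A minimal check on Linux with
ethtool (assuming the cluster interface is eth0):

  # show current pause-frame (flow control) settings for eth0
  ethtool -a eth0
  # enable receive and transmit flow control on eth0
  ethtool -A eth0 rx on tx on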
##########
* Tests (run with OpenMPI, not LAM/MPI)
DPPC bilayer system of ~40,000 atoms, with PME and cutoffs, 1 fs time
step. The scale-up data are below; a sketch of the invocation we use
follows the table. We are also currently running tests with larger
systems.
# Procs   ns/day   Scale-up
   1      0.526      1
   2      1.0        1.90
   4      1.768      3.36
   8      1.089      2.07
  16      0.39       0.74
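For reference, the kind of command line we use for these runs (the
mdrun_mpi binary name, file names, and eth0 are placeholders for our
local setup; with GROMACS 3.3-style parallelism the processor count is
already fixed at the grompp stage):

  # preprocess for 16 processors
  grompp -np 16 -f grompp.mdp -c conf.gro -p topol.top -o dppc.tpr
  # force OpenMPI onto the TCP transport over the GigE interface
  mpirun -np 16 --mca btl tcp,self --mca btl_tcp_if_include eth0 \
         mdrun_mpi -np 16 -s dppc.tpr

Pinning the TCP BTL to a specific interface at least rules out OpenMPI
picking the wrong NIC on the dual-GigE nodes.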
Any input would be very helpful, thank you.
Best,
-himanshu