[gmx-users] best cluster for gromacs

David van der Spoel spoel at xray.bmc.uu.se
Thu Aug 21 18:51:01 CEST 2003


On Thu, 2003-08-21 at 17:57, Bert de Groot wrote:
> Dear all,
> 
> we're about to renew our cluster and I'd like to share our thoughts, and
> would very much appreciate if you could share your experience/thoughts too.
> There have been some posts on this list about this issue, but there have
> been some hardware developments since then, so I figured it would make sense
> to post once again. 
> 
Everyone's favorite topic, spending $$$


> So far we've been very happy with our dual athlons (apart from a few stability
> issues), but scaling is not that great, especially with PME. We have a few nodes
> with myrinet, but even there most jobs run on maximally 2 dual nodes 
> (4 processors), because beyond that the scaling simply breaks. We've also
> played with gigabit ethernet and the M-VIA protocol, but these only yield
> marginal improvements, especially on the faster nodes.
> 
> Some points that we considered are:
> -the latest generation of SCALI cards seem to have a quite promising 
>  price/performance ratio. Does anyone have recent experience with SCALI?
Yes, the numbers on the GROMACS benchmark pages are still valid, but
those are of course for simulations with a cut-off. We have a 200-node
Xeon/Scali cluster in Linköping, on which I routinely run reasonably
large simulations (40,000+ atoms) with PME on 4 dual-processor nodes;
8 processors is still slightly faster than 6. The Scali drivers do not
seem optimally stable, but the engineers are working on it full time.
GROMACS scaling is not perfect, but that is mainly a GROMACS problem;
with better algorithms it will improve greatly.
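
The diminishing returns described above (8 processors only slightly faster than 6) can be made concrete with a quick speedup/efficiency calculation. A minimal sketch; the timings are made-up placeholders, not measured GROMACS numbers:

```python
# Speedup and parallel efficiency from wall-clock timings.
# NOTE: the timings below are illustrative placeholders, not
# measured GROMACS benchmark results.

def speedup(t_serial, t_parallel):
    """Speedup relative to the single-processor run."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup divided by processor count."""
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical wall-clock hours for the same PME run on 1, 4, 6, 8 CPUs.
timings = {1: 24.0, 4: 7.5, 6: 5.6, 8: 5.2}

for n, t in timings.items():
    print(f"{n} procs: speedup {speedup(timings[1], t):.2f}, "
          f"efficiency {efficiency(timings[1], t, n):.0%}")
```

Once the efficiency drops well below, say, 60%, the extra nodes are mostly paying for communication overhead rather than science, which is the point at which the scaling "simply breaks" in practice.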

> -what about quad Xeon boards?
$$$$, and the interconnect (bus) may not be optimal, so
multiprocessor scaling would not be as good as on e.g. IBM hardware.


> -any alternative to scali or myrinet for improved low latency networking?
Didn't Anton report some good results with Gigabit ethernet? Most
machines come with Gigabit nowadays, so it is possible to test first
before forking out $$$ for Scali.


> -can we expect developments in gromacs in the near future that will 
>  reduce the network load/improve scaling? (I don't want to push the developers
>  here, they're doing a splendid job. It's only to optimise the planning).
> 
The things Erik is working on will improve performance on any network.
The question is of course whether it is justified to pay the overhead
of a fast network card (roughly $300-$400) per machine.

I am personally planning to buy some dual Opteron machines, the main
reason being quantum calculations, but these machines also have a far
superior bus and memory interface compared to Xeons (at least
theoretically). It will be a while before I can do testing though, and
initially I won't buy more than two boxes. According to Erik, the
single-precision performance is comparable to similarly clocked P4
machines, but you can only get them at up to 2 GHz (Xeon: 3 GHz).


-- 
Groeten, David.
________________________________________________________________________
Dr. David van der Spoel, 	Dept. of Cell and Molecular Biology
Husargatan 3, Box 596,  	75124 Uppsala, Sweden
phone:	46 18 471 4205		fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
