[gmx-users] best price/performance for GMX?

David van der Spoel spoel at xray.bmc.uu.se
Mon Sep 24 19:15:30 CEST 2001

On Mon, 24 Sep 2001, Bert de Groot wrote:

>We're about to extend our Linux cluster and are interested to hear whether
>there's any specific hardware that you, fellow gromacs-ers, recommend.
>So far we have a cluster of 40 dual PII/PIII nodes using standard
>fast ethernet (100 Mb/s). Our experience is that with a typical system
>size of about 100,000 atoms using PME, the scaling with LAM-MPI (and also
>MPICH) is not very spectacular. So in practice most jobs run on a single
>two-processor node, which is not too bad given the speed that gromacs
>offers, but for some jobs it would be nice to be able to use more than
>two processors. Would gigabit ethernet or Myrinet help?
>And if yes, how much?
I have recently been testing a dual P3-800 cluster with Scali networking
(www.scali.com). So far I have only run the benchmark we present in the
GROMACS 3 paper (DPPC, 125,000 atoms with twin-range cut-off). Up to 28
nodes the scaling was comparable to or better than the IBM SP2. I will
post the results on the gromacs website once I have run on 32 CPUs too.
That said, for PME the picture is definitely different, for algorithmic
reasons. We basically run up against Amdahl's law, which says that
parallel scaling is ultimately limited by the bit of sequential code that
is left. In the PME case that bit is not so little. The next version of
gromacs will be better here.
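To make the Amdahl's law point concrete, here is a minimal sketch of the formula, S(N) = 1 / (s + (1 - s)/N), where s is the sequential fraction of the work. The numbers below are purely illustrative assumptions, not GROMACS measurements:

```python
def amdahl_speedup(n_procs, serial_fraction):
    """Ideal speedup on n_procs processors when a fraction
    of the work (serial_fraction) cannot be parallelized.
    As n_procs grows, speedup saturates at 1/serial_fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Hypothetical example: if 10% of a PME step were sequential,
# 32 CPUs would give well under 10x speedup, since the asymptotic
# limit is 1/0.10 = 10 no matter how many CPUs you add.
for n in (2, 8, 32, 128):
    print(n, round(amdahl_speedup(n, 0.10), 2))
```

This is why a faster interconnect alone cannot fix PME scaling: it shrinks communication cost, but the sequential fraction still caps the speedup.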

If I get the Scali cluster again some time, I can try the same system with
PME.

Groeten, David.
Dr. David van der Spoel, 	Biomedical center, Dept. of Biochemistry
Husargatan 3, Box 576,  	75123 Uppsala, Sweden
phone:	46 18 471 4205		fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://zorn.bmc.uu.se/~spoel
