[gmx-users] Gromacs 4 Scaling Benchmarks...

Mike Hanby mhanby at uab.edu
Mon Nov 10 16:57:18 CET 2008


The FFTW used during compilation was FFTW 3.1.2, compiled with the GNU
compilers.
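
If it helps, the build was configured along these lines (a sketch from memory; the --with-fft flag is how I recall the GROMACS 4 configure script selecting FFTW3, and the install path below is just a placeholder for our local one):

    ./configure --enable-mpi --with-fft=fftw3 \
        CPPFLAGS=-I/opt/fftw/3.1.2/include \
        LDFLAGS=-L/opt/fftw/3.1.2/lib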

 

From: gmx-users-bounces at gromacs.org
[mailto:gmx-users-bounces at gromacs.org] On Behalf Of Yawar JQ
Sent: Sunday, November 09, 2008 3:31 PM
To: gmx-users at gromacs.org
Subject: [gmx-users] Gromacs 4 Scaling Benchmarks...

 

I was wondering if anyone could comment on these benchmark results for
the d.dppc benchmark?

 

Nodes    Cutoff (ns/day)    PME (ns/day)
  4          1.331              0.797
  8          2.564              1.497
 16          4.5                1.92
 32          8.308              0.575
 64         13.5                0.275
128         20.093              -
192         21.6                -

 

It seems to scale relatively well up to 32-64 nodes without PME. This
seems slightly better than the benchmark results for Gromacs 3 on
www.gromacs.org.

 

Can someone comment on this? The magnitude of the performance hit and
the lack of scaling with PME are worrying me.

 

For the PME runs, I set rlist, rvdw, and rcoulomb to 1.2 and left the
rest at the defaults, roughly as in the snippet below. I can try other
settings, e.g. a larger grid spacing, but I'm not sure how much that
would help. Is there a more standardized system I should use for
testing PME scaling?
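
To be concrete, the PME-related part of my .mdp looks something like this sketch (only the 1.2 nm cutoffs are values I actually set; the fourierspacing and pme_order lines just spell out what I believe the defaults are):

    ; non-bonded / PME settings (everything else left at the defaults)
    coulombtype     = PME
    rlist           = 1.2
    rcoulomb        = 1.2
    rvdw            = 1.2
    fourierspacing  = 0.12   ; default grid spacing, in nm
    pme_order       = 4      ; default interpolation order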

 

This is with the GNU compilers and OpenMPI 1.2 for parallelization
(runs launched roughly as shown below); I'm not sure which FFTW we're
using. The compute nodes are Dell M600 blades with 16 GB of RAM and
dual quad-core 3 GHz Intel Xeon processors. I believe the interconnect
is all Ethernet.
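
For reference, the runs are launched roughly like this (the mdrun_mpi binary name and the .tpr file name are placeholders for whatever our install and the benchmark actually use; I haven't experimented with mdrun's -npme option for dedicated PME nodes yet):

    # example: 32 MPI processes; adding -npme N would dedicate N of them to PME
    mpirun -np 32 mdrun_mpi -s topol.tpr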

 

Thanks,

YQ


