[gmx-users] Gromacs 4 Scaling Benchmarks...

Yawar JQ yawarq at gmail.com
Sun Nov 9 22:30:51 CET 2008


I was wondering if anyone could comment on these benchmark results for the
d.dppc benchmark?

    Nodes   Cutoff (ns/day)   PME (ns/day)
        4         1.331           0.797
        8         2.564           1.497
       16         4.5             1.92
       32         8.308           0.575
       64        13.5             0.275
      128        20.093            -
      192        21.6              -
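For reference, each run was launched roughly as follows (the binary and .tpr
names here are placeholders; we use the MPI-enabled mdrun build, and the
process count was varied per run):

    mpirun -np 32 mdrun_mpi -s d.dppc.tpr -deffnm d.dppc_np32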

Without PME, it scales relatively well up to 32-64 nodes, which seems
slightly better than the GROMACS 3 benchmark results on www.gromacs.org.

Can someone comment? The magnitude of the performance hit and the lack of
scaling with PME are worrying me.

For the PME runs, I set rlist, rvdw, and rcoulomb to 1.2 and left everything
else at the defaults. I can try other settings, e.g. a larger spacing for the
PME grid, but I'm not sure how much that would help. Is there a more
standardized system I should use for testing PME scaling?
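For concreteness, the relevant part of my .mdp looked roughly like this (only
the cutoffs were changed; the PME-specific lines below are the GROMACS 4
defaults as far as I know):

    ; non-bonded cutoffs (nm)
    rlist           = 1.2
    rvdw            = 1.2
    rcoulomb        = 1.2
    ; electrostatics
    coulombtype     = PME
    fourierspacing  = 0.12   ; default PME grid spacing; larger = coarser grid
    pme_order       = 4      ; default interpolation order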

This is with GNU compilers and OpenMPI 1.2 for parallelization. I'm not sure
which FFTW version we're using. The compute nodes are Dell M600 blades with
16 GB of RAM and dual quad-core 3 GHz Intel Xeon processors. I believe the
interconnect is all Ethernet.
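One way I could check the FFTW linkage (assuming a dynamically linked mdrun;
the binary name may differ on our cluster):

    ldd `which mdrun_mpi` | grep -i fftw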

Thanks,
YQ