[gmx-users] Gromacs 4 Scaling Benchmarks...
vivek sharma
viveksharma.iitb at gmail.com
Tue Nov 11 12:06:06 CET 2008
Hi Carsten,
I have also tried scaling GROMACS over a number of nodes, but could not get it
to scale beyond 20 processors (on 20 nodes, i.e. 1 processor per node).
I do not quite understand what "optimizing PME for the number of nodes" means:
do we change the PME parameters of the MD simulation, or use some other
Coulomb type instead? Please explain, and suggest how I should go about it.
With thanks,
Vivek
2008/11/10 Carsten Kutzner <ckutzne at gwdg.de>
> Hi,
> most likely the Ethernet is the problem here. I compiled some numbers for the
> DPPC benchmark in the paper "Speeding up parallel GROMACS on high-latency
> networks",
> http://www3.interscience.wiley.com/journal/114205207/abstract?CRETRY=1&SRETRY=0
> which are for version 3.3, but PME will behave similarly. If you are not
> already using separate PME nodes, this is worth a try, since on Ethernet the
> performance depends drastically on the number of nodes involved in the FFT.
> I also have a tool which finds the optimal PME settings for a given number of
> nodes by varying the number of PME nodes and the Fourier grid settings. I can
> send it to you if you want.
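>
> For illustration, a minimal sketch of doing this by hand with GROMACS 4 (the
> process and PME node counts are only placeholders, and the MPI-enabled mdrun
> binary may be named differently on your cluster):
>
>   # dedicate 8 of the 32 MPI processes to the long-range PME work
>   mpirun -np 32 mdrun -npme 8 -s topol.tpr
>
>   ; in the .mdp, scaling the cut-off and the grid spacing by the same factor
>   ; keeps the PME accuracy roughly constant while shifting work from the
>   ; latency-sensitive FFT to the real-space part
>   rcoulomb        = 1.4    ; was 1.2
>   fourierspacing  = 0.14   ; was 0.12 (the default)
>
> The best split depends strongly on the system and the network, which is what
> the tool scans for you.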
>
> Carsten
>
>
> On Nov 9, 2008, at 10:30 PM, Yawar JQ wrote:
>
> I was wondering if anyone could comment on these benchmark results for the
> d.dppc benchmark?
>
> Nodes   Cutoff (ns/day)   PME (ns/day)
>   4         1.331             0.797
>   8         2.564             1.497
>  16         4.5               1.92
>  32         8.308             0.575
>  64        13.5               0.275
> 128        20.093               -
> 192        21.6                 -
>
> It seems to scale relatively well up to 32-64 nodes without PME. This seems
> slightly better than the benchmark results for Gromacs 3 on
> www.gromacs.org.
>
> Can someone comment on the magnitude of the performance hit? The lack of
> scaling with PME is worrying me.
>
> For the PME runs, I set rlist, rvdw, and rcoulomb to 1.2 and left the rest at
> the defaults. I can try some other settings, e.g. a larger spacing for the
> grid, but I'm not sure how much that would help. Is there a more standardized
> system I should use for testing PME scaling?
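>
> For concreteness, the PME-related part of the .mdp looks roughly like this
> (everything not shown is left at the grompp defaults; the default values are
> noted in the comments):
>
>   coulombtype     = PME
>   rcoulomb        = 1.2
>   rvdw            = 1.2
>   rlist           = 1.2
>   fourierspacing  = 0.12   ; default grid spacing in nm
>   pme_order       = 4      ; default interpolation order
>
> Increasing fourierspacing (together with rcoulomb) would be the "larger
> spacing" experiment mentioned above.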
>
> This is with GNU compilers and parallelization with OpenMPI 1.2; I'm not sure
> which FFTW version we're using. The compute nodes are Dell M600 blades with
> 16 GB of RAM and dual quad-core 3 GHz Intel Xeon processors. I believe the
> interconnect is all Ethernet.
>
> Thanks,
> YQ