[gmx-users] PME scaling
bjornson at aya.yale.edu
Fri Feb 10 16:24:01 CET 2006
I'm still interested in improving the scaling of gromacs on my GigE
cluster. I'm wondering whether the current state of the CVS head is
such that it would make sense for me to give it a try.
I've been studying a ppt of a talk Carsten gave at Forschungszentrum
Juelich in April 2005. Is there anything more up-to-date that
describes the design changes being made to PME in gromacs?
On 1/11/06, Carsten Kutzner <ckutzne at gwdg.de> wrote:
> Hi Rob,
> a 72k-atom system should definitely scale better on GigE. At least it
> should not be slower on 6 CPUs than on 4. What kind of switch are
> you using? Please check the network settings and make sure that both
> the switch and the network cards have flow control enabled.
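A minimal sketch of how one might check this from a Linux node, assuming ethtool is installed and the interface is eth0 (both assumptions; adjust for your hardware). Changing the settings needs root, and the switch ports have to be configured for flow control separately:

#!/usr/bin/env python
# Sketch: query and optionally enable Ethernet flow control (pause frames)
# via ethtool. The interface name is an assumption; pass it as an argument.
import subprocess
import sys

iface = sys.argv[1] if len(sys.argv) > 1 else "eth0"

# Show the current pause parameters (same as: ethtool -a eth0)
subprocess.run(["ethtool", "-a", iface], check=True)

# Enable RX and TX pause frames on the NIC (same as: ethtool -A eth0 rx on tx on)
subprocess.run(["ethtool", "-A", iface, "rx", "on", "tx", "on"], check=True)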
> On Tue, 10 Jan 2006, Robert Bjornson wrote:
> > Hi David,
> > Sure, thanks for your interest, here is a bit more info:
> > I'm running gromacs 3.3 with a patched pme.c (to fix the pme-order bug).
> > I'm using LAM 7.1.1 under PBS Pro, running on a cluster of 3.2 GHz
> > EM64T Xeons. The interconnect is GigE.
> > The model is 72k atoms, using PME with pme_order=6.
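For reference, a minimal .mdp excerpt consistent with the setup described above; pme_order = 6 is from the post, the other values are typical illustrative settings and not taken from the original message:

; PME electrostatics settings (illustrative sketch)
coulombtype      = PME
pme_order        = 6       ; as stated above
fourierspacing   = 0.12    ; nm, illustrative value
rcoulomb         = 0.9     ; nm, illustrative value
rvdw             = 0.9     ; nm, illustrative value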
> > Here is the performance I'm seeing (I've rounded numbers):
> > cpus   steps/hour   steps/hour/cpu   typical node CPU utilization (2 cpus/node)
> >    8        6200             775     0.5-0.9
> >    6        8500            1400     0.75-1.08
> >    4       25000            6250     2.0
> >    1        8870            8870     1.0
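Read as speedup and parallel efficiency relative to the single-CPU rate of 8870 steps/hour, the PME rows above work out roughly as follows (a sketch using the rounded figures quoted):

# Speedup and parallel efficiency of the PME runs, relative to 1 CPU.
single_cpu_rate = 8870  # steps/hour on 1 CPU, from the table above

for cpus, rate in [(4, 25000), (6, 8500), (8, 6200)]:
    speedup = rate / single_cpu_rate
    efficiency = speedup / cpus
    print(f"{cpus} CPUs: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")

# Prints roughly: 4 CPUs 2.82x (70%), 6 CPUs 0.96x (16%), 8 CPUs 0.70x (9%),
# i.e. beyond 4 CPUs the PME runs are slower than a single CPU.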
> > If I run the same model with cutoff instead of PME, I see:
> >    8       82000           10274     1.6-1.8
> >    1       24000           24000     1.0
> > So, the performance I'm seeing on more than 4 CPUs with PME is pretty
> > bad. How does this compare to what you expected? I can provide you
> > with more info if you'd like. Thanks very much for any insight you
> > might have.
> > Sincerely,
> > Rob Bjornson
> > On 1/10/06, David van der Spoel <spoel at xray.bmc.uu.se> wrote:
> > > Robert Bjornson wrote:
> > > > Hi,
> > > >
> > > > I'm experiencing very poor scaling when using PME on gromacs-3.3, and
> > > > looking through the list indicates that this is a known issue with
> > > > that release. However, there was some indication that work has been
> > > > done on parallelizing PME, and that the top of CVS might contain a
> > > > version that is worth trying.
> > >
> > > In addition to Erik's answer: can you be more specific? We've seen quite
> > > decent scaling on up to 16 processors, though it depends strongly on the
> > > size of the system and the interconnect.
> > > >
> > > > Has anyone tried this? Did your PME performance improve? If so, did
> > > > you simply take the top of cvs, or is there a tag that is more likely
> > > > to work successfully?
> > > >
> > > > thanks,
> > > >
> > > > Rob Bjornson
> > >
> > >
> > > --
> > > David.
> > > ________________________________________________________________________
> > > David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
> > > Dept. of Cell and Molecular Biology, Uppsala University.
> > > Husargatan 3, Box 596, 75124 Uppsala, Sweden
> > > phone: 46 18 471 4205 fax: 46 18 511 755
> > > spoel at xray.bmc.uu.se spoel at gromacs.org http://xray.bmc.uu.se/~spoel
> > > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics Department
> Am Fassberg 11
> 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> eMail ckutzne at gwdg.de