[gmx-users] Best performance with 0 cores for PME calculation

Nicolas nsapay at ucalgary.ca
Sat Jan 10 20:32:36 CET 2009


Berk Hess wrote:
> Hi,
>
> Setting -npme 2 is ridiculous.
> mdrun estimates the number of PME nodes by itself when you do not
> specify -npme.
> In most cases you need 1/3 or 1/4 of the nodes doing PME.
> The default -npme guess of mdrun is usually not bad,
> but it might need to be tuned a bit.
> At the end of the md.log file you find the relative PP/PME load,
> so you can see in which direction you might need to change -npme,
> if necessary.
Actually, I have tested -npme values ranging from 0 to 5, and 2 is well
representative of what happens. For example, with 5 cores doing PME, the
performance reaches a plateau at 14-15 cores. So, setting -npme to 0
systematically gives the best results. I have also tested -1: with -npme
set to -1, the performance is the same as for 0 up to 8 cores. Above that,
the automatic guess is not as efficient.
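For reference, the runs were launched along the lines of the commands below.
This is only a sketch: the MPI launcher, the mdrun binary name and the file
names (dopc_npt) are illustrative rather than the actual job script, and the
exact wording of the PP/PME load report in md.log differs between versions.

    # Let mdrun estimate the number of dedicated PME nodes (default, -npme -1):
    mpiexec -n 16 mdrun -deffnm dopc_npt

    # Force an explicit number of PME-only nodes, interleaving PP and PME
    # processes across the nodes:
    mpiexec -n 16 mdrun -deffnm dopc_npt -npme 4 -ddorder interleave

    # No dedicated PME nodes: every process does both PP and PME work:
    mpiexec -n 16 mdrun -deffnm dopc_npt -npme 0

    # Afterwards, look at the relative PP/PME load reported near the end
    # of the log file:
    grep -i "pme" dopc_npt.log | tail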

Nicolas
>
> Berk
>
> > Date: Fri, 9 Jan 2009 18:37:37 -0700
> > From: nsapay at ucalgary.ca
> > To: gmx-users at gromacs.org
> > Subject: Re: [gmx-users] Best performance with 0 cores for PME calculation
> >
> > Nicolas wrote:
> > > Hello,
> > >
> > > I'm trying to do a benchmark with Gromacs 4 on our cluster, but I
> > > don't completely understand the results I obtain. The system I used
> > > is a 128 DOPC bilayer hydrated by ~18800 SPC waters, for a total of
> > > ~70200 atoms. The size of the system is 9.6x9.6x10.1 nm^3. I'm using
> > > the following parameters (collected into an .mdp sketch after the
> > > list):
> > >
> > > * nstlist = 10
> > > * rlist = 1
> > > * Coulombtype = PME
> > > * rcoulomb = 1
> > > * fourier spacing = 0.12
> > > * vdwtype = Cutoff
> > > * rvdw = 1
> > >
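> > > Written out as a partial .mdp sketch (only the parameters listed
> > > above; everything else is left at its default, and the key spellings
> > > are as in the GROMACS 4 manual):
> > >
> > > ; neighbour searching
> > > nstlist         = 10
> > > rlist           = 1.0
> > > ; electrostatics (PME)
> > > coulombtype     = PME
> > > rcoulomb        = 1.0
> > > fourierspacing  = 0.12
> > > ; van der Waals
> > > vdwtype         = Cut-off
> > > rvdw            = 1.0
> > >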
> > > The cluster itself has 2 processors per node connected by 100 MB/s
> > > Ethernet. I'm using mpiexec to run Gromacs. When I use -npme 2
> > > -ddorder interleave, I get:
> > Small correction: I gave the wrong cluster specifications above. There
> > are 4 cores per node, and the nodes communicate over Infiniband.
> > > ncore   Perf (ns/day)   PME (%)
> > >     1       0.00            0
> > >     2       0.00            0
> > >     3       0.00            0
> > >     4       1.35           28
> > >     5       1.84           31
> > >     6       2.08           27
> > >     8       2.09           21
> > >    10       2.25           17
> > >    12       2.02           15
> > >    14       2.20           13
> > >    16       2.04           11
> > >    18       2.18           10
> > >    20       2.29            9
> > >
> > > So, above 6-8 cores, the PP nodes spend too much time waiting for the
> > > PME nodes and the performance reaches a plateau. When I use -npme 0,
> > > I get:
> > >
> > > ncore   Perf (ns/day)   PME (%)
> > >     1       0.43           33
> > >     2       0.92           34
> > >     3       1.34           35
> > >     4       1.69           36
> > >     5       2.17           33
> > >     6       2.56           32
> > >     8       3.24           33
> > >    10       3.84           34
> > >    12       4.34           35
> > >    14       5.05           32
> > >    16       5.47           34
> > >    18       5.54           37
> > >    20       6.13           36
> > >
> > > I obtain much better performance when there are no PME nodes, while I
> > > was expecting the opposite. Does someone have an explanation for that?
> > > Does that mean domain decomposition is useless below a certain
> > > real-space cutoff? I'm quite confused.
> > >
> > > Thanks,
> > > Nicolas
> > >
> > >
> >
>
