[gmx-users] Strange assignment of atoms to processors with pd

Berk Hess gmx3 at hotmail.com
Wed May 27 11:10:02 CEST 2009


Hi,

This is plain 4.0 code, I presume?
This problem should be fixed then.

But I have now also made vacuum without cut-offs work with domain decomposition in the CVS head.
Compared to a PD run that is well balanced (for instance only a protein, no water), DD is slightly slower.
But DD will be faster than a badly balanced PD system.
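
For completeness, a minimal sketch of how such a vacuum run without cut-offs can be set up and switched between DD and PD. The file names (vac.mdp, conf.gro, topol.top) and the mpirun/mdrun_mpi invocation are only placeholders for whatever your setup uses, and unrelated mdp settings (nsteps etc.) are omitted:

  ; in-vacuo run without cut-offs (all-vs-all interactions)
  pbc          = no
  nstlist      = 0
  ns_type      = simple
  rlist        = 0
  rcoulomb     = 0
  rvdw         = 0
  comm_mode    = angular   ; stop overall rotation, useful in vacuum
  constraints  = h-bonds

  grompp -f vac.mdp -c conf.gro -p topol.top -o vac.tpr
  mpirun -np 8 mdrun_mpi -deffnm vac        # domain decomposition (DD, the default)
  mpirun -np 8 mdrun_mpi -deffnm vac -pd    # particle decomposition (PD)

In plain 4.0, vacuum without cut-offs needs the -pd form; with the CVS head mentioned above, the DD form should work as well.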

Berk

> Date: Wed, 27 May 2009 11:04:49 +0200
> From: spoel at xray.bmc.uu.se
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] Strange assignment of atoms to processors with pd
> 
> Erik Marklund wrote:
> > David van der Spoel skrev:
> >> Erik Marklund wrote:
> > > I should add that this problem only seems to arise when the analyte is 
> > > covered with a thin sheet of water. When simulating a dry analyte I 
> > > get good scaling. In the latter case the charges, and therefore the 
> > > topology, are slightly different.
> >> How about vsites? Did you happen to turn them off as well in the 
> >> vacuum case?
> > Turned off in all cases. The VSites mentioned in the log file are the 
> > 4th particle of the TIP4P water molecules.
> OK. Did you try a one-step run with -debug?
> It may give more info on the partitioning.
> 
> >>>
> >>> /Erik
> >>>
> >>> Erik Marklund skrev:
> >>>> Hi,
> >>>>
> >>>> I'm simulating non-periodic systems in vacuo, using constrained 
> >>>> h-bonds and particle decomposition. For some of my simulations the 
> >>>> CPU usage seems far from optimal. The first CPU gets no atoms, while 
> >>>> the second one gets plenty and the remaining CPUs get less than I 
> >>>> expected. Is this a bug?
> >>>>
> >>>>
> >>>> An excerpt from the log file:
> >>>>
> >>>> There are: 2911 Atoms
> >>>> There are: 317 VSites
> >>>> splitting topology...
> >>>> There are 999 charge group borders and 318 shake borders
> >>>> There are 318 total borders
> >>>> Division over nodes in atoms:
> >>>> 0 1960 212 212 212 212 212 208
> >>>> Walking down the molecule graph to make constraint-blocks
> >>>> CPU= 0, lastcg= -1, targetcg= 499, myshift= 1
> >>>> CPU= 1, lastcg= 681, targetcg= 182, myshift= 0
> >>>> CPU= 2, lastcg= 734, targetcg= 235, myshift= 7
> >>>> CPU= 3, lastcg= 787, targetcg= 288, myshift= 6
> >>>> CPU= 4, lastcg= 840, targetcg= 341, myshift= 5
> >>>> CPU= 5, lastcg= 893, targetcg= 394, myshift= 4
> >>>> CPU= 6, lastcg= 946, targetcg= 447, myshift= 3
> >>>> CPU= 7, lastcg= 998, targetcg= 499, myshift= 2
> >>>> pd->shift = 7, pd->bshift= 0
> >>>> Division of bonded forces over processors
> >>>> CPU 0 1 2 3 4 5 6 7
> >>>> Workload division
> >>>> nnodes: 8
> >>>> pd->shift: 7
> >>>> pd->bshift: 0
> >>>> Nodeid  atom0  #atom   cg0   #cg
> >>>>      0      0      0     0     0
> >>>>      1      0   1960     0   682
> >>>>      2   1960    212   682    53
> >>>>      3   2172    212   735    53
> >>>>      4   2384    212   788    53
> >>>>      5   2596    212   841    53
> >>>>      6   2808    212   894    53
> >>>>      7   3020    208   947    52
> >>>>
> >>>> …
> >>>> Total Scaling: 18% of max performance
> >>>>
> >>>
> >>>
> >>
> >>
> > 
> > 
> 
> 
> -- 
> David van der Spoel, Ph.D., Professor of Biology
> Molec. Biophys. group, Dept. of Cell & Molec. Biol., Uppsala University.
> Box 596, 75124 Uppsala, Sweden. Phone:	+46184714205. Fax: +4618511755.
> spoel at xray.bmc.uu.se	spoel at gromacs.org   http://folding.bmc.uu.se
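
Following up on the -debug suggestion and the log excerpt quoted above: a quick way to see how PD has divided the atoms, assuming the output went to the default md.log (GNU grep; adjust the file name if you used -deffnm), is

  grep -A 1 "Division over nodes in atoms" md.log

For the one-step -debug run, one possibility is to set nsteps to 0 or 1 in the mdp file, run grompp again, and then start mdrun with the (hidden) -debug option, e.g.

  mpirun -np 8 mdrun_mpi -deffnm vac -pd -debug 1

which should make mdrun write extra debug output with more detail on the partitioning.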
