[gmx-users] Is the PME domain decomposition flexible in some hidden way?
paolo.franz at gmail.com
Sat Apr 7 15:46:00 CEST 2012
Thank you very much for the help. I read the part of the manual you
referred to; it is now clear how the decomposition is chosen.
On 3 April 2012 17:16, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:
> On 3/04/2012 11:58 PM, Paolo Franz wrote:
>> I am wondering whether the domain decomposition chosen by the code, once
>> the number of CPUs dedicated to PME is fixed, can be overridden somehow;
>> that is, whether there is a reciprocal-space equivalent of "-dd" for the
>> direct-space domain decomposition. The problem is that I am trying to get
>> good scaling for a large system on 1000-2000 CPUs. If, for instance, I
>> have 128 CPUs on PME and a 128^3 grid, I would like a PME decomposition
>> of 8x16x1, which should be perfectly fine with the parallelization scheme
>> if I understand it correctly. Unfortunately, the default I get is
>> 128x1x1, which is inevitably very inefficient. Is there anything I can do?
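For reference, the runs behind my question were launched roughly along these
lines (the binary name, the total rank count, and the -dd vector below are
only illustrative, not my actual job script; -npme and -dd are the existing
mdrun options for splitting off PME ranks and forcing the direct-space grid):

    mpirun -np 1152 mdrun_mpi -s topol.tpr -npme 128 -dd 8 16 8

The question is whether there is an analogous way to control how the 128 PME
ranks themselves are laid out.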
> The automatic choice depends on a large number of considerations. A big one
> is optimizing the PP<->PME communication load: large common factors between
> the divisions of the PP and PME processor grids are key, so long as those
> factors lead to useful hardware mappings. In particular, see the last
> paragraph of section 3.17.5 of the manual.