[gmx-users] Is the PME domain decomposition flexible in some hidden way?
Mark Abraham
Mark.Abraham at anu.edu.au
Tue Apr 3 17:16:09 CEST 2012
On 3/04/2012 11:58 PM, Paolo Franz wrote:
> Hello!
> I am wondering if the domain decomposition chosen by the code, once
> the number of CPUs dedicated to PME is chosen, can be overridden
> somehow. That is, whether there exists a reciprocal-space equivalent
> of "-dd", the option for the direct-space domain decomposition.
> The problem I have is that I am trying to get good scaling for a
> large system on 1000-2000 CPUs. If, for instance, I have 128 CPUs on
> PME and a 128^3 grid, I would like an 8x16x1 decomposition on PME,
> which should be perfectly OK with the parallelization scheme, if I
> understand it correctly. Unfortunately, the default I get is 128x1x1,
> which is inevitably very inefficient. Is there anything I can do?
The automatic choice depends on a large number of factors. A major one
is minimizing the PP<->PME communication load. Large common factors
between the PP and PME processor grids are key, so long as those
factors lead to useful hardware mappings. In particular, see the last
paragraph of section 3.17.5 of the manual.
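There's no reciprocal-space equivalent of "-dd", but you can steer the
outcome by choosing the total rank count and the PP/PME split so that
the factorizations line up. A minimal sketch (the binary name and the
rank counts here are illustrative assumptions, not a tested recipe):

  # Fix the PME rank count explicitly, and pick a total so that the
  # remaining PP ranks factorize compatibly (1024 PP = 8x8x16 here).
  mpirun -np 1152 mdrun_mpi -s topol.tpr -npme 128 -dd 8 8 16

  # If your installation includes it, g_tune_pme can scan PP/PME
  # splits automatically and report the best-performing setting for
  # your system and hardware.
  g_tune_pme -np 1152 -s topol.tpr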
Mark