[gmx-users] PME nodes

Mark Abraham Mark.Abraham at anu.edu.au
Thu May 31 13:21:09 CEST 2012


On 31/05/2012 9:11 PM, Ignacio Fernández Galván wrote:
> Hi all,
>
> There must be something I don't fully understand. When I run grompp on a system, I get this:
>
>    Estimate for the relative computational load of the PME mesh part: 0.32
>
> Good, that's approximately 1/3, or a 2:1 PP:PME ratio, which is the recommended value for a dodecahedral box. But then I run the dynamics with "mdrun_mpi -np 8" (different cores in a single physical machine) and I get:
>
>    Initializing Domain Decomposition on 8 nodes
>    [...]
>    Using 0 separate PME nodes
>
> I would have expected at least 2 nodes (a 3:1 PP:PME ratio, i.e. a PME fraction of 0.25) to be used for PME, so there's obviously something wrong in my assumption.
>
> Should I be looking somewhere in the output to find out why? Would it be better to try to get some dedicated PME node(s) (even in a single machine)?

Generally mdrun does pretty well, given the constraints you've set for 
it. Here, you've implicitly let it choose (the default is mdrun -npme -1), 
and with fewer than a minimum number of nodes (10, in 4.5.5) it doesn't 
bother with separate PME nodes, since the book-keeping would be too 
costly. Otherwise, you can see the reasons for the choices mdrun made 
in the output in the .log file.
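
For example, something like the following will pull the relevant 
decisions out of the log (the exact log file name and the wording of 
the lines vary with version and how you launched mdrun, so treat these 
search strings as a sketch):

    grep -i "pme nodes" md.log
    grep -i -A 5 "domain decomposition" md.log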

You can try mdrun -npme 2 or 3 if you like, but it is likely not to be 
faster, and mdrun might even refuse to run if the resulting domain 
decomposition is not possible. See also section 3.17 of the manual.
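
As a rough sketch, reusing your launch line (adapt it to however you 
actually start mdrun_mpi):

    mdrun_mpi -np 8 -npme 2
    mdrun_mpi -np 8 -npme 3

With 8 nodes that gives a 6:2 or 5:3 PP:PME split, which you can then 
compare against the timing of the default run.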

Mark


