[gmx-users] PME nodes
Peter C. Lai
pcl at uab.edu
Thu May 31 13:20:34 CEST 2012
According to the manual, mdrun does not assign dedicated PME nodes unless
-np > 11. You can specify dedicated PME nodes manually with -npme, but
whether that is actually faster is highly system dependent, especially on
machines with few cores. Also, the load estimate that grompp prints may
not match what is optimal at runtime, so you'll have to repeat runs with
different node splits, or use g_tune_pme, to find the best PP:PME ratio
for your particular system.
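
Something along these lines should work (a rough sketch only: the option
names are what I remember from the 4.5-series tools, and topol.tpr is
just a placeholder for your run input file):

    # force 2 of the 8 MPI ranks to be dedicated PME nodes
    mpirun -np 8 mdrun_mpi -s topol.tpr -npme 2

    # or let g_tune_pme benchmark several PP:PME splits and launch the
    # fastest one; it calls the launcher and mdrun through the MPIRUN
    # and MDRUN environment variables
    export MPIRUN=mpirun
    export MDRUN=mdrun_mpi
    g_tune_pme -np 8 -s topol.tpr -launch

g_tune_pme also writes out a summary of the timings it measured, so you
can check the numbers yourself instead of just trusting the run it picks.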
On 2012-05-31 04:11:15AM -0700, Ignacio Fernández Galván wrote:
> Hi all,
>
> There must be something I don't fully understand. When I run grompp on a system, I get this:
>
> Estimate for the relative computational load of the PME mesh part: 0.32
>
> Good, that's approximately 1/3, or a 2:1 PP:PME ratio, which is the recommended value for a dodecahedral box. But then I run the dynamics with "mdrun_mpi -np 8" (different cores on a single physical machine) and I get:
>
> Initializing Domain Decomposition on 8 nodes
> [...]
> Using 0 separate PME nodes
>
> I would have expected at least 2 nodes (3:1, 0.25) to be used for PME, so there's obviously something wrong with my assumptions.
>
> Should I be looking somewhere in the output to find out why? Would it be better to try to get some dedicated PME node(s), even on a single machine?
>
> Thanks,
> Ignacio
--
==================================================================
Peter C. Lai | University of Alabama-Birmingham
Programmer/Analyst | KAUL 752A
Genetics, Div. of Research | 705 South 20th Street
pcl at uab.edu | Birmingham AL 35294-4461
(205) 690-0808 |
==================================================================