[gmx-users] Optimal pme grid

Mark Abraham mark.j.abraham at gmail.com
Tue Sep 11 10:18:48 CEST 2018


Hi,

Yes, this is expected. The number of ranks affects which grids may be chosen,
and thus how many alternatives the optimizer can try. For such a short run,
however, the optimization is fairly useless.

Mark
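
For short test runs like the ones below, one simple way to take the tuning
out of the picture entirely (a sketch, not something suggested in this
thread) is to switch the PME load balancing off with mdrun's -notunepme
option, reusing the same input:

gmx mdrun -nb gpu -ntmpi 1 -ntomp 16 -notunepme -v -deffnm nvt

With tuning disabled, the PME grid and Coulomb cutoff from the .tpr are used
as-is, so short runs with different -ntmpi/-ntomp combinations can be
compared directly.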

On Fri, Aug 31, 2018 at 1:21 PM Mahmood Naderan <nt_mahmood at yahoo.com>
wrote:

> Hi
> It seems that changing ntmpi and ntomp affects the number of steps it
> takes to find the optimal PME grid. Is that correct?
>
> Please see the following output
>
> gmx mdrun -nb gpu -ntmpi 1 -ntomp 16 -v -deffnm nvt
> Using 1 MPI thread
> Using 16 OpenMP threads
> step 2400: timed with pme grid 60 80 60, coulomb cutoff 1.037: 5708.7 M-cycles
> step 2600: timed with pme grid 64 80 60, coulomb cutoff 1.000: 5382.6 M-cycles
>               optimal pme grid 64 80 60, coulomb cutoff 1.000
> step 3900, remaining wall clock time:     1 s
>
>
>
> gmx mdrun -nb gpu -ntmpi 16 -ntomp 1 -v -deffnm nvt
> Using 16 MPI threads
> Using 1 OpenMP thread per tMPI thread
> step 3800: timed with pme grid 56 72 56, coulomb cutoff 1.111: 21060.1 M-cycles
> step 4000: timed with pme grid 60 72 56, coulomb cutoff 1.075: 21132.8 M-cycles
>
> Writing final coordinates.
>
>
>
> I have intentionally limited the number of steps to 4000. As you can see,
> in the second run the optimal value has not been reached.
>
> Regards,
> Mahmood
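
Since the second run above was cut off at 4000 steps before the load
balancer finished, one possible workaround (again a sketch using the same
nvt input; -nsteps and -resethway are standard mdrun options) is to run
long enough for the tuning to converge and reset the cycle counters halfway
so the tuning phase does not distort the timings:

gmx mdrun -nb gpu -ntmpi 16 -ntomp 1 -v -deffnm nvt -nsteps 20000 -resethway

Here -nsteps overrides the step count from the .tpr for this run only, and
-resethway resets the performance counters after half the steps, so the
reported performance reflects only the later part of the run, assuming the
tuning finishes within the first half.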

