[gmx-users] Optimal pme grid

Mahmood Naderan nt_mahmood at yahoo.com
Fri Aug 31 13:20:50 CEST 2018


Hi
It seems that changing the values of -ntmpi and -ntomp affects the number of steps it takes to find the optimal PME grid. Is that correct?

Please see the following output

gmx mdrun -nb gpu -ntmpi 1 -ntomp 16 -v -deffnm nvt
Using 1 MPI thread
Using 16 OpenMP threads 
step 2400: timed with pme grid 60 80 60, coulomb cutoff 1.037: 5708.7 M-cycles
step 2600: timed with pme grid 64 80 60, coulomb cutoff 1.000: 5382.6 M-cycles
              optimal pme grid 64 80 60, coulomb cutoff 1.000
step 3900, remaining wall clock time:     1 s          



gmx mdrun -nb gpu -ntmpi 16 -ntomp 1 -v -deffnm nvt
Using 16 MPI threads
Using 1 OpenMP thread per tMPI thread
step 3800: timed with pme grid 56 72 56, coulomb cutoff 1.111: 21060.1 M-cycles
step 4000: timed with pme grid 60 72 56, coulomb cutoff 1.075: 21132.8 M-cycles

Writing final coordinates.



I have intentionally limited the number of steps to 4000. As you can see, in the second run the optimal value has not been reached before the run finished.
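If the goal is to compare the two thread setups without the auto-tuner interfering, the PME load balancing can be switched off, or the run can be made long enough for tuning to converge. A sketch, assuming the same nvt.tpr as above:

```shell
# Disable the PP-PME load-balancing tuner; mdrun then uses the grid
# and Coulomb cutoff from the .tpr as-is, so no "optimal pme grid"
# search is printed and both runs use identical settings:
gmx mdrun -nb gpu -ntmpi 16 -ntomp 1 -notunepme -v -deffnm nvt

# Alternatively, override nsteps from the .mdp on the command line
# so the tuner has enough steps to finish before the run ends:
gmx mdrun -nb gpu -ntmpi 16 -ntomp 1 -nsteps 20000 -v -deffnm nvt
```

Note that with tuning enabled, runs that stop before the tuner converges may use different effective cutoffs, which makes timings between the two configurations harder to compare.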






Regards,
Mahmood

