[gmx-users] offloading PME to GPUs

jing liang jingliang2015 at gmail.com
Thu Feb 7 15:17:39 CET 2019


Hi,

Thanks for this information. Has PME offload been implemented for multi-node
simulations? I tried the following command on two nodes (4 MPI ranks per
node, 4 OpenMP threads per rank):

mpirun -np 8 gmx_mpi mdrun -nb gpu -pme gpu -npme 1 -ntomp 4 -dlb yes
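
For reference, the full submission was roughly as follows (the module name
and per-node GPU count are placeholders for my cluster's setup):

#!/bin/bash
#SBATCH -N 2                  # two nodes
#SBATCH --ntasks-per-node=4   # 4 MPI ranks per node
#SBATCH -c 4                  # 4 OpenMP threads per rank
#SBATCH --gres=gpu:2          # assuming 2 GPUs per node

module load gromacs/2019      # placeholder module name
mpirun -np 8 gmx_mpi mdrun -nb gpu -pme gpu -npme 1 -ntomp 4 -dlb yes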

The log file at the end of the run says:

On 7 MPI ranks doing PP, each using 4 OpenMP threads, and
on 1 MPI rank doing PME, using 4 OpenMP threads

Thus, it seems that only one PME rank (on a single node) was offloaded?

On Wed, Feb 6, 2019 at 3:42 PM Kevin Boyd (<kevin.boyd at uconn.edu>) wrote:

> Hi,
>
> Your log file will definitely tell you whether PME was offloaded.
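>
> For a quick check you could just grep it, e.g. something like (md.log
> being whatever your log file is called; this is a generic filter, not an
> official diagnostic):
>
> grep -i "pme" md.log
>
> and look at the task assignment printed near the top of the log.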
>
> The performance gains depend on your hardware, particularly the CPU/GPU
> balance. There have been a number of threads on this forum discussing this
> topic, if you search back through the gmx-users archives. The gist is that
> you can balance a good GPU with ~4 CPU cores, though that depends on the
> CPU quality as well.
>
> The docs for Gromacs 2019 have been updated to include examples of running
> PME on GPUs:
> http://manual.gromacs.org/current/user-guide/mdrun-performance.html
> Search for "-pme gpu". The examples should mostly apply if you're using
> Gromacs 2018 as well (although you won't have the -bonded option).
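>
> For instance, the examples there follow roughly this pattern (the thread
> counts here are illustrative, not quoted from the manual verbatim):
>
> gmx mdrun -ntmpi 4 -ntomp 4 -nb gpu -pme gpu -npme 1
> gmx mdrun -ntmpi 4 -ntomp 4 -nb gpu -pme gpu -npme 1 -bonded gpu
>
> The second form needs 2019 because of -bonded. Note that when PME runs on
> a GPU, only a single PME rank is supported, hence -npme 1.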
>
> Kevin
>
>
> On Wed, Feb 6, 2019 at 9:29 AM jing liang <jingliang2015 at gmail.com> wrote:
>
> > Hello,
> >
> > I understood from the documentation
> >
> > http://manual.gromacs.org/documentation/2018-current/user-guide/mdrun-performance.html
> >
> > that PME can now be offloaded to GPUs in v2018. I'm using SLURM on a
> > machine with 24 cores, and I wonder if the following script would offload
> > PME to the GPUs:
> >
> > #SBATCH -n 2
> > #SBATCH -c 12
> > #SBATCH --gres=gpu:2
> >
> > srun gmx_mpi mdrun -ntomp 12 -npme 0 -dlb yes -v -deffnm step1
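> >
> > Or do I need to request the offload explicitly, along these lines? (My
> > understanding from the docs is that PME on GPU supports only a single
> > PME rank, hence -npme 1 instead of -npme 0; please correct me if that's
> > wrong.)
> >
> > srun gmx_mpi mdrun -ntomp 12 -nb gpu -pme gpu -npme 1 -dlb yes -v -deffnm step1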
> >
> > Is there information in the log files that would let me see whether PME
> > was offloaded, and what the performance gain from using the GPUs was?
> >
> > Thanks.
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.
>