[gmx-users] Simulation Across Multiple Nodes with GPUs and PME
Kutzner, Carsten
ckutzne at gwdg.de
Wed Dec 19 11:43:26 CET 2018
Hi,
> On 18. Dec 2018, at 18:04, Zachary Wehrspan <zwehrspan at gmail.com> wrote:
>
> Hello,
>
>
> I have a quick question about how GROMACS 2018.5 distributes GPU resources
> across multiple nodes all running one simulation. Reading the
> documentation, I think it says that only 1 GPU can be assigned to the PME
> calculation.
That is correct. The PME grid part cannot be parallelized over multiple
GPUs yet.
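As a sketch of what this looks like in practice: with PME offloaded to a GPU, GROMACS 2018 supports at most one separate PME rank, so the whole PME grid task lands on a single GPU. The file name `topol` and the GPU mapping below are hypothetical; the flags (`-nb`, `-pme`, `-npme`, `-gputasks`) are standard mdrun options.

```shell
# Hypothetical single-node run with 4 GPUs:
# 3 ranks do particle-particle (PP) work on GPUs 0-2,
# 1 dedicated PME rank (-npme 1) puts the PME grid on GPU 3.
mpirun -np 4 gmx_mpi mdrun -deffnm topol \
    -nb gpu -pme gpu -npme 1 \
    -gputasks 0123
```

Scaling this launch to more nodes does not help the PME grid part: it still runs on that one GPU.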
> Is it then true if I had 10 nodes each with 4 GPUs all working
> on the same simulation only one GPU of the 40 total could be working on the
> PME calculation?
Yes.
> Or could each node contribute 1 GPU to the PME
> calculation?
No.
In a setup with 40 GPUs, having the PME grid computations performed on a
single GPU will likely be a bottleneck. It is probably faster not to
offload the PME grid part to a GPU at all, but instead to run it on the
CPUs, where it can be parallelized across many ranks and nodes.
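A minimal sketch of that alternative, assuming 10 nodes with 4 GPUs each and hypothetical input/launcher details (`topol`, `mpirun` rank counts); the `-nb gpu -pme cpu` flags are standard mdrun options:

```shell
# Hypothetical 10-node run, 4 ranks per node (40 MPI ranks total):
# offload only the short-range nonbonded work to the GPUs,
# keep the PME grid on the CPUs so it parallelizes over all ranks.
mpirun -np 40 gmx_mpi mdrun -deffnm topol \
    -nb gpu -pme cpu
```

Whether this wins over single-GPU PME depends on the CPU cores available per rank, so benchmarking both setups on a short run is worthwhile.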
Best regards,
Carsten
> Any help would be gratefully received.
>
>
> Thanks,
>
> Zachary Wehrspan
--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa