[gmx-users] PME on GPU (4.6.2)

Mark Abraham mark.j.abraham at gmail.com
Sun Mar 23 23:33:56 CET 2014


On Sun, Mar 23, 2014 at 6:40 PM, Albert Romano <albertmano at aol.com> wrote:

>
> Dear list,
> I have noticed the following changelog from Gromacs 4.6.2:
>
>
> --> Added CUDA PME kernels with analytical Ewald correction
>
>
> Does that mean we can now offload PME computation to GPU?
>

No. That refers to the short-ranged non-bonded CUDA kernels used alongside PME.

More generally, there would be little or no advantage in offloading the PME
mesh part to the GPU, because in many cases it would leave the CPU idle. In
parallel runs the CPU-based FFT for PME is already latency-bound, and adding
CUDA transfer latency on top of that would be silly, unless/until GPUs come
with on-board network interconnects, and/or are coupled with really weak CPUs.
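To make the distinction concrete: the two environment variables mentioned
below only select between the analytical and tabulated Ewald correction in
those short-ranged CUDA kernels; the reciprocal-space (mesh) part of PME
stays on the CPU in 4.6.x. A minimal sketch of how one could check this,
assuming a CUDA-enabled GROMACS 4.6.x build and a prepared topol.tpr
(the file name is just an example):

```shell
# Select the analytical-Ewald flavor of the short-ranged CUDA kernels.
# This does NOT move the PME mesh computation to the GPU.
export GMX_CUDA_NB_ANA_EWALD=1

# Run as usual; non-bonded short-ranged work goes to the GPU if one is
# detected, everything PME-mesh-related remains CPU-side.
mdrun -deffnm topol

# The cycle-accounting table at the end of the log should still show the
# PME mesh/FFT rows accumulating CPU time.
grep -i "pme" topol.log
```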

Mark


>
> I have not found how... I have noticed two environment variables:
> GMX_CUDA_NB_ANA_EWALD and GMX_CUDA_NB_TAB_EWALD but even if I export them
> it does not seem to activate PME on GPU.
> My technique for verifying this is to raise the PME accuracy (via
> fourierspacing, interpolation order, etc.) and check from the GPU/CPU
> force-evaluation timings that the CPU time has increased.
>
>
> Thank you
> AR
>
