[gmx-developers] free energies on GPUs?

Igor Leontyev ileontyev at ucdavis.edu
Wed Feb 22 10:05:02 CET 2017

I am having a hard time accelerating free energy (FE) simulations on 
my high-end GPU. I am not sure whether this is normal for my smaller 
systems or whether I am doing something wrong.

The efficiency of GPU acceleration seems to decrease with system 
size, right? A typical box in FE simulations is 32x32x32 A^3 
(~3.5K atoms) in water and about 60x60x60 A^3 (~25K atoms) in protein. 
Larger MD boxes are rarely needed in FE simulations.

For my system (11K atoms) I am getting only up to a 50% speedup on 
8 CPUs with a GTX 1080 GPU. GPU utilization during the simulation is only 
1-2%. Does that sound right? (I am using the current gmx ver-2016.2 and CUDA 
driver 8.0; on request I will attach log files with all the details.)

BTW, regarding how much the perturbed interactions cost: in my case the 
simulation with "free_energy = no" runs about TWICE as fast.
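For reference, this is the kind of .mdp fragment being toggled for the timing comparison; the parameter values below are illustrative, not taken from my actual run:

```
; free-energy settings switched for the timing comparison
free_energy  = yes        ; "no" gives the ~2x-faster baseline
init_lambda  = 0.5        ; illustrative lambda point
sc-alpha     = 0.5        ; soft-core strength (illustrative)
sc-power     = 1
```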


> On 2/13/17, 1:32 AM, Berk Hess <hess at kth.se> wrote:
>     That depends on what you mean with this.
>     With free-energy all non-perturbed non-bonded interactions can run on
>     the GPU. The perturbed ones currently can not. For a large system with a
>     few perturbed atoms this is no issue. For smaller systems the
>     free-energy kernel can be the limiting factor. I think there is a lot of
>     gain to be had in making the extremely complex CPU free-energy kernel
>     faster. Initially I thought SIMD would not help there. But since any
>     perturbed i-particle will have perturbed interactions with all j's, this
>     will help a lot.
>     Cheers,
>     Berk
>     On 2017-02-13 01:08, Michael R Shirts wrote:
>     > What's the current state of free energy code on GPUs, and what are the roadblocks?
>     >
>     > Thanks!
>     > ~~~~~~~~~~~~~~~~
>     > Michael Shirts
