[gmx-users] Re: Looking for GPU benchmarks

Szilárd Páll szilard.pall at cbr.su.se
Mon Aug 27 17:25:02 CEST 2012

Which system did you run? What settings?

A few tips:
- Use CUDA 4.2 (5.0 on Kepler);
- Have at least 10-20k atoms/GPU (and more to get peak GPU performance);
- Use the shortest cut-off possible to allow CPU-GPU load balancing;
- Expect scaling from one to two GPUs to be reduced by the initial
domain-decomposition/parallelization overhead;
(- If load balancing is limited, try using multiple MPI ranks per GPU.)
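To make the tips above concrete, here is an illustrative .mdp fragment for a GPU run with the Verlet scheme; the specific cut-off values are assumptions for illustration, not settings recommended in this thread:

```
; grompp.mdp fragment (illustrative values)
cutoff-scheme   = Verlet   ; required for the native GPU acceleration
nstlist         = 20       ; mdrun may adjust this for GPU runs
rcoulomb        = 1.0      ; keep cut-offs as short as the force field allows,
rvdw            = 1.0      ; so CPU-GPU load balancing has room to work
coulombtype     = PME      ; PME runs on the CPU while short-range nonbondeds run on the GPU
```

A matching launch for 2 GPUs on one node, with 2 MPI ranks and 4 OpenMP threads each, could look like this (mdrun options as in the GROMACS 4.6-era GPU support; adjust binary names to your installation):

```
mpirun -np 2 mdrun_mpi -ntomp 4 -gpu_id 01 -s topol.tpr
```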


On Mon, Aug 27, 2012 at 1:31 PM, Mathieu38 <mathieu.dubois at bull.net> wrote:
> I have tried the basic approach of taking some of the input files that are
> provided with the GROMACS sources or on the website, and adding the line
> cutoff-scheme = Verlet
> to the grompp.mdp file.
> However, I have not found a case where using GPUs (2 MPI tasks, 4 OpenMP
> threads per MPI task, 2 GPUs) leads to a significant speedup compared to
> a CPU-only run using 8 cores on a single node.
> I don't know whether this is because the GPU implementation is not yet
> performant or because my test cases are inappropriate.
> Any help here would be very much appreciated.
> Thx
