[gmx-users] RE: Looking for GPU benchmarks
Mathieu38
mathieu.dubois at bull.net
Mon Aug 27 17:34:30 CEST 2012
Hi,
Thanks for your answer.
The thing is that I am a benchmarker and have no knowledge of the
physics behind GROMACS. My goal is to determine whether (or not) using
GPUs is worthwhile for a customer test case.
So my idea would be to do one run using the CPU-only version of GROMACS,
then modify the input file to add cutoff-scheme = Verlet and run the
GPU version.
I am not sure which other parameters I could modify without changing
the physics of the problem.
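For concreteness, a minimal sketch of the .mdp change I have in mind
(the cut-off values below are placeholders from my inputs, not
recommendations):

    ; switch to the Verlet scheme, required for GPU acceleration
    cutoff-scheme = Verlet
    ; keep the same cut-offs as the CPU-only run so the physics is unchanged
    rcoulomb      = 1.0
    rvdw          = 1.0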
Regards,
Mathieu Dubois
Hardware Accelerators expert
Applications & Performances Team
tel : +33 (0)4.76.29.70.56
BULL, Architect of An Open World
http://www.bull.com
From: Szilárd Páll [via GROMACS] <ml-node+s5086n5000577h4 at n6.nabble.com>
Sent: Monday, 27 August 2012 17:26
To: Mathieu38 <mathieu.dubois at bull.net>
Subject: Re: Looking for GPU benchmarks
Which system did you run? What settings?
A few tips:
- Use CUDA 4.2 (5.0 on Kepler);
- Have at least 10-20k atoms/GPU (and more to get peak GPU performance);
- Use the shortest cut-off possible to allow CPU-GPU load balancing;
- Expect scaling from one to two GPUs to be affected by the initial
domain-decomposition/parallelization overhead.
(- If load balancing is limited, try using multiple MPI ranks per GPU;
see the command sketch below.)
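As a sketch, on a node with 8 cores and 2 GPUs the launch could look
like this (assuming an MPI-enabled GROMACS 4.6 build whose binary is
named mdrun_mpi; adapt the binary name and launcher to your setup):

    # 2 MPI ranks, 4 OpenMP threads each, one GPU per rank
    mpirun -np 2 mdrun_mpi -ntomp 4 -gpu_id 01 -s topol.tpr

    # 4 MPI ranks sharing the 2 GPUs (2 ranks per GPU),
    # which can give the load balancer more room
    mpirun -np 4 mdrun_mpi -ntomp 2 -gpu_id 0011 -s topol.tpr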
--
Szilárd
On Mon, Aug 27, 2012 at 1:31 PM, Mathieu38 <[hidden email]> wrote:
> I have tried the basic approach of taking some of the input files that
> are provided with the sources of GROMACS or on the website, and adding
> the line
>
> cutoff-scheme = Verlet
>
> to the grompp.mdp file.
>
> However, I have not found a case where the use of GPUs (2 MPI tasks, 4
> OpenMP threads per MPI task, 2 GPUs) leads to a significant speed-up
> compared to a CPU-only version using 8 cores on a single node.
>
> I don't know if this is because the GPU implementation is not yet
> performant, or because my test cases are inappropriate.
>
> Any help here would be very much appreciated.
>
> Thx
--
gmx-users mailing list [hidden email]
http://lists.gromacs.org/mailman/listinfo/gmx-users
* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
* Please don't post (un)subscribe requests to the list. Use the
www interface or send it to [hidden email].
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
More information about the gromacs.org_gmx-users mailing list