[gmx-developers] PME on GPUs timeline

Jochen Hub jhub at gwdg.de
Thu Jan 19 14:02:24 CET 2017


On 19/01/2017 at 13:03, Erik Lindahl wrote:
> All changes are available in the sense that they are public, but there
> is no guarantee whatsoever that they produce correct results or are
> representative of the performance that will be in the release.
>
> Not that we think it's bad, but the main reason things haven't been
> committed is that it's not completely ready yet.
>
> Feel free to play with them, but we want to spend the efforts on
> finishing the changes rather than supporting unfinished ones :-)

Thanks, that sounds perfectly fine. But even the current unfinished 
version may give us some indication of where the CPU/GPU balance is 
going, so I'll give it a try with the current gerrit commit.
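
For anyone who wants to do the same, the steps I plan to use are roughly 
the following (the change ref is a placeholder for whichever PME change 
one picks from gerrit, and the cmake options are just the usual ones for 
a CUDA build, nothing specific to these changes):

    # clone the repository and fetch an open change from gerrit
    # (refs/changes/NN/NNNN/P is a placeholder, not a specific change)
    git clone https://gerrit.gromacs.org/gromacs.git
    cd gromacs
    git fetch origin refs/changes/NN/NNNN/P
    git checkout FETCH_HEAD

    # out-of-source build with the CUDA kernels enabled
    mkdir build && cd build
    cmake .. -DGMX_GPU=on -DGMX_BUILD_OWN_FFTW=on
    make -j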

Thanks again,
Jochen
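
PS: For the throughput setup Berk describes below (pair interactions on 
one GPU, PME on another), this is the kind of mdrun invocation the 
changes should eventually enable. The -pme and -gputasks options are my 
assumption about what the finished interface will look like, not 
something that works in a release today:

    # sketch: two thread-MPI ranks, the PP rank on GPU 0 and the
    # dedicated PME rank on GPU 1 (option names are assumptions)
    gmx mdrun -ntmpi 2 -npme 1 -nb gpu -pme gpu -gputasks 01 -s topol.tpr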

>
> Cheers,
>
> Erik
>
> Erik Lindahl <erik.lindahl at scilifelab.se>
> Professor of Biophysics
> Science for Life Laboratory
> Stockholm University & KTH
> Office (SciLifeLab): +46 8 524 81567
> Cell (Sweden): +46 73 4618050
> Cell (US): +1 267 3078746
>
>
> On 19 Jan 2017, at 11:52, Jochen Hub <jhub at gwdg.de> wrote:
>
>> Hi Berk,
>>
>> many thanks for the quick reply. By the way, we are targeting pure throughput.
>>
>> Is the development tree available somewhere, so we can get a rough
>> feeling for the performance on our GPUs?
>>
>> Many thanks,
>> Jochen
>>
>> On 19/01/2017 at 09:30, Berk Hess wrote:
>>> Hi,
>>>
>>> A basic version of PME on CUDA GPUs will be shipped with the 2017
>>> release. Most likely this will only support a single GPU, possibly with
>>> the option to run pair interactions on one GPU and PME on another. The
>>> changes up in gerrit are all working, but not complete yet. Aleksei has
>>> a development tree with a complete implementation. The main question is
>>> how much performance optimization and feature completion we can do
>>> before the 2017 release.
>>> PME on GPU will allow you to buy a CPU+GPU cluster with cheaper CPUs if
>>> you are targeting throughput. It is less clear what the best setup will
>>> be for the highest ns/day, since PME requires many different kernels,
>>> which leads to higher overheads.
>>>
>>> Cheers,
>>>
>>> Berk
>>>
>>> On 01/19/2017 08:55 AM, Jochen Hub wrote:
>>>> Hi developers,
>>>>
>>>> I noticed that Aleksei has uploaded a whole bunch of commits
>>>> implementing PME for GPUs. Since we are about to buy a new cluster, I
>>>> was wondering if there is a rough timeline for when the PME/CUDA code
>>>> will go into the master and release branches. If PME runs smoothly on
>>>> GPUs, the hardware with the best price/performance ratio obviously
>>>> changes.
>>>>
>>>> Also, are the commits in the branch "master (pme)" already suitable
>>>> for some preliminary benchmarking?
>>>>
>>>> Many thanks,
>>>> Jochen
>>>>
>>>
>>
>
>

-- 
---------------------------------------------------
Dr. Jochen Hub
Computational Molecular Biophysics Group
Institute for Microbiology and Genetics
Georg-August-University of Göttingen
Justus-von-Liebig-Weg 11, 37077 Göttingen, Germany.
Phone: +49-551-39-14189
http://cmb.bio.uni-goettingen.de/
---------------------------------------------------

