[gmx-developers] FMM in GMX-5.x

Mark Abraham mark.j.abraham at gmail.com
Sun Aug 17 04:52:51 CEST 2014


Hi,

Some work has started, but it will take a while to complete.

Mark
On Aug 16, 2014 8:00 AM, "Yorquant Wang" <wangykoo at gmail.com> wrote:

> Hi,
>      So, if the simulation is large enough, FMM can be faster than PME.
> Even though I have heard that FMM is very hard to implement, I just want
> to know the state of the FMM implementation in GMX 5.x.
>      It seems that FMM could perform better than PME on massively
> parallel hardware, for example clusters of multiple GPU, CPU, or MIC
> nodes. I am quite looking forward to that.
>
>
> 2014-08-15 4:42 GMT+08:00 Mark Abraham <mark.j.abraham at gmail.com>:
>
>>
>>
>>
>> On Thu, Aug 14, 2014 at 1:42 PM, Smart Eagle <wangykoo at gmail.com> wrote:
>>
>>> Hi,
>>>     I just noticed that there is a plan to implement FMM in GROMACS to
>>> calculate long-range forces for MD. As you know, PME suffers from the
>>> all-to-all communication problem of the FFT, and FMM seemingly can scale
>>> much better than PME. Indeed, there is a program called MODYLAS which has
>>> implemented FMM for MD (J. Chem. Theory Comput. 2013, 9, 3201-3209).
>>>     I just want to know, theoretically, which is larger between FMM and
>>> PME in the time cost of the long-range force calculation in one MD step
>>> on one node? (That is, the basic computational load of each method,
>>> without considering communication between nodes.)
>>>
>>
>> Theoretically - it depends on the size of the simulation (which fixes N)
>> and the quality of the implementations (which fix the prefactors in the
>> asymptotic analysis of O(N) vs O(N log N)).
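[Archive note: to make the asymptotic comparison above concrete, here is a toy cost model. The prefactors `C_FMM` and `C_PME` are made-up illustrative constants, not measured values; real prefactors depend entirely on the quality of each implementation, which is exactly Mark's point.]

```python
import math

# Toy cost model: FMM scales as O(N), PME as O(N log N).
# The crossover point where FMM wins depends only on the
# (implementation-specific) prefactors, which are hypothetical here.
C_FMM = 50.0   # assumed per-particle cost of FMM
C_PME = 5.0    # assumed per-particle cost of PME

def fmm_cost(n):
    """Model cost of one FMM long-range force evaluation."""
    return C_FMM * n

def pme_cost(n):
    """Model cost of one PME long-range force evaluation."""
    return C_PME * n * math.log(n)

# With these prefactors, FMM only becomes cheaper once
# log(N) > C_FMM / C_PME, i.e. for N above ~exp(10) ~ 22000.
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, "FMM cheaper:", fmm_cost(n) < pme_cost(n))
```

The same model shows why a flop count alone answers little: changing either prefactor (i.e. the implementation) moves the crossover by orders of magnitude.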
>>
>> Practically - you could analyze the total flops on a given problem, but
>> it would tell you little about the performance on a single node, because
>> how well the implementation handles data flow is what dominates performance
>> (now, and more so in the future). Adding inter-node parallelism makes that
>> worse.
>>
>> Mark
>>
>>
>>> Cheers
>>>
>>>
>>> --
>>> Gromacs Developers mailing list
>>>
>>> * Please search the archive at
>>> http://www.gromacs.org/Support/Mailing_Lists/GMX-developers_List before
>>> posting!
>>>
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-developers
>>> or send a mail to gmx-developers-request at gromacs.org.
>>>
>>
>>
>>
>
>
>
> --
> Yukun Wang
> PhD candidate
> Institute of Natural Sciences && College of Life Science, Shanghai Jiao
> Tong University
> Cell phone: 13621806236.
> China Shanghai
>
>