[gmx-developers] Gromacs FFT

Roland Schulz roland at utk.edu
Wed Dec 10 19:07:02 CET 2008

On Wed, Dec 10, 2008 at 11:09 AM, Knox, Kent <Kent.Knox at amd.com> wrote:

> I've done a naïve printf style instrumentation of the gromacs FFT
> interface, and can only see 3D real-to-complex/complex-to-real style FFTs
> being used.  For an MPI build of gromacs, I see that gromacs chunks 3D FFTs
> into 2D FFTs itself and passes those down to the underlying fft library to
> finish.  I believe that in these instances ACML is threaded appropriately,
> but please let me know if I am drawing the wrong conclusions.  I am basing
> my observations purely on the d.lzm bench.

Yes, it is correct that gromacs partitions the 3D FFT into 2D FFTs itself. It
doesn't use threading for the FFT because the surrounding code is not
threaded, so one MPI process runs per core. My work (not in Gromacs yet) is
to partition along two dimensions instead of only one, to scale to higher
numbers of processors. The partitioning then ends up with columns/pencils
instead of slabs.
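To make the slab scheme concrete, here is a minimal sketch (not GROMACS code; the function name and serial numpy stand-ins for the MPI all-to-all are my own illustration) of how a 3D real-to-complex FFT decomposes into per-slab 2D FFTs followed by 1D FFTs along the remaining axis:

```python
# Hypothetical sketch of slab decomposition for a 3D FFT.
# Each "rank" owns a slab of x-planes and does 2D FFTs over (y, z);
# a global transpose (an MPI all-to-all in a real code, implicit here
# because everything lives in one array) then lets the 1D FFTs along
# x run locally.
import numpy as np

def fft3d_slab(grid, n_ranks):
    nx, ny, nz = grid.shape
    assert nx % n_ranks == 0, "slab decomposition needs nx divisible by rank count"
    slab = nx // n_ranks

    # Step 1: per-rank 2D FFTs over (y, z) on each slab of x-planes.
    stage1 = np.empty((nx, ny, nz), dtype=complex)
    for r in range(n_ranks):
        lo, hi = r * slab, (r + 1) * slab
        stage1[lo:hi] = np.fft.fftn(grid[lo:hi], axes=(1, 2))

    # Step 2: 1D FFTs along x complete the 3D transform.
    return np.fft.fft(stage1, axis=0)
```

The slab scheme can use at most nx ranks; a 2D ("pencil") decomposition splits over two axes and so scales to roughly nx*ny ranks, which is the point of the work described above.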


> -----Original Message-----
> From: roland at rschulz.eu [mailto:roland at rschulz.eu] On Behalf Of Roland
> Schulz
> Sent: Tuesday, December 09, 2008 2:24 PM
> To: Discussion list for GROMACS development
> Cc: Knox, Kent
> Subject: Re: [gmx-developers] Gromacs FFT
> Hi Kent,
> usually FFT is not a bottleneck for MD when run on one or a few processors.
> You can increase the FFT load slightly by using a small cut-off (rcoulomb in
> the mdp file) and a fine grid (fourierspacing in mdp). Typically one uses a
> minimum of rcoulomb 0.8 and fourierspacing of 1.1. But you could decrease
> fourierspacing further to see the effect on the FFT time.
> FFT becomes the major bottleneck for parallel runs on more than a few
> hundred CPUs. I did some work on parallel FFT on Jaguar and Kraken. Let me
> know in case you are also interested in parallel FFT. Is it correct that the
> ACML only supports serial FFT so far? Do you plan to add a parallel FFT or
> an extension, as for the linear algebra routines with AMD ScaLAPACK?
> Roland
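The tuning advice quoted above translates into an mdp fragment along these lines; this is a hypothetical illustration using the values mentioned in the thread, intended for profiling the FFT library rather than as a production recommendation:

```
; PME settings discussed in this thread (illustrative mdp fragment).
; A small cut-off plus a fine grid shifts more work onto the FFT.
coulombtype     = PME
rcoulomb        = 0.8    ; small cut-off increases the relative FFT load
fourierspacing  = 1.1    ; decrease further to make the PME grid finer
```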

ORNL/UT Center for Molecular Biophysics cmb.ornl.gov