[gmx-developers] MPI enabled FFTW and GROMACS

Erik Lindahl lindahl at stanford.edu
Sat Jul 20 00:11:51 CEST 2002


Hi Joshua,


> According to some of the comments that I have seen in the source, and
> related documentation, an MPI-enabled GROMACS requires an MPI-enabled
> FFTW to run (and I believe the converse to be true as well: an
> MPI-disabled GROMACS requires an MPI-disabled FFTW installation).

FFTW is only used for the 3D-FFT in the PME algorithm. You can actually
compile Gromacs with the --without-fftw flag; everything except PME
will still work.
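
For example (--without-fftw is the flag mentioned above; for the MPI
build I'm assuming the usual --enable-mpi switch, check
./configure --help on your version):

    # serial Gromacs without PME (no FFTW needed at all)
    ./configure --without-fftw

    # MPI-enabled Gromacs; PME then needs the parallel FFTW library
    ./configure --enable-mpi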

The parallel FFTW calls are in a separate library, so if you've compiled
FFTW with MPI support you can use it for both MPI and non-MPI versions
of Gromacs (there is no need to recompile FFTW without MPI).
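
In FFTW 2.x the MPI transforms are built as separate libraries next to
the serial ones, so an MPI-enabled FFTW installation is simply a
superset of a serial one. The link lines differ roughly like this
(library names and ordering as in the FFTW 2.x documentation):

    # serial Gromacs: only the serial transform libraries
    cc ... -lrfftw -lfftw -lm

    # parallel Gromacs: add the MPI transform libraries in front
    mpicc ... -lrfftw_mpi -lfftw_mpi -lrfftw -lfftw -lm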

> 
> Does GROMACS use MPI calls that call FFTW MPI calls, or
> does GROMACS use just FFTW MPI calls?
> 

Gromacs always uses MPI calls to communicate coordinates, forces and 
energies between nodes.
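
Schematically, it is communication of this kind (a sketch only; the
function and array names are made up, not the actual Gromacs routines):

    #include <mpi.h>

    /* Sum the per-node force contributions so that every node ends up
     * with the total force array. Gromacs' communication routines are
     * thin wrappers around MPI calls like this one. */
    static void sum_forces(int natoms, float f[], float fsum[])
    {
        /* three floats (x,y,z) per atom */
        MPI_Allreduce(f, fsum, 3*natoms, MPI_FLOAT, MPI_SUM,
                      MPI_COMM_WORLD);
    }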

For the PME algorithm we need to do a 3D-FFT of the grid. We are working
on a version where we do 1D and 2D-FFTs and communicate the grid data
ourselves (using MPI directly), but in the present version we just call
the parallel FFTW 3D-FFT, which in turn uses its own MPI calls for the
communication.
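
In FFTW 2.x terms the call we make looks roughly like this (a sketch
with made-up grid dimensions, not the actual Gromacs code; the PME grid
is real-valued, so strictly speaking the rfftwnd_mpi routines apply,
but the structure is identical):

    #include <mpi.h>
    #include <fftw_mpi.h>

    /* Parallel in-place 3D-FFT of a grid distributed over the nodes
     * along x; FFTW does all the grid communication internally. */
    static void transform_grid(fftw_complex *local_grid,
                               fftw_complex *work)
    {
        int n[3] = { 64, 64, 64 };          /* made-up dimensions */
        fftwnd_mpi_plan plan;

        plan = fftwnd_mpi_create_plan(MPI_COMM_WORLD, 3, n,
                                      FFTW_FORWARD, FFTW_ESTIMATE);
        /* one field; leaving the output transposed saves a
           communication step */
        fftwnd_mpi(plan, 1, local_grid, work, FFTW_TRANSPOSED_ORDER);
        fftwnd_mpi_destroy_plan(plan);
    }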


> I have been working my way through the source files trying to answer
> this one, and thought that maybe someone could enlighten me. From what
> I have seen the former seems to be the case: GROMACS parallelizes, and
> within those parallelizations FFTW parallelizes as well. How wrong am I?
> 

Right. The 3D-FFT in PME is parallelized, although that's actually a
little stupid: it would be smarter for one node to do the normal
interactions while another does the PME, since less communication means
better scaling. That is what we're working on now. The PME part is
relatively straightforward, but we decided to overhaul the direct-space
part and introduce a 3D domain decomposition too...
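
With plain MPI such a split of the nodes can be set up with
communicators; a sketch (none of this is from the Gromacs source, and
the one-in-four ratio is made up):

    #include <mpi.h>

    /* Dedicate some nodes to PME and the rest to the direct-space
     * interactions by splitting MPI_COMM_WORLD into two groups. */
    int main(int argc, char *argv[])
    {
        int rank, is_pme;
        MPI_Comm subcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        is_pme = (rank % 4 == 0);        /* every 4th node does PME */
        MPI_Comm_split(MPI_COMM_WORLD, is_pme, rank, &subcomm);

        /* PME nodes do the reciprocal-space work on their subcomm,
           the rest do direct-space interactions on theirs; only
           forces and energies cross between the groups. */

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }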

Cheers,

Erik




