[gmx-developers] MPI Datatypes and TMPI

Roland Schulz roland at utk.edu
Wed Apr 4 17:03:15 CEST 2012

On Wed, Apr 4, 2012 at 9:31 AM, Berk Hess <hess at kth.se> wrote:

>  On 04/04/2012 03:22 PM, Roland Schulz wrote:
> Hi,
>  we are looking at how best to tackle the issue of packing data for
> MPI communication in 5.0. We believe this is an important issue because the
> two current approaches (serializing into a serial buffer (e.g. global_stat) or
> sending many small messages (e.g. the initial bcast)) are both slow and
> produce difficult-to-read code (global_stat). The many
> small messages are a serious scaling issue for large systems, and even
> the serializing is unnecessarily slow because it means
> a potentially unnecessary copy.
>  We first looked into Boost::MPI but while it is very nice in some
> aspects, it also has disadvantages. Thus we're looking at alternatives.
> The most interesting alternatives use MPI Datatypes to get high performance
> and avoid the unnecessary copy of serialization. The problem is that TMPI
> doesn't support MPI Datatypes.
>  Thus my question: is it planned to add Datatypes to TMPI? If not, is TMPI
> still required in 5.0? Would it be sufficient to support OpenMP for non-MPI
> multi-core installations in 5.0? What was the reason for TMPI in the first
> place? Why did we not just bundle e.g. OpenMPI for those users missing an
> MPI implementation?
> Nothing is planned, but if it isn't much work, Sander might do it.
Not sure how much work it would be.

> For the old code path/kernels we would still like to have TMPI, as it makes
> it far easier for normal users to get maximum performance.
> For the new Verlet scheme OpenMP seems to do very well, but with multiple
> CPUs and/or GPUs TMPI is still very nice,
I see.

> as it makes configuring and starting runs trivial.
In case it turns out to be too much work to implement in tMPI, would it be a
solution to make this just as trivial with OpenMPI/MPICH? I think it could be
made as simple to configure by bundling the library. It could be made as
simple to run if mdrun automatically spawned additional MPI processes (with
the same logic tMPI currently uses). This is supported by OpenMPI; see:


> Cheers,
> Berk
>  Roland
>  --
> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
> 865-241-1537, ORNL PO BOX 2008 MS6309
