[gmx-developers] Python interface for Gromacs

Berk Hess hess at cbr.su.se
Thu Sep 9 09:44:38 CEST 2010

On 09/09/2010 09:35 AM, Mark Abraham wrote:
> > People in the Lindahl group are working on parallelizing
> > analysis tools because they are quickly becoming the bottleneck.
> > We run simulations of large systems on hundreds of processors,
> > and due to checkpointing this can be done largely unattended.
> If so, then an important issue to address is using MPI2 parallel IO
> properly. At the moment, for DD, mdrun collects vectors for I/O on the
> master and writes them out in serial. Proper use of parallel I/O might
> be worth the investment in restructuring the output. Maintaining the
> DD processor-local file view suited for I/O of the local atoms is
> probably not any more complex than the existing contortions that are
> gone through to gather global vectors. Likewise, a parallel analysis
> tool will often want to do its I/O in parallel.
The main issue here is that the atom order changes every nstlist steps.
We could write all the atom indices to the files as well, but that
would double the file size.
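A minimal sketch of the ordering problem, with plain Python lists standing in for per-rank MPI buffers (the helper name and data shapes are hypothetical, not Gromacs code): each domain-decomposition rank holds its local atoms in an order that changes over time, so writing a globally ordered frame requires an index array mapping local atoms back to global positions.

```python
# Sketch: restoring global atom order from domain-decomposition-local
# data. Illustrative only -- names and shapes are not Gromacs code.

def gather_global_order(local_coords, local_indices, n_atoms_global):
    """Scatter per-rank coordinate blocks back into global atom order.

    local_coords  -- one list of (x, y, z) tuples per rank
    local_indices -- one list of global atom indices per rank,
                     matching local_coords element for element
    """
    global_coords = [None] * n_atoms_global
    for coords, indices in zip(local_coords, local_indices):
        for xyz, i in zip(coords, indices):
            global_coords[i] = xyz
    return global_coords

# Two "ranks" whose local atom order differs from the global order:
ranks_coords = [[(0.1, 0.0, 0.0), (0.3, 0.0, 0.0)],
                [(0.2, 0.0, 0.0), (0.0, 0.0, 0.0)]]
ranks_indices = [[1, 3], [2, 0]]
print(gather_global_order(ranks_coords, ranks_indices, 4))
```

Writing the indices alongside the coordinates would make each frame self-describing at the cost of the extra file size mentioned above.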
I also have my doubts about the efficiency of MPI I/O.
Ideally we would want the I/O to happen in the background; I don't know
whether MPI file I/O can do this.
With Roland Schulz I have been discussing the possibility of having
some (dedicated) processes collect and write the data using some kind
of tree structure.
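A toy sketch of such a tree-structured collection, with a Python dict standing in for per-rank buffers (hypothetical names, no actual MPI): data moves up a binary tree toward the writer, so the critical path is O(log2 P) message rounds instead of the O(P) receives a single collector would serialize.

```python
# Sketch: binary-tree gather of per-rank data blocks toward rank 0.
# Illustrative only -- a stand-in for MPI point-to-point messages,
# not Gromacs or MPI code.

def tree_gather(rank_data):
    """Merge each rank's block up a binary tree; rank 0 ends up with
    everything, in rank order."""
    buffers = {rank: [block] for rank, block in enumerate(rank_data)}
    stride = 1
    while stride < len(rank_data):
        # Every rank at an even multiple of 2*stride receives from
        # its partner at distance `stride`; these pairs are disjoint,
        # so one round models one set of concurrent messages.
        for rank in range(0, len(rank_data), 2 * stride):
            partner = rank + stride
            if partner < len(rank_data):
                buffers[rank].extend(buffers.pop(partner))
        stride *= 2
    return buffers[0]

print(tree_gather(["a", "b", "c", "d", "e"]))
```

With a few dedicated I/O ranks, each could serve as the root of such a tree for a slice of the data and overlap the writing with the ongoing simulation.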

> We would probably wish to write our own data representation conversion
> functions to hook into MPI_File_set_view so that we can read/write our
> standard XDR formats in parallel. (Unless, of course, the existing
> "external32" representation can be made to do the job.)
> Mark 
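On the representation question: XDR stores a float as 4 big-endian IEEE 754 bytes, which matches MPI's "external32" wire format, so external32 plausibly covers the existing formats without custom conversion hooks. A small round-trip sketch in Python (illustrative only, using the standard struct module rather than any Gromacs or MPI code):

```python
import struct

# XDR encodes a 32-bit float as 4 bytes, big-endian IEEE 754 -- the
# same layout MPI's "external32" data representation defines.
# Hypothetical helper names; not Gromacs code.

def xdr_pack_floats(values):
    """Pack a list of floats into XDR-style big-endian bytes."""
    return b"".join(struct.pack(">f", v) for v in values)

def xdr_unpack_floats(data):
    """Unpack XDR-style big-endian bytes back into floats."""
    return [struct.unpack(">f", data[i:i + 4])[0]
            for i in range(0, len(data), 4)]

coords = [1.0, -2.5, 0.25]      # exactly representable in float32
packed = xdr_pack_floats(coords)
print(len(packed), xdr_unpack_floats(packed))
```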

More information about the gromacs.org_gmx-developers mailing list