[gmx-developers] Collective IO

Szilárd Páll szilard.pall at cbr.su.se
Fri Oct 1 00:21:14 CEST 2010


Hi Roland,

Nice work, I'll definitely take a look at it!

Any idea how this improves scaling in general, and at what problem size it
starts to really matter? Does it introduce any overhead in smaller
simulations, or is it only conditionally turned on?

Cheers,
--
Szilárd



On Fri, Oct 1, 2010 at 12:11 AM, Roland Schulz <roland at utk.edu> wrote:
> Hi,
> we (Ryan & I) just uploaded our work on buffered MPI writing of XTC
> trajectories. It can be found in the branch CollectiveIO.
> We buffer a number of frames and use MPI IO to write those frames from a
> number of nodes (see previous mails for details; a minimal sketch of the
> pattern is appended at the end of this post). The XTC trajectory is
> written at least at every checkpoint, guaranteeing that no frames are lost
> if a simulation crashes.
> We have tested it in serial, with PME, with threads, and with multi, and it
> seems to work in all cases.
> For 3 million atoms on 8192, writing every 1000 steps, the performance
> increases from 21 ns/day to 34 ns/day, and the time spent in comm.
> energies decreases from 47% to 7%.
> Feedback on the code change is very welcome. If you want to look at the
> diff, I suggest using:
> git difftool afd66e48c4e608    # this is origin/master from when we
> uploaded the branch
> Please let us know what you would like us to change before we merge this
> into master.
> Thanks
> Ryan & Roland
> --
> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
> 865-241-1537, ORNL PO BOX 2008 MS6309
>
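For reference, here is a minimal, self-contained sketch of the buffered
collective-write pattern described above. It is not the CollectiveIO branch
code: the frame_buf_t buffer, flush_frames, the fixed frame size, and the
flush triggers are all invented for illustration, and unlike the real patch,
where frames are gathered to a subset of nodes before writing, every rank
here writes its own buffered frames with MPI_File_write_at_all. The point is
only to show the two ingredients mentioned in the mail: buffering several
frames between writes, and flushing the buffer at every checkpoint so a
crash cannot lose frames.

/* Illustrative sketch only -- not GROMACS code. */
#include <mpi.h>
#include <string.h>

#define FRAME_BYTES   4096   /* assumed fixed frame size (illustrative only) */
#define FRAMES_PER_IO 10     /* buffer this many frames before a collective write */

typedef struct {
    char data[FRAMES_PER_IO][FRAME_BYTES];
    int  nbuf;            /* frames currently buffered on this rank */
    long frames_written;  /* frames this rank has already flushed   */
} frame_buf_t;

/* Collectively flush the buffered frames; every rank writes its own
 * contiguous slice of the shared file at a rank-dependent offset. */
static void flush_frames(MPI_File fh, frame_buf_t *buf, int rank, int nranks)
{
    MPI_Offset offset =
        ((MPI_Offset)buf->frames_written * nranks + rank * buf->nbuf) * FRAME_BYTES;

    /* All ranks reach this call in the same step, so the collective matches. */
    MPI_File_write_at_all(fh, offset, buf->data,
                          buf->nbuf * FRAME_BYTES, MPI_BYTE, MPI_STATUS_IGNORE);

    buf->frames_written += buf->nbuf;
    buf->nbuf = 0;
}

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "traj_demo.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    frame_buf_t buf = { .nbuf = 0, .frames_written = 0 };

    for (int step = 0; step < 100; step++) {
        int output_step     = (step % 10 == 0);  /* stand-in for the xtc output interval */
        int checkpoint_step = (step % 50 == 0);  /* stand-in for a checkpoint step       */

        if (output_step) {
            /* Buffer this frame locally instead of writing it immediately. */
            memset(buf.data[buf.nbuf], rank, FRAME_BYTES);
            buf.nbuf++;
        }

        /* Flush when the buffer is full, and always at a checkpoint so no
         * buffered frames can be lost if the run crashes afterwards. */
        if (buf.nbuf == FRAMES_PER_IO || (checkpoint_step && buf.nbuf > 0)) {
            flush_frames(fh, &buf, rank, nranks);
        }
    }
    if (buf.nbuf > 0) {
        flush_frames(fh, &buf, rank, nranks);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

Note that real XTC frames are variable-size and compressed, so the fixed
FRAME_BYTES assumption above is purely for brevity; the offset arithmetic
only works here because every rank buffers the same number of equally
sized frames between flushes.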


