[gmx-developers] flushing files

Roland Schulz roland at utk.edu
Wed Oct 13 09:25:20 CEST 2010


On Wed, Oct 13, 2010 at 3:08 AM, Erik Lindahl <lindahl at cbr.su.se> wrote:

> Hi,
>
> On Oct 13, 2010, at 9:02 AM, Roland Schulz wrote:
>
> On Wed, Oct 13, 2010 at 2:20 AM, Erik Lindahl <lindahl at cbr.su.se> wrote:
>
>> Hi,
>>
>> On Oct 13, 2010, at 7:53 AM, Roland Schulz wrote:
>>
>>
>> On Wed, Oct 13, 2010 at 1:02 AM, Erik Lindahl <lindahl at cbr.su.se> wrote:
>>
>>> Hi,
>>>
>>> File flushing has been a huge issue to get working properly on AFS and
>>> other systems that have an extra layer of network disk cache. We also want
>>> to make sure the files are available e.g. on the frontend node of a cluster
>>> while the simulation is still running.
>>>
>> Do we want to guarantee that it is available sooner than at each
>> checkpoint (thus by default 15min)?
>>
>>
>> It's not only a matter of "being available", but making sure you don't
>> lose all that data in the disk cache layer if the node crashes and you (for
>> some reason) disabled checkpointing.
>>
> Well, but if you disabled checkpointing then it's your own fault ;-)
>
>>
>> Basically, when a frame has been "written", it is reasonable for the user
>> to expect that it is actually on disk. The default behavior should be safe,
>> IMHO.
>>
> I'm not sure whether the user necessarily assumes that. There are
> well-known cases where the behavior of the cache is exposed to the user
> (e.g. writing files to USB sticks). Currently GROMACS only does an fflush,
> not an fsync, after each frame. Thus, it is not guaranteed that the data is
> immediately on disk, because it can still sit in the kernel buffers. Already
> now, an fsync is only done after each checkpoint.
>
>
> Priority 1A is that we should never write "broken" trajectory frames to
> disk - that has caused huge amounts of grief in the past, and can be really
> confusing to users.
>

This is not what we are doing at the moment. With the current scheme (flush
after each frame, sync after each checkpoint) it is possible for the
trajectory to end up broken, but the checkpointing append feature guarantees
that it is automatically fixed. I prefer the approach of fast writing plus an
automatic fix in the worst case over having to guarantee that the file is
always correct from the beginning. It would also be extremely difficult to
guarantee correctness in all cases (e.g. a crash while a frame is being
written).
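To make the distinction concrete, here is a minimal sketch of that scheme
(the write_frame helper is made up for illustration and is not the actual
GROMACS trajectory code): fflush() only pushes the stdio buffer into the
kernel page cache, while fsync() forces the kernel to commit the data to
stable storage.

#include <stdio.h>
#include <unistd.h>   /* fsync, fileno */

/* Illustration of "flush after each frame, sync after each checkpoint":
 * a cheap fflush() per frame, an expensive fsync() only at checkpoints. */
static void write_frame(FILE *fp, const void *frame, size_t nbytes,
                        int checkpoint_now)
{
    fwrite(frame, 1, nbytes, fp);

    /* fflush: move the data from the stdio buffer into the kernel page
     * cache.  Cheap, but the bytes can still be lost if the node crashes
     * before the kernel writes them back. */
    fflush(fp);

    if (checkpoint_now)
    {
        /* fsync: ask the kernel to commit the file to stable storage.
         * Safe, but can be very slow on network file systems such as AFS. */
        fsync(fileno(fp));
    }
}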

>
> I think that basically leaves two long-term options:
>
> 1) Make sure that each frame is properly flushed/synced
>
That would be slow when frames are written frequently.


> 2) Buffer IO and wait until the next checkpoint time before you write the
> frames to disk.
>
We have already added buffered IO in the CollectiveIO branch; without
buffering it is impossible to get fast collective IO. At the moment we only
buffer as many frames as there are IO nodes. We could certainly increase
that, which would reduce or even eliminate the sync problem.

>
> If we go with #2, there are two additional (minor?) issues: First, we need
> to check if checkpointing is disabled or only done every 5-10h, and in that
> case sync frames every ~15 minutes anyway.
>
ok

> Second, there could be a number of systems where we run out of memory if we
> buffer things. Then we need to designate a buffer size and flush the files
> when it is full.
>
Currently the buffer has an upper limit of 2 MB per core, which seems to be
enough for efficient collective IO. If we add the flush back in, we might
want to increase that limit a bit.
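A small sketch of how both of these points could be handled (the 2 MB and
~15 minute figures are the ones mentioned in this thread, the helper names
are made up): drain the buffer before it would overflow, and force a sync
either at a checkpoint or after a fallback wall-clock interval when
checkpointing is disabled or very infrequent.

#include <stddef.h>
#include <time.h>

#define BUF_LIMIT_BYTES       (2 * 1024 * 1024)
#define FALLBACK_SYNC_SECONDS (15 * 60)

/* Drain the frame buffer before it would overflow the per-core budget. */
static int drain_due(size_t buffered_bytes, size_t next_frame_bytes)
{
    return buffered_bytes + next_frame_bytes > BUF_LIMIT_BYTES;
}

/* Sync at every checkpoint, or after ~15 minutes of wall-clock time if
 * checkpointing is off or only done every few hours. */
static int sync_due(time_t *last_sync, int checkpoint_now)
{
    time_t now = time(NULL);

    if (checkpoint_now || now - *last_sync >= FALLBACK_SYNC_SECONDS)
    {
        *last_sync = now;
        return 1;
    }
    return 0;
}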

>
> The problem is that MPI-IO doesn't have this distinction. There is only an
> MPI_File_sync (and no MPI_File_flush), and a sync can be *very* expensive.
>
>
> Unfortunately we absolutely need to do a full sync at regular intervals
> (but #2 above would work), or you risk losing weeks of results on some
> clusters.
>
I never wanted to remove the full sync at regular intervals. The original
question was whether we need a flush (not sync) after each frame.

Roland



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

