[gmx-developers] Re: flushing files

Sander Pronk pronk at cbr.su.se
Wed Oct 13 09:16:26 CEST 2010


> 
> The difference is that an IO thread would virtually never run though; it would instantly block waiting for the filesystem, and in the meantime the real threads would get control back?
> Yes, but you can only have one IO thread per file (otherwise the synchronization becomes quite difficult). Thus if the overhead is larger than the time between writes, then you are still waiting. The time for MPI_File_sync can be *extremely* long (compared to fflush). 
> 


There is already quite a bit of code dealing with files and threads. We should be able to add a single syncing thread without too much effort. Is there any danger of running out of space for threads? (They each have a stack, etc.)
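
Roughly what I mean, as a sketch (raw pthreads purely for illustration, all names made up; the real code would go through our existing thread wrappers). The syncing thread spends nearly all of its time blocked, either on the condition variable or inside fsync(), so it costs next to no CPU:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>    /* fsync */

static pthread_mutex_t sync_mtx  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  sync_cond = PTHREAD_COND_INITIALIZER;
static FILE           *sync_fp   = NULL;  /* file waiting to be synced      */
static int             sync_quit = 0;     /* set (under the mutex) to stop  */

static void *sync_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&sync_mtx);
    while (!sync_quit)
    {
        while (sync_fp == NULL && !sync_quit)
        {
            pthread_cond_wait(&sync_cond, &sync_mtx);
        }
        if (sync_fp != NULL)
        {
            FILE *fp = sync_fp;

            pthread_mutex_unlock(&sync_mtx);
            fflush(fp);            /* push the stdio buffer to the kernel */
            fsync(fileno(fp));     /* and ask the kernel to hit the disk  */
            pthread_mutex_lock(&sync_mtx);
            sync_fp = NULL;
            pthread_cond_broadcast(&sync_cond);  /* wake a waiting writer */
        }
    }
    pthread_mutex_unlock(&sync_mtx);
    return NULL;
}

/* Called from the MD loop right after a frame has been written. */
static void request_sync(FILE *fp)
{
    pthread_mutex_lock(&sync_mtx);
    while (sync_fp != NULL)        /* at most one outstanding sync per file */
    {
        pthread_cond_wait(&sync_cond, &sync_mtx);
    }
    sync_fp = fp;
    pthread_cond_broadcast(&sync_cond);
    pthread_mutex_unlock(&sync_mtx);
}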

BTW MPI_File_sync is not a collective call, right?


> Priority 1A is that we should never write "broken" trajectory frames to disk - that has caused huge amounts of grief in the past, and can be really confusing to users.
> 
> I think that basically leaves two long-term options:
> 
> 1) Make sure that each frame is properly flushed/synced
> 2) Buffer IO and wait until the next checkpoint time before you write the frames to disk.
> 
> If we go with #2, there are two additional (minor?) issues: First, we need to check whether checkpointing is disabled or only done every 5-10 h, and in that case sync frames every ~15 minutes anyway. Second, there could be a number of systems where we run out of memory if we buffer things; then we need to designate a buffer size and flush files when it is full.
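
Just to make #2 concrete, a sketch of buffering with a size cap (the 16 MB cap and all the names are arbitrary, only there to illustrate the buffer-and-flush idea):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>    /* fsync */

#define FRAME_BUF_MAX (16*1024*1024)   /* arbitrary cap for this sketch */

typedef struct {
    char   *data;   /* concatenated, already-encoded frames */
    size_t  size;   /* bytes currently buffered             */
    size_t  alloc;  /* bytes allocated                      */
} frame_buf_t;

/* Called at checkpoint time, or when the buffer is full: write everything
 * in one go, then flush and sync so the frames on disk are known-complete. */
static void flush_frames(frame_buf_t *buf, FILE *fp)
{
    if (buf->size > 0)
    {
        fwrite(buf->data, 1, buf->size, fp);
        fflush(fp);
        fsync(fileno(fp));
        buf->size = 0;
    }
}

/* Append one encoded frame; write out first if it would exceed the cap. */
static void buffer_frame(frame_buf_t *buf, FILE *fp,
                         const void *frame, size_t nbytes)
{
    if (buf->size + nbytes > FRAME_BUF_MAX)
    {
        flush_frames(buf, fp);
    }
    if (buf->size + nbytes > buf->alloc)
    {
        buf->alloc = buf->size + nbytes;
        buf->data  = realloc(buf->data, buf->alloc);
    }
    memcpy(buf->data + buf->size, frame, nbytes);
    buf->size += nbytes;
}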


By a broken frame, do you mean a frame that has been partially written up to EOF, or one that is corrupted for some other reason? The first case sounds like something we should be able to deal with.
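
If it is the first case, then on a restart/append something like the sketch below should be enough: scan for the last complete frame and truncate everything after it (read_whole_frame() is a placeholder for the real trajectory reader, not an existing function):

#include <stdio.h>
#include <unistd.h>    /* ftruncate */

/* Placeholder reader: returns 1 if a complete frame was read,
 * 0 on EOF or on a frame that is cut short. */
int read_whole_frame(FILE *fp);

static void truncate_partial_frame(const char *fn)
{
    FILE *fp = fopen(fn, "rb+");
    long  last_good = 0;

    if (fp == NULL)
    {
        return;
    }
    while (read_whole_frame(fp))
    {
        last_good = ftell(fp);   /* end of the last complete frame */
    }
    /* Anything after last_good is a torn frame left by the crash. */
    ftruncate(fileno(fp), last_good);
    fclose(fp);
}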



