[gmx-users] mdrun cpt

Pavan Ghatty pavan.gromacs at gmail.com
Tue Oct 29 01:25:58 CET 2013


Now /afterok/ might not work, since technically the job is killed due to
walltime limits - making its exit status not ok. So I suppose /afterany/ is a
better option. But I do appreciate your warning about spamming the queue,
and yes, I will re-read the PBS docs.
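
A minimal sketch of the /afterany/ variant, assuming a PBS/Torque system
(the script name run_md.pbs is hypothetical):

    # Inside job N's script: queue job N+1 so that it starts once this
    # job ends - whether it finished cleanly or was killed at walltime.
    qsub -W depend=afterany:$PBS_JOBID run_md.pbs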


On Mon, Oct 28, 2013 at 5:11 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:

> On Mon, Oct 28, 2013 at 7:53 PM, Pavan Ghatty <pavan.gromacs at gmail.com> wrote:
>
> > Mark,
> >
> > The problem with one .tpr file set up for 100 ns is that when job number
> > (say) 4 hits the walltime limit, it crashes and never gets a chance to
> > submit the next job. So it's not really automated.
> >
>
> That's why I suggested -maxh, so you can have an orderly shutdown. (Though
> if a job can get suspended, that won't always help, because mdrun can't
> find out about the suspension...)
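>
> For example, on a 24-hour queue (the 23.5 is just an assumed safety
> margin):
>
>     mdrun -s whole-run.tpr -cpi state.cpt -maxh 23.5
>
> mdrun then writes a checkpoint and exits cleanly at roughly 99% of that
> time.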
>
> > Now I could initiate job 5 before /mdrun/ in job 4's script and hold job
> > 5 till job 4 ends.
>
>
> Sure - read your PBS docs and find the environment variable that tells
> job 4 its own ID, so it can submit job 5 with an afterok hold on job 4.
> But don't tell your sysadmins where I live. ;-) Seriously, if you live on
> this edge, you could spam infinite jobs, which tends to get your account
> cut off. That's why you want the afterok hold - you only want the next job
> to start if the exit code from the first script indicates that mdrun
> exited cleanly. Test carefully!
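>
> A minimal sketch of such a job script, assuming PBS/Torque (the script
> name chain.pbs and the walltime margin are hypothetical):
>
>     #!/bin/bash
>     #PBS -l walltime=24:00:00
>     cd $PBS_O_WORKDIR
>     # PBS_JOBID is this job's ID; the successor stays queued until this
>     # script exits 0 (afterok), so a failed run breaks the chain.
>     qsub -W depend=afterok:$PBS_JOBID chain.pbs
>     # Let mdrun's exit code be the script's exit code.
>     mdrun -s whole-run.tpr -cpi state.cpt -maxh 23.5
>     # Note: add a check for the run being complete, or this chain will
>     # keep resubmitting after the full 100 ns is done.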
>
> Mark
>
> > But the PBS queuing system is sometimes weird and takes a bit of time
> > to recognize a job and give back its job ID. So I could submit job 5
> > but be unable to change its status to /hold/ because PBS does not
> > return its ID. Another problem is that if resources are available, job
> > 5 could start before I ever get a chance to /hold/ it.
> >
> >
> >
> >
> > On Mon, Oct 28, 2013 at 11:47 AM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
> >
> > > On Mon, Oct 28, 2013 at 4:27 PM, Pavan Ghatty <pavan.gromacs at gmail.com> wrote:
> > >
> > > > I need to collect 100 ns, but I can collect only ~1 ns (1000 steps)
> > > > per run. Since I don't have .trr files, I rely on .cpt files for
> > > > restarts. For example,
> > > >
> > > > grompp -f md.mdp -c md_14.gro -t md_14.cpt -p system.top -o md_15
> > > >
> > > > This runs into a problem when the run gets killed due to walltime
> > > > limits. I now have an .xtc file which has run (say) 700 steps and a
> > > > .cpt file which was last written at the 600th step.
> > > >
> > >
> > > You seem to have no need for grompp, because you don't need a workflow
> > > that generates multiple .tpr files. Do the equivalent of what the
> > > restart page advises: mdrun -s topol.tpr -cpi state.cpt. That is, make
> > > a .tpr for the whole 100 ns run, and then keep doing
> > >
> > > mdrun -s whole-run -cpi whateverwaslast -deffnm whateversuitsyouthistime
> > >
> > > with or without -append, perhaps with -maxh, keeping whatever manual
> > > backups you feel necessary. Then perhaps concatenate your final
> > > trajectory files, according to your earlier choices.
> > >
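> > > A minimal sketch of that workflow (file names and the -maxh value are
> > > hypothetical):
> > >
> > >     # Once, up front: one .tpr covering the full 100 ns
> > >     # (nsteps in md.mdp set accordingly).
> > >     grompp -f md.mdp -c md_0.gro -p system.top -o whole-run
> > >
> > >     # In each job: continue from the last checkpoint and stop
> > >     # cleanly before the walltime limit.
> > >     mdrun -s whole-run -cpi state.cpt -maxh 23.5 -append
> > >
> > >     # If you run without -append instead, join the pieces afterwards:
> > >     trjcat -f part1.xtc part2.xtc -o full.xtc
> > >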
> > > > - To set up the next run I use the .cpt file from the 600th step.
> > > > - Now during analysis if I want to center the protein and such,
> > > > /trjconv/ needs an .xtc and .tpr file but not a .cpt file. So how
> > > > does /trjconv/ know to stop at the 600th step?
> > >
> > >
> > > trjconv just operates on the contents of the trajectory file, as
> > > modified by things like -b, -e, and -dt. The .tpr just gives it
> > > context, such as atom names. You could give it a .tpr from any point
> > > during the run.
> > >
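> > > A minimal sketch, assuming you want to center the protein and trust
> > > the data only up to a known time (the 1200 ps is hypothetical):
> > >
> > >     # Keep frames up to t = 1200 ps, center the chosen group, and
> > >     # keep molecules whole across the box boundaries.
> > >     trjconv -f md.xtc -s whole-run.tpr -e 1200 -center -pbc mol -o md_c.xtc
> > >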
> > > Mark
> > >
> > > > If this has to be put in manually, it becomes cumbersome.
> > > >
> > > > Thoughts?
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Sun, Oct 27, 2013 at 11:38 AM, Justin Lemkul <jalemkul at vt.edu> wrote:
> > > >
> > > > >
> > > > >
> > > > > On 10/27/13 9:37 AM, Pavan Ghatty wrote:
> > > > >
> > > > >> Hello All,
> > > > >>
> > > > >> Is there a way to make mdrun put out a .cpt file with the same
> > > > >> frequency as an .xtc or .trr file? From
> > > > >> http://www.gromacs.org/Documentation/How-tos/Doing_Restarts I see
> > > > >> that we can choose how often (time in minutes) the .cpt file is
> > > > >> written. But clearly, if the output frequency of the .cpt file (in
> > > > >> minutes) and of the .xtc file (in simulation steps) do not match,
> > > > >> it can create problems during analysis, especially in the event of
> > > > >> frequent crashes. Also, I am not storing a .trr file since I don't
> > > > >> need that precision.
> > > > >> I am using Gromacs 4.6.1.
> > > > >>
> > > > >>
> > > > > What problems are you experiencing?  There is no need for .cpt
> > > > > frequency to be the same as .xtc frequency, because any duplicate
> > > > > frames should be handled elegantly when appending.
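> > > > >
> > > > > For reference, the checkpoint interval is a plain mdrun option
> > > > > (15 minutes is the default; the 60 below is just an example):
> > > > >
> > > > >     mdrun -s topol.tpr -cpi state.cpt -cpt 60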
> > > > >
> > > > > -Justin
> > > > >
> > > > > --
> > > > > ==================================================
> > > > >
> > > > > Justin A. Lemkul, Ph.D.
> > > > > Postdoctoral Fellow
> > > > >
> > > > > Department of Pharmaceutical Sciences
> > > > > School of Pharmacy
> > > > > Health Sciences Facility II, Room 601
> > > > > University of Maryland, Baltimore
> > > > > 20 Penn St.
> > > > > Baltimore, MD 21201
> > > > >
> > > > > jalemkul at outerbanks.umaryland.edu | (410) 706-7441
> > > > >
> > > > > ==================================================


