[gmx-users] Problem with gromacs in Cluster
Richard Broadbent
richard.broadbent09 at imperial.ac.uk
Thu Apr 25 15:40:34 CEST 2013
I generally build a .tpr for the whole simulation, then submit one job
using a command such as:

mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -maxh ${WALL_TIME_IN_HOURS}
Copy all the files back at the end of the script if necessary, then
resubmit the job (sending out all the files again if necessary), this
time using the command

mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -maxh ${WALL_TIME_IN_HOURS} -cpi
You can then keep resubmitting with that line until the job is finished.
In my case I have a maximum wall clock of 24 hours on some machines, so
this pattern gets used a lot.
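
For example, a single job script along these lines can be resubmitted
unchanged until the run completes (just a sketch: NAME, NUM_PROCESSORS
and WALL_TIME_IN_HOURS are placeholders, and testing for the checkpoint
file saves you editing the script between submissions):

# restart from the checkpoint if one exists, otherwise start fresh
if [ -f ${NAME}.cpt ]; then
    mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -maxh ${WALL_TIME_IN_HOURS} -cpi
else
    mpirun -n ${NUM_PROCESSORS} mdrun -deffnm ${NAME} -maxh ${WALL_TIME_IN_HOURS}
fi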
I also think that 1.6 ns/day, which is what you seem to be getting, is
very low performance, and you might want to consider using more
processors. Check the log file: the profiling information at the end
will indicate whether this might be beneficial.
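
For example, something along these lines should pull the timing summary
out of the log (the exact layout depends on your GROMACS version):

grep -B 1 "Performance:" md_test.log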
Richard
On 25/04/13 13:58, Francesco wrote:
> You can split the simulation into several parts (for example 5 ns each);
> every time one part finishes, you extend the run by adding more time.
> http://www.gromacs.org/Documentation/How-tos/Extending_Simulations?highlight=extend
>
> My cluster uses a different "script system" than yours so I can't help
> with that part, but basically you have to submit more than one job, each
> with a different command to execute:
>
> 1) first production
> mpirun -n 8 mdrun -s md_test.tpr -deffnm md_test -np 8
> 2) modify the tpr file
> tpbconv -s previous.tpr -extend timetoextendby -o next.tpr
> 3) next X ns
> mpirun -n 8 mdrun -s next.tpr -cpi previous.cpt
> 4) modify the tpr file
> 5) production md
> and so on
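>
> For example, steps 2) and 3) might look like this (the file names are
> only illustrative, and tpbconv -extend takes the time in ps, so
> 5000 ps = 5 ns):
>
> tpbconv -s md_test.tpr -extend 5000 -o md_test_2.tpr
> mpirun -n 8 mdrun -s md_test_2.tpr -deffnm md_test -cpi md_test.cpt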
>
> with qsub you can submit a dependent job (-hold_jid) that will run only
> when the previous step finishes; in your case there should be a similar
> way to do it.
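>
> (I think the LSF equivalent is the -w dependency option, e.g. something
> like
>
> #BSUB -w "done(testgromacs_part1)"
>
> in the script of the second job, where testgromacs_part1 would be the -J
> name given to the first one; check the bsub documentation on your
> cluster.)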
>
> cheers
>
> Fra
>
> On Thu, 25 Apr 2013, at 12:28 PM, Sainitin Donakonda wrote:
>> Hey all,
>>
>> I recently ran a 20 ns simulation of a protein-ligand complex on a
>> cluster. I used the following script to run the simulation:
>>
>> grompp -f MD.mdp -c npt.gro -t npt.cpt -p topol.top -n index.ndx -o
>> md_test.tpr
>>
>> mpirun -n 8 mdrun -s md_test.tpr -deffnm md_test -np 8
>>
>> *I saved this as MD.sh and then submitted it to the cluster using the
>> following script:*
>>
>> #!/bin/bash
>> #BSUB -J testgromacs # the job's name/array job
>> #BSUB -W 120:00 # max. wall clock time in hh:mm
>> #BSUB -n 8,8 # number of processors Min,Max
>> #BSUB -o test/output_%J.log # output file
>> #BSUB -e test/errors_%J.log # error file
>> #BSUB -M 8192 #Memory limit in MB
>>
>> echo "Started at `date`"
>> echo
>>
>> cd test
>>
>> echo "Running gromacs test in `pwd`"
>>
>> ./MD.sh
>>
>> echo "Finished at `date`"
>>
>>
>> It produced results, but when I checked the .xtc file and made RMSD
>> plots, the x-axis only goes up to 8 ns, although in MD.mdp I specified
>> 20 ns.
>>
>> The cluster output says "TERM_RUNLIMIT: job killed after reaching LSF
>> run time limit. Exited with exit code 140". I gave the maximum cluster
>> time of 120 hours and still it is not sufficient.
>>
>> Can anybody tell me how to split the script so that I get the full
>> 20 ns of simulation?
>>
>>
>> Thanks in advance,
>>
>> Sainitin