[gmx-users] Subject: Re: how to avoid multiple 'aprun' on batch job script?

Christopher Neale chris.neale at alum.utoronto.ca
Sat May 9 17:40:24 CEST 2015

If your sysadmin has a hard rule against multiple aprun instances then that is something that we obviously can't fix ;) However, I will note that (with mpirun) on some clusters multiple sequential instances will work and on other clusters it simply does not. The issue on some of these clusters is that once the first mpirun instance completes, no other lines in the script get executed (not even a simple echo command). If your sysadmin's rules are due to the latter case, then the workaround that I use in these cases is to replace

mpirun ...

with

{
  mpirun ...
} &
wait

I have no idea why that works, but it does. I mostly use it when I want to do some analysis at the end of a job to decide whether to cancel the existing job chain (see below). If this is not the problem, then you could also consider submitting job chains where each job runs the same script and each job does only one invocation of aprun, chaining them together with something like:

id=$(qsub job.sh)
for((i=1;i<=10;i++)); do
  id=$(qsub -W depend=afterany:$id job.sh)
done

Obviously you will need to figure out the exact syntax of the dependency flags on your cluster, and you may also need to parse the output from qsub (or msub, sbatch, etc.) because some clusters give you more than just the job id, or even more than one line of output, when what you want is simply the job id number.
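For example, a hedged sketch of such parsing (the sample output string here is hypothetical; check what your own qsub actually prints):

```shell
# Hypothetical qsub output; the real format varies by scheduler and site.
out="123456.sdb"

# Keep only the last line, then strip everything after the first dot,
# leaving just the numeric job id.
id=$(printf '%s\n' "$out" | tail -n 1 | awk -F '.' '{print $1}')
echo "$id"
```

On SLURM, sbatch prints a line like "Submitted batch job 123456", so there you would take the last whitespace-separated field (awk '{print $NF}') instead.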

You'll need to take care that you don't end up submitting the same job twice without dependencies (a nightmare of '#'-prefixed gromacs backup files). To avoid this, you might consider creating a job continuation script that writes the $id to a file each time and, at the start, checks whether such a file exists; if it does, its contents are used for the first dependency. In case it is useful, here is a job submission script that I use with PBS. Some clusters have issues with qsub occasionally failing, and that can lead to problems with the simple script below, so be careful and consider adding checks for error conditions. However, I have pasted a simple version because I figure that will be the most useful and easiest to read.



#!/bin/bash
# ${script} is the PBS script to submit; MAX is the desired chain length.
script=job.sh
MAX=10

# get the job name
job=$(head ${script} | grep "^#PBS -N " | awk '{print $NF}')

# get the list of jobs in the queue. The -v flag together with the -n flag
# provides longer job names than -n does without the -v flag
n=$(showq -n -v | grep $(whoami) | grep "${job} " | wc -l)

# handle the initial submission differently
if ((n==0)); then
  # Make sure that the job really does not exist (a possible problem with name length would be a disaster)
  if [ -e last_submitted_jid ]; then
    n=$(showq -n -v | grep $(whoami) | grep "$(cat last_submitted_jid) " | wc -l)
    if ((n!=0)); then
      echo "The job ${job} does not appear in the queue, but the jid $(cat last_submitted_jid) (in last_submitted_jid file) does appear in the queue. Error."
      exit 1
    fi
  fi
  # if the script gets here, then the job is not Running or Idle at all, so start a new chain
  id=$(qsub ${script})
  echo ${id} > ./last_submitted_jid
  let "n++"
else
  # The chain already exists. Ensure that the job id at the end of the chain
  # is available and load it into the $id variable
  if [ ! -e last_submitted_jid ]; then
    echo "There are $n jobs of ${job} already running or queued but the last_submitted_jid file does not exist so the chain cannot be extended"
    exit 1
  fi
  id=$(cat last_submitted_jid | awk -F '.' '{print $1}')
fi

# Main loop for job chain submission
for((j=n;j<MAX;j++)); do
  nid=$(qsub -W depend=afterany:${id} ${script})
  echo ${nid} > ./last_submitted_jid
  id=${nid}
done
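To see the chaining logic above in isolation, here is a runnable toy version with qsub stubbed out as a shell function (the stub and its fake job ids are purely illustrative; swap in your real scheduler command):

```shell
rm -f submit.log

# Stub standing in for the real qsub: records its arguments in submit.log
# and prints a fake, increasing job id in PBS-style "number.host" form.
qsub() {
  echo "args: $*" >> submit.log
  echo "$((1000 + $(wc -l < submit.log))).sdb"
}

# Start the chain, then extend it with afterany dependencies.
id=$(qsub job.sh | awk -F '.' '{print $1}')
for ((i=1; i<=3; i++)); do
  id=$(qsub -W depend=afterany:${id} job.sh | awk -F '.' '{print $1}')
done
echo "last job id: ${id}"
```

Running this prints "last job id: 1004", and submit.log shows each submission depending on the id of the previous one.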

From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of Satyabrata Das <satyabratad04 at gmail.com>
Sent: 09 May 2015 02:41
To: gmx-users at gromacs.org
Subject: [gmx-users] Subject: Re: how to avoid multiple 'aprun' on batch job script?

Thank you Justin. Indeed there is a wallclock limit, and there is
heterogeneity in performance (40 ns to 120 ns per 24:00:00).
Also, to avoid very large trr files we write in smaller bins,
so one needs to submit the same job a few times.
Regarding heterogeneity: in order to balance the PP:PME load
I use -npme to allocate the required number of cpus for PME. For
different runs with similar load imbalance (~ <4%) the overall
performance varies. Any suggestions?

With best regards,

Satyabrata Das